feat: add structured logging for persistence and export operations
- Introduced logging for CRUD operations in repositories to enhance visibility into database interactions.
- Implemented timing logs for BaseRepository and UnitOfWork lifecycle events to track performance.
- Added export logging for size and duration without exposing sensitive content.
- Promoted logging levels for specific operations to improve clarity and reduce noise in logs.
- Established a framework for consistent logging practices across persistence and export functionalities.
docs/sprints/sprint_logging_gap_remediation_p1.md (new file, 168 lines)
@@ -0,0 +1,168 @@
# Sprint: Logging Gap Remediation (P1 - Runtime/Inputs)

> **Size**: M | **Owner**: Platform | **Prerequisites**: `log_timing` + `get_logger` already in place
> **Phase**: Observability - Runtime Diagnostics

---

## Open Issues & Prerequisites

> ⚠️ **Review Date**: 2026-01-03 — Verification complete; scope needs owner/priority confirmation.

### Blocking Issues

| ID | Issue | Status | Resolution |
|----|-------|--------|------------|
| **B1** | Log level policy for invalid input (warn vs info vs debug) | ✅ | WARN with redaction |
| **B2** | PII redaction rules for UUIDs and URLs in logs | Pending | Align with security guidance |

### Design Gaps to Address

| ID | Gap | Resolution |
|----|-----|------------|
| G1 | Stub-missing logs could be noisy in gRPC client mixins | Add rate-limited or once-per-session logging |
| G2 | Timing vs. count metrics for long-running CPU tasks | Standardize on `log_timing` + optional result_count |

### Prerequisite Verification

| Prerequisite | Status | Notes |
|--------------|--------|-------|
| `log_timing` helper available | ✅ | `src/noteflow/infrastructure/logging/timing.py` |
| `log_state_transition` available | ✅ | `src/noteflow/infrastructure/logging/transitions.py` |

---

## Validation Status (2026-01-03)

### RESOLVED SINCE TRIAGE

| Component | Status | Notes |
|-----------|--------|-------|
| Ollama availability logging | Resolved | `src/noteflow/infrastructure/summarization/ollama_provider.py` uses `log_timing` |
| Cloud LLM API timing/logging | Resolved | `src/noteflow/infrastructure/summarization/cloud_provider.py` uses `log_timing` |
| Google Calendar request timing | Resolved | `src/noteflow/infrastructure/calendar/google_adapter.py` uses `log_timing` |
| OAuth refresh timing | Resolved | `src/noteflow/infrastructure/calendar/oauth_manager.py` uses `log_timing` |
| Webhook delivery start/finish | Resolved | `src/noteflow/infrastructure/webhooks/executor.py` info logs |
| Database engine + migrations | Resolved | `src/noteflow/infrastructure/persistence/database.py` info logs |
| Diarization full timing | Resolved | `src/noteflow/infrastructure/diarization/engine.py` uses `log_timing` |
| Diarization job timeout logging | Resolved | `src/noteflow/grpc/_mixins/diarization/_status.py` |
| Meeting state transitions | Resolved | `src/noteflow/application/services/meeting_service.py` |
| Streaming cleanup | Resolved | `src/noteflow/grpc/_mixins/streaming/_cleanup.py` |

### NOT IMPLEMENTED

| Component | Status | Notes |
|-----------|--------|-------|
| NER warmup timing/logs | Not implemented | `src/noteflow/application/services/ner_service.py` uses `run_in_executor` without logs |
| ASR `transcribe_async` timing | Not implemented | `src/noteflow/infrastructure/asr/engine.py` lacks duration/RTF logs |
| Segmenter state transitions | Not implemented | `src/noteflow/infrastructure/asr/segmenter.py` has no transition logs |
| Silent UUID parsing (workspace) | Not implemented | `src/noteflow/grpc/_mixins/meeting.py` returns None on ValueError |
| Silent meeting-id parsing | Not implemented | `src/noteflow/grpc/_mixins/converters/_id_parsing.py` returns None on ValueError |
| Silent calendar datetime parsing | Not implemented | `src/noteflow/infrastructure/triggers/calendar.py` returns None on ValueError |
| Settings fallback logging | Not implemented | `_get_llm_settings`, `_get_webhook_settings`, `diarization_job_ttl_seconds` |
| gRPC client stub missing logs | Not implemented | `src/noteflow/grpc/_client_mixins/*.py` return None silently |
| Rust gRPC connection tracing | Not implemented | `client/src-tauri/src/grpc/client/core.rs` has no start/finish timing |

**Downstream impact**: Runtime visibility gaps for user-facing latency, failure diagnosis, and client connection issues.

---

## Objective

Close remaining high-impact logging gaps for runtime operations and input validation to reduce debugging time and improve failure diagnosis across Python gRPC services and the Tauri client.

---

## Key Decisions

| Decision | Choice | Rationale |
|----------|--------|-----------|
| **Timing utility** | Use `log_timing` | Consistent duration metrics and structured fields |
| **Invalid input logging** | WARN level with redaction | Catch client errors without leaking sensitive data |
| **Stub-missing logging** | Rate-limited (once per client instance) | Avoid log spam while preserving visibility |
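The once-per-instance policy can be as simple as a boolean flag on the mixin. A minimal sketch under that assumption (the class and `_stub` attribute mirror the mixin layout described in this sprint but are illustrative, not the actual NoteFlow code):

```python
import logging

logger = logging.getLogger(__name__)


class StubGuardMixin:
    """Illustrative mixin: warn once per instance when the gRPC stub is missing."""

    _stub = None  # would normally be populated by connect()
    _warned_missing_stub = False

    def _require_stub(self):
        if self._stub is None:
            if not self._warned_missing_stub:
                # WARN on the first occurrence only; later calls stay silent.
                logger.warning(
                    "gRPC stub unavailable; client not connected "
                    "(further occurrences suppressed for this instance)"
                )
                self._warned_missing_stub = True
            return None
        return self._stub
```

Each mixin method then calls `_require_stub()` instead of testing `self._stub` directly, so the silent-return behavior is preserved while the first failure becomes visible.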
---

## What Already Exists

| Asset | Location | Implication |
|-------|----------|-------------|
| `log_timing` helper | `src/noteflow/infrastructure/logging/timing.py` | Use for executor + network timing |
| `log_state_transition` | `src/noteflow/infrastructure/logging/transitions.py` | Reuse for state-machine transitions |
| Existing `log_timing` usage | `ollama_provider.py`, `cloud_provider.py`, `google_adapter.py` | Follow established patterns |
---

## Scope

| Task | Effort | Notes |
|------|--------|-------|
| **Application Layer** | | |
| Add NER warmup + extraction timing logs | S | Use `log_timing` around `run_in_executor` |
| **Infrastructure Layer** | | |
| Add ASR `transcribe_async` duration + RTF logging | M | Include audio duration and model size |
| Add segmenter state transition logs | S | Use `log_state_transition` or structured info logs |
| Add settings fallback warning logs | S | `_get_llm_settings`, `_get_webhook_settings`, `diarization_job_ttl_seconds` |
| **API Layer** | | |
| Log invalid workspace UUID parsing (WARN + redaction) | S | `src/noteflow/grpc/_mixins/meeting.py` |
| Log invalid meeting_id parsing (WARN + redaction) | S | `src/noteflow/grpc/_mixins/converters/_id_parsing.py` |
| Log calendar datetime parse failures (WARN + redaction) | S | `src/noteflow/infrastructure/triggers/calendar.py` |
| gRPC client mixins log missing stub (rate-limited) | S | `src/noteflow/grpc/_client_mixins/*.py` |
| **Client Layer** | | |
| Add tracing for gRPC connect attempts | S | `client/src-tauri/src/grpc/client/core.rs` |

**Total Effort**: M (2-4 hours)
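The ASR row above asks for duration plus real-time factor (RTF). RTF is processing time divided by audio length, so the log only needs those two numbers. A sketch under the assumption that the segment length is known before transcription starts (the function and field names are illustrative):

```python
import logging
import time

logger = logging.getLogger(__name__)


def log_transcription_timing(audio_seconds: float, transcribe):
    """Run `transcribe` and log its duration and real-time factor (RTF).

    RTF = processing_time / audio_duration; values below 1.0 mean
    faster-than-real-time transcription.
    """
    start = time.perf_counter()
    results = transcribe()
    elapsed = time.perf_counter() - start
    rtf = elapsed / audio_seconds if audio_seconds > 0 else float("inf")
    logger.info(
        "transcription finished: audio=%.2fs processing=%.2fs rtf=%.2f",
        audio_seconds, elapsed, rtf,
    )
    return results, rtf
```

In the real `transcribe_async`, the same measurement would wrap the `run_in_executor` call.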
---

## Deliverables

### Backend

**Application Layer**:

- [ ] `src/noteflow/application/services/ner_service.py` — add warmup/extraction timing logs

**Infrastructure Layer**:

- [ ] `src/noteflow/infrastructure/asr/engine.py` — log transcription duration + RTF
- [ ] `src/noteflow/infrastructure/asr/segmenter.py` — log state transitions
- [ ] `src/noteflow/infrastructure/summarization/cloud_provider.py` — log settings fallback
- [ ] `src/noteflow/infrastructure/webhooks/executor.py` — log settings fallback

**API Layer**:

- [ ] `src/noteflow/grpc/_mixins/meeting.py` — log invalid workspace UUID parse (WARN + redaction)
- [ ] `src/noteflow/grpc/_mixins/converters/_id_parsing.py` — log invalid meeting_id parse (WARN + redaction)
- [ ] `src/noteflow/infrastructure/triggers/calendar.py` — log datetime parse errors (WARN + redaction)
- [ ] `src/noteflow/grpc/_client_mixins/*.py` — log missing stub (rate-limited)
- [ ] `src/noteflow/grpc/_mixins/diarization_job.py` — log settings fallback
### Client

- [ ] `client/src-tauri/src/grpc/client/core.rs` — log connection attempt duration + endpoint

---

## Test Strategy

### Core test cases

- **Application**: `caplog` validates NER warmup logs appear when the lazy-load path is taken
- **Infrastructure**: `caplog` validates ASR timing log fields include duration and audio length
- **API**: invalid UUID parsing emits a warning and aborts/returns safely
- **Client**: basic unit test or log snapshot for connection start/failure paths

---

## Quality Gates

- [ ] Added logs use structured fields and follow existing logging patterns
- [ ] No new `# type: ignore` or `Any` introduced
- [ ] Targeted tests for new logging paths where practical
- [ ] `ruff check` + `mypy` pass (backend)
- [ ] `npm run lint:rs` passes (client)

---

## Post-Sprint

- [ ] Evaluate whether logging should be sampled for high-frequency segmenter transitions
- [ ] Consider centralized log suppression for repeated invalid client inputs
docs/sprints/sprint_logging_gap_remediation_p2.md (new file, 144 lines)
@@ -0,0 +1,144 @@
# Sprint: Logging Gap Remediation (P2 - Persistence/Exports)

> **Size**: L | **Owner**: Platform | **Prerequisites**: P1 logging gaps resolved
> **Phase**: Observability - Data & Lifecycle

---

## Open Issues & Prerequisites

> ⚠️ **Review Date**: 2026-01-03 — Verification complete; scope needs prioritization.

### Blocking Issues

| ID | Issue | Status | Resolution |
|----|-------|--------|------------|
| **B1** | Log volume for repository CRUD operations | Pending | Decide sampling/level policy |
| **B2** | Sensitive data in repository logs | Pending | Redaction and field allowlist |

### Design Gaps to Address

| ID | Gap | Resolution |
|----|-----|------------|
| G1 | Consistent DB timing strategy across BaseRepository and UoW | Add `log_timing` helpers or per-method timing |
| G2 | Export logs should include size without dumping content | Log byte count + segment count only |

### Prerequisite Verification

| Prerequisite | Status | Notes |
|--------------|--------|-------|
| Logging helpers available | ✅ | `log_timing`, `get_logger` |
| State transition logger | ✅ | `log_state_transition` |

---

## Validation Status (2026-01-03)

### PARTIALLY IMPLEMENTED

| Component | Status | Notes |
|-----------|--------|-------|
| DB migrations lifecycle logs | Partial | Migration start/end logged; repo/UoW still silent |
| Audio writer open logging | Partial | Open/flush errors logged, but thread lifecycle unlogged |

### NOT IMPLEMENTED

| Component | Status | Notes |
|-----------|--------|-------|
| BaseRepository query timing | Not implemented | `src/noteflow/infrastructure/persistence/repositories/_base.py` |
| UnitOfWork lifecycle logs | Not implemented | `src/noteflow/infrastructure/persistence/unit_of_work.py` |
| Repository CRUD logging | Not implemented | `meeting_repo.py`, `segment_repo.py`, `summary_repo.py`, etc. |
| Asset deletion no-op logging | Not implemented | `src/noteflow/infrastructure/persistence/repositories/asset_repo.py` |
| Export timing/logging | Not implemented | `pdf.py`, `markdown.py`, `html.py` |
| Diarization session close log level | Not implemented | `src/noteflow/infrastructure/diarization/session.py` uses debug |
| Background task lifecycle logs | Not implemented | `src/noteflow/grpc/_mixins/diarization/_jobs.py` task creation missing |

**Downstream impact**: Limited visibility into DB performance, export latency, and lifecycle cleanup.

---

## Objective

Add structured logging for persistence, export, and lifecycle operations so DB performance issues and long-running exports are diagnosable without ad-hoc debugging.

---

## Key Decisions

| Decision | Choice | Rationale |
|----------|--------|-----------|
| **Repository logging level** | INFO for mutations, DEBUG for reads | Avoid log noise while capturing state changes |
| **Timing strategy** | `log_timing` around DB write batches | Consistent duration metrics without per-row spam |
| **Export logging** | Log sizes and durations only | Avoid dumping user content |
---

## What Already Exists

| Asset | Location | Implication |
|-------|----------|-------------|
| Migration logging | `src/noteflow/infrastructure/persistence/database.py` | Reuse for DB lifecycle logs |
| Log helpers | `src/noteflow/infrastructure/logging/*` | Standardize on structured logging |

---

## Scope

| Task | Effort | Notes |
|------|--------|-------|
| **Infrastructure Layer** | | |
| Add BaseRepository timing wrappers | M | `_execute_*` methods emit duration |
| Add UnitOfWork lifecycle logs | S | `__aenter__`/commit/rollback/exit |
| Add CRUD mutation logs in repositories | L | Create/Update/Delete summary logs |
| Add asset deletion no-op log | S | Log when directory is missing |
| Add export timing logs | M | PDF/Markdown/HTML export duration + size |
| Promote diarization session close to INFO | S | `session.py` |
| Log diarization job task creation | S | `grpc/_mixins/diarization/_jobs.py` |
| Add audio flush thread lifecycle logs | S | `infrastructure/audio/writer.py` |

**Total Effort**: L (4-8 hours)
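The UnitOfWork row covers `__aenter__`, commit, rollback, and exit; each hook is a one-line log. A stand-in sketch (the real UnitOfWork wraps SQLAlchemy sessions; the class here is illustrative and carries no session):

```python
import logging

logger = logging.getLogger(__name__)


class LoggedUnitOfWork:
    """Stand-in UnitOfWork that logs its lifecycle events."""

    async def __aenter__(self):
        logger.debug("uow: session opened")
        return self

    async def __aexit__(self, exc_type, exc, tb):
        if exc_type is not None:
            await self.rollback()
        logger.debug("uow: session closed")
        return False  # never swallow exceptions

    async def commit(self) -> None:
        logger.info("uow: commit")

    async def rollback(self) -> None:
        logger.warning("uow: rollback")
```

Rollback at WARNING makes failed transactions stand out in otherwise quiet logs, which is the point of the lifecycle rows above.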
---

## Deliverables

### Backend

**Infrastructure Layer**:

- [ ] `src/noteflow/infrastructure/persistence/repositories/_base.py` — timing logs for DB operations
- [ ] `src/noteflow/infrastructure/persistence/unit_of_work.py` — session/commit/rollback logs
- [ ] `src/noteflow/infrastructure/persistence/repositories/*_repo.py` — mutation logging
- [ ] `src/noteflow/infrastructure/persistence/repositories/asset_repo.py` — no-op delete log
- [ ] `src/noteflow/infrastructure/export/pdf.py` — duration + byte-size log
- [ ] `src/noteflow/infrastructure/export/markdown.py` — export count log
- [ ] `src/noteflow/infrastructure/export/html.py` — export count log
- [ ] `src/noteflow/infrastructure/diarization/session.py` — info-level close log
- [ ] `src/noteflow/grpc/_mixins/diarization/_jobs.py` — background task creation log
- [ ] `src/noteflow/infrastructure/audio/writer.py` — flush thread lifecycle logs
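Per design gap G2, export logs should carry byte and segment counts, never content. A sketch of that pattern (the `render` callable stands in for the PDF/Markdown/HTML renderers and is illustrative):

```python
import logging
import time

logger = logging.getLogger(__name__)


def export_with_logging(fmt: str, segments: list[str], render) -> bytes:
    """Render an export and log duration plus output size -- never the content."""
    start = time.perf_counter()
    payload: bytes = render(segments)
    duration_ms = (time.perf_counter() - start) * 1000.0
    logger.info(
        "export finished: format=%s segments=%d bytes=%d duration=%.1fms",
        fmt, len(segments), len(payload), duration_ms,
    )
    return payload
```

Only counts and durations reach the log record, so the B2 redaction concerns for repositories do not apply here.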
---

## Test Strategy

### Core test cases

- **Repositories**: `caplog` validates mutation logging for create/update/delete
- **UnitOfWork**: log emitted on commit/rollback paths
- **Exports**: ensure logs include duration and output size (bytes/segments)
- **Lifecycle**: diarization session close emits info log

---

## Quality Gates

- [ ] Logging includes structured fields and avoids payload content
- [ ] No new `# type: ignore` or `Any` introduced
- [ ] `pytest` passes for touched modules
- [ ] `ruff check` + `mypy` pass

---

## Post-Sprint

- [ ] Assess performance impact of repo timing logs
- [ ] Consider opt-in logging for high-volume read paths
docs/triage.md (612 lines)
@@ -1,165 +1,555 @@
According to a document from **January 3, 2026** (your `repomix-output.md` snapshot), there are several concrete places where your **Python gRPC backend (`src/`)** and your **frontend (React `client/src` + Tauri `client/src-tauri`)** can be “correct but not actually wired together,” which would exactly match your symptom: *the Docker backend never shows logs that look like they were triggered by the Tauri client*.

# Technical Debt Triage

Below is a “where it breaks / why / what to fix” checklist, with the most likely root causes first.

This document tracks known issues, technical debt, and areas needing improvement.

---

## 1) Your Tauri recording pipeline can start without ever connecting to the backend

## Insufficient Logging - Comprehensive Audit

### What I see

**Discovered:** 2025-12-31
**Last Verified:** 2026-01-03
**Sprint Docs:**

- `docs/sprints/sprint_logging_gap_remediation_p1.md`
- `docs/sprints/sprint_logging_gap_remediation_p2.md`

* The Tauri command `start_recording` calls `stream_manager.start_streaming(meeting_id, app)` but does **not** ensure a gRPC connection exists before doing so. There is no `grpc_client.connect(...)` in that visible code path.
* `StreamManager::setup_streaming` immediately does `let mut grpc_client = self.client.get_client()?;` and then calls `.stream_transcription(...)` on it.
* `GrpcClient::get_client()` returns `Error::NotConnected` if there is no internal client/channel.

### Why that explains “no backend logs”

If the React side doesn’t explicitly call the Tauri `connect` command *before* starting a recording, the streaming setup fails locally with `NotConnected` and **no TCP connection is ever attempted**, so your Docker backend sees nothing (and therefore logs nothing).

### Fix

Two options (ideally do both #1 and #2):

1. **Frontend guarantee:** In the UI flow where you start recording, force an `await api.connect(serverUrl)` (or connect-on-startup), and block the recording UI until connected.

2. **Backend guarantee in Rust:** Make `start_recording` auto-connect if needed:

   * If `!state.grpc_client.is_connected()`, call `state.grpc_client.connect(None).await?` before `start_streaming`.

Also ensure you surface the Tauri connect error event in the UI (see #4). The connect command emits error info when it fails.

**Impact:** Caused ~1 hour of debugging when the Ollama 120s timeout appeared as a migration hang
**Total Issues Found:** 100+

---

## 2) You have mock/simulated paths that can completely bypass the backend

## 1. Network/External Service Connections

### What I see

### 1.1 CRITICAL: Ollama Availability Check - Silent 120s Timeout

* `initializeAPI` imports both `cachedAPI` and `mockAPI`.
* Your preferences model includes `simulate_transcription?: boolean`.
* There is a `MockTranscriptionStream` whose `send(_chunk)` is literally a no-op (so audio you “send” goes nowhere).

**File:** `src/noteflow/infrastructure/summarization/ollama_provider.py:101-115`

### Why that explains “no backend logs”

```python
@property
def is_available(self) -> bool:
    try:
        client = self._get_client()
        client.list()  # Silent 120-second timeout!
        return True
    except (ConnectionError, TimeoutError, ...):
        return False
```

If your Tauri build ever selects `mockAPI` (or a simulated transcription mode) instead of the real Tauri→Rust→gRPC flow, the UI can appear to function while **never touching the Docker backend**.

**Status (2026-01-03):** Resolved — `log_timing` added around availability check.

### Fix

* Make the active API implementation extremely explicit at runtime (log it in the UI console + show a dev badge).
* In Tauri builds, ensure `initializeAPI` never picks `mockAPI` unless you intentionally enable it.
* Default `simulate_transcription` to `false` in production builds, and make it obvious when enabled.

**Fix:**

```python
@property
def is_available(self) -> bool:
    try:
        logger.info("Checking Ollama availability at %s (timeout: %.0fs)...", self._host, self._timeout)
        client = self._get_client()
        client.list()
        logger.info("Ollama server is available")
        return True
    except TimeoutError:
        logger.warning("Ollama server timeout at %s after %.0fs", self._host, self._timeout)
        return False
    except (ConnectionError, RuntimeError, OSError) as e:
        logger.debug("Ollama server unreachable at %s: %s", self._host, e)
        return False
```

---

## 3) Your “source of truth” for server host/port is scattered and easy to mis-point

### 1.2 Cloud Summarization API Calls - No Request Logging

### What I see (multiple defaults)

**File:** `src/noteflow/infrastructure/summarization/cloud_provider.py:238-282`

Backend/Python side:

```python
def _call_openai(self, user_prompt: str, system_prompt: str) -> tuple[str, int | None]:
    try:
        response = client.chat.completions.create(...)  # No timing logged
    except TimeoutError as e:
        raise SummarizationTimeoutError(...)  # No duration logged
```

* Default host/port are `"localhost"` and `50051`.
* The server binds to an IPv6-any address: `address = f"[::]:{self._port}"` (and `add_insecure_port(address)`).

**Status (2026-01-03):** Resolved — `log_timing` wraps OpenAI/Anthropic calls and response logging added.

Tauri/Rust side:

* Default saved preferences are `server_host: "localhost"`, `server_port: "50051"`.
* There’s also an env-var override `NOTEFLOW_SERVER_ADDRESS`.
* There is a test that expects the normalized default URL `http://localhost:50051`.

### Where this breaks in Docker + desktop apps

* **Docker port publish:** `localhost:50051` only works if your container publishes that port to the host (`-p 50051:50051`).
* **IPv6 “localhost” gotcha:** Many systems resolve `localhost` to `::1` first. Docker port publishing is not consistently reachable over IPv6 on all platforms/configs. Your backend binding is IPv6-friendly inside the container, but the *host-side published port* may still be IPv4-only depending on OS/Docker. (This is a common “it works in one place but not another” cause.)

### Fix

* For debugging, set the client host explicitly to **`127.0.0.1`** instead of `localhost` (or try both).
* Consider changing the Python server bind from `[::]:port` to `0.0.0.0:port` for simpler cross-platform Docker port publishing.
* Make the Tauri UI show the exact URL it is attempting and whether it’s coming from preferences vs the env var.

**Fix:** Add `logger.info("Initiating OpenAI API call: model=%s", self._model)` before the call, and log the duration after.
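A sketch of what that before/after logging can look like around a blocking client call (the `model` argument mirrors `self._model` from the snippet above; the wrapper itself is illustrative, not the shipped fix):

```python
import logging
import time

logger = logging.getLogger(__name__)


def timed_llm_call(model: str, call):
    """Log start, duration, and failure duration for a blocking LLM API call."""
    logger.info("Initiating LLM API call: model=%s", model)
    start = time.perf_counter()
    try:
        response = call()
    except Exception:
        logger.warning(
            "LLM API call failed after %.2fs: model=%s",
            time.perf_counter() - start, model,
        )
        raise
    logger.info(
        "LLM API call finished in %.2fs: model=%s",
        time.perf_counter() - start, model,
    )
    return response
```

Logging the elapsed time on the failure path is the important part: a timeout then shows up as "failed after 120.00s" instead of a silent hang.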
---

## 4) Connection and streaming events must match *exactly*, or the UI won’t reflect reality

### 1.3 Google Calendar API - No Request Logging

### What I see

**File:** `src/noteflow/infrastructure/calendar/google_adapter.py:76-91`

* The connect command emits a connection change event and emits error info on failure.
* Streaming emits events using **string literals** `"transcript_update"` and `"stream_health"`.
* The JS side has a Tauri adapter that imports `TauriCommands` and `TauriEvents` from `tauri-constants`.
* The cached adapter references a `startTauriEventBridge` and checks `isTauriEnvironment`.

```python
async with httpx.AsyncClient() as client:
    response = await client.get(url, params=params, headers=headers)  # No logging
```

### Why that matters

**Status (2026-01-03):** Resolved — request timing logged via `log_timing`.

Even if gRPC is working, if your frontend isn’t listening to the *actual* event names being emitted, you’ll see “nothing happens,” and you’ll likely never trigger downstream actions that depend on “connected/streaming” state.

### Fix

* Start the Tauri event bridge **immediately** on app boot in Tauri mode (not lazily after some UI action).
* Put all event names in one canonical place and generate them (avoid “string literal in Rust” vs “enum in TS”).

**Fix:** Log request start, duration, and response status.

---

## 5) Some “expected backend activity” won’t exist because identity/workspace is local-only

### 1.4 OAuth Token Refresh - Missing Timing

### What I see

**File:** `src/noteflow/infrastructure/calendar/oauth_manager.py:211-222`

Your Tauri identity/workspace commands do not call the backend; they read/write local state:

```python
async def refresh_tokens(...) -> OAuthTokens:
    response = await client.post(token_url, data=data)  # No timing
```

* `get_current_user`, `list_workspaces`, `switch_workspace` are local to Tauri state.

### Why that matters

If you’re using UI actions like “switch workspace” as your “did the backend get hit?” indicator: it won’t. Those actions never touch Docker.

**Status (2026-01-03):** Resolved — refresh timing logged via `log_timing`.

---

## 6) Backend identity interceptor expects metadata your Rust client may not send

### 1.5 Webhook Delivery - Missing Initial Request Log

### What I see

**File:** `src/noteflow/infrastructure/webhooks/executor.py:107-237`

Backend interceptor looks for metadata keys like:

```python
async def deliver(...) -> WebhookDelivery:
    for attempt in range(1, max_retries + 1):
        _logger.debug("Webhook delivery attempt %d/%d", attempt, max_retries)  # DEBUG only
```

* `"x-user-id"`, `"x-workspace-id"`, `"x-request-id"`

### Why it matters

If your backend’s logging or routing relies on those headers, and the Tauri client never attaches them, you could end up with:

* missing context in logs, or
* the backend rejecting calls (depending on how the interceptor/handlers behave beyond the snippet).

### Fix

If you intend to use those headers:

* Add a tonic interceptor on the Rust client to inject them into every request (from the local identity/workspace state).

**Status (2026-01-03):** Resolved — info log at delivery start + completion.

---

## 7) You may be expecting “per-RPC logs” that the backend simply doesn’t emit

### 1.6 Database Connection Creation - No Logging

From the Python server snippet we can see it binds and starts, but nothing shown indicates a “log each RPC method call” interceptor.

**File:** `src/noteflow/infrastructure/persistence/database.py:85-116`

So it’s possible your Tauri app *is* hitting the backend and you still see “no logs,” unless your service methods log.

```python
def create_engine_and_session_factory(...):
    engine = sa_create_async_engine(database_url, pool_size=pool_size, ...)
    # No logging of connection parameters
```

### Fix

Temporarily add a server interceptor that logs:

* method name
* peer address
* status code
* duration

That’s the fastest way to confirm traffic.
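The interceptor itself depends on the gRPC runtime, but the log line it should produce can be sketched as a plain wrapper around a handler (the method and peer strings here are illustrative; a real `grpc.aio` server interceptor would apply the same logic from its service-interception hook):

```python
import logging
import time

logger = logging.getLogger(__name__)


def log_rpc(method: str, peer: str, handler):
    """Invoke an RPC handler and log method, peer, outcome, and duration."""
    start = time.perf_counter()
    status = "OK"
    try:
        return handler()
    except Exception:
        status = "ERROR"
        raise
    finally:
        logger.info(
            "rpc %s peer=%s status=%s duration=%.1fms",
            method, peer, status, (time.perf_counter() - start) * 1000.0,
        )
```

One line per RPC with method, peer, status, and duration is enough to confirm (or rule out) traffic from the Tauri client within minutes.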
|
||||
**Status (2026-01-03):** Resolved — engine creation logged with masked URL.
|
||||
|
||||
---
|
||||
|
||||
## The “most likely” single root cause given your symptom
|
||||
### 1.7 Rust gRPC Client Connection - No Tracing
|
||||
|
||||
If I had to bet based strictly on the code: **you’re starting recording/streaming without ever calling `connect` first**, and because `get_client()` hard-fails with `NotConnected`, the backend never sees any traffic.
|
||||
**File:** `client/src-tauri/src/grpc/client/core.rs:174-197`
|
||||
|
||||
The second most likely is: **your Tauri app is using `mockAPI` / simulated transcription** (or something like `cachedAPI` in a mode that doesn’t actually call into the Rust commands), so there’s simply nothing to log.
|
||||
```rust
|
||||
async fn perform_connect(&self) -> Result<ServerInfo> {
|
||||
let channel = endpoint.connect().await // No tracing before/after
|
||||
.map_err(|e| Error::Connection(...))?;
|
||||
```
|
||||
|
||||
**Status (2026-01-03):** Not implemented — see P1 sprint.
|
||||
|
||||
---
|
||||
|
||||
If you want, I can also produce a **concrete “call graph”** from “User clicks Start Recording” → TS API method → Tauri invoke command string → Rust command → gRPC RPC name, but your repomix snapshot elides large bodies of the TS adapters (so I can’t see the actual `invoke(...)` command strings). The big wiring issues above are still valid and are enough to explain “no backend logs” by themselves.
|
||||
## 2. Blocking/Long-Running Operations
|
||||
|
||||
### 2.1 NER Service - Silent Model Warmup
|
||||
|
||||
**File:** `src/noteflow/application/services/ner_service.py:185-197`
|
||||
|
||||
```python
|
||||
await loop.run_in_executor(
|
||||
None,
|
||||
lambda: self._ner_engine.extract("warm up"), # No logging
|
||||
)
|
||||
```
|
||||
|
||||
**Status (2026-01-03):** Not implemented — see P1 sprint.
|
||||
|
||||
---
|
||||
|
||||
### 2.2 ASR Transcription - No Duration Logging
|
||||
|
||||
**File:** `src/noteflow/infrastructure/asr/engine.py:156-177`
|
||||
|
||||
```python
|
||||
async def transcribe_async(...) -> list[AsrResult]:
|
||||
return await loop.run_in_executor(None, ...) # No timing
|
||||
```
|
||||
|
||||
**Status (2026-01-03):** Not implemented — see P1 sprint.
|
||||
|
||||
---
|
||||
|
||||
### 2.3 Diarization - Missing Blocking Operation Logging
|
||||
|
||||
**File:** `src/noteflow/infrastructure/diarization/engine.py:299-347`
|
||||
|
||||
```python
|
||||
def diarize_full(...) -> Sequence[SpeakerTurn]:
|
||||
logger.debug("Running offline diarization on %.2fs audio", ...) # DEBUG only
|
||||
annotation = self._offline_pipeline(waveform, ...) # No end logging
|
||||
```
|
||||
|
||||
**Status (2026-01-03):** Resolved — `log_timing` wraps diarization.
|
||||
|
||||
---
|
||||
|
||||
### 2.4 Diarization Job Timeout - No Pre-Timeout Context
|
||||
|
||||
**File:** `src/noteflow/grpc/_mixins/diarization/_jobs.py:173-186`
|
||||
|
||||
```python
|
||||
async with asyncio.timeout(DIARIZATION_TIMEOUT_SECONDS):
|
||||
updated_count = await self.refine_speaker_diarization(...)
|
||||
# No logging of timeout value before entering block
|
||||
```
|
||||
|
||||
**Status (2026-01-03):** Resolved — timeout value logged in job handler.
---

## 3. Error Handling - Silent Failures

### 3.1 Silent ValueError Returns

**Files:**

- `src/noteflow/grpc/_mixins/meeting.py:64-67` - workspace UUID parse
- `src/noteflow/grpc/_mixins/converters.py:76-79` - meeting ID parse
- `src/noteflow/grpc/_mixins/diarization/_jobs.py:84-87` - meeting ID validation
- `src/noteflow/infrastructure/triggers/calendar.py:141-144` - datetime parse

```python
try:
    UUID(workspace_id)
except ValueError:
    return None  # Silent failure, no logging
```

**Status (2026-01-03):** Not implemented — add WARN + redaction (P1 sprint).
---

### 3.2 Silent Settings Fallbacks

**Files:**

- `src/noteflow/infrastructure/webhooks/executor.py:56-65`
- `src/noteflow/infrastructure/summarization/ollama_provider.py:44-48`
- `src/noteflow/infrastructure/summarization/cloud_provider.py:48-52`
- `src/noteflow/grpc/_mixins/diarization_job.py:63-66`

```python
except Exception:
    return DEFAULT_VALUES  # No logging that fallback occurred
```

**Status (2026-01-03):** Not implemented — add warning logs (P1 sprint).
---

### 3.3 gRPC Client Stub Unavailable - Silent Returns

**Files:** `src/noteflow/grpc/_client_mixins/*.py` (multiple locations)

```python
if not self._stub:
    return None  # No logging of connection issue
```

**Status (2026-01-03):** Not implemented — add rate-limited warn log (P1 sprint).
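Per design gap G1 (noise in client mixins), a once-per-session variant as a sketch; the mixin and attribute names are hypothetical.

```python
import logging

logger = logging.getLogger("noteflow.grpc.client")


class StubGuardMixin:
    """Warn once per client session when the stub is missing, then stay quiet."""

    _stub = None
    _warned_no_stub = False

    def _require_stub(self):
        if self._stub is None:
            if not self._warned_no_stub:
                # One warning per session avoids flooding logs when
                # every call on a disconnected client returns None.
                logger.warning("gRPC stub unavailable; calls will return None")
                self._warned_no_stub = True
            return None
        return self._stub


guard = StubGuardMixin()
first = guard._require_stub()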
---

## 4. State Transitions and Lifecycle

### 4.1 Meeting State Changes Not Logged

**Status (2026-01-03):** Resolved — meeting service logs transitions.

---

### 4.2 Diarization Job State - Missing Previous State

**File:** `src/noteflow/grpc/_mixins/diarization/_jobs.py:147-171`

```python
await repo.diarization_jobs.update_status(job_id, JOB_STATUS_RUNNING, ...)
```

**Status (2026-01-03):** Resolved — state transitions logged.

---

### 4.3 Segmenter State Machine - No Transition Logging

**File:** `src/noteflow/infrastructure/asr/segmenter.py:121-127`

```python
if is_speech:
    self._state = SegmenterState.SPEECH  # No logging of IDLE -> SPEECH
```

**Status (2026-01-03):** Not implemented — see P1 sprint.
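A sketch of transition logging that stays quiet on per-frame no-ops: log only when the state actually changes, at DEBUG, including the previous state. The `feed` method is a simplified stand-in for the real segmenter's per-frame handling.

```python
import enum
import logging

logger = logging.getLogger("noteflow.asr.segmenter")


class SegmenterState(enum.Enum):
    IDLE = "idle"
    SPEECH = "speech"


class Segmenter:
    def __init__(self) -> None:
        self._state = SegmenterState.IDLE

    def _transition(self, new_state: SegmenterState) -> None:
        if new_state is not self._state:
            # DEBUG keeps frame-rate noise out of INFO logs; including
            # the old state makes transitions reconstructable.
            logger.debug("Segmenter state: %s -> %s",
                         self._state.value, new_state.value)
            self._state = new_state

    def feed(self, is_speech: bool) -> SegmenterState:
        self._transition(SegmenterState.SPEECH if is_speech
                         else SegmenterState.IDLE)
        return self._state


seg = Segmenter()
```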
---

### 4.4 Stream Cleanup - No Logging

**File:** `src/noteflow/grpc/_mixins/streaming/_cleanup.py:14-34`

```python
def cleanup_stream_resources(host, meeting_id):
    # Multiple cleanup operations, no completion log
    host._active_streams.discard(meeting_id)
```

---

### 4.5 Diarization Session Close - DEBUG Only

**File:** `src/noteflow/infrastructure/diarization/session.py:145-159`

**Status (2026-01-03):** Not implemented — see P2 sprint.

```python
def close(self) -> None:
    logger.debug("Session %s closed", self.meeting_id)  # Should be INFO
```

---

### 4.6 Background Task Spawning - No Task ID

**File:** `src/noteflow/grpc/_mixins/diarization/_jobs.py:130-132`

**Status (2026-01-03):** Not implemented — see P2 sprint.

```python
task = asyncio.create_task(self._run_diarization_job(job_id, num_speakers))
self._diarization_tasks[job_id] = task  # No logging of task creation
```

---

### 4.7 Audio Flush Thread - No Start/End Logging

**File:** `src/noteflow/infrastructure/audio/writer.py:135-157`

**Status (2026-01-03):** Not implemented — see P2 sprint.

```python
self._flush_thread.start()  # No logging
# ...
def _periodic_flush_loop(self):
    while not self._stop_flush.wait(...):
        # No entry/exit logging for loop
```

---

## 5. Database Operations

### 5.1 BaseRepository - No Query Timing

**File:** `src/noteflow/infrastructure/persistence/repositories/_base.py`

**Status (2026-01-03):** Not implemented — see P2 sprint.

All methods (`_execute_scalar`, `_execute_scalars`, `_add_and_flush`, `_delete_and_flush`, `_add_all_and_flush`, `_execute_update`, `_execute_delete`) have no timing or logging.
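One way to cover all seven methods without per-method boilerplate is a shared timing context manager with a slow-query threshold: DEBUG for routine calls, WARNING when a call is slow. The names and the 100 ms threshold below are illustrative assumptions, not the project's actual helper.

```python
import logging
import time
from contextlib import contextmanager

logger = logging.getLogger("noteflow.persistence")

SLOW_QUERY_MS = 100.0  # illustrative threshold


@contextmanager
def timed_query(operation: str, table: str):
    """Time a repository call; DEBUG normally, WARNING when slow."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        level = logging.WARNING if elapsed_ms >= SLOW_QUERY_MS else logging.DEBUG
        logger.log(level, "db %s %s: duration_ms=%.2f",
                   operation, table, elapsed_ms)


# Usage inside a repository method:
with timed_query("select", "meetings"):
    rows = [1, 2, 3]  # stand-in for self._session.execute(...)
```

Routing everything through one helper keeps the DEBUG-by-default level easy to change in one place if the volume proves noisy.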
---

### 5.2 Unit of Work - No Transaction Logging

**File:** `src/noteflow/infrastructure/persistence/unit_of_work.py:220-296`

**Status (2026-01-03):** Not implemented — see P2 sprint.

---

### 5.3 Repository CRUD Operations - No Logging

**Files:**

- `meeting_repo.py` - create, update, delete, list_all
- `segment_repo.py` - add_batch, update_embedding, update_speaker
- `summary_repo.py` - save (upsert with cascades)
- `diarization_job_repo.py` - create, mark_running_as_failed, prune_completed
- `entity_repo.py` - save_batch, delete_by_meeting
- `webhook_repo.py` - create, add_delivery
- `integration_repo.py` - set_secrets
- `usage_event_repo.py` - add_batch, delete_before
- `preferences_repo.py` - set_bulk

**Status (2026-01-03):** Not implemented — see P2 sprint.

---

## 6. File System Operations

### 6.1 Meeting Directory Creation - Not Logged

**File:** `src/noteflow/infrastructure/audio/writer.py:109-111`

**Status (2026-01-03):** Resolved — audio writer open logs meeting and dir.

```python
self._meeting_dir.mkdir(parents=True, exist_ok=True)  # No logging
```

---

### 6.2 Manifest Read/Write - Not Logged

**File:** `src/noteflow/infrastructure/audio/writer.py:122-123`

**Status (2026-01-03):** Partially implemented — open logged, manifest write still unlogged (P2 sprint).

```python
manifest_path.write_text(json.dumps(manifest, indent=2))  # No logging
```

---

### 6.3 Asset Deletion - Silent No-Op

**File:** `src/noteflow/infrastructure/persistence/repositories/asset_repo.py:49-51`

**Status (2026-01-03):** Not implemented — see P2 sprint.

```python
if meeting_dir.exists():
    shutil.rmtree(meeting_dir)
    logger.info("Deleted meeting assets at %s", meeting_dir)
# No log when directory doesn't exist
```
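The fix is a logged else branch plus a boolean return so callers can also distinguish the no-op; the function shape below is a sketch, not the repository's actual signature.

```python
import logging
import shutil
import tempfile
from pathlib import Path

logger = logging.getLogger("noteflow.assets")


def delete_meeting_assets(meeting_dir: Path) -> bool:
    """Delete a meeting's asset directory, logging the no-op branch too."""
    if meeting_dir.exists():
        shutil.rmtree(meeting_dir)
        logger.info("Deleted meeting assets at %s", meeting_dir)
        return True
    logger.info("No assets to delete at %s (already absent)", meeting_dir)
    return False


tmp = Path(tempfile.mkdtemp())
deleted = delete_meeting_assets(tmp)   # directory exists
noop = delete_meeting_assets(tmp)      # second call hits the no-op branch
```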
---

## 7. Export Operations

### 7.1 PDF Export - No Timing

**File:** `src/noteflow/infrastructure/export/pdf.py:161-186`

```python
def export(self, meeting, segments) -> bytes:
    pdf_bytes = weasy_html(string=html_content).write_pdf()  # No timing
    return pdf_bytes
```

**Status (2026-01-03):** Not implemented — see P2 sprint.
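A sketch of the intended log: output size and duration only, never document content. `_render_pdf` stands in for the real WeasyPrint call.

```python
import logging
import time

logger = logging.getLogger("noteflow.export")


def _render_pdf(html_content: str) -> bytes:
    # Stand-in for weasy_html(string=html_content).write_pdf().
    return b"%PDF-1.7 stub"


def export_pdf(html_content: str) -> bytes:
    start = time.perf_counter()
    pdf_bytes = _render_pdf(html_content)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # Size and duration are safe to log; the rendered content is not.
    logger.info("PDF export completed: size_bytes=%d, duration_ms=%.2f",
                len(pdf_bytes), elapsed_ms)
    return pdf_bytes


pdf = export_pdf("<h1>Minutes</h1>")
```

The same two fields cover the Markdown and HTML exporters in 7.2.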
---

### 7.2 Markdown/HTML Export - No Logging

**Files:** `markdown.py:37-89`, `html.py:158-187`

**Status (2026-01-03):** Not implemented — see P2 sprint.

No logging of export operations.

---

## 8. Initialization Sequences

### 8.1 Lazy Model Loading - Not Logged at Load Time

**Files:**

- `NerEngine._ensure_loaded()` - spaCy model load
- `DiarizationEngine` - pyannote model load
- `OllamaSummarizer._get_client()` - client creation

**Status (2026-01-03):** Partially implemented — some model loads logged, NER warmup not logged (P1 sprint).

---

### 8.2 Singleton Creation - Silent

**File:** `src/noteflow/infrastructure/metrics/collector.py:168-178`

**Status (2026-01-03):** Not implemented — out of P1/P2 scope unless needed.

```python
def get_metrics_collector() -> MetricsCollector:
    global _metrics_collector
    if _metrics_collector is None:
        _metrics_collector = MetricsCollector()  # No logging
    return _metrics_collector
```

---

### 8.3 Provider Registration - DEBUG Level

**File:** `src/noteflow/application/services/summarization_service.py:119-127`

**Status (2026-01-03):** Partially implemented — still debug in factory registration; consider if INFO needed.

```python
def register_provider(self, mode, provider):
    logger.debug("Registered %s provider", mode.value)  # Should be INFO at startup
```

---

## Summary Statistics

| Category | Issue Count | Severity |
|----------|-------------|----------|
| Network/External Services | 7 | CRITICAL (mostly resolved) |
| Blocking/Long-Running | 4 | HIGH (partially unresolved) |
| Error Handling | 10+ | HIGH (partially unresolved) |
| State Transitions | 7 | MEDIUM (partially unresolved) |
| Database Operations | 30+ | MEDIUM (unresolved) |
| File System | 3 | LOW (partially unresolved) |
| Export | 3 | LOW (unresolved) |
| Initialization | 5 | MEDIUM (partially unresolved) |
| **Total** | **100+** | - |

---

## Recommended Logging Pattern

For all async/blocking operations:

```python
logger.info("Starting <operation>: context=%s", context)
start = time.perf_counter()
try:
    result = await some_operation()
    elapsed_ms = (time.perf_counter() - start) * 1000
    logger.info("<Operation> completed: result_count=%d, duration_ms=%.2f",
                len(result), elapsed_ms)
except TimeoutError:
    elapsed_ms = (time.perf_counter() - start) * 1000
    logger.error("<Operation> timeout after %.2fms", elapsed_ms)
    raise
except Exception as e:
    elapsed_ms = (time.perf_counter() - start) * 1000
    logger.error("<Operation> failed after %.2fms: %s", elapsed_ms, e)
    raise
```
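This boilerplate can be factored into a single context manager so call sites stay one line. The sketch below uses a hypothetical `log_operation` name; the project's existing `log_timing` helper may already cover part of this, so reconcile before adopting.

```python
import logging
import time
from contextlib import contextmanager

logger = logging.getLogger("noteflow")


@contextmanager
def log_operation(name: str, **context):
    """Start/success/failure logging with duration, factored once."""
    logger.info("Starting %s: %s", name, context)
    start = time.perf_counter()
    try:
        yield
    except Exception as exc:
        elapsed_ms = (time.perf_counter() - start) * 1000
        logger.error("%s failed after %.2fms: %s", name, elapsed_ms, exc)
        raise  # re-raise so callers still see the failure
    else:
        elapsed_ms = (time.perf_counter() - start) * 1000
        logger.info("%s completed: duration_ms=%.2f", name, elapsed_ms)


with log_operation("transcription", meeting_id="redacted"):
    result = sum(range(10))
```

Because `TimeoutError` is a subclass of `Exception`, the generic failure branch covers timeouts too; split it out only if timeouts need a distinct message.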
---

## Priority Fixes

### P0 - Fix Immediately

1. (Resolved) Ollama `is_available` timeout logging
2. (Resolved) Summarization factory timing
3. (Resolved) Database migration progress logging

### P1 - Fix This Sprint

4. (Resolved) All external HTTP calls (calendar, OAuth, webhooks)
5. All `run_in_executor` calls (ASR, NER, diarization)
6. Silent ValueError returns

### P2 - Fix Next Sprint

7. Repository CRUD logging
8. State transition logging (segmenter + diarization session)
9. Background task lifecycle logging

---

## Resolved Issues

- ~~Server-side state volatility~~ → Diarization jobs persisted to DB
- ~~Hardcoded directory paths~~ → `asset_path` column added to meetings
- ~~Synchronous blocking in async gRPC~~ → `run_in_executor` for diarization
- ~~Summarization consent not persisted~~ → Stored in `user_preferences` table
- ~~VU meter update throttling~~ → 20fps throttle implemented
- ~~Webhook infrastructure missing~~ → Full webhook subsystem implemented
- ~~Integration/OAuth token storage~~ → `IntegrationSecretModel` for secure storage