# CLAUDE.md
This file provides guidance to Claude Code when working with code in this repository.
## 🚨 CRITICAL: PARALLEL EXECUTION AFTER SWARM INIT
MANDATORY RULE: Once the swarm is initialized with memory, ALL subsequent operations MUST be parallel:
- TodoWrite → Always batch 5-10+ todos in ONE call
- Task spawning → Spawn ALL agents in ONE message
- File operations → Batch ALL reads/writes together
- NEVER operate sequentially after swarm init
## 🚨 CRITICAL: CONCURRENT EXECUTION FOR ALL ACTIONS
ABSOLUTE RULE: ALL operations MUST be concurrent/parallel in a single message:
### 🔴 MANDATORY CONCURRENT PATTERNS:
- TodoWrite: ALWAYS batch ALL todos in ONE call (5-10+ todos minimum)
- Task tool: ALWAYS spawn ALL agents in ONE message with full instructions
- File operations: ALWAYS batch ALL reads/writes/edits in ONE message
- Bash commands: ALWAYS batch ALL terminal operations in ONE message
- Memory operations: ALWAYS batch ALL memory store/retrieve in ONE message
### ⚡ GOLDEN RULE: "1 MESSAGE = ALL RELATED OPERATIONS"
**Examples of CORRECT concurrent execution:**

```
# ✅ CORRECT: Everything in ONE message
TodoWrite { todos: [10+ todos with all statuses/priorities] }
Task("Agent 1 with full instructions and hooks")
Task("Agent 2 with full instructions and hooks")
Task("Agent 3 with full instructions and hooks")
Read("file1.js")
Read("file2.js")
Read("file3.js")
Write("output1.js", content)
Write("output2.js", content)
Bash("npm install")
Bash("npm test")
Bash("npm run build")
```
**Examples of WRONG sequential execution:**

```
# ❌ WRONG: Multiple messages (NEVER DO THIS)
Message 1: TodoWrite { todos: [single todo] }
Message 2: Task("Agent 1")
Message 3: Task("Agent 2")
Message 4: Read("file1.js")
Message 5: Write("output1.js")
Message 6: Bash("npm install")
# This is 6x slower and breaks coordination!
```
### 🎯 CONCURRENT EXECUTION CHECKLIST:
Before sending ANY message, ask yourself:
- ✅ Are ALL related TodoWrite operations batched together?
- ✅ Are ALL Task spawning operations in ONE message?
- ✅ Are ALL file operations (Read/Write/Edit) batched together?
- ✅ Are ALL bash commands grouped in ONE message?
- ✅ Are ALL memory operations concurrent?
If ANY answer is "No", you MUST combine operations into a single message!
## 🚀 CRITICAL: Claude Code Does ALL Real Work
### 🎯 CLAUDE CODE IS THE ONLY EXECUTOR
ABSOLUTE RULE: Claude Code performs ALL actual work:
### ✅ Claude Code ALWAYS Handles:
- 🔧 ALL file operations (Read, Write, Edit, MultiEdit, Glob, Grep)
- 💻 ALL code generation and programming tasks
- 🖥️ ALL bash commands and system operations
- 🏗️ ALL actual implementation work
- 🔍 ALL project navigation and code analysis
- 📝 ALL TodoWrite and task management
- 🔄 ALL git operations (commit, push, merge)
- 📦 ALL package management (npm, pip, etc.)
- 🧪 ALL testing and validation
- 🔧 ALL debugging and troubleshooting
## 🚀 CRITICAL: Parallel Execution & Batch Operations
### 🚨 MANDATORY RULE #1: BATCH EVERYTHING
When using swarms, you MUST use BatchTool for ALL operations:
- NEVER send multiple messages for related operations
- ALWAYS combine multiple tool calls in ONE message
- PARALLEL execution is MANDATORY, not optional
### ⚡ THE GOLDEN RULE OF SWARMS
If you need to do X operations, they should be in 1 message, not X messages
### 🚨 MANDATORY TODO AND TASK BATCHING
CRITICAL RULE FOR TODOS AND TASKS:
- TodoWrite MUST ALWAYS include ALL todos in ONE call (5-10+ todos)
- Task tool calls MUST be batched - spawn multiple agents in ONE message
- NEVER update todos one by one - this breaks parallel coordination
- NEVER spawn agents sequentially - ALL agents spawn together
### 📦 BATCH TOOL EXAMPLES
**✅ CORRECT - Everything in ONE Message:**

```
# Single Message with BatchTool
# MCP coordination setup
mcp__claude-flow__swarm_init { topology: "mesh", maxAgents: 6 }
mcp__claude-flow__agent_spawn { type: "researcher" }
mcp__claude-flow__agent_spawn { type: "coder" }
mcp__claude-flow__agent_spawn { type: "analyst" }
mcp__claude-flow__agent_spawn { type: "tester" }
mcp__claude-flow__agent_spawn { type: "coordinator" }
# Claude Code execution - ALL in parallel
Task("You are researcher agent. MUST coordinate via hooks...")
Task("You are coder agent. MUST coordinate via hooks...")
Task("You are analyst agent. MUST coordinate via hooks...")
Task("You are tester agent. MUST coordinate via hooks...")
TodoWrite { todos: [5-10 todos with all priorities and statuses] }
# File operations in parallel
Bash "mkdir -p app/{src,tests,docs}"
Write "app/package.json"
Write "app/README.md"
Write "app/src/index.js"
```
**❌ WRONG - Multiple Messages (NEVER DO THIS):**

```
# This is 6x slower and breaks parallel coordination!
Message 1: mcp__claude-flow__swarm_init
Message 2: Task("researcher agent")
Message 3: Task("coder agent")
Message 4: TodoWrite({ todo: "single todo" })
Message 5: Bash "mkdir src"
Message 6: Write "package.json"
```
### 🎯 BATCH OPERATIONS BY TYPE
**Todo and Task Operations (Single Message):**
- TodoWrite → ALWAYS include 5-10+ todos in ONE call
- Task agents → Spawn ALL agents with full instructions in ONE message
- Agent coordination → ALL Task calls must include coordination hooks
- Status updates → Update ALL todo statuses together
- NEVER split todos or Task calls across messages!
**File Operations (Single Message):**
- Read 10 files? → One message with 10 Read calls
- Write 5 files? → One message with 5 Write calls
- Edit 1 file many times? → One MultiEdit call
**Swarm Operations (Single Message):**
- Need 8 agents? → One message with swarm_init + 8 agent_spawn calls
- Multiple memories? → One message with all memory_usage calls
- Task + monitoring? → One message with task_orchestrate + swarm_monitor
**Command Operations (Single Message):**
- Multiple directories? → One message with all mkdir commands
- Install + test + lint? → One message with all npm commands
- Git operations? → One message with all git commands
## Project Overview
NoteFlow is an intelligent meeting notetaker: local-first audio capture + navigable recall + evidence-linked summaries. It is a client-server system built around a gRPC API for bidirectional audio streaming and transcription. The repository includes:
- Python backend (`src/noteflow/`) — gRPC server, domain logic, infrastructure adapters
- Tauri + React desktop client (`client/`) — Rust for IPC, React UI (Vite)
The gRPC schema is the shared contract between backend and client; keep proto changes in sync across Python, Rust, and TypeScript.
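
To make that contract concrete, the sketch below shows roughly how a client could drive a bidirectional transcription stream from Python. It is illustrative only: the stub module path, stub name (`NoteFlowStub`), RPC (`StreamTranscription`), message (`AudioChunk`), and port are assumptions, not the actual definitions in `noteflow.proto`.

```python
# Illustrative sketch only: service, message, and module names are placeholders,
# not the real definitions generated from noteflow.proto.
import asyncio

from grpc import aio

from noteflow.grpc.proto import noteflow_pb2, noteflow_pb2_grpc  # assumed stub location


async def stream_audio(chunks: list[bytes]) -> None:
    async with aio.insecure_channel("localhost:50051") as channel:  # assumed port
        stub = noteflow_pb2_grpc.NoteFlowStub(channel)  # hypothetical stub name

        async def requests():
            # Client half of the bidirectional stream: push raw audio chunks.
            for chunk in chunks:
                yield noteflow_pb2.AudioChunk(data=chunk)  # hypothetical message

        # Server half: transcription updates arrive as they are produced.
        async for update in stub.StreamTranscription(requests()):  # hypothetical RPC
            print(update)


if __name__ == "__main__":
    asyncio.run(stream_audio([b"\x00" * 3200]))
```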
## Quick Orientation
| Entry Point | Description |
|---|---|
| `python -m noteflow.grpc.server` | Backend server (`src/noteflow/grpc/server/__main__.py`) |
| `cd client && npm run dev` | Web UI (Vite) |
| `cd client && npm run tauri dev` | Desktop Tauri dev (requires Rust toolchain) |
| `src/noteflow/grpc/proto/noteflow.proto` | Protobuf schema |
| `client/src/api/tauri-adapter.ts` | TypeScript → Rust IPC bridge |
| `client/src-tauri/src/commands/` | Rust Tauri command handlers |
## Specialized Documentation
For detailed development guidance, see the specialized documentation files:
- Python Backend Development → `src/noteflow/CLAUDE.md`
- Client Development (TypeScript + Rust) → `client/CLAUDE.md`
## Build and Development Commands

```bash
# Install Python backend (editable with dev dependencies)
python -m pip install -e ".[dev]"
# Install client dependencies
cd client && npm install
# Run gRPC server
python -m noteflow.grpc.server --help
# Run client UI
cd client && npm run dev # Web UI (Vite)
cd client && npm run tauri dev # Desktop (requires Rust)
# Tests
pytest # Full Python suite
cd client && npm run test # Client tests
cd client && npm run test:e2e # E2E tests
# Docker (hot-reload enabled)
docker compose up -d postgres # PostgreSQL with pgvector
python scripts/dev_watch_server.py # Auto-reload server
```
### Forbidden Docker Operations (without explicit permission)
- `docker compose build/up/down/restart`
- `docker stop` / `docker kill`
- Any command that would interrupt the hot-reload server
## Quality Commands (Makefile)

Always use Makefile targets instead of running tools directly.

```bash
make quality # Run ALL quality checks (TS + Rust + Python)
make quality-py # Python: lint + type-check + test-quality
make quality-ts # TypeScript: type-check + lint + test-quality
make quality-rs # Rust: clippy + lint
make fmt # Format all code (Biome + rustfmt)
make e2e # Playwright tests (requires frontend on :5173)
```
## Repository Structure

```
├── src/noteflow/ # Python backend
│ ├── domain/ # Domain entities and ports
│ ├── application/ # Use cases and services
│ ├── infrastructure/ # External implementations
│ ├── grpc/ # gRPC server and client
│ ├── cli/ # Command-line tools
│ └── config/ # Configuration
├── client/ # Tauri + React desktop client
│ ├── src/ # React/TypeScript frontend
│ ├── src-tauri/ # Rust backend
│ └── e2e/ # End-to-end tests
├── tests/ # Python test suites
├── docs/ # Documentation
├── scripts/ # Development scripts
└── docker/ # Docker configurations
```
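
The `domain / application / infrastructure` split is a ports-and-adapters layout: domain code defines ports, infrastructure supplies adapters, and application services depend only on the ports. A minimal sketch of that dependency direction, using invented class names rather than real NoteFlow code:

```python
# Illustrative only: these classes are hypothetical and show layering, not the
# actual NoteFlow domain model.
from typing import Protocol


class TranscriptRepository(Protocol):  # domain/: a port (interface)
    def save(self, meeting_id: str, text: str) -> None: ...


class InMemoryTranscriptRepository:  # infrastructure/: an adapter implementing the port
    def __init__(self) -> None:
        self._store: dict[str, str] = {}

    def save(self, meeting_id: str, text: str) -> None:
        self._store[meeting_id] = text


class SaveTranscript:  # application/: a use case depending only on the port
    def __init__(self, repo: TranscriptRepository) -> None:
        self._repo = repo

    def execute(self, meeting_id: str, text: str) -> None:
        self._repo.save(meeting_id, text)
```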
## Key Cross-Cutting Concerns
### gRPC Schema Synchronization

When modifying `src/noteflow/grpc/proto/noteflow.proto`:
- Python: Regenerate stubs and run `python scripts/patch_grpc_stubs.py`
- Rust: Rebuild Tauri (`cd client && npm run tauri:build`)
- TypeScript: Update types in `client/src/api/types/`
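
One way the Python step can look in practice is sketched below, assuming the generated stubs live next to the proto file; the output directories are an assumption, so check where the existing stubs are generated and prefer any Makefile target or script the repo already provides.

```python
# Regenerate Python gRPC stubs from noteflow.proto, then apply the repo's patch
# script. Output directories here are assumptions, not confirmed repo layout.
from grpc_tools import protoc

PROTO_DIR = "src/noteflow/grpc/proto"

exit_code = protoc.main([
    "grpc_tools.protoc",
    f"-I{PROTO_DIR}",
    f"--python_out={PROTO_DIR}",
    f"--grpc_python_out={PROTO_DIR}",
    f"--pyi_out={PROTO_DIR}",
    f"{PROTO_DIR}/noteflow.proto",
])
assert exit_code == 0, "protoc failed"
# Then run: python scripts/patch_grpc_stubs.py
```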
### Code Quality Standards
- Python: Basedpyright strict mode, Ruff linting, 100-char line limit
- TypeScript: Biome linting, strict mode, single quotes
- Rust: rustfmt formatting, clippy linting
### Testing Philosophy
- Python: pytest with async support, testcontainers for integration
- TypeScript: Vitest unit tests, Playwright E2E
- Rust: cargo test with integration test support
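
A hedged sketch of what that Python style typically looks like, assuming pytest-asyncio and testcontainers; the fixture shape and the pgvector image tag are illustrative, so mirror the real fixtures under `tests/` instead.

```python
# Illustrative integration-test shape: session-scoped PostgreSQL container plus
# an async test. Names and image tag are assumptions, not repo conventions.
from collections.abc import Iterator

import pytest
from testcontainers.postgres import PostgresContainer


@pytest.fixture(scope="session")
def postgres_url() -> Iterator[str]:
    # Spin up a throwaway PostgreSQL container for the test session.
    with PostgresContainer("pgvector/pgvector:pg16") as pg:  # image tag is an assumption
        yield pg.get_connection_url()


@pytest.mark.asyncio
async def test_database_is_reachable(postgres_url: str) -> None:
    # A real test would exercise NoteFlow's repository adapters against this URL.
    assert postgres_url.startswith("postgresql")
```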
## Development Workflow
- Setup: Install Python and client dependencies
- Database: Start PostgreSQL with `docker compose up -d postgres`
- Backend: Run `python -m noteflow.grpc.server`
- Frontend: Run `cd client && npm run tauri dev`
- Quality: Run `make quality` before committing
- Tests: Run full test suite with `pytest && cd client && npm run test:all`
## Known Issues & Technical Debt
See `docs/triage.md` for tracked technical debt.
See `docs/sprints/` for feature implementation plans.