# CLAUDE.md

This file provides guidance to Claude Code when working with code in this repository.

---

## 🚨 CRITICAL: PARALLEL EXECUTION AFTER SWARM INIT

**MANDATORY RULE**: Once the swarm is initialized with memory, ALL subsequent operations MUST be parallel:

- **TodoWrite** → Always batch 5-10+ todos in ONE call
- **Task spawning** → Spawn ALL agents in ONE message
- **File operations** → Batch ALL reads/writes together
- **NEVER** operate sequentially after swarm init

---

## 🚨 CRITICAL: CONCURRENT EXECUTION FOR ALL ACTIONS

**ABSOLUTE RULE**: ALL operations MUST be concurrent/parallel in a single message:

### 🔴 MANDATORY CONCURRENT PATTERNS:

- **TodoWrite**: ALWAYS batch ALL todos in ONE call (5-10+ todos minimum)
- **Task tool**: ALWAYS spawn ALL agents in ONE message with full instructions
- **File operations**: ALWAYS batch ALL reads/writes/edits in ONE message
- **Bash commands**: ALWAYS batch ALL terminal operations in ONE message
- **Memory operations**: ALWAYS batch ALL memory store/retrieve in ONE message

### ⚡ GOLDEN RULE: "1 MESSAGE = ALL RELATED OPERATIONS"

**Examples of CORRECT concurrent execution:**

```bash
# ✅ CORRECT: Everything in ONE message
TodoWrite { todos: [10+ todos with all statuses/priorities] }
Task("Agent 1 with full instructions and hooks")
Task("Agent 2 with full instructions and hooks")
Task("Agent 3 with full instructions and hooks")
Read("file1.js")
Read("file2.js")
Read("file3.js")
Write("output1.js", content)
Write("output2.js", content)
Bash("npm install")
Bash("npm test")
Bash("npm run build")
```

**Examples of WRONG sequential execution:**

```bash
# ❌ WRONG: Multiple messages (NEVER DO THIS)
Message 1: TodoWrite { todos: [single todo] }
Message 2: Task("Agent 1")
Message 3: Task("Agent 2")
Message 4: Read("file1.js")
Message 5: Write("output1.js")
Message 6: Bash("npm install")
# This is 6x slower and breaks coordination!
```

### 🎯 CONCURRENT EXECUTION CHECKLIST:

Before sending ANY message, ask yourself:

- ✅ Are ALL related TodoWrite operations batched together?
- ✅ Are ALL Task spawning operations in ONE message?
- ✅ Are ALL file operations (Read/Write/Edit) batched together?
- ✅ Are ALL bash commands grouped in ONE message?
- ✅ Are ALL memory operations concurrent?

If ANY answer is "No", you MUST combine operations into a single message!

---

## 🚀 CRITICAL: Claude Code Does ALL Real Work

### 🎯 CLAUDE CODE IS THE ONLY EXECUTOR

**ABSOLUTE RULE**: Claude Code performs ALL actual work:

✅ **Claude Code ALWAYS Handles:**

- 🔧 ALL file operations (Read, Write, Edit, MultiEdit, Glob, Grep)
- 💻 ALL code generation and programming tasks
- 🖥️ ALL bash commands and system operations
- 🏗️ ALL actual implementation work
- 🔍 ALL project navigation and code analysis
- 📝 ALL TodoWrite and task management
- 🔄 ALL git operations (commit, push, merge)
- 📦 ALL package management (npm, pip, etc.)
- 🧪 ALL testing and validation
- 🔧 ALL debugging and troubleshooting

---

## 🚀 CRITICAL: Parallel Execution & Batch Operations

### 🚨 MANDATORY RULE #1: BATCH EVERYTHING

When using swarms, you MUST use BatchTool for ALL operations:

- **NEVER** send multiple messages for related operations
- **ALWAYS** combine multiple tool calls in ONE message
- **PARALLEL** execution is MANDATORY, not optional

### ⚡ THE GOLDEN RULE OF SWARMS

If you need to do X operations, they should be in 1 message, not X messages.

### 🚨 MANDATORY TODO AND TASK BATCHING

**CRITICAL RULE FOR TODOS AND TASKS:**

- **TodoWrite** MUST ALWAYS include ALL todos in ONE call (5-10+ todos)
- **Task** tool calls MUST be batched - spawn multiple agents in ONE message
- **NEVER** update todos one by one - this breaks parallel coordination
- **NEVER** spawn agents sequentially - ALL agents spawn together

### 📦 BATCH TOOL EXAMPLES

**✅ CORRECT - Everything in ONE Message:**

```bash
# Single Message with BatchTool

# MCP coordination setup
mcp__claude-flow__swarm_init { topology: "mesh", maxAgents: 6 }
mcp__claude-flow__agent_spawn { type: "researcher" }
mcp__claude-flow__agent_spawn { type: "coder" }
mcp__claude-flow__agent_spawn { type: "analyst" }
mcp__claude-flow__agent_spawn { type: "tester" }
mcp__claude-flow__agent_spawn { type: "coordinator" }

# Claude Code execution - ALL in parallel
Task("You are researcher agent. MUST coordinate via hooks...")
Task("You are coder agent. MUST coordinate via hooks...")
Task("You are analyst agent. MUST coordinate via hooks...")
Task("You are tester agent. MUST coordinate via hooks...")
TodoWrite { todos: [5-10 todos with all priorities and statuses] }

# File operations in parallel
Bash "mkdir -p app/{src,tests,docs}"
Write "app/package.json"
Write "app/README.md"
Write "app/src/index.js"
```

**❌ WRONG - Multiple Messages (NEVER DO THIS):**

```bash
# This is 6x slower and breaks parallel coordination!
Message 1: mcp__claude-flow__swarm_init
Message 2: Task("researcher agent")
Message 3: Task("coder agent")
Message 4: TodoWrite({ todo: "single todo" })
Message 5: Bash "mkdir src"
Message 6: Write "package.json"
```

### 🎯 BATCH OPERATIONS BY TYPE

**Todo and Task Operations (Single Message):**

- **TodoWrite** → ALWAYS include 5-10+ todos in ONE call
- **Task agents** → Spawn ALL agents with full instructions in ONE message
- **Agent coordination** → ALL Task calls must include coordination hooks
- **Status updates** → Update ALL todo statuses together
- **NEVER** split todos or Task calls across messages!

**File Operations (Single Message):**

- **Read 10 files?** → One message with 10 Read calls
- **Write 5 files?** → One message with 5 Write calls
- **Edit 1 file many times?** → One MultiEdit call

**Swarm Operations (Single Message):**

- **Need 8 agents?** → One message with swarm_init + 8 agent_spawn calls
- **Multiple memories?** → One message with all memory_usage calls
- **Task + monitoring?** → One message with task_orchestrate + swarm_monitor

**Command Operations (Single Message):**

- **Multiple directories?** → One message with all mkdir commands
- **Install + test + lint?** → One message with all npm commands
- **Git operations?** → One message with all git commands

---

## Project Overview

NoteFlow is an intelligent meeting notetaker: local-first audio capture + navigable recall + evidence-linked summaries. It is a client-server system built around a gRPC API for bidirectional audio streaming and transcription.

The repository includes:

- **Python backend** (`src/noteflow/`) — gRPC server, domain logic, infrastructure adapters
- **Tauri + React desktop client** (`client/`) — Rust for IPC, React UI (Vite)

The gRPC schema is the shared contract between backend and client; keep proto changes in sync across Python, Rust, and TypeScript.
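To illustrate the kind of contract involved, a bidirectional-streaming RPC in proto3 might look like the sketch below. The service and message names here are hypothetical, invented for illustration only; the real definitions live in `src/noteflow/grpc/proto/noteflow.proto`.

```proto
// HYPOTHETICAL sketch -- the actual schema is defined in
// src/noteflow/grpc/proto/noteflow.proto.
syntax = "proto3";

package noteflow.example;

service Transcription {
  // Client streams audio chunks; server streams transcript segments back.
  rpc StreamTranscription(stream AudioChunk) returns (stream TranscriptSegment);
}

message AudioChunk {
  bytes pcm_data = 1;       // raw audio samples
  uint64 timestamp_ms = 2;  // capture time offset
}

message TranscriptSegment {
  string text = 1;
  uint64 start_ms = 2;
  uint64 end_ms = 3;
}
```

Because any such RPC signature is mirrored in generated Python, Rust, and TypeScript code, a change to the schema must be regenerated on all three sides in the same change.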
---

## Quick Orientation

| Entry Point | Description |
|-------------|-------------|
| `python -m noteflow.grpc.server` | Backend server (`src/noteflow/grpc/server/__main__.py`) |
| `cd client && npm run dev` | Web UI (Vite) |
| `cd client && npm run tauri dev` | Desktop Tauri dev (requires Rust toolchain) |
| `src/noteflow/grpc/proto/noteflow.proto` | Protobuf schema |
| `client/src/api/tauri-adapter.ts` | TypeScript → Rust IPC bridge |
| `client/src-tauri/src/commands/` | Rust Tauri command handlers |

---

## Specialized Documentation

For detailed development guidance, see the specialized documentation files:

- **Python Backend Development** → `src/noteflow/CLAUDE.md`
- **Client Development (TypeScript + Rust)** → `client/CLAUDE.md`

---

## Build and Development Commands

```bash
# Install Python backend (editable, with dev dependencies)
python -m pip install -e ".[dev]"

# Install client dependencies
cd client && npm install

# Run gRPC server
python -m noteflow.grpc.server --help

# Run client UI
cd client && npm run dev        # Web UI (Vite)
cd client && npm run tauri dev  # Desktop (requires Rust)

# Tests
pytest                          # Full Python suite
cd client && npm run test       # Client tests
cd client && npm run test:e2e   # E2E tests

# Docker (hot-reload enabled)
docker compose up -d postgres       # PostgreSQL with pgvector
python scripts/dev_watch_server.py  # Auto-reload server
```

### Forbidden Docker Operations (without explicit permission)

- `docker compose build` / `up` / `down` / `restart`
- `docker stop` / `docker kill`
- Any command that would interrupt the hot-reload server

---

## Quality Commands (Makefile)

**Always use Makefile targets instead of running tools directly.**

```bash
make quality     # Run ALL quality checks (TS + Rust + Python)
make quality-py  # Python: lint + type-check + test-quality
make quality-ts  # TypeScript: type-check + lint + test-quality
make quality-rs  # Rust: clippy + lint
make fmt         # Format all code (Biome + rustfmt)
make e2e         # Playwright tests (requires frontend on :5173)
```

---

## Repository Structure

```
├── src/noteflow/          # Python backend
│   ├── domain/            # Domain entities and ports
│   ├── application/       # Use cases and services
│   ├── infrastructure/    # External implementations
│   ├── grpc/              # gRPC server and client
│   ├── cli/               # Command-line tools
│   └── config/            # Configuration
├── client/                # Tauri + React desktop client
│   ├── src/               # React/TypeScript frontend
│   ├── src-tauri/         # Rust backend
│   └── e2e/               # End-to-end tests
├── tests/                 # Python test suites
├── docs/                  # Documentation
├── scripts/               # Development scripts
└── docker/                # Docker configurations
```

---

## Key Cross-Cutting Concerns

### gRPC Schema Synchronization

When modifying `src/noteflow/grpc/proto/noteflow.proto`:

1. **Python**: Regenerate stubs and run `python scripts/patch_grpc_stubs.py`
2. **Rust**: Rebuild Tauri (`cd client && npm run tauri:build`)
3. **TypeScript**: Update types in `client/src/api/types/`

### Code Quality Standards

- **Python**: Basedpyright strict mode, Ruff linting, 100-char line limit
- **TypeScript**: Biome linting, strict mode, single quotes
- **Rust**: rustfmt formatting, clippy linting

### Testing Philosophy

- **Python**: pytest with async support, testcontainers for integration
- **TypeScript**: Vitest unit tests, Playwright E2E
- **Rust**: cargo test with integration test support

---

## Development Workflow

1. **Setup**: Install Python and client dependencies
2. **Database**: Start PostgreSQL with `docker compose up -d postgres`
3. **Backend**: Run `python -m noteflow.grpc.server`
4. **Frontend**: Run `cd client && npm run tauri dev`
5. **Quality**: Run `make quality` before committing
6. **Tests**: Run the full test suite with `pytest && cd client && npm run test:all`

---

## Known Issues & Technical Debt

See `docs/triage.md` for tracked technical debt. See `docs/sprints/` for feature implementation plans.