
Open Agent Project Guide

Open Agent is a managed control plane for OpenCode-based agents. The backend does not run model inference or autonomous logic; it delegates execution to an OpenCode server and focuses on orchestration, telemetry, and workspace/library management.

Architecture Summary

  • Backend (Rust/Axum): mission orchestration, workspace/container management, MCP registry, Library sync.
  • OpenCode Client: src/opencode/ and src/agents/opencode.rs (thin wrapper).
  • Dashboards: dashboard/ (Next.js) and ios_dashboard/ (SwiftUI).

Core Concepts

  • Library: Git-backed config repo (skills, commands, agents, tools, rules, MCPs). src/library/. The default template is at github.com/Th0rgal/openagent-library-template.
  • Workspaces: Host or container environments with their own skills, tools, and plugins. src/workspace.rs manages workspace lifecycle and syncs skills/tools to .opencode/.
  • Missions: Agent selection + workspace + conversation. Execution is delegated to OpenCode and streamed to the UI.

Scoping Model

  • Global: Auth, providers, MCPs (run on HOST machine), agents, commands, rules
  • Per-Workspace: Skills, tools, plugins/hooks, installed software (container only), file isolation
  • Per-Mission: Agent selection, workspace selection, conversation history

MCPs are configured globally, but at runtime they run either as child processes on the host or inside the workspace (inside the container for container workspaces), depending on the kind of MCP.

Container Execution Model (Important!)

When a mission runs in a container workspace, bash commands execute inside the container, not on the host. Here's why:

  1. OpenCode's built-in Bash tool is disabled for container workspaces
  2. Agents use workspace_bash from the "workspace MCP" instead
  3. The "workspace MCP" command is wrapped in systemd-nspawn at startup
  4. Therefore all workspace_bash commands run inside the container with container networking

The "workspace MCP" is named this way because it runs in the workspace's execution context - for container workspaces, that means inside the container.

See src/workspace.rs lines 590-640 (nspawn wrapping) and 714-720 (tool configuration).
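As a rough illustration (a hypothetical sketch, not the actual code in src/workspace.rs; the flags, paths, and binary name are assumptions), wrapping a command in systemd-nspawn looks like this:

use std::process::Command;

// Build a command that executes inside the container's root filesystem.
// The real implementation in src/workspace.rs sets up more options
// (bind mounts, environment, machine name, etc.).
fn nspawn_wrap(rootfs: &str, inner: &[&str]) -> Command {
    let mut cmd = Command::new("systemd-nspawn");
    cmd.arg("--quiet")
        .arg("--directory")
        .arg(rootfs)   // container root filesystem
        .args(inner);  // e.g. the workspace-mcp binary
    cmd
}

fn main() -> std::io::Result<()> {
    // Hypothetical rootfs path; anything spawned this way runs with the
    // container's filesystem and networking.
    let status = nspawn_wrap("/var/lib/openagent/workspaces/demo", &["workspace-mcp"]).status()?;
    println!("nspawn exited with {status}");
    Ok(())
}

Because the workspace MCP process itself is wrapped this way at startup, every workspace_bash call it serves inherits the container's filesystem and network namespace.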

Contrast with workspace exec API: The /api/workspaces/:id/exec endpoint also runs commands inside containers (via nspawn), but is subject to HTTP timeouts. The mission system uses SSE streaming with no timeout.

Design Guardrails

  • Do not reintroduce autonomous agent logic (budgeting, task splitting, verification, model selection). OpenCode handles execution.
  • Keep the backend a thin orchestrator: Start Mission → Stream Events → Store Logs.
  • Avoid embedding provider-specific logic in the backend. Provider auth is managed via OpenCode config + dashboard flows.

Timeout Philosophy

Open Agent is a pure pass-through frontend to OpenCode. We intentionally do NOT impose any timeouts on the SSE event stream from OpenCode. All timeout handling is delegated to OpenCode, which manages tool execution timeouts internally.

Why no timeouts in Open Agent?

  • Long-running tools (vision analysis, large file operations, web scraping) should complete naturally
  • Users can abort missions manually via the dashboard if needed
  • Avoids artificial timeout mismatches between Open Agent and OpenCode
  • OpenCode remains the single source of truth for execution limits

What this means:

  • The SSE stream in src/opencode/mod.rs runs indefinitely until OpenCode sends a MessageComplete event or closes the connection
  • The only timeout applied is for initial HTTP connection establishment (DEFAULT_REQUEST_TIMEOUT)
  • If a mission appears stuck, check OpenCode logs first—any timeout errors originate from OpenCode or downstream clients (like Conductor), not Open Agent
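As a rough sketch of this policy (assuming reqwest with the stream feature and futures-util; the endpoint path and the constant's value are assumptions, not the actual code in src/opencode/mod.rs):

use futures_util::StreamExt;
use std::time::Duration;

// Only connection establishment gets a deadline; the SSE body is read indefinitely.
const DEFAULT_REQUEST_TIMEOUT: Duration = Duration::from_secs(30);

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::Client::builder()
        .connect_timeout(DEFAULT_REQUEST_TIMEOUT) // applies to connection setup only
        .build()?;                                // no .timeout(..) => no overall deadline

    // Assumed OpenCode SSE endpoint, for illustration only.
    let resp = client.get("http://127.0.0.1:4096/event").send().await?;

    // Runs until OpenCode closes the connection or signals completion.
    let mut stream = resp.bytes_stream();
    while let Some(chunk) = stream.next().await {
        print!("{}", String::from_utf8_lossy(&chunk?));
    }
    Ok(())
}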

Common Entry Points

  • src/api/routes.rs: API routing and server startup.
  • src/api/control.rs: mission control session, SSE streaming.
  • src/api/mission_runner.rs: per-mission execution loop.
  • src/workspace.rs: workspace lifecycle + OpenCode config generation.
  • src/opencode/: OpenCode HTTP + SSE client.

Local Dev

The backend must be deployed on a remote Linux server and run in debug mode (release builds are too slow).

# Backend (url is https://agent-backend.thomas.md but remote is 95.216.112.253)
export OPENCODE_BASE_URL="http://127.0.0.1:4096"
cargo run  # debug is cargo's default profile; there is no --debug flag

# Dashboard
cd dashboard
bun install
bun dev

Debugging Missions

Missions are persisted in a SQLite database with full event logging, enabling detailed post-mortem analysis.

Database location: ~/.openagent/missions/missions.db (or missions-dev.db in dev mode)

Retrieve events via API:

GET /api/control/missions/{mission_id}/events

Query parameters:

  • types=<type1>,<type2>: filter by event type
  • limit=<n>: max events to return
  • offset=<n>: pagination offset

Event types captured:

  • user_message: user inputs
  • thinking: agent reasoning tokens
  • tool_call: tool invocations (name + input)
  • tool_result: tool outputs
  • assistant_message: agent responses
  • mission_status_changed: status transitions
  • error: execution errors

Example: Retrieve tool calls for a mission:

curl "http://localhost:3000/api/control/missions/<mission_id>/events?types=tool_call,tool_result" \
  -H "Authorization: Bearer <token>"

Code entry points: src/api/mission_store/ handles persistence; src/api/control.rs exposes the events endpoint.

Dashboard Data Fetching (SWR)

The dashboard uses SWR for data fetching with stale-while-revalidate caching. This provides instant UI updates from cache while revalidating in the background.

Common SWR Keys

Use consistent keys to enable cache sharing across components:

| Key | Fetcher | Used In |
| --- | --- | --- |
| 'stats' | getStats | Overview page (3s polling) |
| 'workspaces' | listWorkspaces | Overview, Workspaces page |
| 'missions' | listMissions | Recent tasks sidebar, History page |
| 'workspace-templates' | listWorkspaceTemplates | Workspaces page |
| 'library-skills' | listLibrarySkills | Workspaces page |
| 'ai-providers' | listAIProviders | Settings page |
| 'health' | getHealth | Settings page |
| 'system-components' | getSystemComponents | Server connection card |
| 'opencode-agents' | getVisibleAgents | New mission dialog |
| 'openagent-config' | getOpenAgentConfig | New mission dialog |
| 'tools' | listTools | Tools page |

Usage Patterns

Basic fetch (no polling):

const { data, isLoading, error } = useSWR('workspaces', listWorkspaces, {
  revalidateOnFocus: false,
});

With polling:

const { data } = useSWR('stats', getStats, {
  refreshInterval: 3000,
  revalidateOnFocus: false,
});

After mutations (revalidate cache):

const { mutate } = useSWR('missions', listMissions);
// After deleting a mission:
await deleteMission(id);
mutate(); // Revalidates from server

Optimistic updates:

mutate(missions.filter(m => m.id !== deletedId), false); // Update cache without revalidation

Guidelines

  • Always use revalidateOnFocus: false unless you need tab-focus refresh
  • Use the same SWR key when multiple components need the same data
  • Prefer mutate() after mutations instead of manual state updates
  • SWR returns undefined (not null) when data hasn't loaded - use ?? null or ?? [] as needed

Production Deployment

For deploying Open Agent on a VPS or dedicated server, see INSTALL.md.

This covers:

  • systemd services for OpenCode and Open Agent
  • nginx/Caddy reverse proxy with SSL (Let's Encrypt)
  • DNS setup and domain configuration
  • Authentication configuration
  • Library git repo setup

If asked to deploy Open Agent: Read INSTALL.md first. It contains an "AI Agents" section at the top listing prerequisites to collect from the user (server IP, domain, SSH access, Library repo URL).

Notes

  • OpenCode config files are generated per workspace; do not keep static opencode.json in the repo.
  • Container workspaces require root and Ubuntu/Debian tooling (systemd-nspawn).