# LightRAG MCP Server

An MCP server that exposes the LightRAG Server API as tools, resources, and prompts for coding agents.

## Features

- Retrieval tools: `query_data`, `query`, `query_stream`, `query_stream_chunks`
- Ingestion tools: `ingest_text`, `ingest_texts`, `upload_document`
- Freshness tools: `scan_documents`, `scan_and_wait`, `pipeline_status`, `wait_for_idle`, `track_status`
- Memory tool: `ingest_memory` for lessons, preferences, decisions, structures, functions, and relationships
- Graph tools: entity/relation CRUD, entity existence checks, label search, graph export
- Health tool: `health`
- Macro tool: `refresh_and_query` (scan -> wait for idle -> `query_data` -> `query`)
- Resources: health, pipeline status, documents, graph, labels (list and popular), status counts
- Prompts: evidence-first answering, refresh-then-query, record project memory

## Quickstart

```bash
# Create a virtual environment and install
python -m venv .venv
. .venv/bin/activate
pip install -e .

# Run the MCP server (SSE transport, recommended for Claude Code)
export MCP_TRANSPORT=sse
export MCP_HOST=127.0.0.1
export MCP_PORT=8150
export LIGHTRAG_BASE_URL=http://127.0.0.1:9621
export MCP_DISABLE_DNS_REBINDING=true
export MCP_ALLOWED_HOSTS='["192.168.50.185:*","192.168.50.151:*"]'
export MCP_ALLOWED_ORIGINS='["http://192.168.50.185:*","http://192.168.50.151:*"]'
lightrag-mcp

# Smoke test (health check plus an optional retrieval)
lightrag-mcp-smoke --query "What is this project?" --format pretty
```
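Note that `MCP_ALLOWED_HOSTS` and `MCP_ALLOWED_ORIGINS` are JSON-encoded lists rather than comma-separated strings. A minimal sketch of how such a value can be decoded (assuming plain `json.loads`; this is illustrative, not the server's actual parsing code):

```python
import json
import os

# Same value as in the quickstart above
os.environ["MCP_ALLOWED_HOSTS"] = '["192.168.50.185:*","192.168.50.151:*"]'

# JSON-decode the env var into a Python list of host patterns
allowed_hosts = json.loads(os.environ["MCP_ALLOWED_HOSTS"])
print(allowed_hosts)
```

A malformed value (e.g. a bare comma-separated string) would raise `json.JSONDecodeError` here, which is why the quickstart quotes the whole JSON list in single quotes.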

## Configuration

- `LIGHTRAG_BASE_URL` (default `http://127.0.0.1:9621`)
- `LIGHTRAG_TIMEOUT_S` (default `60`)
- `LIGHTRAG_POLL_INTERVAL_S` (default `1`)
- `LIGHTRAG_POLL_TIMEOUT_S` (default `120`)
- `MCP_TRANSPORT` (default `streamable-http`)
- `MCP_HOST` (default `127.0.0.1`)
- `MCP_PORT` (default `8000`)
- `MCP_SERVER_NAME` (default `LightRAG MCP`)
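Each variable falls back to its default when unset. A hypothetical sketch of how these lookups can be resolved (the helper names are illustrative, not the server's actual code):

```python
import os

def env_str(name: str, default: str) -> str:
    """Return the env var's value, or the default when unset."""
    return os.environ.get(name, default)

def env_float(name: str, default: float) -> float:
    """Return the env var parsed as a number, or the default when unset."""
    raw = os.environ.get(name)
    return float(raw) if raw is not None else default

# Mirrors the defaults listed above
config = {
    "base_url": env_str("LIGHTRAG_BASE_URL", "http://127.0.0.1:9621"),
    "timeout_s": env_float("LIGHTRAG_TIMEOUT_S", 60.0),
    "poll_interval_s": env_float("LIGHTRAG_POLL_INTERVAL_S", 1.0),
    "poll_timeout_s": env_float("LIGHTRAG_POLL_TIMEOUT_S", 120.0),
    "transport": env_str("MCP_TRANSPORT", "streamable-http"),
    "host": env_str("MCP_HOST", "127.0.0.1"),
    "port": int(env_str("MCP_PORT", "8000")),
    "server_name": env_str("MCP_SERVER_NAME", "LightRAG MCP"),
}
```

The quickstart above overrides `MCP_TRANSPORT` and `MCP_PORT` for an SSE deployment; everything else can usually stay at its default for a local setup.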

## Notes

- `query_stream` collects the full streaming response and returns it as a single string.
- `query_stream_chunks` returns chunked output and reports progress to clients that support progress events.
- `refresh_and_query` is a convenience macro for evidence-first workflows.
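The `refresh_and_query` sequence can be pictured with a small async sketch; the `client.call` wrapper below is hypothetical (illustrative only, not the server's actual API):

```python
import asyncio

async def refresh_and_query(client, question: str) -> dict:
    """Sketch of the macro: scan -> wait for idle -> query_data -> query."""
    await client.call("scan_documents")  # trigger a re-scan of the input sources
    await client.call("wait_for_idle")   # block until the ingestion pipeline is idle
    evidence = await client.call("query_data", query=question)  # retrieve raw evidence
    answer = await client.call("query", query=question)         # get the synthesized answer
    return {"evidence": evidence, "answer": answer}
```

Running the refresh steps before querying is what makes the macro suitable for evidence-first workflows: the answer is always produced against a freshly ingested index.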