LightRAG Docker Deployment
A lightweight Knowledge Graph Retrieval-Augmented Generation system with multiple LLM backend support.
🚀 Preparation
Clone the repository:
# Linux/MacOS
git clone https://github.com/HKUDS/LightRAG.git
cd LightRAG
# Windows PowerShell
git clone https://github.com/HKUDS/LightRAG.git
cd LightRAG
Configure your environment:
# Linux/MacOS
cp .env.example .env
# Edit .env with your preferred configuration
# Windows PowerShell
Copy-Item .env.example .env
# Edit .env with your preferred configuration
LightRAG can be configured using environment variables in the .env file:
Server Configuration
HOST: Server host (default: 0.0.0.0)
PORT: Server port (default: 9621)
LLM Configuration
LLM_BINDING: LLM backend to use (lollms/ollama/openai)
LLM_BINDING_HOST: LLM server host URL
LLM_MODEL: Model name to use
Embedding Configuration
EMBEDDING_BINDING: Embedding backend (lollms/ollama/openai)
EMBEDDING_BINDING_HOST: Embedding server host URL
EMBEDDING_MODEL: Embedding model name
RAG Configuration
MAX_ASYNC: Maximum async operations
MAX_TOKENS: Maximum token size
EMBEDDING_DIM: Embedding dimensions
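Taken together, a minimal .env might look like the sketch below. The binding hosts, model names, and tuning values are illustrative assumptions, not recommended defaults; substitute your own:

```shell
# Server
HOST=0.0.0.0
PORT=9621

# LLM backend (lollms/ollama/openai) -- host and model are example values
LLM_BINDING=ollama
LLM_BINDING_HOST=http://host.docker.internal:11434
LLM_MODEL=mistral-nemo:latest

# Embedding backend -- host and model are example values
EMBEDDING_BINDING=ollama
EMBEDDING_BINDING_HOST=http://host.docker.internal:11434
EMBEDDING_MODEL=bge-m3:latest

# RAG tuning -- values are illustrative
MAX_ASYNC=4
MAX_TOKENS=32768
EMBEDDING_DIM=1024
```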
🐳 Docker Deployment
Docker instructions work the same on all platforms with Docker Desktop installed.
Build Optimization
The Dockerfile uses BuildKit cache mounts to significantly improve build performance:
- Automatic cache management: BuildKit is automatically enabled via the # syntax=docker/dockerfile:1 directive
- Faster rebuilds: Only changed dependencies are downloaded when uv.lock or bun.lock files are modified
- Efficient package caching: UV and Bun package downloads are cached across builds
- No manual configuration needed: Works out of the box in Docker Compose and GitHub Actions
Start LightRAG server:
docker compose up -d
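Once the container is up, the server listens on the configured PORT (9621 by default). A quick smoke test, assuming curl is available on the host:

```shell
# Probe the server on the default port (adjust if PORT was changed in .env)
curl -sf http://localhost:9621/ && echo "LightRAG is up"
```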
LightRAG Server uses the following paths for data storage:
data/
├── rag_storage/ # RAG data persistence
└── inputs/ # Input documents
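If these paths are bind-mounted from the host, it can help to create them before the first start so the Docker daemon does not create them root-owned:

```shell
# Create the storage layout shown above before the first `docker compose up`
mkdir -p data/rag_storage data/inputs
```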
Updates
To update the Docker container:
docker compose pull
docker compose down
docker compose up -d
Offline deployment
Software packages that require transformers, torch, or CUDA are not preinstalled in the Docker images. Consequently, document extraction tools such as Docling, as well as local LLM frameworks like Hugging Face and LMDeploy, cannot be used in an offline environment. These compute-intensive services should not be integrated into LightRAG; Docling will be decoupled and deployed as a standalone service.
📦 Build Docker Images
For local development and testing
# Build and run with Docker Compose (BuildKit automatically enabled)
docker compose up --build
# Or explicitly enable BuildKit if needed
DOCKER_BUILDKIT=1 docker compose up --build
Note: BuildKit is automatically enabled by the # syntax=docker/dockerfile:1 directive in the Dockerfile, ensuring optimal caching performance.
For production release
Multi-architecture build and push:
# Use the provided build script
./docker-build-push.sh
The build script will:
- Check Docker registry login status
- Create/use buildx builder automatically
- Build for both AMD64 and ARM64 architectures
- Push to GitHub Container Registry (ghcr.io)
- Verify the multi-architecture manifest
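For reference, the core of the steps above can be sketched with buildx directly. The builder name and image tag below are assumptions for illustration; the script remains the authoritative path:

```shell
# Create (or reuse) a buildx builder that can target multiple platforms
docker buildx create --name lightrag-builder --use 2>/dev/null \
  || docker buildx use lightrag-builder

# Build for AMD64 and ARM64 and push the manifest to ghcr.io
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag ghcr.io/hkuds/lightrag:latest \
  --push .

# Verify the multi-architecture manifest
docker buildx imagetools inspect ghcr.io/hkuds/lightrag:latest
```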
Prerequisites:
Before building multi-architecture images, ensure you have:
- Docker 20.10+ with Buildx support
- Sufficient disk space (20GB+ recommended for offline image)
- Registry access credentials (if pushing images)