From 90f341d614bf393aae48080ba0f400da70e656eb Mon Sep 17 00:00:00 2001
From: Christian Clauss
Date: Fri, 28 Nov 2025 10:31:52 +0100
Subject: [PATCH] Fix typos discovered by codespell

---
 README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 68d61eee..bd73f895 100644
--- a/README.md
+++ b/README.md
@@ -214,7 +214,7 @@ For a streaming response implementation example, please see `examples/lightrag_o
 
 **Note 2**: Only `lightrag_openai_demo.py` and `lightrag_openai_compatible_demo.py` are officially supported sample codes. Other sample files are community contributions that haven't undergone full testing and optimization.
 
-## Programing with LightRAG Core
+## Programming with LightRAG Core
 
 > ⚠️ **If you would like to integrate LightRAG into your project, we recommend utilizing the REST API provided by the LightRAG Server**. LightRAG Core is typically intended for embedded applications or for researchers who wish to conduct studies and evaluations.
 
@@ -313,7 +313,7 @@ A full list of LightRAG init parameters:
 | **vector_db_storage_cls_kwargs** | `dict` | Additional parameters for vector database, like setting the threshold for nodes and relations retrieval | cosine_better_than_threshold: 0.2(default value changed by env var COSINE_THRESHOLD) |
 | **enable_llm_cache** | `bool` | If `TRUE`, stores LLM results in cache; repeated prompts return cached responses | `TRUE` |
 | **enable_llm_cache_for_entity_extract** | `bool` | If `TRUE`, stores LLM results in cache for entity extraction; Good for beginners to debug your application | `TRUE` |
-| **addon_params** | `dict` | Additional parameters, e.g., `{"language": "Simplified Chinese", "entity_types": ["organization", "person", "location", "event"]}`: sets example limit, entiy/relation extraction output language | language: English` |
+| **addon_params** | `dict` | Additional parameters, e.g., `{"language": "Simplified Chinese", "entity_types": ["organization", "person", "location", "event"]}`: sets example limit, entity/relation extraction output language | language: English` |
 | **embedding_cache_config** | `dict` | Configuration for question-answer caching. Contains three parameters: `enabled`: Boolean value to enable/disable cache lookup functionality. When enabled, the system will check cached responses before generating new answers. `similarity_threshold`: Float value (0-1), similarity threshold. When a new question's similarity with a cached question exceeds this threshold, the cached answer will be returned directly without calling the LLM. `use_llm_check`: Boolean value to enable/disable LLM similarity verification. When enabled, LLM will be used as a secondary check to verify the similarity between questions before returning cached answers. | Default: `{"enabled": False, "similarity_threshold": 0.95, "use_llm_check": False}` |
@@ -364,7 +364,7 @@ class QueryParam:
     max_total_tokens: int = int(os.getenv("MAX_TOTAL_TOKENS", "30000"))
     """Maximum total tokens budget for the entire query context (entities + relations + chunks + system prompt)."""
 
-    # History mesages is only send to LLM for context, not used for retrieval
+    # History messages are only sent to LLM for context, not used for retrieval
     conversation_history: list[dict[str, str]] = field(default_factory=list)
     """Stores past conversation history to maintain context.
     Format: [{"role": "user/assistant", "content": "message"}].
@@ -1568,7 +1568,7 @@ Langfuse provides a drop-in replacement for the OpenAI client that automatically
 pip install lightrag-hku
 pip install lightrag-hku[observability]
 
-# Or install from souce code with debug mode enabled
+# Or install from source code with debug mode enabled
 pip install -e .
 pip install -e ".[observability]"
 ```
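
Not part of the patch: a minimal sketch of the two structures whose descriptions are corrected above, assuming `QueryParam` is importable from the `lightrag` package as the README's own examples show; names and values here are illustrative only.

```python
# Minimal sketch, not part of the patch. Assumes `QueryParam` is exported
# from the `lightrag` package, as the README's own examples show.
from lightrag import QueryParam

# addon_params as described in the corrected table row: sets the
# entity/relation extraction output language and the entity types.
# This dict would be passed to the LightRAG constructor, e.g.
# LightRAG(..., addon_params=addon_params).
addon_params = {
    "language": "Simplified Chinese",
    "entity_types": ["organization", "person", "location", "event"],
}

# History messages are only sent to the LLM for context, not used for
# retrieval, matching the corrected comment in QueryParam.
param = QueryParam(
    conversation_history=[
        {"role": "user", "content": "Summarize the indexed document."},
        {"role": "assistant", "content": "It covers the LightRAG query pipeline."},
    ],
)
```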