vasceannie/lightrag
598eecd06d9e91df8d0e77ef0637d895dbf97564
lightrag/examples/unofficial-sample
yangdx 598eecd06d Refactor: Rename llm_model_max_token_size to summary_max_tokens
This commit renames the parameter 'llm_model_max_token_size' to 'summary_max_tokens' for better clarity, as it specifically controls the token limit for entity relation summaries.
2025-07-28 00:49:08 +08:00
copy_llm_cache_to_another_storage.py | feat: Flatten LLM cache structure for improved recall efficiency | 2025-07-02 16:11:53 +08:00
lightrag_bedrock_demo.py | Remove deprecated demo code | 2025-05-14 01:56:26 +08:00
lightrag_cloudflare_demo.py | Refactor: Rename llm_model_max_token_size to summary_max_tokens | 2025-07-28 00:49:08 +08:00
lightrag_hf_demo.py | Remove deprecated demo code | 2025-05-14 01:56:26 +08:00
lightrag_llamaindex_direct_demo.py | Remove deprecated demo code | 2025-05-14 01:56:26 +08:00
lightrag_llamaindex_litellm_demo.py | feat: Integrate Opik for Enhanced Observability in LlamaIndex LLM Interactions | 2025-05-20 17:47:05 +02:00
lightrag_llamaindex_litellm_opik_demo.py | feat: Integrate Opik for Enhanced Observability in LlamaIndex LLM Interactions | 2025-05-20 17:47:05 +02:00
lightrag_lmdeploy_demo.py | Remove deprecated demo code | 2025-05-14 01:56:26 +08:00
lightrag_nvidia_demo.py | Remove deprecated demo code | 2025-05-14 01:56:26 +08:00
lightrag_openai_neo4j_milvus_redis_demo.py | Refactor: Rename llm_model_max_token_size to summary_max_tokens | 2025-07-28 00:49:08 +08:00