Compare commits

385 Commits

Author SHA1 Message Date
Rakshit
af4520df29 Merge branch 'main' into docs/azure-instance-name-clarification 2025-11-05 21:37:57 +05:30
constanttime
fabce6f823 docs: clarify Azure instanceName format for speech services
- Add Azure OpenAI STT/TTS examples to librechat.example.yaml
- Clarify that instanceName should be the <NAME> part only (e.g., 'my-instance')
- Not the full URL (e.g., 'my-instance.cognitiveservices.azure.com')
- Add note that full domain format is still supported for backward compatibility

Examples:
- Correct: instanceName: 'my-instance'
- Also works: instanceName: 'my-instance.cognitiveservices.azure.com'

Related to #10283
2025-11-05 21:35:45 +05:30
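For illustration, a minimal librechat.yaml speech sketch of the two instanceName forms described in the commit above; only the instanceName semantics come from the commit, while the surrounding keys (speech/stt/tts, apiKey, deploymentName, apiVersion) and their values are assumptions:

```yaml
# Hedged sketch: surrounding keys and values are assumptions, not taken from the commit.
speech:
  stt:
    azureOpenAI:
      instanceName: 'my-instance'          # preferred: the <NAME> part only
      apiKey: '${AZURE_OPENAI_API_KEY}'
      deploymentName: 'whisper'
      apiVersion: '2024-02-01'
  tts:
    azureOpenAI:
      # full-domain form remains supported for backward compatibility
      instanceName: 'my-instance.cognitiveservices.azure.com'
      apiKey: '${AZURE_OPENAI_API_KEY}'
      deploymentName: 'tts-1'
      apiVersion: '2024-02-01'
```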
Danny Avila
0f4222a908 🪞 fix: Prevent Revoked Blob URLs in Uploaded Images (FileRow) (#10361) 2025-11-05 10:28:06 -05:00
Rakshit
772b706e20 🎙️ fix: Azure OpenAI Speech-to-Text 400 Bad Request Error (#10355) 2025-11-05 10:27:34 -05:00
Danny Avila
06fcf79d56 🛂 feat: Social Login by Provider ID First then Email (#10358) 2025-11-05 09:20:35 -05:00
Eduardo Cruz Guedes
c9e1127b85 🌅 docs: Add OpenAI Image Gen Env Vars (#10335) 2025-11-04 13:52:47 -05:00
Max Sanna
14e4941367 📎 fix: Document Uploads for Custom Endpoints (#10336)
* Fixed upload to provider for custom endpoints + unit tests

* fix: add support back for agents to be able to use Upload to Provider with supported providers

* ci: add test for agents endpoint still recognizing document supported providers

* chore: address ESLint suggestions

* Improved unit tests

* Linting error on unit tests fixed

---------

Co-authored-by: Dustin Healy <dustinhealy1@gmail.com>
2025-11-04 13:40:24 -05:00
Theo N. Truong
ce7e6edad8 🔄 refactor: MCP Registry System with Distributed Caching (#10191)
* refactor: Restructure MCP registry system with caching

- Split MCPServersRegistry into modular components:
  - MCPServerInspector: handles server inspection and health checks
  - MCPServersInitializer: manages server initialization logic
  - MCPServersRegistry: simplified registry coordination
- Add distributed caching layer:
  - ServerConfigsCacheRedis: Redis-backed configuration cache
  - ServerConfigsCacheInMemory: in-memory fallback cache
  - RegistryStatusCache: distributed leader election state
- Add promise utilities (withTimeout) replacing Promise.race patterns
- Add comprehensive cache integration tests for all cache implementations
- Remove unused MCPManager.getAllToolFunctions method

* fix: Update OAuth flow to include user-specific headers

* chore: Update Jest configuration to ignore additional test files

- Added patterns to ignore files ending with .helper.ts and .helper.d.ts in testPathIgnorePatterns for cleaner test runs.

* fix: oauth headers in callback

* chore: Update Jest testPathIgnorePatterns to exclude helper files

- Modified testPathIgnorePatterns in package.json to ignore files ending with .helper.ts and .helper.d.ts for cleaner test execution.

* ci: update test mocks

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-31 15:00:21 -04:00
github-actions[bot]
961f87cfda 🌍 i18n: Update translation.json with latest translations (#10323)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-31 14:36:32 -04:00
Danny Avila
9b4c4cafb6 🧠 refactor: Improve Reasoning Component Structure and UX (#10320)
* refactor: Reasoning components with independent toggle buttons

- Refactored ThinkingButton to remove unnecessary state and props.
- Updated ContentParts to simplify content rendering and remove hover handling.
- Improved Reasoning component to include independent toggle functionality for each THINK part.
- Adjusted styles for better layout consistency and user experience.

* refactor: isolate hover effects for Reasoning

- Updated ThinkingButton to improve hover effects and layout consistency.
- Refactored Reasoning component to include a new wrapper class for better styling.
- Adjusted icon visibility and transitions for a smoother user experience.

* fix: Prevent rendering of empty messages in Chat component

- Added a check to skip rendering if the message text is only whitespace, improving the user interface by avoiding empty containers.

* chore: Replace div with fragment in Thinking component for cleaner markup

* chore: move Thinking component to Content Parts directory

* refactor: prevent rendering of whitespace-only text in Part component only for edge cases
2025-10-31 13:05:12 -04:00
Marco Beretta
c0f1cfcaba 💡 feat: Improve Reasoning Content UI, copy-to-clipboard, and error handling (#10278)
* feat: Refactor error handling and improve loading states in MessageContent component

* feat: Enhance Thinking and ContentParts components with improved hover functionality and clipboard support

* fix: Adjust padding in Thinking and ContentParts components for consistent layout

* feat: Add response label and improve message editing UI with contextual indicators

* feat: Add isEditing prop to Feedback and Fork components for improved editing state handling

* refactor: Remove isEditing prop from Feedback and Fork components for cleaner state management

* refactor: Migrate state management from Recoil to Jotai for font size and show thinking features

* refactor: Separate ToggleSwitch into RecoilToggle and JotaiToggle components for improved clarity and state management

* refactor: Remove unnecessary comments in ToggleSwitch and MessageContent components for cleaner code

* chore: reorder import statements in Thinking.tsx

* chore: reorder import statement in EditTextPart.tsx

* chore: reorder import statement

* chore: Reorganize imports in ToggleSwitch.tsx

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-30 17:14:38 -04:00
Federico Ruggi
ea45d0b9c6 🏷️ fix: Add user ID to MCP tools cache keys (#10201)
* add user id to mcp tools cache key

* tests

* clean up redundant tests

* remove unused imports
2025-10-30 17:09:56 -04:00
Theo N. Truong
8f4705f683 👑 feat: Distributed Leader Election with Redis for Multi-instance Coordination (#10189)
* 🔧 refactor: Move GLOBAL_PREFIX_SEPARATOR to cacheConfig for consistency

* 👑 feat: Implement distributed leader election using Redis
2025-10-30 17:08:04 -04:00
Danny Avila
1e53ffa7ea v0.8.1-rc1 (#10316)
* v0.8.1-rc1

* chore: Update CONFIG_VERSION to 1.3.1
2025-10-30 16:36:54 -04:00
github-actions[bot]
65281464fc 🌍 i18n: Update translation.json with latest translations (#10315)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-30 16:23:37 -04:00
Peter
658921af88 🔗 fix: Provided Azure Base URL Construction for Responses API (#10289)
* fix: response api works with azure base url configured

* add unit test

---------

Co-authored-by: Peter Rothlaender <peter.rothlaender@ginkgo.com>
2025-10-30 14:57:03 -04:00
Sean McGrath
ce6456c39f 🎨 fix: Update artifacts Tailwind to official CDN (#10301)
Co-authored-by: Sean McGrath <sean.mcgrath@holmesgroup.com>
2025-10-30 14:49:00 -04:00
Danny Avila
d904b281f1 🦙 fix: Ollama Custom Headers (#10314)
* 🦙 fix: Ollama Custom Headers

* chore: Correct import order for resolveHeaders in OllamaClient.js

* fix: Improve error logging for Ollama API model fetch failure

* ci: update Ollama model fetch tests

* ci: Add unit test for passing headers and user object to Ollama fetchModels
2025-10-30 14:48:10 -04:00
github-actions[bot]
5e35b7d09d 🌍 i18n: Update translation.json with latest translations (#10298)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-29 16:41:55 -04:00
Danny Avila
6adb425780 🔄 refactor: Max tokens handling in Agent Initialization (#10299)
* Refactored the logic for determining max output tokens in the agent initialization process.
* Changed variable names for clarity, updating from `maxTokens` to `maxOutputTokens` to better reflect their purpose.
* Adjusted calculations for `maxContextTokens` to use the new `maxOutputTokens` variable.
2025-10-29 16:41:27 -04:00
Danny Avila
e6aeec9f25 🎚️ feat: Reasoning Parameters for Custom Endpoints (#10297) 2025-10-29 13:41:35 -04:00
Danny Avila
861ef98d29 📫 refactor: OpenID Email Claim Fallback (#10296)
* 📫 refactor: Enhance OpenID email Fallback

* Updated email retrieval logic to use preferred_username or upn if email is not available.
* Adjusted logging and user data assignment to reflect the new email handling approach.
* Ensured email domain validation checks the correct email source.

* 🔄 refactor: Update Email Domain Validation Logic

* Modified `isEmailDomainAllowed` function to return true for falsy emails and missing domain restrictions.
* Added new test cases to cover scenarios with and without domain restrictions.
* Ensured proper validation when domain restrictions are present.
2025-10-29 12:57:43 -04:00
poornapragnyah
05c706137e ✂️ fix: Trim Reasoning Tags from Titles and Delete Button Visibility (#10285)
* fix: Sanitize LLM titles by stripping <think> tags and fix modal overflow

* chore: linting

* chore: Simplify title sanitization by removing unnecessary variable assignment and import order

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-29 12:48:58 -04:00
github-actions[bot]
9fbc2afe40 🌍 i18n: Update translation.json with latest translations (#10282)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-29 12:38:43 -04:00
Danny Avila
8adef91cf5 🎚️ fix: Default Max Output Tokens for Claude 4+ Models (#10293) 2025-10-29 12:28:01 -04:00
Danny Avila
70ff6e94f2 🪢 feat: Add Langfuse Tracing Support (#10292)
* 📦 feat: `@librechat/agents` v2.4.87 for LangFuse Support

* 📦 chore: update @librechat/agents to v2.4.88 in package.json and package-lock.json

* 📦 chore: update @librechat/agents to v2.4.89

* feat: Add runName configuration to AgentClient and Memory agent for improved tracing
2025-10-29 12:23:09 -04:00
Danny Avila
0e05ff484f 🔄 refactor: OAI Image Edit Proxy, Speech Settings Handling, Import Query Data Usage (#10281)
* chore: correct startupConfig usage in ImportConversations component

* refactor: properly process configured speechToText and textToSpeech settings in getCustomConfigSpeech

* refactor: proxy configuration by utilizing HttpsProxyAgent for OpenAI Image Edits
2025-10-28 09:36:03 -04:00
github-actions[bot]
250209858a 🌍 i18n: Update translation.json with latest translations (#10274)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-28 09:00:19 -04:00
Daniel Paulus
9e77f835a6 🎛️ feat: Custom Environment Variable Support to RAG API Helm Chart (#10245)
* Possibility to add extra env values to the deployment

* Fix: Custom environment variables should be placed after the predefined environment variables
2025-10-28 08:37:04 -04:00
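For illustration, a hedged values.yaml sketch of appending custom environment variables to the RAG API deployment as the commit above describes; the key name (extraEnvVars here) and the variables are assumptions, not the chart's actual schema:

```yaml
# Hedged sketch: key name and variables are placeholders; predefined env vars come first.
ragApi:
  extraEnvVars:
    - name: HTTPS_PROXY
      value: 'http://proxy.internal:3128'
    - name: RAG_CUSTOM_SETTING
      value: 'example'
```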
Danny Avila
7973cb42ef 🔃 refactor: Clear MCP only on Model Spec Selection without MCP Servers (#10273) 2025-10-27 20:33:51 -04:00
Dustin Healy
0446d0e190 fix: Address Accessibility Issues (#10260)
* chore: add i18n localization comment for AlwaysMakeProd component

* feat: enhance accessibility by adding aria-label and aria-labelledby to Switch component

* feat: add aria-labels for accessibility in Agent and Assistant avatar buttons

* fix: add switch aria-labels for accessibility in various components

* feat: add aria-labels and localization keys for accessibility in DataTable, DataTableColumnHeader, and OGDialogTemplate components

* chore: refactor out nested ternary

* feat: add aria-label to DataTable filter button for My Files modal

* feat: add aria-labels for Buttons and localization strings

* feat: add aria-labels to Checkboxes in Agent Builder

* feat: enhance accessibility by adding aria-label and aria-labelledby to Checkbox component

* feat: add aria-label to FileSearchCheckbox in Agent Builder

* feat: add aria-label to Prompts text input area

* feat: enhance accessibility by adding aria-label and aria-labelledby to TextAreaAutosize component

* feat: remove improper role: "list" prop from List in Conversations.tsx to enhance accessibility and stop aria rules conflicting within react-virtualized component

* feat: enhance accessibility by allowing tab navigation and adding ring highlights for conversation title editing accept/reject buttons

* feat: add aria-label to Copy Link button in the conversation share modal

* feat: add title to QR code svg in conversation share modal to describe the image content

* feat: enhance accessibility by making Agent Avatar upload keyboard navigable and round out highlight border on focus

* feat: enhance accessibility by adding aria attributes around alerting users with screen readers to invalid email address inputs in the Agent Builder

* feat: add aria-labels to buttons in Advanced panel of Agent Builder

* feat: enhance accessibility by making FileUpload and Clear All buttons in PresetItems keyboard navigable

* feat: enhance accessibility by indexing view and delete button aria-labels in shared links management modal to their specific chat titles

* feat: add border highlighting on focus for AnimatedSearchInput

* feat: add category description to aria-labels for prompts in ListCard

* feat: add proper scoping to rows and columns in table headers

* feat: add localized aria-labelling to EditTextPart's TextAreaAutosize component and base dynamic parameters panel components and their supporting translation keys

* feat: add localized aria-labels and aria-labelledBy to Checkbox components without them

* feat: add localized aria-labelledBy for endpoint settings Sliders

* feat: add localized aria-labels for TextareaAutosize components

* chore: remove unused i18n string

* feat: add localized aria-label for BookmarkForm Checkbox

* fix: add stopPropagation onKeyDown for Preview and Edit menu items in prompts that was causing the prompts to inadvertently be sent when triggered with keyboard navigation when Auto-send Prompts was toggled on

* fix: switch TableCell to TableHead for title cells according to harvard issue #789

* fix: add more descriptive localization key for file filter button in DataTable

* chore: remove self-explanatory code comment from RenameForm

* fix: remove stray bg-yellow highlight that was left in during debugging

* fix: add aria-label to model configurator panel back button

* fix: undo incorrect hoist of tool name split for aria-label and span in MCPInput

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-27 19:46:43 -04:00
Danny Avila
33d6b337bc 📛 feat: Chat Badges via Model Specs (#10272)
* refactor: remove `useChatContext` from `useSelectMention`, explicitly pass `conversation` object

* feat: ephemeral agents via model specs

* refactor: Sync Jotai state with ephemeral agent state, also when Ephemeral Agent has no MCP servers selected

* refactor: move `useUpdateEphemeralAgent` to store and clean up imports

* refactor: reorder imports and invalidate queries for mcpConnectionStatus in event handler

* refactor: replace useApplyModelSpecEffects with useApplyModelSpecAgents and update event handlers to use new agent template logic

* ci: update useMCPSelect test to verify mcpValues sync with empty ephemeralAgent.mcp
2025-10-27 19:46:30 -04:00
github-actions[bot]
64df54528d 🌍 i18n: Update translation.json with latest translations (#10259)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-27 19:45:37 -04:00
Max Sanna
d46dde4e01 👫 fix: Update Entra ID group retrieval to use getMemberGroups and add pagination support (#10199) 2025-10-26 21:58:29 -04:00
Federico Ruggi
13b784a3e6 🧼 fix: Sanitize MCP Server Selection Against Config (#10243)
* filter out unavailable servers

* bump render time

* Fix import path for useGetStartupConfig

* refactor: Change configuredServers to use Set for improved filtering of available MCPs

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-26 21:48:23 -04:00
Danny Avila
90e610ceda 🎪 refactor: Allow Last Model Spec Selection without Prioritizing (#10258)
* refactor: Default Model Spec Retrieval Logic, allowing last selected spec on new chat if last selection was a spec

* chore: Replace hardcoded 'new' conversation ID with Constants.NEW_CONVO for consistency

* chore: remove redundant condition for model spec preset selection in useNewConvo hook
2025-10-26 21:37:55 -04:00
github-actions[bot]
cbbbde3681 🌍 i18n: Update translation.json with latest translations (#10229)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-26 21:32:38 -04:00
Sebastien Bruel
05c9195197 🛠️ fix: Agent Tools Modal on First-Time Agent Creation (#10234) 2025-10-26 21:30:05 -04:00
Danny Avila
9495520f6f 📦 chore: update vite to v6.4.1 and @playwright/test to v1.56.1 (#10227)
* 📦 chore: update vite to v6.4.1

* 📦 chore: update @playwright/test to v1.56.1
2025-10-22 22:22:57 +02:00
Sebastien Bruel
87d7ee4b0e 🌐 feat: Configurable Domain and Port for Vite Dev Server (#10180) 2025-10-22 22:04:49 +02:00
Danny Avila
d8d5d59d92 ♻️ refactor: Message Cache Clearing Logic into Reusable Helper (#10226) 2025-10-22 22:02:29 +02:00
Danny Avila
e3d33fed8d 📦 chore: update @librechat/agents to v2.4.86 (#10216) 2025-10-22 16:51:58 +02:00
github-actions[bot]
cbf52eabe3 🌍 i18n: Update translation.json with latest translations (#10175)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-22 09:53:21 +02:00
Danny Avila
36f0365fd4 🧮 feat: Enhance Model Pricing Coverage and Pattern Matching (#10173)
* updated gpt-5-pro

It is available from OpenAI and on OpenRouter:
https://platform.openai.com/docs/models/gpt-5-pro

* feat: Add gpt-5-pro pricing
- Implemented handling for the new gpt-5-pro model in the getValueKey function.
- Updated tests to ensure correct behavior for gpt-5-pro across various scenarios.
- Adjusted token limits and multipliers for gpt-5-pro in the tokens utility files.
- Enhanced model matching functionality to include gpt-5-pro variations.

* refactor: optimize model pricing and validation logic

- Added new model pricing entries for llama2, llama3, and qwen variants in tx.js.
- Updated tokenValues to include additional models and their pricing structures.
- Implemented validation tests in tx.spec.js to ensure all models resolve correctly to pricing.
- Refactored getValueKey function to improve model matching and resolution efficiency.
- Removed outdated model entries from tokens.ts to streamline pricing management.

* fix: add missing pricing

* chore: update model pricing for qwen and gemma variants

* chore: update model pricing and add validation for context windows

- Removed outdated model entries from tx.js and updated tokenValues with new models.
- Added a test in tx.spec.js to ensure all models with pricing have corresponding context windows defined in tokens.ts.
- Introduced 'command-text' model pricing in tokens.ts to maintain consistency across model definitions.

* chore: update model names and pricing for AI21 and Amazon models

- Refactored model names in tx.js for AI21 and Amazon models to remove versioning and improve consistency.
- Updated pricing values in tokens.ts to reflect the new model names.
- Added comprehensive tests in tx.spec.js to validate pricing for both short and full model names across AI21 and Amazon models.

* feat: add pricing and validation for Claude Haiku 4.5 model

* chore: increase default max context tokens to 18000 for agents

* feat: add Qwen3 model pricing and validation tests

* chore: reorganize and update Qwen model pricing in tx.js and tokens.ts

---------

Co-authored-by: khfung <68192841+khfung@users.noreply.github.com>
2025-10-19 15:23:27 +02:00
Federico Ruggi
589f119310 🩹 fix: Wrap Attempt to Reconnect OAuth MCP Servers (#10172) 2025-10-18 05:54:05 -04:00
Marco Beretta
d41b07c0af ♻️ refactor: Replace fontSize Recoil atom with Jotai (#10171)
* fix: reapply chat font size on load

* refactor: streamline font size handling in localStorage

* fix: update matchMedia mock to accurately reflect desktop and touchscreen capabilities

* refactor: implement Jotai for font size management and initialize on app load

- Replaced Recoil with Jotai for font size state management across components.
- Added a new `fontSize` atom to handle font size changes and persist them in localStorage.
- Implemented `initializeFontSize` function to apply saved font size on app load.
- Updated relevant components to utilize the new font size atom.

---------

Co-authored-by: ddooochii <ddooochii@gmail.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-18 05:50:34 -04:00
Sean McGrath
114deecc4e 🛠️ chore: Add @radix-ui/react-tooltip to Artifact Dependencies (#10112) 2025-10-16 16:26:14 -04:00
Danny Avila
f59daaeecc 📄 feat: Context Field for Anthropic Documents (PDF) (#10148)
* fix: Remove ephemeral cache control from document encoding function

* refactor: Improve document encoding types and add file context for anthropic messages api

- Added AnthropicDocumentBlock interface to define the structure for documents from the Anthropic provider.
- Updated encodeAndFormatDocuments function to utilize the new type and include optional context for filenames.
- Refactored DocumentResult to use a union type for various document formats, improving type safety and clarity.
2025-10-16 16:24:14 -04:00
Danny Avila
bc77bbd1ba 🪂 refactor: OCR Fallback for "Upload as Text" File Process (#10126) 2025-10-15 09:20:54 -04:00
Danny Avila
c602088178 📱 fix: Improve Mobile Chat Focus Detection and Navigation (#10125) 2025-10-15 08:12:32 -04:00
Danny Avila
3d1cedb85b 📡 refactor: Flush Redis Cache Script (#10087)
* 🔧 feat: Enhance Redis Configuration and Connection Handling in Cache Flush Utility

- Added support for Redis username, password, and CA certificate.
- Improved Redis client creation to handle both cluster and single instance configurations.
- Implemented a helper function to read the Redis CA certificate with error handling.
- Updated connection logic to include timeout and error handling for Redis connections.

* refactor: flush cache if redis URI is defined
2025-10-12 04:15:18 -04:00
Marlon
bf2567bc8f 🏷️ chore: update OpenAI models list in .env.example (#10085)
Remove deprecated OpenAI models and add latest GPT-5, o3/o4, and GPT-4.1 series models based on current API offerings as of October 2025.

Removed deprecated models:
- gpt-4.5-preview (deprecated July 2025)
- o1-preview, o1-mini (deprecated July/October 2025)
- gpt-4-vision-preview (shut down December 2024)
- Dated GPT-3.5 and GPT-4 variants (consolidated into base versions)

Added new flagship models:
- GPT-5 series: gpt-5, gpt-5-mini, gpt-5-nano
- o3/o4 reasoning models: o3, o4-mini, o3-pro, o3-mini
- GPT-4.1 series: gpt-4.1, gpt-4.1-mini, gpt-4.1-nano


Reorganized list with newest models first for better discoverability.

References:
- https://platform.openai.com/docs/models
- https://platform.openai.com/docs/deprecations
2025-10-12 04:13:17 -04:00
Federico Ruggi
5ce67b5b71 📮 feat: Custom OAuth Headers Support for MCP Server Config (#10014)
* add oauth_headers field to mcp options

* wrap fetch to pass oauth headers

* fix order

* consolidate headers passing

* fix tests
2025-10-11 11:17:12 -04:00
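For illustration, a hedged librechat.yaml sketch of the oauth_headers option added above; only the oauth_headers field name comes from the commit, while the server name, URL, type, and header values are placeholders:

```yaml
# Hedged sketch: only `oauth_headers` is from the commit; everything else is a placeholder.
mcpServers:
  example-server:
    type: streamable-http
    url: 'https://mcp.example.com/mcp'
    oauth_headers:
      X-Tenant-Id: 'acme'          # extra headers passed on OAuth requests for this server
      X-Custom-Trace: 'librechat'
```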
WhammyLeaf
cbd217efae 🏷️ feat: Add Custom Deployment Labels and Annotations for Helm (#10076) 2025-10-11 11:15:35 -04:00
François Leblanc
d990fe1d5f 📖 feat: Word Wrapping for Text and Markdown Code Blocks (#10055)
Enables word wrapping for text, markdown, and plaintext code blocks
to prevent horizontal scrolling on long lines. Improves readability
for prose content in fenced code blocks.
2025-10-11 10:39:33 -04:00
Sebastien Bruel
7c9a868d34 📝 feat: Add Markdown Rendering Support for Artifacts (#10049)
* Add Markdown rendering support for artifacts

* Add tests

* Remove custom code for mermaid

* Remove unnecessary dark mode hook

* refactor: optimize mermaid dependencies

- Added support for additional MIME types in artifact templates.
- Updated mermaidDependencies to include new packages: class-variance-authority, clsx, tailwind-merge, and @radix-ui/react-slot.
- Refactored zoom and refresh icons in MermaidDiagram component for improved clarity and maintainability.

* fix: add Markdown support for artifacts rendering

* feat: support 'text/md' as an additional MIME type for Markdown artifacts

* refactor: simplify markdownDependencies structure in artifacts utility

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-11 10:37:35 -04:00
Peter Nancarrow
e9a85d5c65 🗂️ feat: Add Optional Group Field to ModelSpecs Configuration (#9996)
* feat: Add group field to modelSpecs for flexible grouping

* resolve lint issues

* fix test

* docs: enhance modelSpecs group field documentation for clarity

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-11 07:55:06 -04:00
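For illustration, a hedged librechat.yaml sketch of the optional group field on modelSpecs introduced above; the spec names, labels, and presets are placeholders:

```yaml
# Hedged sketch: only the optional `group` field is from the commit; entries are placeholders.
modelSpecs:
  list:
    - name: 'research-gpt'
      label: 'Research'
      group: 'OpenAI'              # used for flexible grouping of specs
      preset:
        endpoint: 'openAI'
        model: 'gpt-4.1'
    - name: 'writing-claude'
      label: 'Writing'
      group: 'Anthropic'
      preset:
        endpoint: 'anthropic'
        model: 'claude-sonnet-4-5'
```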
Karthikeyan N
f61afc1124 💸 chore: Update Gemini 2.5 Flash Lite Input Pricing (#10062)
* Update prompt value for gemini-2.5-flash-lite

New input price (text, image, video): $0.10

https://ai.google.dev/gemini-api/docs/pricing

* Fix formatting of gemini-2.5-flash-lite values

changed 0.10 to 0.1 to follow standards
2025-10-11 07:53:28 -04:00
Marco Beretta
5566cc499e 🔗 fix: Add branch-specific shared links (targetMessageId) (#10016)
* feat: Enhance shared link functionality with target message support

* refactor: Remove comment on compound index in share schema

* chore: Reorganize imports in ShareButton component for clarity

* refactor: Integrate Recoil for latest message tracking in ShareButton component

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-10 08:42:05 -04:00
Dustin Healy
ded3f2e998 🕸️ fix: Upload to Provider Filetype Filtering for DragDropModal (#10064) 2025-10-10 07:48:31 -04:00
Dustin Healy
7792fcee17 👆🏼 fix: Agent Support for Upload to Provider in DragDropModal (#10063) 2025-10-10 07:43:56 -04:00
github-actions[bot]
f931731ef8 🌍 i18n: Update translation.json with latest translations (#10070)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-10 07:43:11 -04:00
Danny Avila
07d0abc9fd 🖼️ fix: Extract File Context & Persist Attachments (#10069)
- problem: `addImageUrls` had a side effect that was being leveraged before to populate both the `ocr` message field, now `fileContext`, and `client.options.attachments`, which would record the user's uploaded message attachments to the user message when saved to the database and returned at the end of the request lifecycle
- solution: created dedicated handling for file context, and made sure to populate `allFiles` with non-provider attachments
2025-10-10 05:35:37 -04:00
Danny Avila
fbe341a171 refactor: Latest Message Tracking with Robust Text Key Generation (#10059)
* chore: enhance logging for latest message actions in message components

* fix: Extract previous convoId from latest text in message helpers and process hooks

- Updated `useMessageHelpers` and `useMessageProcess` to extract `convoId` from the previous text key for improved message handling.
- Refactored `getLengthAndLastTenChars` to `getLengthAndLastNChars` for better flexibility in character length retrieval.
- Introduced `getLatestContentForKey` function to streamline content extraction from messages.

* chore: Enhance logging for clearing latest messages in conversation hooks

* refactor: Update message key formatting for improved URL parameter handling

- Modified `getLatestContentForKey` to change the format from `${text}-${i}` to `${text}&i=${i}` for better URL parameter structure.
- Adjusted `getTextKey` to increase character length retrieval from 12 to 16 in `getLengthAndLastNChars` for enhanced text processing.

* refactor: Simplify convoId extraction and enhance message formatting

- Updated `useMessageHelpers` and `useMessageProcess` to extract `convoId` using a new format for improved clarity.
- Refactored `getLatestContentForKey` to streamline content formatting and ensure consistent use of `Constants.COMMON_DIVIDER` for better message structure.
- Removed redundant length and last character extraction logic from `getLengthAndLastNChars` for cleaner code.

* chore: linting

* chore: Simplify pre-commit hook by removing unnecessary lines
2025-10-10 04:22:16 -04:00
Danny Avila
20282f32c8 📦 chore: Bump nodemailer to v7.0.9 (#10045)
* chore: add build script for client package to streamline builds

* chore: update nodemailer dependency to version 7.0.9
2025-10-09 03:45:10 -04:00
José Pedro Silva
6fa3db2969 👑 feat: Add OIDC Claim-Based Admin Role Assignment (#9170)
* feat: Add support for users to be admins when logging in using OpenID

* fix: Linting issues

* fix: whitespace

* chore: add unit tests for OIDC_ADMIN_ROLE

* refactor: Replace custom property retrieval function with lodash's get for improved readability and maintainability

* feat: Enhance OpenID role extraction and error handling in setupOpenId function

- Improved role validation to check for both array and string types.
- Added detailed error messages for missing or invalid role paths in tokens.
- Expanded unit tests to cover various scenarios for nested role extraction and error handling.

* fix: Improve error handling for role extraction in OpenID strategy

- Enhanced validation to check for invalid role types (array or string).
- Updated error messages for clarity when roles are missing or of incorrect type.
- Added unit tests to cover scenarios where roles return invalid types (object, number).

* feat: Implement user role demotion in OpenID strategy when admin role is absent from token

- Added logic to demote users from 'ADMIN' to 'USER' if the admin role is not present in the token.
- Enhanced logging to capture role changes for better traceability.
- Introduced unit tests to verify the demotion behavior and ensure correct handling when admin role environment variables are not configured.

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-09 03:35:22 -04:00
Dustin Healy
ff027e8243 👨‍🔧 fix: Direct Provider Attachment Support for Agents (#10035)
* fix: show direct upload option on applicable agents

* fix: allow agent file upload handler to process direct upload files (no tool_resource)
2025-10-09 03:31:04 -04:00
MarcAmick
e9b678dd6a ⚖️ fix: Add Configurable File Size Cap for Conversation Imports (#10012)
* Check file size of conversation being imported against a configured max size to prevent bringing down the application by uploading a large file

chore: remove non-English localization as it needs to be added via locize

* feat: Implement file size validation for conversation imports to prevent oversized uploads

---------

Co-authored-by: Marc Amick <MarcAmick@jhu.edu>
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-07 14:47:21 -04:00
github-actions[bot]
bb7a0274fa 🌍 i18n: Update translation.json with latest translations (#9995)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-07 14:14:18 -04:00
Marco Beretta
a5189052ec fix: Accessibility, UI consistency, dialog & avatar refactors (#9975)
* 🔧 refactor: Improve accessibility and styling in ChatGroupItem and FilterPrompts components

* 🔧 fix: Add button type and keyboard accessibility to dropdown menu trigger in ChatGroupItem

* 🔧 fix(757): Enhance accessibility by updating aria-labels and adding localization for prompt groups

* 🔧 fix(618): Update version to 0.3.1 and enhance accessibility in InfoHoverCard component

* 🔧 fix(618): Update aria-label in InfoHoverCard to use dynamic text prop for improved accessibility

* 🔧 fix: Enhance accessibility by updating aria-labels and roles in Conversations components

* 🔧 fix(620): Enhance accessibility by adding tabIndex to Tabs.Content components in ArtifactTabs, Settings, and Speech components

* refactor: remove RevokeKeysButton component and update related components for accessibility

- Deleted RevokeKeysButton component.
- Updated SharedLinks and General components to use Label for accessibility.
- Enhanced Personalization component with aria-labelledby and aria-describedby attributes.
- Refactored ConversationModeSwitch to use ToggleSwitch for better state management.
- Improved AutoSendTextSelector with local state management and accessibility attributes.
- Replaced Switch components with ToggleSwitch in various Speech and TTS components for consistency.
- Added aria-labelledby attributes to Dropdown components for better accessibility.
- Updated translation.json to include new localization keys and improved existing ones.
- Enhanced Slider component to support aria attributes for better accessibility.

* 🔧 fix: Enhance user feedback for API key operations with success and error messages

* 🔧 fix: Update aria-labels in Avatar component for improved localization and accessibility

* 🔧 fix: Refactor handleFile and handleDrop functions for improved readability and maintainability
2025-10-07 14:12:49 -04:00
Danny Avila
bcd97aad2f 📎 feat: Direct Provider Attachment Support for Multimodal Content (#9994)
* 📎 feat: Direct Provider Attachment Support for Multimodal Content

* 📑 feat: Anthropic Direct Provider Upload (#9072)

* feat: implement Anthropic native PDF support with document preservation

- Add comprehensive debug logging throughout PDF processing pipeline
- Refactor attachment processing to separate image and document handling
- Create distinct addImageURLs(), addDocuments(), and processAttachments() methods
- Fix critical bugs in stream handling and parameter passing
- Add streamToBuffer utility for proper stream-to-buffer conversion
- Remove api/agents submodule from repository

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* chore: remove out of scope formatting changes

* fix: stop duplication of file in chat on end of response stream

* chore: bring back file search and ocr options

* chore: localize upload to provider string in file menu

* refactor: change createMenuItems args to fit new pattern introduced by anthropic-native-pdf-support

* feat: add cache point for pdfs processed by anthropic endpoint since they are unlikely to change and should benefit from caching

* feat: combine Upload Image into Upload to Provider since they both perform direct upload and change provider upload icon to reflect multimodal upload

* feat: add citations support according to docs

* refactor: remove redundant 'document' check since documents are handled properly by formatMessage in the agents repo now

* refactor: change upload logic so anthropic endpoint isn't exempted from normal upload path using Agents for consistency with the rest of the upload logic

* fix: include width and height in return from uploadLocalFile so images are correctly identified when going through an AgentUpload in addImageURLs

* chore: remove client specific handling since the direct provider stuff is handled by the agent client

* feat: handle documents in AgentClient so no need for change to agents repo

* chore: removed unused changes

* chore: remove auto generated comments from OG commit

* feat: add logic for agents to use direct to provider uploads if supported (currently just anthropic)

* fix: reintroduce role check to fix render error because of undefined value for Content Part

* fix: actually fix render bug by using proper isCreatedByUser check and making sure our mutation of formattedMessage.content is consistent

---------

Co-authored-by: Andres Restrepo <andres@thelinuxkid.com>
Co-authored-by: Claude <noreply@anthropic.com>

📁 feat: Send Attachments Directly to Provider (OpenAI) (#9098)

* refactor: change references from direct upload to direct attach to better reflect functionality

since we are just using base64 encoding strategy now rather than Files/File API for sending our attachments directly to the provider, the upload nomenclature no longer makes sense. direct_attach better describes the different methods of sending attachments to providers anyways even if we later introduce direct upload support

* feat: add upload to provider option for openai (and agent) ui

* chore: move anthropic pdf validator over to packages/api

* feat: simple pdf validation according to openai docs

* feat: add provider agnostic validatePdf logic to start handling multiple endpoints

* feat: add handling for openai specific documentPart formatting

* refactor: move require statement to proper place at top of file

* chore: add in openAI endpoint for the rest of the document handling logic

* feat: add direct attach support for azureOpenAI endpoint and agents

* feat: add pdf validation for azureOpenAI endpoint

* refactor: unify all the endpoint checks with isDocumentSupportedEndpoint

* refactor: consolidate Upload to Provider vs Upload image logic for clarity

* refactor: remove anthropic from anthropic_multimodal fileType since we support multiple providers now

🗂️ feat: Send Attachments Directly to Provider (Google) (#9100)

* feat: add validation for google PDFs and add google endpoint as a document supporting endpoint

* feat: add proper pdf formatting for google endpoints (requires PR #14 in agents)

* feat: add multimodal support for google endpoint attachments

* feat: add audio file svg

* fix: refactor attachments logic so multi-attachment messages work properly

* feat: add video file svg

* fix: allows for followup questions of uploaded multimodal attachments

* fix: remove incorrect final message filtering that was breaking Attachment component rendering

fix: manually rename 'documents' to 'Documents' in git since it wasn't picked up due to case insensitivity in dir name

fix: add logic so filepicker for a google agent has proper filetype filtering

🛫 refactor: Move Encoding Logic to packages/api (#9182)

* refactor: move audio encode over to TS

* refactor: audio encoding now functional in LC again

* refactor: move video encode over to TS

* refactor: move document encode over to TS

* refactor: video encoding now functional in LC again

* refactor: document encoding now functional in LC again

* fix: extend file type options in AttachFileMenu to include 'google_multimodal' and update dependency array to include agent?.provider

* feat: only accept pdfs if responses api is enabled for openai convos

chore: address ESLint comments

chore: add missing audio mimetype

* fix: type safety for message content parts and improve null handling

* chore: reorder AttachFileMenuProps for consistency and clarity

* chore: import order in AttachFileMenu

* fix: improve null handling for text parts in parseTextParts function

* fix: remove no longer used unsupported capability error message for file uploads

* fix: OpenAI Direct File Attachment Format

* fix: update encodeAndFormatDocuments to support OpenAI responses API and enhance document result types

* refactor: broaden providers supported for documents

* feat: enhance DragDrop context and modal to support document uploads based on provider capabilities

* fix: reorder import statements for consistency in video encoding module

---------

Co-authored-by: Dustin Healy <54083382+dustinhealy@users.noreply.github.com>
2025-10-06 17:30:16 -04:00
Danny Avila
9c77f53454 🔀 refactor: Only Cleanup Meili Sync if actually Synced 2025-10-05 22:41:40 -04:00
Danny Avila
31a283a4fe 🔍 feat: Add Serper as Scraper Provider and Firecrawl Version Support (#9984)
* 🔧 chore: Update @librechat/agents to v2.4.84 in package.json and package-lock.json

* feat: Serper as new scraperProvider for Web Search and add firecrawlVersion support

* fix: TWebSearchKeys and ensure unique API keys extraction

* chore: Add build:packages script to streamline package builds
2025-10-05 20:34:05 -04:00
Danny Avila
857c054a9a 🗑️ feat: Cleanup for Orphaned MeiliSearch Documents (#9980)
- Added a new function `deleteDocumentsWithoutUserField` to remove documents lacking the user field from the specified MeiliSearch index.
- Integrated this function into the `ensureFilterableAttributes` method to clean up orphaned documents when existing messages or conversations are found without a user field.

🔧 feat: Enhance Index Synchronization Logic

- Updated `ensureFilterableAttributes` to return an object indicating whether settings were updated and if orphaned documents were found.
- Integrated orphaned document cleanup directly into the index synchronization process without forcing a full re-sync unless settings were updated.
- Improved logging for clarity on index configuration updates and orphaned document handling.

🔧 feat: Improve Flow State Management in Index Synchronization

- Refactored flow state management logic to ensure cleanup occurs after synchronization, regardless of success or error.
- Enhanced logging for flow state cleanup to provide better visibility into the synchronization process.
- Streamlined the structure of the index synchronization function for improved readability.
2025-10-05 15:54:47 -04:00
Danny Avila
c9103a1708 🤖 feat: Add Z.AI GLM Context Window & Pricing (#9979)
* fix: update @librechat/agents to v2.4.83 to handle reasoning edge case encountered with GLM models

* feat: GLM Context Window & Pricing Support

* feat: Add support for glm4 model in token values and tests
2025-10-05 09:08:29 -04:00
Danny Avila
7288449011 🫴 refactor: Add Broader Support for GPT-OSS Naming (#9978) 2025-10-05 07:02:09 -04:00
alfo-dev
7897801fbc 🧱 fix: DALL-E Proxy Bypass (#9971) 2025-10-05 06:56:21 -04:00
Danny Avila
838fb53208 🔃 refactor: Decouple Effects from AppService, move to data-schemas (#9974)
* chore: linting for `loadCustomConfig`

* refactor: decouple CDN init and variable/health checks from AppService

* refactor: move AppService to packages/data-schemas

* chore: update AppConfig import path to use data-schemas

* chore: update JsonSchemaType import path to use data-schemas

* refactor: update UserController to import webSearchKeys and redefine FunctionTool typedef

* chore: remove AppService.js

* refactor: update AppConfig interface to use Partial<TCustomConfig> and make paths and fileStrategies optional

* refactor: update checkConfig function to accept Partial<TCustomConfig>

* chore: fix types

* refactor: move handleRateLimits to startup checks as is an effect

* test: remove outdated rate limit tests from AppService.spec and add new handleRateLimits tests in checks.spec
2025-10-05 06:37:57 -04:00
Danny Avila
9ff608e6af 📦 chore: fix packages/api peer dependencies (#9973) 2025-10-04 16:43:22 -04:00
Danny Avila
1b8a0bfaee ⚙️ chore: Resolve Build Warning, Package Cleanup, Robust Temp Chat Time (#9962)
* ⚙️ chore: Resolve Build Warning and `keyvMongo` types

* 🔄 chore: Update mongodb version to ^6.14.2 in package.json and package-lock.json

* chore: remove @langchain/openai dep

* 🔄 refactor: Change log level from warn to debug for missing endpoint config

* 🔄 refactor: Improve temp chat expiration date calculation in tests and implementation
2025-10-04 01:53:37 -04:00
Federico Ruggi
c0ed738aed 🚉 feat: MCP Registry Individual Server Init (2) (#9940)
* initialize servers sequentially

* adjust for exported properties that are not nullable anymore

* use underscore separator

* mock with set

* customize init timeout via env var

* refactor for readability, use loaded conns for tool functions

* address PR comments

* clean up fire-and-forget

* fix tests
2025-10-03 16:01:34 -04:00
Theo N. Truong
0e5bb6f98c 🔄 refactor: Migrate Cache Logic to TypeScript (#9771)
* Refactor: Moved Redis cache infra logic into `packages/api`
- Moved cacheFactory and redisClients from `api/cache` into `packages/api/src/cache` so that features in `packages/api` can use cache without importing backward from the backend.
- Converted all moved files into TS with proper typing.
- Created integration tests to run against actual Redis servers for redisClients and cacheFactory.
- Added a GitHub workflow to run integration tests for the cache feature.
- Bug fix: keyvRedisClient now implements the PING feature properly.

* chore: consolidate imports in getLogStores.js

* chore: reorder imports

* chore: re-add fs-extra as dev dep.

* chore: reorder imports in cacheConfig.ts, cacheFactory.ts, and keyvMongo.ts

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-02 09:33:58 -04:00
github-actions[bot]
341435fb25 🌍 i18n: Update translation.json with latest translations (#9932)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-01 23:31:23 -04:00
Danny Avila
dbe4dd96b4 🧹 chore: Cleanup Logger and Utility Imports (#9935)
* 🧹 chore: Update logger imports to use @librechat/data-schemas across multiple files and remove unused sleep function from queue.js (#9930)

* chore: Replace local isEnabled utility with @librechat/api import across multiple files, update test files

* chore: Replace local logger import with @librechat/data-schemas logger in countTokens.js and fork.js

* chore: Update logs volume path in docker-compose.yml to correct directory

* chore: import order of isEnabled in static.js
2025-10-01 23:30:47 -04:00
Danny Avila
b7d13cec6f v0.8.0 (#9929)
* v0.8.0

* 🔧 chore: Update config version to 1.3.0

* 🔧 chore: Bump @librechat/api version to 1.4.1

* 🔧 chore: Update @librechat/client version to 0.3.1

* 🔧 chore: Bump librechat-data-provider version to 0.8.020

* 🔧 chore: Bump @librechat/data-schemas version to 0.0.23
2025-10-01 18:00:56 -04:00
Danny Avila
37321ea10d 👨‍🚀 chore: Add Newer OpenAI Models to Default List (#9926) 2025-10-01 10:06:35 -04:00
WhammyLeaf
17ab91f1fd 🔓 feat: Expose Env Field in Helm Deployment Template (#9890)
* Add support for extra secrets and config maps in helm deployment template

* expose single env field in values field instead of extra secret and config map fields

* a small fix
2025-10-01 09:32:19 -04:00
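For illustration, a hedged values.yaml sketch of the single env field exposed by the commit above; the variable names and the secret reference are placeholders:

```yaml
# Hedged sketch: the exposed `env` list is from the commit; entries are placeholders.
env:
  - name: APP_TITLE
    value: 'LibreChat (internal)'
  - name: OPENAI_API_KEY
    valueFrom:
      secretKeyRef:
        name: librechat-credentials
        key: openai-api-key
```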
Danny Avila
4777bd22c5 Revert "🚉 feat: MCP Registry Individual Server Init (#9887)"
This reverts commit b8720a9b7a.
2025-09-30 09:39:19 -04:00
normunds-wipo
dfe236acb5 📂 fix: Allow text/xml mimetype (#9908) 2025-09-30 08:49:41 -04:00
Federico Ruggi
c5d1861acf 🔧 fix: Ensure getServerToolFunctions Handles Errors (#9895)
* ensure getServerToolFunctions handles errors

* remove reduntant test
2025-09-30 08:48:39 -04:00
Federico Ruggi
b8720a9b7a 🚉 feat: MCP Registry Individual Server Init (#9887)
* initialize servers sequentially

* adjust for exported properties that are not nullable anymore

* use underscore separator

* mock with set

* customize init timeout via env var
2025-09-29 21:24:41 -04:00
linnil1
0b2fde73e3 ❇️ feat: Add Gemini 2.5 Default Models & Pricing (#9892)
* feat: Add Gemini 2.5 models support

* feat: Remove deprecated Gemini models
2025-09-29 21:23:28 -04:00
Danny Avila
c19b8755a7 🤖 feat: Claude Sonnet 4.5, DeepSeek V3.2 Context & Pricing (#9894)
* feat: Add new Claude models to sharedAnthropicModels list

* chore: use correct claude aliases for default list

* chore: update deepseek model rates for accuracy

* chore: update @librechat/agents dependency to version 2.4.82
2025-09-29 21:09:26 -04:00
github-actions[bot]
f6e19d8034 🌍 i18n: Update translation.json with latest translations (#9869)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-29 09:12:50 -04:00
Danny Avila
c0eb19730a 🪙 refactor: Auth Token Retrieval with Sorting and Query Options (#9884) 2025-09-29 09:06:40 -04:00
Danny Avila
a1471c2f37 📧 fix: Case-Insensitive Domain Matching (#9868)
* chore: move domain related functions to `packages/api`

* fix: isEmailDomainAllowed for case-insensitive domain matching

- Added tests to validate case-insensitive matching for email domains in various scenarios.
- Updated isEmailDomainAllowed function to convert email domains to lowercase for consistent comparison.
- Improved handling of null/undefined entries in allowedDomains.

* ci: Mock isEmailDomainAllowed in samlStrategy tests

- Added a mock implementation for isEmailDomainAllowed to return true in samlStrategy tests, ensuring consistent behavior during test execution.

* ci: Update import of isEmailDomainAllowed in ldapStrategy tests

- Changed the import of isEmailDomainAllowed from the domains service to the api package for consistency and to reflect recent refactoring.
2025-09-27 21:20:19 -04:00
Danny Avila
712f0b3ca2 📌 fix: Exclude Pinned Keys from Cleanup and Fix MCP Pin State (#9867)
* fix: Prevent MCPSelect from rendering when not pinned and no values are available

* fix: Exclude 'pinned' keys from timestamped storage cleanup logic

* fix: Safeguard MCPSelect rendering by adding optional chaining for mcpValues
2025-09-27 17:21:48 -04:00
MyGitHub
062d813b21 ☸️ feat: Helm hostAliases Support For Custom DNS Mappings (#9857)
Add ability to configure hostAliases in Helm chart to redirect traffic to proxy servers or custom endpoints via /etc/hosts entries.

Co-authored-by: Feng Lu <feng.lu@kindredgroup.com>
2025-09-27 10:49:36 -04:00
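For illustration, a hedged values.yaml sketch of the hostAliases support described above; the field mirrors the standard Kubernetes pod spec, and the IPs and hostnames are placeholders:

```yaml
# Hedged sketch: IPs and hostnames are placeholders for a proxy-redirect setup.
hostAliases:
  - ip: '10.0.0.10'
    hostnames:
      - 'api.openai.com'
      - 'bedrock-runtime.us-east-1.amazonaws.com'
  - ip: '10.0.0.11'
    hostnames:
      - 'internal-proxy.example.com'
```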
Danny Avila
4b5b46604c 🔍 refactor: OCR Fully Optional with Defaults for "Upload as Text" (#9856)
* refactor: move `loadOCRConfig` from `packages/data-provider` to `packages/api` and return `undefined` if not explicitly configured

* fix: loadOCRConfig import from @librechat/api

* refactor: update defaultTextMimeTypes to support virtually all file types for text parsing

* fix: improve OCR capability check and error message for unsupported file types

* ci: remove unnecessary ocr expectation from AppService test
2025-09-26 11:56:11 -04:00
Danny Avila
3d7eaf0fcc 🌐 feat: OpenRouter Web Search (#9853)
* 🌐 feat: OpenRouter Web Search

- Added tests for handling web_search parameter with OpenRouter in various scenarios.
- Implemented logic to manage web_search in modelOptions and addParams/dropParams.
- Ensured correct configuration of llmConfig and modelKwargs for OpenRouter, including handling of plugins.
- Improved overall integration of OpenRouter with OpenAI API, ensuring expected behavior across different configurations.

* chore: bump @librechat/agents to v2.4.81
2025-09-26 09:35:41 -04:00
Danny Avila
823015160c 🕸️ refactor: Drop/Add web_search Param Handling for Custom Endpoints (#9852)
- Added tests to validate behavior of web_search parameter in getOpenAIConfig function.
- Implemented logic to handle web_search in addParams and dropParams, ensuring correct precedence and behavior.
- Ensured web_search does not appear in modelKwargs or llmConfig when not applicable.
- Improved overall configuration management for OpenAI API integration.
2025-09-26 08:56:39 -04:00
Theo N. Truong
3219734b9e 🔌 fix: Shared MCP Server Connection Management (#9822)
- Fixed a bug in reinitMCPServer where a user connection was created for an app-level server whenever this server is reinitialized
- Made MCPManager.getUserConnection return an error if the connection is app-level
- Add MCPManager.getConnection to return either an app connection or a user connection based on the serverName
- Made MCPManager.appConnections public to avoid unnecessary wrapper methods.
2025-09-26 08:24:36 -04:00
Danny Avila
4f3683fd9a 👤 fix: Missing User Placeholder Fields for MCP Services (#9824) 2025-09-24 22:48:38 -04:00
Danny Avila
57f8b333bc 🕵️ refactor: Optimize Message Search Performance (#9818)
* 🕵️ feat: Enhance Index Sync and MeiliSearch filtering for User Field

- Implemented `ensureFilterableAttributes` function to configure MeiliSearch indexes for messages and conversations to filter by user.
- Updated sync logic to trigger a full re-sync if the user field is missing or index settings are modified.
- Adjusted search queries in Conversation and Message models to include user filtering.
- Ensured 'user' field is marked as filterable in MongoDB schema for both messages and conversations.

This update improves data integrity and search capabilities by ensuring user-related data is properly indexed and retrievable.

* fix: message processing in Search component to use linear list and not tree

* feat: Implement user filtering in MeiliSearch for shared links

* refactor: Optimize message search retrieval by batching database calls

* chore: Update MeiliSearch parameters type to use SearchParams for improved type safety
2025-09-24 16:27:34 -04:00
Danny Avila
f9aebeba92 🛡️ fix: Title Generation Skip Logic Based On Endpoint Config (#9811) 2025-09-24 10:21:19 -04:00
github-actions[bot]
b85950aa9a 🌍 i18n: Update translation.json with latest translations (#9789)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-24 07:13:08 -04:00
Danny Avila
bcec5bfceb 🆔 fix: Prioritize Immutable Sub Claim for OIDC User ID (#9788)
* add use of immutable claims to identify user object

* fix semicolons

* update email attribute on change

* replace ternary expressions

* fix semicolon

* chore: add typing

* chore: reorder fields in `findOpenIDUser`

* refactor: optimize user lookup logic in `findOpenIDUser` function to minimize database roundtrips

* refactor: integrate findOpenIDUser for improved user retrieval in refreshController

* refactor: improve error logging for invalid refresh tokens in refreshController

* ci: mock findUser correctly in openidStrategy tests

* test: add unit tests for findOpenIDUser function to enhance user retrieval logic

---------

Co-authored-by: Joachim Keltsch <joachim.keltsch@daimlertruck.com>
2025-09-23 14:46:53 -04:00
MyGitHub
e4f323e71a 🌐 feat: Helm DNS Configuration Support for Traffic Redirection (#9785)
This PR adds DNS configuration support to the LibreChat Helm chart, enabling users to redirect traffic to proxy servers or use custom DNS settings.

## What's Changed
- Added dnsPolicy and dnsConfig fields to deployment.yaml template
- Added DNS configuration options to values.yaml with comprehensive examples
- Created documentation and example configurations

## Use Cases
- Redirect AI service traffic (AWS Bedrock, OpenAI, etc.) to proxy servers
- Use corporate DNS servers for name resolution
- Control traffic routing through custom DNS configurations
- Enforce traffic through security gateways

## Configuration Example
```yaml
dnsPolicy: "None"
dnsConfig:
  nameservers:
    - "10.0.0.10"  # Custom DNS server for redirections
  searches:
    - "svc.cluster.local"
  options:
    - name: ndots
      value: "2"
```

## Testing Results
- Successfully tested with Docker Compose environment
- DNS resolution correctly redirects to configured IPs
- HTTP requests properly routed to proxy servers
- Tested with multiple domains (AWS Bedrock, OpenAI, SageMaker)

Test output:
- bedrock-runtime.us-east-1.amazonaws.com -> 172.25.0.10 ✓
- api.openai.com -> 172.25.0.10 ✓
- sagemaker-runtime.us-east-1.amazonaws.com -> 172.25.0.10 ✓

All DNS redirects working correctly with proxy server receiving traffic.

## Documentation
- Added comprehensive DNS_CONFIGURATION.md guide
- Included examples for common use cases
- Provided troubleshooting steps

## Backward Compatibility
This change is fully backward compatible. If dnsPolicy and dnsConfig are not specified, the default Kubernetes DNS behavior is maintained.

Fixes #[issue_number]

Co-authored-by: LibreChat User <user@example.com>
2025-09-23 10:41:58 -04:00
Ihsan Soydemir
d83826b604 🔐 feat: Support Multiple Roles in OPENID_REQUIRED_ROLE (#9171)
* feat: support multiple roles in OPENID_REQUIRED_ROLE

- Allow comma-separated roles in OPENID_REQUIRED_ROLE environment variable
- User needs ANY of the specified roles to login (OR logic)
- Maintain backward compatibility with single role configuration
- Add comprehensive test coverage for multiple role scenarios

* Add tests

* Fix linter

* Add missing closing brace

* Add new line

* Simplify tests

* Refresh OpenID verify callback in tests

* Fix OpenID spec and resolve linting errors

* test: Add backward compatibility test for single required role in OpenID strategy

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-09-23 10:39:34 -04:00
Jerum Hubbert
2153db2f5f 📊 feat: Helm Chart Port Flexibility and MongoDB Update (#9750)
- Update MongoDB chart dependency from v16.3.0 to v16.5.45
- Add explicit containerPort configuration with fallback to service.port
- Standardize port references in health probes to use explicit port numbers
- Add targetPort and containerPort fields to service configuration
- Include service type options as inline comment for better clarity

These changes improve the Helm chart's port management flexibility and
bring the MongoDB dependency up to date with the latest stable version.

Co-authored-by: Jerum Hubbert <jerum.hubbert@scientificgames.com>
2025-09-23 09:48:07 -04:00
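For illustration, a hedged values.yaml sketch of the explicit port fields described above; exact key placement within the chart is an assumption:

```yaml
# Hedged sketch: key placement is assumed; containerPort falls back to service.port when unset.
service:
  type: ClusterIP        # service type options noted inline in the chart
  port: 3080
  targetPort: 3080
containerPort: 3080
```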
Clay Rosenthal
de02892396 📦 fix: Helm Chart HPA Configuration Issues (#9770) 2025-09-23 09:44:40 -04:00
Sean McGrath
f61e057f7f 🔐 fix: MCP OAuth Token Persistence Race Condition and Refresh Auth Method (#9773)
* set supported endpoint auth method when token_url exists

* persist tokens immediately

* add token storage validation tests
2025-09-23 09:35:56 -04:00
Danny Avila
91e49d82aa 🔼 feat: Vercel App Attribution for LibreChat (#9769) 2025-09-22 16:15:15 -04:00
github-actions[bot]
880c7b43a1 🌍 i18n: Update translation.json with latest translations (#9764)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-22 14:58:35 -04:00
Danny Avila
c99a29f8da 📦 chore: update @librechat/agents to v2.4.80 (#9766) 2025-09-22 14:58:20 -04:00
Danny Avila
8a60e8990f 🤖 refactor: Side Panel Agent UI To Account For Ephemeral Agents (#9763)
* refactor: Remove unused imports and consolidate ephemeral agent logic

* refactor: Side Panel agent handling to account for ephemeral agents for UI

* refactor: Replace Constants.EPHEMERAL_AGENT_ID checks with isEphemeralAgent utility for consistency

* ci: AgentPanel tests with additional mock configurations and utility functions
2025-09-22 09:48:05 -04:00
Danny Avila
a6bf2b6ce3 🔐 refactor: Improve MCP Auth UX for Agent Panel (#9762)
* chore: Improve logging format for initial flow state creation

* refactor: MCP Tool Management with Improved Auth Handling and State Management

- Updated `CustomUserVarsSection` to include optional localization for placeholder text.
- Refactored `MCPToolSelectDialog` to streamline tool addition and management, including improved handling of authentication data.
- Introduced a new `addToolsToForm` function to encapsulate logic for adding tools to the form state.
- Enhanced `useRemoveMCPTool` hook to simplify tool removal logic and ensure proper state updates.
- Added loading state management for custom variable saving to improve user experience.

* refactor: Enhance MCP Tool Removal Logic and Integrate Toast Notifications

- Updated `MCPToolSelectDialog` to utilize the new `removeTool` function from the `useRemoveMCPTool` hook for improved tool removal handling.
- Refactored `useRemoveMCPTool` to accept options for toast notifications, allowing for more flexible user feedback during tool removal.
- Removed the previous inline tool removal logic to streamline the component's code and improve maintainability.

* refactor: Enhance user plugins mutation to invalidate MCP auth values on uninstall

* refactor: Replace refetchQueries with invalidateQueries for improved cache management

* chore: remove unused i18n key
2025-09-22 08:53:19 -04:00
github-actions[bot]
ff8dac570f 🌍 i18n: Update translation.json with latest translations (#9759)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-22 08:44:07 -04:00
Danny Avila
96870e0da0 refactor: MCP OAuth Polling with Gradual Backoff and Timeout Handling (#9752)
* refactor: Implement gradual backoff polling for oauth connection status with timeout handling

* refactor: Enhance OAuth polling with gradual backoff and timeout handling; update reconnection tracking

* refactor: reconnection timeout behavior in OAuthReconnectionManager and OAuthReconnectionTracker

- Implement tests to verify reconnection timeout handling, including tracking of reconnection states and cleanup of timed-out entries.
- Enhance existing methods in OAuthReconnectionManager and OAuthReconnectionTracker to support timeout checks and cleanup logic.
- Ensure proper handling of multiple servers with different timeout periods and edge cases for active states.

* chore: remove comment

* refactor: Enforce strict 3-minute OAuth timeout with updated polling intervals and improved timeout handling

* refactor: Remove unused polling logic and prevent duplicate polling for servers in MCP server manager

* refactor: Update localization key for no memories message in MemoryViewer

* refactor: Improve MCP tool initialization by handling server failures

- Introduced a mechanism to track failed MCP servers, preventing retries for unavailable servers.
- Added logging for failed tool creation attempts to enhance debugging and monitoring.

* refactor: Update reconnection timeout to enforce a strict 3-minute limit

* ci: Update reconnection timeout tests to reflect a strict 3-minute limit

* ci: Update reconnection timeout tests to enforce a strict 3-minute limit

* chore: Remove unused MCP connection timeout message
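A rough sketch of gradual-backoff polling under a strict 3-minute ceiling, as described in this refactor (interval values and function names are assumptions, not the actual constants):

```typescript
const OAUTH_POLL_TIMEOUT_MS = 3 * 60 * 1000; // strict 3-minute limit

async function pollConnectionStatus(
  isConnected: () => Promise<boolean>,
  initialIntervalMs = 1000,
  maxIntervalMs = 10000,
): Promise<boolean> {
  const start = Date.now();
  let interval = initialIntervalMs;
  while (Date.now() - start < OAUTH_POLL_TIMEOUT_MS) {
    if (await isConnected()) {
      return true;
    }
    await new Promise((resolve) => setTimeout(resolve, interval));
    // Gradual backoff: grow the interval up to a ceiling
    interval = Math.min(interval * 1.5, maxIntervalMs);
  }
  return false; // timed out; the caller can clean up reconnection tracking
}
```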
2025-09-21 22:58:19 -04:00
Danny Avila
f0599ad36c 🧬 refactor: Optimize MCP Tool Queries with Server-Centric Architecture
🧬 refactor: Optimize MCP Tool Queries with Server-Centric Architecture

refactor: optimize MCP tool queries by removing redundancy, adopting a server-centric structure, enabling the query only when expected, minimizing looping/transforming of query data, and eliminating unused/compute-heavy methods

ci: MCP Server Tools Mocking in Agent Tests
2025-09-21 20:40:14 -04:00
Danny Avila
5b1a31ef4d 🔄 refactor: Optimize MCP Tool Initialization
🔄 refactor: Optimize MCP Tool Initialization

fix: update tool caching to use separated mcp logic

refactor: Replace `req.user` with `userId` in MCP handling functions

refactor: Replace `req` parameter with `userId` in file search tool functions

fix: Update user connection parameter to use object format in reinitMCPServer

refactor: Simplify MCP tool creation logic and improve handling of tool configurations to avoid capturing too much in closures

refactor: ensure MCP available tools are fetched from cache only when needed
2025-09-21 20:31:28 -04:00
Danny Avila
386900fb4f 🧰 refactor: Decouple MCP Tools from System Tools (#9748) 2025-09-21 07:56:40 -04:00
Federico Ruggi
9d2aba5df5 🛡️ fix: Handle Null MCPManager In OAuthReconnectionManager (#9740) 2025-09-20 11:06:23 -04:00
Thomas Joußen
a5195a57a4 🔐 fix: Handle Multiple Email Addresses in LDAP Auth (#9729) 2025-09-20 11:01:45 -04:00
Danny Avila
2489670f54 📂 refactor: File Read Operations (#9747)
* fix: axios response logging for text parsing, remove console logging, remove jsdoc

* refactor: error logging in logAxiosError function to handle various error types with type guards

* refactor: enhance text parsing with improved error handling and async file reading

* refactor: replace synchronous file reading with asynchronous methods for improved performance and memory management

* ci: update tests
2025-09-20 10:17:24 -04:00
Danny Avila
0352067da2 🎨 refactor: Improve Mermaid Artifacts Styling (#9742)
* 🎨 refactor: Improve Mermaid Artifacts Styling

* refactor: Replace ArtifactMarkdown with MermaidMarkdown
2025-09-20 08:19:44 -04:00
Danny Avila
fcaf55143d 🏷️ fix: Increment Tag Counters When Forking/Duplicating Conversations (#9737)
* fix: increment tag counters when forking/duplicating conversations

- Add bulkIncrementTagCounts to update existing tag counts in bulk
- Integrate tag count updates into importBatchBuilder.saveBatch() using Promise.all
- Update frontend mutations to directly update cache instead of invalidating queries
- Optimize bulkIncrementTagCounts to skip unnecessary database queries

Fixes issue where forked/duplicated conversations with bookmarks would not increment
tag counters, leading to negative counts when bookmarks were later removed.
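A sketch of how a bulk increment like `bulkIncrementTagCounts` could look with Mongoose (the model name and shape are simplified for illustration):

```typescript
import { Model } from 'mongoose';

/** Bump `count` on existing tags in a single bulk operation. */
async function bulkIncrementTagCounts(
  ConversationTag: Model<{ user: string; tag: string; count: number }>,
  userId: string,
  tags: string[],
): Promise<void> {
  if (tags.length === 0) {
    return; // skip unnecessary database queries
  }
  await ConversationTag.bulkWrite(
    tags.map((tag) => ({
      updateOne: {
        filter: { user: userId, tag },
        update: { $inc: { count: 1 } },
      },
    })),
  );
}
```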

* chore: reorder import statements for clarity in fork.spec.js
2025-09-19 22:02:09 -04:00
Danny Avila
aae3694b11 🛠️ chore: Typing and Remove Comments (#9732)
* chore: Update documentation for formatToolContent function, remove JSDoc types and duplicate comments

* chore: fix type errors due to attachment.filename in Attachment component
2025-09-19 16:22:53 -04:00
github-actions[bot]
68c9f668c1 🌍 i18n: Update translation.json with latest translations (#9726)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-19 13:26:57 -04:00
Danny Avila
8b2e1c6088 📝 fix: Prevent Deletion of User Input During AI Generation (#9719)
* bug: updating so text doesn't delete

* fix: preserve user input when typing during AI response

Previously, form.reset() would clear any text the user started typing while waiting for the AI response. The form now checks whether the textarea has new content before resetting, preventing text loss and improving UX.
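A hypothetical sketch of that guard (names are illustrative, not the actual component code):

```typescript
// Only reset the form when the textarea does not contain new user input.
function maybeResetForm(
  form: { reset: () => void },
  textareaValue: string,
  submittedText: string,
): void {
  const hasNewContent = textareaValue.trim() !== '' && textareaValue !== submittedText;
  if (!hasNewContent) {
    form.reset(); // safe: nothing the user typed would be lost
  }
}
```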

* chore: revert useSubmitMessage changes

* fix: reduce debounce time for input handling in auto-save

* fix: improve debounce handling for auto-save input to enhance user experience

---------

Co-authored-by: Megan Greenberg <mgreenberg@networkninja.com>
2025-09-19 13:19:38 -04:00
Danny Avila
99135a3dc1 🔍 fix: Race Condition in Search Bar Clear Text Handler (#9718) 2025-09-19 07:52:16 -04:00
Danny Avila
344e7c44b5 🔐 fix: Respect Server's Token Endpoint Auth Methods for MCP OAuth Refresh (#9717)
* fix: respect server's token endpoint auth methods for MCP OAuth refresh

Previously, LibreChat always used Basic Auth when refreshing OAuth tokens if a
client_secret was present. This caused issues with servers (like FastMCP) that
only support client_secret_post. Now properly checks and respects the server's
advertised token_endpoint_auth_methods_supported.

Fixes token refresh failures with error: "refresh_token.client_id: Field required"
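A minimal sketch of selecting the refresh auth method from the server's advertised metadata, as described above (helper name and fallback behavior are assumptions):

```typescript
type TokenEndpointAuthMethod = 'client_secret_basic' | 'client_secret_post' | 'none';

/** Pick the token-refresh auth method from `token_endpoint_auth_methods_supported`
 *  instead of always defaulting to Basic auth when a client_secret exists. */
function selectAuthMethod(
  supported: string[] | undefined,
  hasClientSecret: boolean,
): TokenEndpointAuthMethod {
  if (!hasClientSecret) {
    return 'none';
  }
  if (supported?.includes('client_secret_basic')) {
    return 'client_secret_basic';
  }
  if (supported?.includes('client_secret_post')) {
    return 'client_secret_post'; // e.g. servers like FastMCP that only support POST
  }
  // No metadata advertised: fall back to the previous Basic-auth behavior
  return 'client_secret_basic';
}
```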

* chore: remove MCP OAuth URL Logging
2025-09-19 06:50:02 -04:00
github-actions[bot]
e5d2a932bc 🌍 i18n: Update translation.json with latest translations (#9704)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-19 06:06:40 -04:00
Danny Avila
c40554c03b 🏪 fix: Template for Chats Starting from Agent Marketplace (#9702)
* fix: correctly build conversation template and preset for chats starting from marketplace

* test: enhance AgentDetail tests with additional localization and conversation mocks
2025-09-18 21:05:43 -04:00
Danny Avila
98af4564e8 🎯 refactor: MCP Registry To Handle serverInstructions As String "true" (#9703)
- Updated MCPServersRegistry to correctly process serverInstructions when provided as a string "true", allowing it to fetch instructions from the server.
- Added a new utility function `isEnabled` to determine if serverInstructions should trigger a fetch.
- Introduced comprehensive tests to validate the behavior for different serverInstructions configurations, ensuring both string "true" and boolean true fetch from the server while custom strings remain unchanged.

🎯 This enhancement improves the flexibility and correctness of server instruction handling in the MCPServersRegistry.
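A sketch of the check described above, assuming the simple semantics stated in the commit (both boolean true and the string "true" trigger a fetch, custom strings do not); the function name here is illustrative:

```typescript
/** Decide whether `serverInstructions` should trigger a fetch from the MCP server. */
function shouldFetchInstructions(serverInstructions?: boolean | string): boolean {
  if (typeof serverInstructions === 'boolean') {
    return serverInstructions;
  }
  // Custom instruction strings are used as-is and do not trigger a fetch
  return typeof serverInstructions === 'string' && serverInstructions.toLowerCase() === 'true';
}
```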
2025-09-18 21:03:21 -04:00
Real Null
26a58fcabc 🚨 fix: Redis CA file handling (#9692)
* 🚨 fix: Critical Redis CA file handling bug that could crash app

🔧 Added safe error handling for Redis CA certificate file reading in cacheConfig.js

## 🐛 Problem
- fs.readFileSync() was called directly without error handling
- Missing or inaccessible REDIS_CA files would throw unhandled exceptions
- 💥 Application would crash during startup with cryptic filesystem errors
- No validation of file existence before attempting to read

## Solution
- Added getRedisCA() helper function with comprehensive error handling
- 🔍 Implemented fs.existsSync() check before file reading attempts
- 🛡️ Added try-catch block to handle filesystem errors gracefully
- 📝 Added informative warning/error logging for troubleshooting
- 🔄 Function returns null safely on any error condition

## 🎯 Benefits
- 🚫 Prevents application crashes from misconfigured CA certificate paths
- 🔍 Provides clear error messages for debugging certificate issues
- Maintains backward compatibility for valid certificate configurations
- 🚀 Improves production stability and deployment reliability

## 🧪 Testing Results
- Verified handling of missing REDIS_CA environment variable
- Tested with non-existent file paths (returns null with warning)
- Confirmed valid certificate files are read correctly
- Validated error handling for permission/access issues

🎉 This fix ensures LibreChat continues running regardless of Redis CA
certificate configuration problems, improving overall system reliability.
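A minimal sketch of a getRedisCA() helper matching the behavior described above (console logging stands in for the app's logger, and the exact signature is an assumption):

```typescript
import * as fs from 'node:fs';

/** Return the Redis CA certificate contents, or null on any misconfiguration
 *  instead of throwing and crashing startup. */
function getRedisCA(caPath: string | undefined = process.env.REDIS_CA): string | null {
  if (!caPath) {
    return null; // REDIS_CA not configured
  }
  if (!fs.existsSync(caPath)) {
    console.warn(`Redis CA file not found at ${caPath}; continuing without it`);
    return null;
  }
  try {
    return fs.readFileSync(caPath, 'utf8');
  } catch (err) {
    console.error(`Failed to read Redis CA file at ${caPath}:`, err);
    return null;
  }
}
```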

🏷️ Type: 🐛 Bug Fix
📊 Impact: 🔴 High (prevents application crashes)
🎯 Area: Cache Configuration, Redis Integration

* chore: Redis CA certificate handling with proper logging + JSDocs

* chore: Improve error logging for Redis CA certificate file read failure

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-09-18 20:33:40 -04:00
Danny Avila
3fec63e597 💽 fix: Memory Permissions Handling (#9701) 2025-09-18 20:26:02 -04:00
Danny Avila
81139046e5 🔄 refactor: Convert OCR Tool Resource to Context (#9699)
* WIP: conversion of `ocr` to `context`

* refactor: make `primeResources` backwards-compatible for `ocr` tool_resources

* refactor: Convert legacy `ocr` tool resource to `context` in agent updates

- Implemented conversion logic to replace `ocr` with `context` in both incoming updates and existing agent data.
- Merged file IDs and files from `ocr` into `context` while ensuring deduplication.
- Updated tools array to reflect the change from `ocr` to `context`.
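A simplified sketch of folding a legacy `ocr` tool resource into `context` with deduplication, as described above (the resource shapes are reduced for illustration and do not reflect the full agent schema):

```typescript
interface ToolResourceFiles {
  file_ids?: string[];
  files?: Array<{ file_id: string }>;
}

function convertOcrToContext(
  toolResources: Record<string, ToolResourceFiles | undefined>,
): Record<string, ToolResourceFiles | undefined> {
  const { ocr, context, ...rest } = toolResources;
  if (!ocr) {
    return toolResources; // nothing to convert
  }
  // Merge file IDs from `ocr` into `context`, deduplicated
  const file_ids = Array.from(new Set([...(context?.file_ids ?? []), ...(ocr.file_ids ?? [])]));
  const mergedFiles = [...(context?.files ?? []), ...(ocr.files ?? [])];
  // Deduplicate files by file_id, keeping the first occurrence
  const files = mergedFiles.filter(
    (file, index) => mergedFiles.findIndex((f) => f.file_id === file.file_id) === index,
  );
  return { ...rest, context: { file_ids, files } };
}
```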

* refactor: Enhance context file handling in agent processing

- Updated the logic for managing context files by consolidating file IDs from both `ocr` and `context` resources.
- Improved backwards compatibility by ensuring that context files are correctly populated and handled.
- Simplified the iteration over context files for better readability and maintainability.

* refactor: Enhance tool_resources handling in primeResources

- Added tests to verify the deletion behavior of tool_resources fields, ensuring original objects remain unchanged.
- Implemented logic to delete `ocr` and `context` fields after fetching and re-categorizing files.
- Preserved context field when the context capability is disabled, ensuring correct behavior in various scenarios.

* refactor: Replace `ocrEnabled` with `contextEnabled` in AgentConfig

* refactor: Adjust legacy tool handling order for improved clarity

* refactor: Implement OCR to context conversion functions and remove original conversion logic in update agent handling

* refactor: Move contextEnabled declaration to maintain consistent order in capabilities

* refactor: Update localization keys for file context to improve clarity and accuracy

* chore: Update localization key for file context information to improve clarity
2025-09-18 20:06:59 -04:00
Danny Avila
89d12a8ccd 🔍 fix: Retrieve Multiple Agents In File Access Check (#9695)
- Implemented `getAgents` function to retrieve multiple agent documents based on search parameters.
- Updated `fileAccess` middleware to utilize `getAgents` instead of `getAgent` for improved file access checks.
- Added comprehensive tests for file access middleware, covering various scenarios including user permissions and agent ownership.
2025-09-18 15:42:05 -04:00
github-actions[bot]
f6d34d78ca 🌍 i18n: Update translation.json with latest translations (#9691)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-18 14:45:21 -04:00
Danny Avila
48ca1bfd88 🧩 refactor: File Upload Options based on Ephemeral Agent (#9693)
* refactor: agent tool permissions to support ephemeral agent settings

* ci: rename render tests and correct typing for `useAgentToolPermissions` hook

* refactor: implement `DragDropContext` to minimize effect of `useChatContext` in `DragDropModal`
2025-09-18 14:44:55 -04:00
Danny Avila
208be7c06c 🗨️ refactor: Only Allow Prompt Queries with Access (#9688) 2025-09-18 10:00:33 -04:00
Danny Avila
02bfe32905 🛠️ fix: Missing Tool Definitions on Redis Cache Clear (#9681) 2025-09-17 23:19:28 -04:00
github-actions[bot]
4499494aba 🌍 i18n: Update translation.json with latest translations (#9679)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-17 23:14:49 -04:00
Federico Ruggi
d04da60b3b 💫 feat: MCP OAuth Auto-Reconnect (#9646)
* add oauth reconnect tracker

* add connection tracker to mcp manager

* reconnect oauth mcp servers function

* call reconnection in auth controller

* make sure to check connection in panel

* wait for isConnected

* add const for poll interval

* add logging to tryReconnect

* check expiration

* check mcp manager is not null

* check mcp manager is not null

* add test for reconnecting mcp server

* unify logic inside OAuthReconnectionManager

* test reconnection manager, adjust

* chore: reorder import statements in index.js

* chore: imports

* chore: imports

* chore: imports

* chore: imports

* chore: imports

* chore: imports and use types explicitly

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-09-17 16:49:36 -04:00
keltschdt
0e94d97bfb fix: Disable TTL For Transient OIDC Users In Permission Service (#9643) 2025-09-17 14:21:36 -04:00
Danny Avila
45ab4d4503 🎋 refactor: Improve Message UI State Handling (#9678)
* refactor: `ExecuteCode` component with submission state handling and cancellation message

* fix: Remove unnecessary argument check for execute_code tool call

* refactor: streamlined messages context

* chore: remove unused Convo prop

* chore: remove unnecessary whitespace in Message component

* refactor: enhance message context with submission state and latest message tracking

* chore: import order
2025-09-17 13:07:56 -04:00
github-actions[bot]
0ceef12eea 🌍 i18n: Update translation.json with latest translations (#9648)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-15 18:41:34 -04:00
Danny Avila
6738360051 📋 refactor: Agent Tool Permissions for File Upload Options (#9647)
- Added isEphemeralAgent function to streamline checks for ephemeral agents.
- Updated logic in useAgentToolPermissions to utilize the new function for determining tool access.
- Introduced comprehensive tests for useAgentToolPermissions covering various scenarios including ephemeral agents, regular agents with tools, and edge cases.
2025-09-15 12:57:40 -04:00
Dustin Healy
52b65492d5 👻 fix: Phantom MCP Tool Calls (#9634)
* fix: mcp tool calls no longer happening when unselected (without breaking new convo behavior)

* refactor: Improve ephemeral agent synchronization logic in useMCPSelect

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-09-15 10:35:15 -04:00
Danny Avila
7a9a99d2a0 🔗 refactor: URL sanitization for MCP logging (#9632) 2025-09-14 18:55:32 -04:00
Danny Avila
5bfb06b417 💻 feat: Add Proxy Config for Mistral OCR API (#9629)
* 💻 feat: Add proxy configuration support for Mistral OCR API requests

* refactor: Implement proxy support for Mistral API requests using HttpsProxyAgent
2025-09-14 18:50:41 -04:00
github-actions[bot]
2ce8f1f686 🌍 i18n: Update translation.json with latest translations (#9626)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-14 18:48:17 -04:00
Danny Avila
1a47601533 🔃 fix: Refresh Token Edge Cases (#9625)
* 🔃 fix: Refresh Token Edge Cases

* chore: Update parameter type for setAuthTokens function
2025-09-13 21:36:45 -04:00
Danny Avila
5245aeea8f 🔧 refactor: Consolidate MCP tool removal and Improve UX (#9609)
* 🔧 refactor: Consolidate MCP tool removal and Improve UX

- Removed redundant tool removal logic from MCPTool, UnconfiguredMCPTool, and UninitializedMCPTool components.
- Introduced `useRemoveMCPTool` hook to handle tool removal and toast notifications.
- Updated translation.json to include a reminder message for saving changes after tool removal.

* chore: remove unused i18n key
2025-09-12 21:37:07 -04:00
Jesse Bye
dd93db40bc ⛑️ feat: Helm serviceAccount Config (#9606) 2025-09-12 17:37:29 -04:00
Muhammad Azhdari
136cf1d5a8 ⛑️ fix: follow postgres bitnami values schema in rag-api helm chart (#7782) 2025-09-12 17:36:38 -04:00
Danny Avila
751522087a v0.8.0-rc4 (#9601)
* v0.8.0-rc4

* chore: update jest.config.cjs to include release comment and linting

* chore: bump CONFIG_VERSION to 1.2.9
2025-09-12 13:37:10 -04:00
github-actions[bot]
7fe830acfc 🌍 i18n: Update translation.json with latest translations (#9599)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-12 13:00:30 -04:00
Danny Avila
cdfe686987 📦 chore: bump axios to v1.12.1 (#9600) 2025-09-12 13:00:12 -04:00
Danny Avila
5b5723343c 🔍 refactor: Preserve Category in Agent Marketplace Search (#9598) 2025-09-12 11:36:42 -04:00
Sebastien Bruel
30c24a66f6 🪟 fix: Auto-fetch agents to fill Viewport in Marketplace Scroll (#9591) 2025-09-12 10:59:15 -04:00
Christian Geisler
ecf9733bc1 🐳 fix: Add missing uploads directory to Dockerfile (#9590) 2025-09-12 10:56:14 -04:00
Danny Avila
133312fb40 🔐 fix: Remove OAuth handler cleanup at Connection Change (#9589) 2025-09-12 00:34:45 -04:00
github-actions[bot]
b62ffb533c 🌍 i18n: Update translation.json with latest translations (#9586)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-12 00:29:47 -04:00
Danny Avila
d75fb76338 refactor: Add Effective Timeout for MCP Fetch (#9585) 2025-09-11 19:09:13 -04:00
Danny Avila
51f2d43fed 🔐 refactor: Improve MCP OAuth Event Handler Cleanup (#9584)
* 🔐 refactor: Improve MCP OAuth event handling and cleanup

* ci: MCPConnection mock with additional event handling methods
2025-09-11 18:54:43 -04:00
Danny Avila
e3a645e8fb 🔃 fix: Token Refresh in Browser Only, Redirect on Refresh Failure (#9583)
* 🔃 fix: Token Refresh in Browser Only, Redirect on Refresh Failure

* chore: Update import for SearchResultData and fix FormattedToolResponse type build warning
2025-09-11 16:51:40 -04:00
Danny Avila
180046a3c5 ✂️ refactor: Artifacts and Tool Callbacks to Pass UI Resources (#9581)
* ✂️ refactor: use artifacts and callbacks to pass UI resources

* chore: imports

* refactor: Update UIResource type imports and definitions across components and tests

* refactor: Update ToolCallInfo test data structure and enhance TAttachment type definition

---------

Co-authored-by: Samuel Path <samuel.path@shopify.com>
2025-09-11 14:34:07 -04:00
github-actions[bot]
916742ab9d 🌍 i18n: Update translation.json with latest translations (#9570)
* 🌍 i18n: Update translation.json with latest translations

* Update drag and drop tooltip text

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-09-11 08:13:43 -04:00
Danny Avila
d91f34dd42 🔒 refactor: Optimize Email Domain Validation in OpenID, SAML, and Social Logins (#9567)
* refactor: Optimize Email Domain Validation in OpenID, SAML, and Social Login Strategies

    - Implemented email domain validation for user authentication in OpenID and SAML strategies, ensuring only allowed domains are processed.
    - Adjusted error messages for clarity and consistency across authentication methods.
    - Refactored social login to validate email domains before checking for existing users, improving registration flow.

* refactor: Email Domain Validation in LDAP and Social Login Strategies
2025-09-11 01:01:58 -04:00
Danny Avila
5676976564 🔒 fix: Email Domain Validation Order and Coverage (#9566) 2025-09-10 23:13:39 -04:00
Danny Avila
85aa3e7d9c 🔧 refactor: Centralize Collection Checks for Permissions Migration (#9565)
* 🔧 refactor: Centralize Collection Existence Checks for Permissions Migration

* Replace individual collection existence checks with a unified function `ensureRequiredCollectionsExist` in the database utility module.
* Update migration scripts for agents and prompts to utilize the new function, ensuring all required collections are verified for existence in a single call.
* Remove redundant collection existence logic from migration files, improving code maintainability and clarity.

* chore: import order in migration scripts

* 🔧 test: Update Token Test Cases for Realistic Scenarios

* Changed email in test data to 'user1-alt@example.com' for a more realistic scenario.
* Clarified expectation comment for token retrieval to indicate it finds the only matching token based on criteria.
2025-09-10 20:40:58 -04:00
Dustin Healy
a2ff6613c5 🪄 fix: MCP UI Renders for OAuth and Custom User Vars Servers (#9559) 2025-09-10 19:02:30 -04:00
Theo N. Truong
8d6cb5eee0 🧹 chore: Remove Unused Cache Configuration Keys (#9551)
* Remove unused STATIC_CONFIG and LIBRECHAT_YAML_CONFIG cache keys.

These cache keys were identified as dead code - they were being written to but never read from anywhere in the codebase after a recent refactor:

- STATIC_CONFIG was used as a cache namespace that stored configuration data
- LIBRECHAT_YAML_CONFIG was the key used within that namespace to store parsed YAML config
- The cache.set() operation in loadCustomConfig.js stored the config but no cache.get() operations retrieved it
- Configuration data is already handled through other mechanisms without caching

* # removed tests regarding cache
2025-09-10 19:01:44 -04:00
Federico Ruggi
31445e391a 🔖 fix: Agent Marketplace Bookmark and New Chat buttons (#9549)
* don't require conversation for bookmark button

* wrap marketplace component so it can correctly use context hooks

* chore: re-order import statement for MarketplaceProvider

---------

Co-authored-by: Danny Avila <danacordially@gmail.com>
2025-09-10 19:01:34 -04:00
Federico Ruggi
04c3a5a861 🔌 feat: Revoke MCP OAuth Credentials (#9464)
* revocation metadata fields

* store metadata

* get client info and meta

* revoke oauth tokens

* delete flow

* uninstall oauth mcp

* revoke button

* revoke oauth refactor, add comments, test

* adjust for clarity

* test deleteFlow

* handle metadata type

* no mutation

* adjust for clarity

* styling

* restructure for clarity

* move token-specific stuff

* use mcpmanager's oauth servers

* fix typo

* fix addressing of oauth prop

* log prefix

* remove debug log
2025-09-10 18:53:34 -04:00
Federico Ruggi
5667cc9702 🏪 fix: Show Agent Builder in Marketplace (#9537)
* don't require conversation endpoint

* bump up render time a bit

* a little less
2025-09-10 18:48:17 -04:00
Theo N. Truong
c0f95f971a 🗄️ refactor: Make APP_CONFIG a Dedicated Cache Store (#9558)
- This allows using APP_CONFIG in FORCED_IN_MEMORY_CACHE_NAMESPACES
- Remove the complexity of nested namespaces (e.g. we no longer have to worry about the prefix of every role key)
2025-09-10 18:46:54 -04:00
Danny Avila
f125f5bd32 🤖 refactor: Auto-validate IDs in Agent Query (#9555)
* 🤖 refactor: Auto-validate IDs in Agent Query

* chore: remove comments in useAgentToolPermissions
2025-09-10 18:38:33 -04:00
Danny Avila
f3eca8c7a7 📦 chore: bump vite to address low severity vulns (#9553)
* 📦 chore: bump `vite` to address low severity vulns

* chore: update bun.lockb to reflect dependency changes
2025-09-10 14:56:46 -04:00
github-actions[bot]
f22e5f965e 🌍 i18n: Update translation.json with latest translations (#9533)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-10 14:29:33 -04:00
Danny Avila
749f539dfc 📬 refactor: Improved Rendering and Localization for Drag & Drop Files (#9547)
* 📬 refactor: Improved Rendering and Localization for Drag & Drop Files

- Refactored DragDropOverlay to use memoization and props for active state management.
- Updated the overlay to always render, reducing mount/unmount overhead.
- Improved user experience with localized text for drag-and-drop instructions.
- Enhanced file handling logic in useDragHelpers for better performance and clarity.

* fix: agent data retrieval in drag helper
2025-09-10 14:27:57 -04:00
Danny Avila
1247207afe 🔒 fix: Memory Disabled Config UI Permissions (2/2) 2025-09-09 22:00:01 -04:00
Danny Avila
5c0e9d8fbb 📂 refactor: Show File Search and Code File Upload Options Based on Agent Tools (#9532) 2025-09-09 20:48:29 -04:00
Dustin Healy
957fa7a994 😶‍🌫️ refactor: Conditionally Hide Tools Dropdown (#9530) 2025-09-09 19:57:50 -04:00
Danny Avila
751c2e1d17 👻 refactor: LocalStorage Cleanup and MCP State Optimization (#9528)
* 👻 refactor: MCP Select State with Jotai Atoms

* refactor: Implement timestamp management for ChatArea localStorage entries

* refactor: Integrate MCP Server Manager into BadgeRow context and components to avoid double-calling within BadgeRow

* refactor: add try/catch

* chore: remove comment
2025-09-09 17:32:10 -04:00
Danny Avila
519645c0b0 🔻 fix: Role and System Message Handling for ChatGPT Imports (#9524)
* fix: ChatGPT import logic breaks message graph when it encounters a system message

- Implemented `findNonSystemParent` to maintain parent-child relationships by skipping system messages.
- Added a test case to ensure system messages do not disrupt the conversation flow during import.
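A sketch of how a `findNonSystemParent` walk could look (the message shape is simplified for illustration):

```typescript
interface ImportedMessage {
  id: string;
  role: 'system' | 'user' | 'assistant';
  parentId: string | null;
}

/** Walk up the parent chain, skipping system messages, so they don't
 *  break the conversation graph on import. */
function findNonSystemParent(
  messagesById: Map<string, ImportedMessage>,
  parentId: string | null,
): string | null {
  let current = parentId ? messagesById.get(parentId) : undefined;
  while (current && current.role === 'system') {
    current = current.parentId ? messagesById.get(current.parentId) : undefined;
  }
  return current ? current.id : null;
}
```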

* fix: ChatGPT import, correct sender for user messages with GPT-4 model

* fix: Enhance model name extraction for assistant messages in import process

- Updated sender assignment logic to dynamically extract model names from model slugs, improving accuracy for various GPT models.
- Added comprehensive tests to validate the extraction and formatting of model names from different model slugs, ensuring robustness in the import functionality.
2025-09-09 13:51:26 -04:00
Danny Avila
0d0a318c3c 📦 chore: Update caniuse-lite to v1.0.30001741 (#9523) 2025-09-09 09:26:15 -04:00
Danny Avila
588e0c4611 🔒 fix: Memory Disabled Config UI Permissions (#9522) 2025-09-09 09:14:40 -04:00
github-actions[bot]
79144a6365 🌍 i18n: Update translation.json with latest translations (#9515)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-09 09:08:28 -04:00
Danny Avila
ca53c20370 🚃 refactor: Normalize paths for Vite Config Chunking (#9513) 2025-09-08 21:53:15 -04:00
Danny Avila
d635503f49 🔐 ci: Add MCP Environment Processing tests 2025-09-08 15:38:44 -04:00
Dev
920966f895 🔐 fix: Resolve Env. Variables for MCP OAuth Manual Config (#9501)
* Added functionality to process OAuth configuration within the MCP environment.
* Implemented handling for string values in OAuth settings, ensuring proper processing of environment variables.
* Maintained original structure for non-string values to preserve existing configurations.
2025-09-08 15:29:10 -04:00
Danny Avila
c46e0d3ecc 🔒 fix: href Attribute in Email Microsoft Template 2025-09-08 14:39:00 -04:00
Dustin Healy
c6ecf0095b 🎚️ feat: Anthropic Parameter Set Support via Custom Endpoints (#9415)
* refactor: modularize openai llm config logic into new getOpenAILLMConfig function (#9412)

* ✈️ refactor: Migrate Anthropic's getLLMConfig to TypeScript (#9413)

* refactor: move tokens.js over to packages/api and update imports

* refactor: port tokens.js to typescript

* refactor: move helpers.js over to packages/api and update imports

* refactor: port helpers.js to typescript

* refactor: move anthropic/llm.js over to packages/api and update imports

* refactor: port anthropic/llm.js to typescript with supporting types in types/anthropic.ts and updated tests in llm.spec.js

* refactor: move llm.spec.js over to packages/api and update import

* refactor: port llm.spec.js over to typescript

* 📝  Add Prompt Parameter Support for Anthropic Custom Endpoints (#9414)

feat: add anthropic llm config support for openai-like (custom) endpoints

* fix: missed compiler / type issues from addition of getAnthropicLLMConfig

* refactor: update tokens.ts to export constants and functions, enhance type definitions, and adjust default values

* WIP: first pass, decouple `llmConfig` from `configOptions`

* chore: update import path for OpenAI configuration from 'llm' to 'config'

* refactor: enhance type definitions for ThinkingConfig and update modelOptions in AnthropicConfigOptions

* refactor: cleanup type, introduce openai transform from alt provider

* chore: integrate removeNullishValues in Google llmConfig and update OpenAI exports

* chore: bump version of @librechat/api to 1.3.5 in package.json and package-lock.json

* refactor: update customParams type in OpenAIConfigOptions to use TConfig['customParams']

* refactor: enhance transformToOpenAIConfig to include fromEndpoint and improve config extraction

* refactor: conform userId field for anthropic/openai, cleanup anthropic typing

* ci: add backward compatibility tests for getOpenAIConfig with various endpoints and configurations

* ci: replace userId with user in clientOptions for getLLMConfig

* test: add Azure OpenAI endpoint tests for various configurations in getOpenAIConfig

* refactor: defaultHeaders retrieval for prompt caching for anthropic-based custom endpoint (litellm)

* test: add unit tests for getOpenAIConfig with various Anthropic model configurations

* test: enhance Anthropic compatibility tests with addParams and dropParams handling

* chore: update @librechat/agents dependency to version 2.4.78 in package.json and package-lock.json

* chore: update @librechat/agents dependency to version 2.4.79 in package.json and package-lock.json

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-09-08 14:35:29 -04:00
Danny Avila
7de6f6e44c ⚙️ chore: Update Build Config due to Windows Tests (#9511)
* chore: remove `rollup-plugin-generate-package-json`

* chore: increase maximum file size to cache in Vite configuration for windows builds
2025-09-08 14:16:49 -04:00
Danny Avila
035f85c3ba 🧪 ci: Tests for Anthropic and OpenAI LLM Configuration (#9484)
* fix: freq. and pres. penalty use camelcase

* ci: OpenAI Configuration Tests

* ci: Enhance OpenAI Configuration Tests with Azure and Custom Endpoint Scenarios

* Added integration tests for OpenAI and Azure configurations simulating various initialization scenarios.
* Updated OpenAIConfigOptions to allow null values for reverseProxyUrl and proxy.
* Improved handling of reasoning parameters in tests for both OpenAI and Azure setups.
* Ensured robust error handling for missing API keys and malformed configurations.
* Optimized performance for large parameter sets in configuration.

* test: Add comprehensive integration tests for Anthropic LLM configuration

* Introduced real usage integration tests for various Anthropic endpoint configurations, including handling of proxy and reverse proxy setups.
* Implemented model-specific scenarios for Claude-3.7 and web search functionality.
* Enhanced error handling for missing user IDs and large parameter sets.
* Validated parameter logic, including default values, boundary conditions, and type handling for numeric and array parameters.
* Ensured proper exclusion of system options from model options and maintained expected behavior across different model variations.
2025-09-06 09:42:12 -04:00
Daniel Andersen
6f6a34d126 🔗 feat: Custom Jina API URL for Web Search Reranking (#9236)
* feat: added support for custom JINA_API_URL

* fixed tests

* chore: Update @librechat/agents dependency to version 2.4.77 in package-lock.json and package.json files

* fix: Update Jina API URL to use environment variable in configuration files

* Refactor AppService, web.ts, and config.ts to replace hardcoded Jina API URL with an environment variable placeholder.
* Ensure consistency across tests and configuration for Jina API URL.

* chore: alphabetical order translation.json

* fix: alphabetical order

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-09-06 08:39:20 -04:00
Danny Avila
fff1f1cf27 🔒 fix: Update Token Deletion To Prevent Undefined Field Queries (#9477)
* Refactor deleteTokens to use an array of conditions for querying, ensuring only specified fields are considered for deletion.
* Add error handling to prevent accidental deletion when no query parameters are provided.
* Update AuthService to match the new deleteTokens signature by passing an object instead of a string for email.
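A sketch of building a delete filter only from provided fields, matching the safeguards described above (field names and the helper are illustrative):

```typescript
interface DeleteTokensQuery {
  userId?: string;
  token?: string;
  email?: string;
  identifier?: string;
}

/** Build an $or filter from defined fields only, and refuse to run a delete
 *  with an empty filter (which would match every token). */
function buildTokenDeleteFilter(query: DeleteTokensQuery) {
  const conditions = Object.entries(query)
    .filter(([, value]) => value !== undefined && value !== null)
    .map(([field, value]) => ({ [field]: value }));
  if (conditions.length === 0) {
    throw new Error('deleteTokens requires at least one query parameter');
  }
  return { $or: conditions };
}
```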
2025-09-05 17:26:02 -04:00
Danny Avila
1869854d70 🌐 fix: Prevent MCP Body/Header Timeouts at 5-Minute mark (#9476)
* chore: improve error log for tool error

* fix: add undici as fetch method with agent to prevent body/header timeouts at 5-minute mark
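A minimal sketch of the undici approach described above: undici's defaults cut off headers/body after roughly five minutes, so a custom dispatcher disables those timeouts (the wrapper function is illustrative):

```typescript
import { Agent, fetch } from 'undici';

const noTimeoutAgent = new Agent({
  headersTimeout: 0, // disable the default headers timeout
  bodyTimeout: 0, // disable the default body timeout
});

async function fetchWithoutFiveMinuteLimit(url: string, init?: Parameters<typeof fetch>[1]) {
  // Route the request through the custom dispatcher
  return fetch(url, { ...init, dispatcher: noTimeoutAgent });
}
```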
2025-09-05 17:14:39 -04:00
github-actions[bot]
4dd2998592 🌍 i18n: Update translation.json with latest translations (#9473)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-05 16:59:11 -04:00
Danny Avila
a4a174b3dc 🛠️ refactor: Only Show Agents MCP UI When Configured (#9471) 2025-09-05 12:28:00 -04:00
Danny Avila
65c83317aa 🗣️ feat: Language Support for OpenAI Speech-to-Text (#9470) 2025-09-05 12:01:00 -04:00
Sebastien Bruel
e95e0052da 🗄️ feat: Allow Skipping Transactions When Balance is Disabled (#9419)
* Disable transaction creation when balance is disabled

* Add configuration to disable transactions creation

* chore: remove comments

---------

Co-authored-by: Danny Avila <danacordially@gmail.com>
2025-09-05 11:21:02 -04:00
Danny Avila
0ecafcd38e 🔢 feat: Add Support for Integer and Float JSON Schema Types (#9469)
* 🔧 fix: Extend JsonSchemaType to include 'integer' and 'float' types

* ci: tests for new integer/float types
2025-09-05 11:12:44 -04:00
Pranshu Mahajan
cadfe14abe ⚙️ fix: Dynamic HPA API Version Selection for K8s Compatibility (#9320)
Co-authored-by: Pranshu Mahajan <pranshu.mahajan#foxtel.com.au>
2025-09-05 11:11:51 -04:00
Danny Avila
75dd6fb28b 🛂 refactor: Centralize fileStrategy Resolution for OpenID, SAML, and Social Logins (#9468)
* 🔑 refactor: `fileStrategy` for OpenID, SAML, and Social logins

* ci: Update Apple strategy tests to use correct isEnabled import and enhance handleExistingUser call
2025-09-05 11:09:32 -04:00
Ben Verhees
eef93024d5 🔍 fix: Display File Search Citations Based on Permissions (#9454)
* Make file search citations conditional

* refactor: improve permission handling to avoid redundant checks by including it in artifact

* chore: reorder imports for better organization and clarity

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-09-05 09:14:55 -04:00
Danny Avila
cd73cb0b3e 🔐 fix: Image Validation when Reusing OpenID Token (#9458)
* 🔧 fix: Enhance OpenID token handling with user ID for image path validation

* 🔧 fix: Change logger level to error for user info fetch failure and remove redundant info log in OpenID user lookup

* 🔧 refactor: Remove validateImageRequest from middleware exports and enhance validation logic in validateImageRequest.js

* Removed validateImageRequest from the middleware index.
* Improved error handling and validation checks in validateImageRequest.js, including handling of OpenID tokens, URL length, and malformed URLs.
* Updated tests in validateImages.spec.js to cover new validation scenarios and edge cases.
2025-09-05 03:12:17 -04:00
github-actions[bot]
e705b09280 🌍 i18n: Update translation.json with latest translations (#9439)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-03 12:02:07 -04:00
Danny Avila
23bd4dfbfd 🔧 fix: Handle Missing MCP Config Gracefully in Config/Plugin Routes (#9438)
* 🛠️ fix: Update Plugins and Config Routes to Handle No MCP Config

* refactor: Rename cachedMCPPlugins to mcpPlugins for clarity in PluginController
2025-09-03 11:58:39 -04:00
github-actions[bot]
df17582103 🌍 i18n: Update translation.json with latest translations (#9434)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-03 03:05:36 -04:00
Danny Avila
d79b80a4bf 📜 chore: Remove debug log for request headers in MCPConnection 2025-09-03 03:01:39 -04:00
Danny Avila
45da421e7d 🦾 refactor: filter Model Specs based on user access to Agents (#9433) 2025-09-03 02:59:57 -04:00
Eduardo Cruz Guedes
122ff416ac 🌒 refactor: Theme Handling to use isDark Utility (#9405)
* fix: Refactor theme handling to use isDark utility across components

* 🔧 fix: Update package client version to 0.2.8 and adjust theme import path in ThemeSelector component

---------

Co-authored-by: Marco Beretta <81851188+berry-13@users.noreply.github.com>
2025-09-03 02:56:36 -04:00
github-actions[bot]
b66bf93b31 🌍 i18n: Update translation.json with latest translations (#9381)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-03 02:21:38 -04:00
Samuel Path
6d791e3e12 🚦 feat: Simplify MCP UI integration and add unit tests (#9418) 2025-09-03 02:21:12 -04:00
Michael Forman
f9b12517b0 🌟 fix: Add Composite Indexes to Agent Categories for CosmosDB Compatibility (#9430) 2025-09-03 02:16:18 -04:00
Joseph Licata
195e1e9eb2 ⬆️ refactor: Enable File Search from Upload Option (#9425) 2025-09-03 02:08:48 -04:00
Danny Avila
47aa90df1d 📦 chore: Update data-schemas to 0.0.21 and update IUser plugins type 2025-08-30 23:20:22 -04:00
Danny Avila
460eac36f6 🗨️ fix: Prompts Pagination (#9385)
* 🗨️ fix: Prompts Pagination

* ci: Simplify user middleware setup in prompt tests
2025-08-30 15:58:49 -04:00
Sebastien Bruel
3a47deac07 📋 feat: Support Custom Content-Types in Action Descriptors (#9364) 2025-08-29 23:02:40 -04:00
Dustin Healy
49e8443ec5 ✂️ refactor: MCP UI Separation for Agents (#9237)
* refactor: MCP UI Separation for Agents (Dustin WIP)

feat: separate MCPs into their own lists away from tools + actions and add the status indicator functionality from chat to their dropdown ui

fix: spotify mcp was not persisting on agent creation

feat: show disconnected saved servers and their tools in agent mcp list in created agents

fix: select-all regression fixed (caused by deleting tools we were drawing from for rendering list)

fix: dont show all mcps, only those installed in agent in list

feat: separate ToolSelectDialog for MCPServerTools

fix: uninitialized mcp servers not showing as added in toolselectdialog

refactor: reduce looping in AgentPanelContext for categorizing groups and mcps

refactor: split ToolSelectDialog and MCPToolSelectDialog functionality (still needs customization for custom user vars)

chore: address ESLint comments

chore: address ESLint comments

feat: one-click initialization on MCP servers in agent builder

fix: stop propagation triggering reinit on caret click

refactor: split uninitialized MCPs component from initialized MCPs

feat: new mcp tool select dialog ui with custom user vars

feat: show initialization state for CUV configurable MCPs too

chore: remove unused localization string

fix: deselecting all tools caused a re-render

fix: remove subtools so removal from MCPToolSelectDialog works more consistently

feat: added servers have all tools enabled by default

feat: mcp server list now alphabetical to prevent annoying ui behavior of servers jumping around depending on tool selection

fix: filter out placeholder group mcp tools from any actual tool calls / definitions

feat: indicator now takes you to config dialog for uninitialized servers

feat: show previously configured mcp servers that are now missing from the yaml

feat: select all enabled by default on first add to mcp server list

chore: address ESLint comments

* refactor: MCP UI Separation for Agents (Danny WIP)

chore: remove use of `{serverName}_mcp_{serverName}`

chore: import order

WIP: separate component concerns

refactor: streamline agent mcp tools

refactor: unify MCP server handling and improve tool visibility logic, remove unnecessary normalization or sorting, remove nesting button, make variable names clear

refactor: rename mcpServerIds to mcpServerNames for clarity and consistency across components

refactor: remove groupedMCPTools and toolToServerMap, streamline MCP server handling in context and components to effectively utilize mcpServersMap

refactor: optimize tool selection logic by replacing array includes with Set for improved performance

chore: add error logging for failed auth URL parsing in ToolCall component

refactor: enhance MCP tool handling by improving server name management and updating UI elements for better clarity

* refactor: decouple connection status from useMCPServerManager with useMCPConnectionStatus

* fix: improve MCP tool validation logic to handle unconfigured servers

* chore: enhance log message clarity for MCP server disconnection in updateUserPluginsController

* refactor: simplify connection status extraction in useMCPConnectionStatus hook

* refactor: improve initializing UX

* chore: replace string literal with ResourceType constant in useResourcePermissions

* refactor: cleanup code, remove redundancies, rename variables for clarity

* chore: add back filtering and sorting for mcp tools dialog

* refactor: initializeServer to return response and early return

* refactor: enhance server initialization logic and improve UI for OAuth interaction

* chore: clarify warning message for unconfigured MCP server in handleTools

* refactor: prevent CustomUserVarsSection from submitting tools dialog form

* fix: nested button of button issue in UninitializedMCPTool

* feat: add functionality to revoke custom user variables in MCPToolSelectDialog

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-08-29 22:57:01 -04:00
Samuel Path
d16f93b5f7 🎨 feat: MCP UI basic integration (#9299) 2025-08-29 13:07:19 -04:00
Danny Avila
20b29bbfa6 🗺️ fix: Embedded file handling to use Proper Filename (#9372) 2025-08-29 12:23:18 -04:00
Danny Avila
e2a6937ca6 ⚙️ fix: Update OCR context to use req.config (#9367) 2025-08-29 10:06:03 -04:00
github-actions[bot]
005a0cb84a 🌍 i18n: Update translation.json with latest translations (#9361)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-29 08:49:18 -04:00
owengo
beabe38311 🖼️ fix: Resolve appConfig Access Before Initialization in Image Generation (#9366)
Co-authored-by: Olivier Schiavo <olivier.schiavo@wengo.com>
2025-08-29 08:47:12 -04:00
Danny Avila
62315be197 🔧 fix: Add missing configMiddleware to Convo Import Routes 2025-08-28 23:12:58 -04:00
Danny Avila
a26597a696 📇 refactor: Improve State mgmt. for File uploads and Tool Auth (#9359)
* 🔧 fix: Ensure loading state is correctly set when files are empty or in progress

* 🔧 fix: Update ephemeral agent state on file upload error for execute code tool resource

* 🔧 fix: Reset ephemeral agent state for tool when authentication fails

* refactor: Pass conversation prop to FileFormChat and AttachFileChat components
2025-08-28 23:11:16 -04:00
Danny Avila
8772b04d1d 🗃️ refactor: File Access via Agent; Deny Deletion if not Editor, Allow Viewer (#9357) 2025-08-28 21:16:23 -04:00
Danny Avila
7742b18c9c 🔧 fix: Upload Audio as Text missing Param (#9356) 2025-08-28 21:07:30 -04:00
Arthur Barrett
b75b799e34 🔧 fix: Handle Web API Streams in File Download Route for OpenAI Assistants (#9200) 2025-08-28 12:39:35 -04:00
Danny Avila
43add11b05 🎯 refactor: Custom Endpoint Request-based Header Resolution (#9344)
* refactor: resolve request-based headers for custom endpoints right before LLM request

* ci: clarify request-based header resolution in initializeClient test
2025-08-28 12:33:08 -04:00
Danny Avila
1764de53a5 fix: use appConfig correctly in getVoices 2025-08-28 00:51:22 -04:00
Danny Avila
c0511b9a5f 🔧 fix: MCP Selection Persist and UI Flicker Issues (#9324)
* refactor: useMCPSelect

    - Add useGetMCPTools for fetching MCP tools, for use in useMCPSelect and other hooks
    - remove memoized key
    - remove use of `useChatContext` and require conversationId as prop

* feat: Add MCPPanelContext and integrate conversationId as prop for useMCPSelect across components

- Introduced MCPPanelContext to manage conversationId state.
- Updated MCPSelect, MCPSubMenu, and MCPConfigDialog to accept conversationId as a prop.
- Modified ToolsDropdown and BadgeRow to pass conversationId to relevant components.
- Refactored MCPPanel to utilize MCPPanelProvider for context management.

* fix: remove nested ternary in ServerInitializationSection

- Replaced conditional operator with if-else statements for better readability in determining button text based on server initialization state and reinitialization status.

* refactor: wrap setValueWrap in useCallback for performance optimization

* refactor: streamline useMCPSelect by consolidating storageKey definition

* fix: prevent clearing selections on page refresh by tracking initial load completion

* refactor: simplify concern of useMCPSelect hook

* refactor: move ConfigFieldDetail interface to common types for better reusability, isolate usage of `useGetMCPTools`

* refactor: integrate mcpServerNames into BadgeRowContext and update ToolsDropdown and MCPSelect components
2025-08-28 00:44:49 -04:00
Danny Avila
2483623c88 🔧 fix: type checking for process.browser in api-endpoints.ts 2025-08-27 20:27:57 -04:00
Danny Avila
229d6f2dfe 📦 chore: Update librechat-data-provider to v0.8.006 2025-08-27 20:23:18 -04:00
github-actions[bot]
d5ec838218 🌍 i18n: Update translation.json with latest translations (#9321)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-27 20:15:38 -04:00
Danny Avila
15d7a3d221 🎵 feat: Cumulative Transcription Support for External STT (#9318)
* 🔧 fix: TTS and STT Services to use AppConfig

- Updated `getProviderSchema` and `getProvider` methods to accept an optional `appConfig` parameter, allowing for more flexible configuration retrieval.
- Improved error handling by ensuring that the app configuration is checked before accessing TTS and STT schemas.
- Refactored `processTextToSpeech` and `streamAudio` methods to utilize the new `appConfig` parameter for better clarity and maintainability.

* feat: Cumulative Transcription Support for STT External

* style: fix medium-sized styling for admin settings dialogs
2025-08-27 18:56:04 -04:00
Danny Avila
c3e88b97c8 🎤 feat: Cumulative Transcription Support for AudioRecorder (#9316)
- Added useRef to maintain existing text during audio recording.
- Updated setText to prepend existing text to new transcriptions.
- Modified handleStartRecording and handleStopRecording to manage existing text state.
- Improved spinner icon styling for better visibility.
2025-08-27 18:00:59 -04:00
Danny Avila
ba424666f8 🔐 feat: Add Configurable Min. Password Length (#9315)
- Added support for a minimum password length defined by the MIN_PASSWORD_LENGTH environment variable.
- Updated login, registration, and reset password forms to utilize the configured minimum length.
- Enhanced validation schemas to reflect the new minimum password length requirement.
- Included tests to ensure the minimum password length functionality works as expected.
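A sketch of reading the configured minimum into a validation schema, assuming zod-based form validation and a fallback of 8 for illustration:

```typescript
import { z } from 'zod';

// Read MIN_PASSWORD_LENGTH from the environment, falling back to a default.
const minPasswordLength = Number(process.env.MIN_PASSWORD_LENGTH) || 8;

const passwordSchema = z
  .string()
  .min(minPasswordLength, `Password must be at least ${minPasswordLength} characters`)
  .max(128);
```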
2025-08-27 16:30:56 -04:00
MarcAmick
ea3b671182 🔧 feat: Alternative DNS Lookup for AWS ElastiCache TLS Connections (#9264)
* added REDIS_USE_ALTERNATIVE_DNS_LOOKUP env variable to modify redis connection by adding dnsLookup
this is required when connecting to elasticache for ioredis
see "Special Note: Aws Elasticache Clusters with TLS" on this webpage:  https://www.npmjs.com/package/ioredis

* added REDIS_USE_ALTERNATIVE_DNS_LOOKUP env variable to modify redis connection by adding dnsLookup
this is required when connecting to elasticache for ioredis
see "Special Note: Aws Elasticache Clusters with TLS" on this webpage:  https://www.npmjs.com/package/ioredis

---------

Co-authored-by: Marc Amick <MarcAmick@jhu.edu>
2025-08-27 16:09:07 -04:00
Dustin Healy
f209f616c9 🌍 i18n: Add Slovenian Language (#9313) 2025-08-27 14:02:22 -04:00
colinlin-stripe
961af515d5 🧹 chore: [stripe] remove dangerously set html (#9288) 2025-08-27 13:58:07 -04:00
Danny Avila
a362963017 🐛 fix: String Interpolation in Messages Endpoint from #9155 (#9312)
* feat: move buildTree function for message hierarchy to data provider

* refactor: consolidate buildTree import from utils to data provider

* fix: correct string interpolation in messages function, which caused message search requests to fail
2025-08-27 13:48:48 -04:00
Danny Avila
78d735f35c 📧 fix: Missing Email fallback in openIdJwtLogin (#9311)
* 📧 fix: Missing Email fallback in `openIdJwtLogin`

* chore: Add auth module export to index
2025-08-27 12:59:40 -04:00
Danny Avila
48f6f8f2f8 📎 feat: Upload as Text Support for Plaintext, STT, RAG, and Token Limits (#8868)
* 🪶 feat: Add Support for Uploading Plaintext Files

feat: delineate between OCR and text handling in fileConfig field of config file

- also adds support for passing in mimetypes as just plain file extensions

feat: add showLabel bool to support future synthetic component DynamicDropdownInput

feat: add new combination dropdown-input component in params panel to support file type token limits

refactor: move hovercard to side to align with other hovercards

chore: clean up autogenerated comments

feat: add delineation to file upload path between text and ocr configured filetypes

feat: add token limit checks during file upload

refactor: move textParsing out of ocrEnabled logic

refactor: clean up types for filetype config

refactor: finish decoupling DynamicDropdownInput from fileTokenLimits

fix: move image token cost function into file to fix circular dependency causing unit test to fail and remove unused var for linter

chore: remove out of scope code following review

refactor: make fileTokenLimit conform to existing styles

chore: remove unused localization string

chore: undo changes to DynamicInput and other strays

feat: add fileTokenLimit to all provider config panels

fix: move textParsing back into ocr tool_resource block for now so that it doesn't interfere with other upload types

* 📤 feat: Add RAG API Endpoint Support for Text Parsing (#8849)

* feat: implement RAG API integration for text parsing with fallback to native parsing

* chore: remove TODO now that placeholder and fallback are implemented

* ✈️ refactor: Migrate Text Parsing to TS (#8892)

* refactor: move generateShortLivedToken to packages/api

* refactor: move textParsing logic into packages/api

* refactor: reduce nesting and dry code with createTextFile

* fix: add proper source handling

* fix: mock new parseText and parseTextNative functions in jest file

* ci: add test coverage for textParser

* 💬 feat: Add Audio File Support to Upload as Text (#8893)

* feat: add STT support for Upload as Text

* refactor: move processAudioFile to packages/api

* refactor: move textParsing from utils to files

* fix: remove audio/mp3 from unsupported mimetypes test since it is now supported

* ✂️ feat: Configurable File Token Limits and Truncation (#8911)

* feat: add configurable fileTokenLimit default value

* fix: add stt to fileConfig merge logic

* fix: add fileTokenLimit to mergeFileConfig logic so configurable value is actually respected from yaml

* feat: add token limiting to parsed text files

* fix: add extraction logic and update tests so fileTokenLimit isn't sent to LLM providers

* fix: address comments

* refactor: rename textTokenLimiter.ts to text.ts

* chore: update form-data package to address CVE-2025-7783 and update package-lock

* feat: use default supported mime types for ocr on frontend file validation

* fix: should be using logger.debug not console.debug

* fix: mock existsSync in text.spec.ts

* fix: mock logger rather than every one of its function calls

* fix: reorganize imports and streamline file upload processing logic

* refactor: update createTextFile function to use destructured parameters and improve readability

* chore: update file validation to use EToolResources for improved type safety

* chore: update import path for types in audio processing module

* fix: update file configuration access and replace console.debug with logger.debug for improved logging

---------

Co-authored-by: Dustin Healy <dustinhealy1@gmail.com>
Co-authored-by: Dustin Healy <54083382+dustinhealy@users.noreply.github.com>
2025-08-27 03:44:39 -04:00
ethanlaj
74bc0440f0 🐋 chore: switch from ankane/pgvector to pgvector/pgvector (#9245) 2025-08-27 02:04:58 -04:00
José Pedro Silva
18d5a75cdc 🌐 feat: Add support to SubDirectory hosting (#9155)
* feat: Add support to SubDirectory hosting

* fix: address linting and failing test

* fix: browser context validation

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-08-27 02:00:18 -04:00
Danny Avila
a820863e8b 💲 fix: Prevent Single-dollar LaTeX for abbrev. Currency (K, M, B) (#9293) 2025-08-26 23:33:56 -04:00
Danny Avila
9a210971f5 🛜 refactor: Streamline App Config Usage (#9234)
* WIP: app.locals refactoring

WIP: appConfig

fix: update memory configuration retrieval to use getAppConfig based on user role

fix: update comment for AppConfig interface to clarify purpose

🏷️ refactor: Update tests to use getAppConfig for endpoint configurations

ci: Update AppService tests to initialize app config instead of app.locals

ci: Integrate getAppConfig into remaining tests

refactor: Update multer storage destination to use promise-based getAppConfig and improve error handling in tests

refactor: Rename initializeAppConfig to setAppConfig and update related tests

ci: Mock getAppConfig in various tests to provide default configurations

refactor: Update convertMCPToolsToPlugins to use mcpManager for server configuration and adjust related tests

chore: rename `Config/getAppConfig` -> `Config/app`

fix: streamline OpenAI image tools configuration by removing direct appConfig dependency and using function parameters

chore: correct parameter documentation for imageOutputType in ToolService.js

refactor: remove `getCustomConfig` dependency in config route

refactor: update domain validation to use appConfig for allowed domains

refactor: use appConfig registration property

chore: remove app parameter from AppService invocation

refactor: update AppConfig interface to correct registration and turnstile configurations

refactor: remove getCustomConfig dependency and use getAppConfig in PluginController, multer, and MCP services

refactor: replace getCustomConfig with getAppConfig in STTService, TTSService, and related files

refactor: replace getCustomConfig with getAppConfig in Conversation and Message models, update tempChatRetention functions to use AppConfig type

refactor: update getAppConfig calls in Conversation and Message models to include user role for temporary chat expiration

ci: update related tests

refactor: update getAppConfig call in getCustomConfigSpeech to include user role

fix: update appConfig usage to access allowedDomains from actions instead of registration

refactor: enhance AppConfig to include fileStrategies and update related file strategy logic

refactor: update imports to use normalizeEndpointName from @librechat/api and remove redundant definitions

chore: remove deprecated unused RunManager

refactor: get balance config primarily from appConfig

refactor: remove customConfig dependency for appConfig and streamline loadConfigModels logic

refactor: remove getCustomConfig usage and use app config in file citations

refactor: consolidate endpoint loading logic into loadEndpoints function

refactor: update appConfig access to use endpoints structure across various services

refactor: implement custom endpoints configuration and streamline endpoint loading logic

refactor: update getAppConfig call to include user role parameter

refactor: streamline endpoint configuration and enhance appConfig usage across services

refactor: replace getMCPAuthMap with getUserMCPAuthMap and remove unused getCustomConfig file

refactor: add type annotation for loadedEndpoints in loadEndpoints function

refactor: move /services/Files/images/parse to TS API

chore: add missing FILE_CITATIONS permission to IRole interface

refactor: restructure toolkits to TS API

refactor: separate manifest logic into its own module

refactor: consolidate tool loading logic into a new tools module for startup logic

refactor: move interface config logic to TS API

refactor: migrate checkEmailConfig to TypeScript and update imports

refactor: add FunctionTool interface and availableTools to AppConfig

refactor: decouple caching and DB operations from AppService, make part of consolidated `getAppConfig`

WIP: fix tests

* fix: rebase conflicts

* refactor: remove app.locals references

* refactor: replace getBalanceConfig with getAppConfig in various strategies and middleware

* refactor: replace appConfig?.balance with getBalanceConfig in various controllers and clients

* test: add balance configuration to titleConvo method in AgentClient tests

* chore: remove unused `openai-chat-tokens` package

* chore: remove unused imports in initializeMCPs.js

* refactor: update balance configuration to use getAppConfig instead of getBalanceConfig

* refactor: integrate configMiddleware for centralized configuration handling

* refactor: optimize email domain validation by removing unnecessary async calls

* refactor: simplify multer storage configuration by removing async calls

* refactor: reorder imports for better readability in user.js

* refactor: replace getAppConfig calls with req.config for improved performance

* chore: replace getAppConfig calls with req.config in tests for centralized configuration handling

* chore: remove unused override config

* refactor: add configMiddleware to endpoint route and replace getAppConfig with req.config

* chore: remove customConfig parameter from TTSService constructor

* refactor: pass appConfig from request to processFileCitations for improved configuration handling

* refactor: remove configMiddleware from endpoint route and retrieve appConfig directly in getEndpointsConfig if not in `req.config`

* test: add mockAppConfig to processFileCitations tests for improved configuration handling

* fix: pass req.config to hasCustomUserVars and call without await after synchronous refactor

* fix: type safety in useExportConversation

* refactor: retrieve appConfig using getAppConfig in PluginController and remove configMiddleware from plugins route, to avoid always retrieving when plugins are cached

* chore: change `MongoUser` typedef to `IUser`

* fix: Add `user` and `config` fields to ServerRequest and update JSDoc type annotations from Express.Request to ServerRequest

* fix: remove unused setAppConfig mock from Server configuration tests
2025-08-26 12:10:18 -04:00
Danny Avila
e1ad235f17 🌍 i18n: Add missing com_ui_no_changes 2025-08-25 17:59:46 -04:00
Danny Avila
4a0b329e3e v0.8.0-rc3 (#9269)
* chore: bump data-provider to v0.8.004

* 📦 chore: bump @librechat/data-schemas version to 0.0.20

* 📦 chore: bump @librechat/api version to 1.3.4

*  v0.8.0-rc3

* docs: update README

* docs: update README

* docs: enhance multilingual UI section in README
2025-08-25 17:35:21 -04:00
github-actions[bot]
a22359de5e 🌍 i18n: Update translation.json with latest translations (#9267)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-25 17:14:51 -04:00
Danny Avila
bbfe4002eb 🏷️ chore: Add Missing Localizations for Agents, Categories, Bookmarks (#9266)
* fix: error when updating bookmarks if no query data

* feat: localize bookmark dialog, form labels and validation messages, also improve validation

* feat: add localization for EmptyPromptPreview component and update translation.json

* chore: add missing localizations for static UI text

* chore: update AgentPanelContextType and useGetAgentsConfig to support null configurations

* refactor: update agent categories to support localization and custom properties, improve related typing

* ci: add localization for 'All' category and update tab names in accessibility tests

* chore: remove unused AgentCategoryDisplay component and its tests

* chore: add localization handling for agent category selector

* chore: enhance AgentCard to support localized category labels and add related tests

* chore: enhance i18n unused keys detection to include additional source directories and improve handling for agent category keys
2025-08-25 13:54:13 -04:00
Marco Beretta
94426a3cae 🎭 refactor: Avatar Loading UX and Fix Initials Rendering Bugs (#9261)
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-08-25 12:06:00 -04:00
Danny Avila
e559f0f4dc 📜 chore: Add Timestamp to Error logs (#9262) 2025-08-25 11:05:15 -04:00
github-actions[bot]
15c9c7e1f4 🌍 i18n: Update translation.json with latest translations (#9250)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-25 10:52:24 -04:00
Danny Avila
ac641e7cba 🗄️ refactor: Resource Migration Scripts for DocumentDB Compatibility (#9249)
* refactor: Resource Migration Scripts for DocumentDB compatibility

* fix: Correct type annotation for `db` parameter in ensureCollectionExists function
2025-08-25 03:01:50 -04:00
Danny Avila
1915d7b195 🧮 fix: Properly Escape Currency and Prevent Code Block LaTeX Bugs (#9248)
* fix(latex): prevent LaTeX conversion when closing $ is preceded by backtick

When text contained patterns like "$lookup namespace" followed by "`$lookup`",
the regex would match from the first $ to the backtick's $, treating the entire
span as a LaTeX expression. This caused programming constructs to be incorrectly
converted to double dollars.

- Added negative lookbehind (?<!`) to single dollar regex
- Prevents matching when closing $ immediately follows a backtick
- Fixes issues with inline code blocks containing $ symbols

* fix(latex): detect currency amounts with 4+ digits without commas

The currency regex pattern \d{1,3} only matched amounts with 1-3 initial digits,
causing amounts like $1157.90 to be interpreted as LaTeX instead of currency.
This resulted in text like "$1157.90 (text) + $500 (text) = $1657.90" being
incorrectly converted to a single LaTeX expression.

- Changed pattern from \d{1,3} to \d+ to match any number of initial digits
- Now properly escapes $1000, $10000, $123456, etc. without requiring commas
- Maintains support for comma-formatted amounts like $1,234.56

* fix(latex): support currency with unlimited decimal places

The currency regex limited decimal places to 1-2 digits (\.\d{1,2}), which
failed to properly escape amounts with more precision like cryptocurrency
values ($0.00001234), gas prices ($3.999), or exchange rates ($1.23456).

- Changed decimal pattern from \.\d{1,2} to \.\d+
- Now supports any number of decimal places
- Handles edge cases like scientific calculations and high-precision values
2025-08-25 02:44:13 -04:00
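
Read together, the three fixes above describe two regex adjustments: currency amounts may have any number of integer and decimal digits, and an inline LaTeX match may not close on a dollar sign that immediately follows a backtick. A small TypeScript sketch of those patterns, illustrative rather than the actual LibreChat source:

    // Currency: \d+ instead of \d{1,3}, and \.\d+ instead of \.\d{1,2}
    const CURRENCY = /\$\d+(?:,\d{3})*(?:\.\d+)?/;
    // Single-dollar LaTeX: the (?<!`) lookbehind rejects a closing $ preceded by a backtick
    const SINGLE_DOLLAR_LATEX = /\$([^$\n]+?)(?<!`)\$/;

    console.log(CURRENCY.test('$1157.90'));                     // true  -> escaped as currency
    console.log(CURRENCY.test('$0.00001234'));                  // true  -> high-precision decimals
    console.log(SINGLE_DOLLAR_LATEX.test('$lookup `$lookup`')); // false -> closing $ follows a backtick
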
github-actions[bot]
c2f4b383f2 🌍 i18n: Update translation.json with latest translations (#9228)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-24 12:42:57 -04:00
Danny Avila
939af59950 🏄‍♂️ refactor: Improve Cancelled Stream Handling for Pending Authentication (#9235) 2025-08-24 12:42:34 -04:00
Danny Avila
7d08da1a8a 🪜 fix: userMCPAuthMap Sequential Agents assignment 2025-08-24 11:37:51 -04:00
Danny Avila
543b617e1c 📦 chore: bump cipher-base and sha.js (#9227) 2025-08-23 03:32:40 -04:00
Danny Avila
c827fdd10e 🚦 feat: Auto-reinitialize MCP Servers on Request (#9226) 2025-08-23 03:27:05 -04:00
Marco Beretta
ac608ded46 feat: Add cursor pagination utilities and refine user/group/role types in @librechat/data-schemas (#9218)
* feat: Add pagination interfaces and update user and group types for better data handling

* fix: Update data-schemas version to 0.0.19
2025-08-23 00:18:31 -04:00
mattmueller-stripe
0e00f357a6 🔗 chore: Remove <link href='#' /> from index.html (#9222) 2025-08-23 00:16:32 -04:00
Danny Avila
c465d7b732 🪪 fix: Preserve Existing Interface Permissions When Updating Config (#9199) 2025-08-21 12:49:45 -04:00
Danny Avila
aba0a93d1d 🧰 fix: Available Tools Retrieval with correct MCP Caching (#9181)
* fix: available tools retrieval with correct mcp caching and conversion

* test: Enhance PluginController tests with MCP tool mocking and conversion

* refactor: Simplify PluginController tests by removing unused mocks and enhancing test clarity
2025-08-20 22:33:54 -04:00
Danny Avila
a49b2b2833 🛣️ feat: directEndpoint Fetch Override for Custom Endpoints (#9179)
* feat: Add directEndpoint option to OpenAIConfigOptions and update fetch logic to override /chat/completions URL

* feat: Add directEndpoint support to fetchModels and update loadConfigModels logic
2025-08-20 15:55:32 -04:00
Danny Avila
e0ebb7097e fix: AbortSignal Cleanup Logic for New Chats (#9177)
* chore: import paths for isEnabled and logger in title.js

*  fix: `AbortSignal` Cleanup Logic for New Chats

* test: Add `isNewConvo` parameter to onStart expectation in BaseClient tests
2025-08-20 14:56:07 -04:00
Dustin Healy
9a79635012 🌍 i18n: Add Bosnian and Norsk Bokmål Languages (#9176) 2025-08-20 14:17:23 -04:00
Danny Avila
ce19abc968 🆔 feat: Add User ID to Anthropic API Payload as Metadata (#9174) 2025-08-20 13:42:08 -04:00
Danny Avila
49cd3894aa 🛠️ fix: Restrict Editable Content Types & Consolidate Typing (#9173)
* fix: only allow editing expected content types & align typing app-wide

* chore: update TPayload to use TEditedContent type for editedContent
2025-08-20 13:21:47 -04:00
Danny Avila
da4aa37493 🛠️ refactor: Consolidate MCP Tool Caching (#9172)
* 🛠️ refactor: Consolidate MCP Tool Caching

* 🐍 fix: Correctly mock and utilize updateMCPUserTools in MCP route tests
2025-08-20 12:19:29 -04:00
github-actions[bot]
5a14ee9c6a 🌍 i18n: Update translation.json with latest translations (#9151)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-19 12:05:49 -04:00
Danny Avila
3394aa5030 🗃️ fix: Only Unlink Temp files on Error for Firebase File Uploads (#9152) 2025-08-19 12:05:27 -04:00
Danny Avila
cee0579e0e 📜 chore: update librechat.example.yaml 2025-08-19 11:26:47 -04:00
Helge Wiethoff
d6c173c94b 🐍 fix: Use Standard Mongoose Module Resolution in Config Scripts (#9143) 2025-08-19 11:15:09 -04:00
Andrés Restrepo
beff848a3f 🛡️ fix: Add Null Checks to Parameter Settings to Prevent Undefined Access (#9108)
Fixes "Cannot read properties of undefined (reading 'key')" error in parameter panels by filtering out null/undefined values before mapping operations.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-19 11:12:30 -04:00
Ihsan Soydemir
b9bc3123d6 🐛 fix: Correct Next Refill Date Logic for Balance Settings (#9121) 2025-08-19 11:11:33 -04:00
Dustin Healy
639c7ad6ad 📬 feat: Agent Support Email Address Validation (#9128)
* fix: email-regex realtime updates and looser validation

* feat: add zod validation to email address input

* refactor: emailValidation to email
2025-08-19 11:07:01 -04:00
Danny Avila
822e2310ce 🚮 fix: Remove Filtering Logic Before MCP Initialization (#9149) 2025-08-19 11:03:11 -04:00
Federico Ruggi
2a0a8f6beb 📛 refactor: Decouple MCP Dialog UI from BadgeRowContext (#8920)
* decouple MCP dialog from BadgeRowContext

* chore: import order and style according to guidelines

---------

Co-authored-by: Dustin Healy <dustinhealy1@gmail.com>
2025-08-18 11:20:45 -04:00
Danny Avila
a6fd32a15a 🏷️ refactor: Normalize Request Headers in setRequestHeaders (#9106) 2025-08-17 13:23:46 -04:00
github-actions[bot]
80a1a57fde 🌍 i18n: Update translation.json with latest translations (#9104)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-17 12:25:49 -04:00
Danny Avila
3576391482 📦 chore: Bump Package Versions and Update Data Provider Workflow (#9103)
* 📦 chore: Bump `@librechat/api` to v1.3.3

* 📦 chore: Bump `librechat-data-provider` to v0.8.003

* 📦 chore: Bump `@librechat/data-schemas` to v0.0.18

* 📦 chore: Bump `@librechat/client` to v0.2.6

* 📦 chore: Update workflow name for `librechat-data-provider` and add manual dispatch

* 📦 chore: Update Node.js version to 20 in data provider workflow
2025-08-17 12:07:42 -04:00
Danny Avila
55557f7cc8 🌀 chore: Resolve primeResources Typescript Warning 2025-08-16 21:00:16 -04:00
Danny Avila
d7d02766ea 🏷️ feat: Request Placeholders for Custom Endpoint & MCP Headers (#9095)
* feat: Add conversation ID support to custom endpoint headers

- Add LIBRECHAT_CONVERSATION_ID to customUserVars when provided
- Pass conversation ID to header resolution for dynamic headers
- Add comprehensive test coverage

Enables custom endpoints to access conversation context using {{LIBRECHAT_CONVERSATION_ID}} placeholder.

* fix: filter out unresolved placeholders from headers (thanks @MrunmayS)

* feat: add support for request body placeholders in custom endpoint headers

- Add {{LIBRECHAT_BODY_*}} placeholders for conversationId, parentMessageId, messageId
- Update tests to reflect new body placeholder functionality

* refactor resolveHeaders

* style: minor styling cleanup

* fix: type error in unit test

* feat: add body to other endpoints

* feat: add body for mcp tool calls

* chore: remove changes that unnecessarily increase scope after clarification of requirements

* refactor: move http.ts to packages/api and have RequestBody intersect with Express request body

* refactor: processMCPEnv now uses single object argument pattern

* refactor: update processMCPEnv to use 'options' parameter and align types across MCP connection classes

* feat: enhance MCP connection handling with dynamic request headers to pass request body fields

---------

Co-authored-by: Gopal Sharma <gopalsharma@gopal.sharma1>
Co-authored-by: s10gopal <36487439+s10gopal@users.noreply.github.com>
Co-authored-by: Dustin Healy <dustinhealy1@gmail.com>
2025-08-16 20:45:55 -04:00
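
The placeholder behavior described in #9095 above can be pictured as a substitution pass over the configured headers, after which any header still containing an unresolved {{LIBRECHAT_*}} token is dropped. A hedged TypeScript sketch; the exact placeholder spellings and the real resolveHeaders signature in packages/api may differ:

    type RequestBody = { conversationId?: string; parentMessageId?: string; messageId?: string };

    function resolveHeaderPlaceholders(
      headers: Record<string, string>,
      body: RequestBody,
    ): Record<string, string> {
      // assumed placeholder names, based on the commit description above
      const values: Record<string, string | undefined> = {
        '{{LIBRECHAT_CONVERSATION_ID}}': body.conversationId,
        '{{LIBRECHAT_BODY_CONVERSATIONID}}': body.conversationId,
        '{{LIBRECHAT_BODY_PARENTMESSAGEID}}': body.parentMessageId,
        '{{LIBRECHAT_BODY_MESSAGEID}}': body.messageId,
      };
      const resolved: Record<string, string> = {};
      for (const [name, template] of Object.entries(headers)) {
        let value = template;
        for (const [placeholder, replacement] of Object.entries(values)) {
          if (replacement != null) {
            value = value.split(placeholder).join(replacement);
          }
        }
        // headers with unresolved placeholders are filtered out, per the fix above
        if (!/\{\{LIBRECHAT_[A-Z_]+\}\}/.test(value)) {
          resolved[name] = value;
        }
      }
      return resolved;
    }
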
Marco Beretta
627f0bffe5 🔧 chore: Update package versions for data-provider and data-schemas (#9096) 2025-08-16 19:24:54 -04:00
Danny Avila
8d1d95371f ®️ chore: Remove Unnecessary sourcemapPathTransform in Rollup Config 2025-08-16 16:53:43 -04:00
Danny Avila
8bcdc041b2 ⚙️ refactor: Only register OpenID Strategy if Config Succeeded (#9094)
* fix: register the openId strategy only if setupOpenId succeeded

* chore: linting and update imports

* refactor: extract OpenID configuration into a separate function

---------

Co-authored-by: Denis <denis.sheremetov@gmail.com>
2025-08-16 14:49:03 -04:00
Luís André
9b6395d955 🌐 feat: Configurable Redis Cluster Mode with Single URI Support (#9039)
*  feat: Add support for enabling Redis cluster configuration

*  feat: Enhance Redis client initialization to support cluster configuration without multiple URIs

*  feat: Add tests for USE_REDIS_CLUSTER configuration and validation

* 🐞 fix: Remove unnecessary blank line in cacheConfig tests
2025-08-16 14:41:53 -04:00
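
The single-URI cluster mode above boils down to a flag that switches the cache layer between a standalone client and a cluster client built from one seed node. A rough TypeScript sketch assuming ioredis; USE_REDIS_CLUSTER comes from the commit, the other names are illustrative:

    import Redis, { Cluster } from 'ioredis';

    const uri = process.env.REDIS_URI ?? 'redis://127.0.0.1:6379';
    const { hostname, port } = new URL(uri);

    // With a cluster, one seed node is enough: the client discovers the remaining nodes itself.
    const client = process.env.USE_REDIS_CLUSTER === 'true'
      ? new Cluster([{ host: hostname, port: Number(port || 6379) }])
      : new Redis(uri);
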
Danny Avila
ad1503abdc 🧪 feat: Claude Sonnet 4 - 1M Context Window (Beta Header) (#9093)
* adding beta header context-1m-2025-08-07 to claude sonnet 4 to increase context window to 1M tokens

* adding context-1m beta header to test cases

* ci: Update Anthropic `getLLMConfig` tests and headers for model variations

- Refactored test cases to ensure proper handling of model variations for 'claude-sonnet-4'.
- Cleaned up unused mock implementations in tests for better clarity and performance.

* refactor: regex in header retrieval for 'claude-sonnet-4' models

* refactor: default tokens for 'claude-sonnet-4' to `1,000,000`

* refactor: add token value retrieval and pattern matching to model tests

---------

Co-authored-by: Dirk Petersen <no-reply@nowhere.com>
2025-08-16 13:36:46 -04:00
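
The beta header added in #9093 above is a per-model opt-in: claude-sonnet-4 variants get an extra anthropic-beta header so requests use the 1M-token context window. A minimal sketch of the idea; the actual pattern matching in getLLMConfig is more involved:

    // Illustrative only: attach the 1M-context beta header for claude-sonnet-4 models.
    function getClaudeHeaders(model: string): Record<string, string> {
      return /claude-sonnet-4/.test(model)
        ? { 'anthropic-beta': 'context-1m-2025-08-07' }
        : {};
    }
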
Jón Levy
a6d7ebf22e 🛠️ fix: Workaround for Federated OpenID Nonce Validation Issues (#9067)
* fix: generate nonce for federated providers

* fix: lint issues

* chore: comments

---------

Co-authored-by: Danny Avila <danacordially@gmail.com>
2025-08-16 13:14:01 -04:00
Danny Avila
cebf140bce 🔄 fix: Add Azure to Recognized and Content array providers for MCP Tool Calls (#9092) 2025-08-16 12:44:12 -04:00
Danny Avila
cc0cf359a2 🔧 fix: Remove Unused Duplicate Group Methods (#9091) 2025-08-16 12:17:12 -04:00
Danny Avila
3547873bc4 🧑‍💻 refactor: Secure Field Selection for 2FA & API Build Sourcemap (#9087)
* refactor: `packages/api` build scripts for better inline debugging

* refactor: Explicitly select secure fields since they are no longer returned by default; exclude backupCodes from user data retrieval in authentication and 2FA processes

* refactor: Backup Codes UI to not expect backup codes, only regeneration

* refactor: Ensure secure fields are deleted from user data in getUserController
2025-08-15 18:55:49 -04:00
Danny Avila
50b7bd6643 🔄 fix: Ensure lastRefill Date for Existing Users & Refactor Balance Middleware (#9086)
- Deleted setBalanceConfig middleware and its associated file.
- Introduced createSetBalanceConfig factory function to create middleware for synchronizing user balance settings.
- Updated auth and oauth routes to use the new balance configuration middleware.
- Added comprehensive tests for the new balance middleware functionality.
- Updated package versions and dependencies in package.json and package-lock.json.
- Added balance types and updated middleware index to export new balance middleware.
2025-08-15 17:02:49 -04:00
Danny Avila
81186312ef 🪙 refactor: Remove Title maxTokens & Support New LMStudio/Ollama Reasoning Format (#9085)
* 📦 chore: bump `@librechat/agents` to v2.4.76

* refactor: remove default maxTokens from title `clientOptions`
2025-08-15 15:29:16 -04:00
Marco Beretta
4ec7bcb60f 🪟 style: Agent Marketplace UI Responsiveness, a11y, and Navigation (#9068)
* refactor: Agent Marketplace Button with access control

* fix(agent-marketplace): update marketplace UI and access control

* fix(agent-card): handle optional agent description for accessibility

* fix(agent-card): remove unnecessary icon checks from tests

* chore: remove unused keys

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-08-15 14:59:10 -04:00
Marco Beretta
c78fd0fc83 🔐 feat: Group schema support, refine user schema security, and improve types (#9070)
* feat: Add groupSchema and update userSchema to hide sensitive fields

* chore: Bump version of @librechat/data-schemas to 0.0.16
2025-08-14 22:58:54 -04:00
Danny Avila
d711fc7852 🤝 refactor: Clarify labels for Sharing Permissions 2025-08-14 22:56:49 -04:00
Danny Avila
6af7efd0f4 🤖 fix: Active Tab Logic for Promoted Agents in Agent Marketplace (#9069)
* 🤖 fix: Active Tab Logic in AgentMarketplace for Promoted Agents

* test: increase render time threshold for Virtual Scrolling Performance test
2025-08-14 22:56:39 -04:00
Danny Avila
d57e7aec73 🥷 fix: Correct Agents Handling for Marketplace Users (#9065)
* refactor: Introduce ModelSelectorChatContext and integrate with ModelSelector

* fix: agents handling in ModelSelector to show expected agents if user has marketplace access
2025-08-14 19:13:48 -04:00
Danny Avila
e4e25aaf2b 🔎 feat: Add Prompt and Agent Permissions Migration Checks (#9063)
* chore: fix mock typing in packages/api tests

* chore: improve imports, type handling and method signatures for MCPServersRegistry

* chore: use enum in migration scripts

* chore: ParsedServerConfig type to enhance server configuration handling

* feat: Implement agent permissions migration check and logging

* feat: Integrate migration checks into server initialization process

* feat: Add prompt permissions migration check and logging to server initialization

* chore: move prompt formatting functions to dedicated prompts dir
2025-08-14 17:20:00 -04:00
Danny Avila
e8ddd279fd 🎏 refactor: Streamline Role Permissions from Interface Config 2025-08-14 02:15:33 -04:00
Danny Avila
b742c8c7f9 👁️‍🗨️ refactor: use PermissionBits.VIEW in useAgentsMap for requiredPermission 2025-08-13 16:24:26 -04:00
Danny Avila
803ade8601 🔄 refactor: Principal Type Handling in Search Principals to use Array 2025-08-13 16:24:26 -04:00
Danny Avila
dcd96c29c5 🗨️ refactor: Optimize Prompt Queries
feat: Refactor prompt and prompt group schemas; move types to separate file

feat: Implement paginated access to prompt groups with filtering and public visibility

refactor: Add PromptGroups context provider and integrate it into relevant components

refactor: Optimize filter change handling and query invalidation in usePromptGroupsNav hook

refactor: Simplify context usage in FilterPrompts and GroupSidePanel components
2025-08-13 16:24:25 -04:00
Danny Avila
53c31b85d0 🗞️ refactor: Apply Role Permissions at Startup only if Missing or Configured 2025-08-13 16:24:25 -04:00
Danny Avila
d07c2b3475 🛒 feat: Implement Marketplace Permissions Management UI
- Added MarketplaceAdminSettings component for managing marketplace permissions.
- Updated roles.js to include marketplace permissions in the API.
- Refactored interface.js to streamline marketplace permissions handling.
- Enhanced Marketplace component to integrate admin settings.
- Updated localization files to include new marketplace-related keys.
- Added new API endpoint for updating marketplace permissions in data-service.
2025-08-13 16:24:24 -04:00
Danny Avila
a434d28579 🧑‍🤝‍🧑 feat: Add People Picker Permissions Management UI 2025-08-13 16:24:24 -04:00
Marco Beretta
d82a63642d 🖼️ style: Improve Marketplace & Sharing Dialog UI
feat: Enhance CategoryTabs and Marketplace components for better responsiveness and navigation

feat: Refactor AgentCard and AgentGrid components for improved layout and accessibility

feat: Implement animated category transitions in AgentMarketplace and update NewChat component layout

feat: Refactor UI components for improved styling and accessibility in sharing dialogs

refactor: remove GenericManagePermissionsDialog and GrantAccessDialog components

- Deleted GenericManagePermissionsDialog and GrantAccessDialog components to streamline sharing functionality.
- Updated ManagePermissionsDialog to utilize AccessRolesPicker directly.
- Introduced UnifiedPeopleSearch for improved people selection experience.
- Enhanced PublicSharingToggle with InfoHoverCard for better user guidance.
- Adjusted AgentPanel to change error status to warning for duplicate agent versions.
- Updated translations to include new keys for search and access management.

feat: Add responsive design for SelectedPrincipalsList and improve layout in GenericGrantAccessDialog

feat: Enhance styling in SelectedPrincipalsList and SearchPicker components for improved UI consistency

feat: Improve PublicSharingToggle component with enhanced styling and accessibility features

feat: Introduce InfoHoverCard component and refactor enums for better organization

feat: Implement infinite scroll for agent grids and enhance performance

- Added `useInfiniteScroll` hook to manage infinite scrolling behavior in agent grids.
- Integrated infinite scroll functionality into `AgentGrid` and `VirtualizedAgentGrid` components.
- Updated `AgentMarketplace` to pass the scroll container to the agent grid components.
- Refactored loading indicators to show a spinner instead of a "Load More" button.
- Created `VirtualizedAgentGrid` component for optimized rendering of agent cards using virtualization.
- Added performance tests for `VirtualizedAgentGrid` to ensure efficient handling of large datasets.
- Updated translations to include new messages for end-of-results scenarios.

chore: Remove unused permission-related UI localization keys

ci: Update Agent model tests to handle duplicate support_contact updates

- Modified tests to ensure that updating an agent with the same support_contact does not create a new version and returns successfully.
- Enhanced verification for partial changes in support_contact, confirming no new version is created when content remains the same.

chore: Address ESLint, clean up unused imports and improve prop definitions in various components

ci: fix tests

ci: update tests

chore: remove unused search localization keys
2025-08-13 16:24:24 -04:00
Danny Avila
9585db14ba 🤖 fix: Edge Case for Agent List Access Control
- Refactored `getListAgentsByAccess` to streamline query construction for accessible agents.
- Added comprehensive security tests for `getListAgentsByAccess` and `getListAgentsHandler` to ensure proper access control and filtering based on user permissions.
- Enhanced test coverage for various scenarios, including pagination, category filtering, and handling of non-existent IDs.
2025-08-13 16:24:23 -04:00
Danny Avila
c191af6c9b 🎨 style: Theming in SharePointPickerDialog, PrincipalAvatar, and PeoplePickerSearchItem 2025-08-13 16:24:23 -04:00
Danny Avila
39346d6b8e 🛂 feat: Role as Permission Principal Type
WIP: Role as Permission Principal Type

WIP: add user role check optimization to user principal check, update type comparisons

WIP: cover edge cases for string vs ObjectId handling in permission granting and checking

chore: Update people picker access middleware to use PrincipalType constants

feat: Enhance people picker access control to include roles permissions

chore: add missing default role schema values for people picker perms, cleanup typing

feat: Enhance PeoplePicker component with role-specific UI and localization updates

chore: Add missing `VIEW_ROLES` permission to role schema
2025-08-13 16:24:23 -04:00
Danny Avila
28d63dab71 🔧 refactor: Integrate PrincipalModel Enum for Principal Handling
- Replaced string literals for principal models ('User', 'Group') with the new PrincipalModel enum across various models, services, and tests to enhance type safety and consistency.
- Updated permission handling in multiple files to utilize the PrincipalModel enum, improving maintainability and reducing potential errors.
- Ensured all relevant tests reflect these changes to maintain coverage and functionality.
2025-08-13 16:24:22 -04:00
Danny Avila
49d1cefe71 🔧 refactor: Add and use PrincipalType Enum
- Replaced string literals for principal types ('user', 'group', 'public') with the new PrincipalType enum across various models, services, and tests for improved type safety and consistency.
- Updated permission handling in multiple files to utilize the PrincipalType enum, enhancing maintainability and reducing potential errors.
- Ensured all relevant tests reflect these changes to maintain coverage and functionality.
2025-08-13 16:24:22 -04:00
Danny Avila
0262c25989 🌏 chore: Remove unused localization keys from translation.json
- Deleted keys related to sharing prompts and public access that are no longer in use, streamlining the localization file.
2025-08-13 16:24:21 -04:00
Danny Avila
90b037a67f 🧪 ci: Update PermissionService tests for PromptGroup resource type
- Refactor tests to use PromptGroup roles instead of Project roles.
- Initialize models and seed default roles in test setup.
- Update error handling for non-existent resource types.
- Ensure proper cleanup of test data while retaining seeded roles.
2025-08-13 16:24:21 -04:00
Danny Avila
fc8fd489d6 🔗 fix: File Citation Processing to Use Tool Artifacts 2025-08-13 16:24:21 -04:00
Danny Avila
81b32e400a 🔧 refactor: Organize Sharing/Agent Components and Improve Type Safety
refactor: organize Sharing/Agent components, improve type safety for resource types and access role ids, rename enums to PascalCase

refactor: organize Sharing/Agent components, improve type safety for resource types and access role ids

chore: move sharing related components to dedicated "Sharing" directory

chore: remove PublicSharingToggle component and update index exports

chore: move non-sidepanel agent components to `~/components/Agents`

chore: move AgentCategoryDisplay component with tests

chore: remove commented out code

refactor: change PERMISSION_BITS from const to enum for better type safety

refactor: reorganize imports in GenericGrantAccessDialog and update index exports for hooks

refactor: update type definitions to use ACCESS_ROLE_IDS for improved type safety

refactor: remove unused canAccessPromptResource middleware and related code

refactor: remove unused prompt access roles from createAccessRoleMethods

refactor: update resourceType in AclEntry type definition to remove unused 'prompt' value

refactor: introduce ResourceType enum and update resourceType usage across data provider files for improved type safety

refactor: update resourceType usage to ResourceType enum across sharing and permissions components for improved type safety

refactor: standardize resourceType usage to ResourceType enum across agent and prompt models, permissions controller, and middleware for enhanced type safety

refactor: update resourceType references from PROMPT_GROUP to PROMPTGROUP for consistency across models, middleware, and components

refactor: standardize access role IDs and resource type usage across agent, file, and prompt models for improved type safety and consistency

chore: add typedefs for TUpdateResourcePermissionsRequest and TUpdateResourcePermissionsResponse to enhance type definitions

chore: move SearchPicker to PeoplePicker dir

refactor: implement debouncing for query changes in SearchPicker for improved performance

chore: fix typing, import order for agent admin settings

fix: agent admin settings, prevent agent form submission

refactor: rename `ACCESS_ROLE_IDS` to `AccessRoleIds`

refactor: replace PermissionBits with PERMISSION_BITS

refactor: replace PERMISSION_BITS with PermissionBits
2025-08-13 16:24:20 -04:00
Danny Avila
ae732b2ebc 🗨️ feat: Granular Prompt Permissions via ACL and Permission Bits
feat: Implement prompt permissions management and access control middleware

fix: agent deletion process to remove associated permissions and ACL entries

fix: Import Permissions for enhanced access control in GrantAccessDialog

feat: use PromptGroup for access control

- Added migration script for PromptGroup permissions, categorizing groups into global view access and private groups.
- Created unit tests for the migration script to ensure correct categorization and permission granting.
- Introduced middleware for checking access permissions on PromptGroups and prompts via their groups.
- Updated routes to utilize new access control middleware for PromptGroups.
- Enhanced access role definitions to include roles specific to PromptGroups.
- Modified ACL entry schema and types to accommodate PromptGroup resource type.
- Updated data provider to include new access role identifiers for PromptGroups.

feat: add generic access management dialogs and hooks for resource permissions

fix: remove duplicate imports in FileContext component

fix: remove duplicate mongoose dependency in package.json

feat: add access permissions handling for dynamic resource types and add promptGroup roles

feat: implement centralized role localization and update access role types

refactor: simplify author handling in prompt group routes and enhance ACL checks

feat: implement addPromptToGroup functionality and update PromptForm to use it

feat: enhance permission handling in ChatGroupItem, DashGroupItem, and PromptForm components

chore: rename migration script for prompt group permissions and update package.json scripts

chore: update prompt tests
2025-08-13 16:24:20 -04:00
Danny Avila
7e7e75714e 🧹 chore: Add Back Agent-Specific File Retrieval and Deletion Permissions 2025-08-13 16:24:19 -04:00
Praneeth
ff54cbffd9 🔒 feat: Implement Granular File Storage Strategies and Access Control Middleware 2025-08-13 16:24:19 -04:00
Danny Avila
74e029e78f 🧪 ci: Update Test Files & fix ESLint issues 2025-08-13 16:24:19 -04:00
Danny Avila
75324e1c7e 🎨 style: Theming and Consistency Improvements for Agent Marketplace
style: AccessRolesPicker to use DropdownPopup, theming, import order, localization

refactor: Update localization keys for Agent Marketplace in NewChat component, remove duplicate key

style: Adjust layout and font size in NewChat component for Agent Marketplace button

style: theming in AgentGrid

style: Update theming and text colors across Agent components for improved consistency

chore: import order

style: Replace Dialog with OGDialog and update content components in AgentDetail

refactor: Simplify AgentDetail component by removing dropdown menu and replacing it with a copy link button

style: Enhance scrollbar visibility and layout in AgentMarketplace and CategoryTabs components

style: Adjust layout in AgentMarketplace component by removing unnecessary padding from the container

style: Refactor layout in AgentMarketplace component by reorganizing hero section and sticky wrapper for improved structure with collapsible header effect

style: Improve responsiveness and layout in AgentMarketplace component by adjusting header visibility and modifying container styles based on screen size

fix: Update localization key for no categories message in CategoryTabs component and corresponding test

style: Add className prop to OpenSidebar component for improved styling flexibility and update Header to utilize it for responsive design

style: Enhance layout and scrolling behavior in CategoryTabs component by adding scroll snap properties and adjusting class names for improved user experience

style: Update AgentGrid component layout and skeleton structure for improved visual consistency and responsiveness
2025-08-13 16:24:18 -04:00
Praneeth
949682ef0f 🏪 feat: Agent Marketplace
bugfix: Enhance Agent and AgentCategory schemas with new fields for category, support contact, and promotion status

refactored and moved agent category methods and schema to data-schema package

🔧 fix: Merge and Rebase Conflicts

- Move AgentCategory from api/models to @packages/data-schemas structure
  - Add schema, types, methods, and model following codebase conventions
  - Implement auto-seeding of default categories during AppService startup
  - Update marketplace controller to use new data-schemas methods
  - Remove old model file and standalone seed script

refactor: unify agent marketplace to single endpoint with cursor pagination

  - Replace multiple marketplace routes with unified /marketplace endpoint
  - Add query string controls: category, search, limit, cursor, promoted, requiredPermission
  - Implement cursor-based pagination replacing page-based system
  - Integrate ACL permissions for proper access control
  - Fix ObjectId constructor error in Agent model
  - Update React components to use unified useGetMarketplaceAgentsQuery hook
  - Enhance type safety and remove deprecated useDynamicAgentQuery
  - Update tests for new marketplace architecture
  - Known issues: see more button after category switching + unit tests

feat: add icon property to ProcessedAgentCategory interface

- Add useMarketplaceAgentsInfiniteQuery and useGetAgentCategoriesQuery to client/src/data-provider/Agents/
  - Replace manual pagination in AgentGrid with infinite query pattern
  - Update imports to use local data provider instead of librechat-data-provider
  - Add proper permission handling with PERMISSION_BITS.VIEW/EDIT constants
  - Improve agent access control by adding requiredPermission validation in backend
  - Remove manual cursor/state management in favor of infinite query built-ins
  - Maintain existing search and category filtering functionality

refactor: consolidate agent marketplace endpoints into main agents API and improve data management consistency

  - Remove dedicated marketplace controller and routes, merging functionality into main agents v1 API
  - Add countPromotedAgents function to Agent model for promoted agents count
  - Enhance getListAgents handler with marketplace filtering (category, search, promoted status)
  - Move getAgentCategories from marketplace to v1 controller with same functionality
  - Update agent mutations to invalidate marketplace queries and handle multiple permission levels
  - Improve cache management by updating all agent query variants (VIEW/EDIT permissions)
  - Consolidate agent data access patterns for better maintainability and consistency
  - Remove duplicate marketplace route definitions and middleware

selected view-only agents injected into the dropdown

fix: remove minlength validation for support contact name in agent schema

feat: add validation and error messages for agent name in AgentConfig and AgentPanel

fix: update agent permission check logic in AgentPanel to simplify condition

Fix linting WIP

Fix Unit tests WIP

ESLint fixes

eslint fix

refactor: enhance isDuplicateVersion function in Agent model for improved comparison logic

- Introduced handling for undefined/null values in array and object comparisons.
- Normalized array comparisons to treat undefined/null as empty arrays.
- Added deep comparison for objects and improved handling of primitive values.
- Enhanced projectIds comparison to ensure consistent MongoDB ObjectId handling.

refactor: remove redundant properties from IAgent interface in agent schema

chore: update localization for agent detail component and clean up imports

ci: update access middleware tests

chore: remove unused PermissionTypes import from Role model

ci: update AclEntry model tests

ci: update button accessibility labels in AgentDetail tests

refactor: update exhaustive dep. lint warning

🔧 fix: Fixed agent actions access

feat: Add role-level permissions for agent sharing people picker

  - Add PEOPLE_PICKER permission type with VIEW_USERS and VIEW_GROUPS permissions
  - Create custom middleware for query-aware permission validation
  - Implement permission-based type filtering in PeoplePicker component
  - Hide people picker UI when user lacks permissions, show only public toggle
  - Support granular access: users-only, groups-only, or mixed search modes

refactor: Replace marketplace interface config with permission-based system

  - Add MARKETPLACE permission type to handle marketplace access control
  - Update interface configuration to use role-based marketplace settings (admin/user)
  - Replace direct marketplace boolean config with permission-based checks
  - Modify frontend components to use marketplace permissions instead of interface config
  - Update agent query hooks to use marketplace permissions for determining permission levels
  - Add marketplace configuration structure similar to peoplePicker in YAML config
  - Backend now sets MARKETPLACE permissions based on interface configuration
  - When marketplace enabled: users get agents with EDIT permissions in dropdown lists  (builder mode)
  - When marketplace disabled: users get agents with VIEW permissions  in dropdown lists (browse mode)

🔧 fix: Redirect to New Chat if No Marketplace Access and Required Agent Name Placeholder (#8213)

* Fix: redirect to the new chat page if access to the marketplace is denied

* Fixed the required agent name placeholder

---------

Co-authored-by: Atef Bellaaj <slalom.bellaaj@external.daimlertruck.com>

chore: fix tests, remove unnecessary imports

refactor: Implement permission checks for file access via agents

- Updated `hasAccessToFilesViaAgent` to utilize permission checks for VIEW and EDIT access.
- Replaced project-based access validation with permission-based checks.
- Enhanced tests to cover new permission logic and ensure proper access control for files associated with agents.
- Cleaned up imports and initialized models in test files for consistency.

refactor: Enhance test setup and cleanup for file access control

- Introduced modelsToCleanup array to track models added during tests for proper cleanup.
- Updated afterAll hooks in test files to ensure all collections are cleared and only added models are deleted.
- Improved consistency in model initialization across test files.
- Added comments for clarity on cleanup processes and test data management.

chore: Update Jest configuration and test setup for improved timeout handling

- Added a global test timeout of 30 seconds in jest.config.js.
- Configured jest.setTimeout in jestSetup.js to allow individual test overrides if needed.
- Enhanced test reliability by ensuring consistent timeout settings across all tests.

refactor: Implement file access filtering based on agent permissions

- Introduced `filterFilesByAgentAccess` function to filter files based on user access through agents.
- Updated `getFiles` and `primeFiles` functions to utilize the new filtering logic.
- Moved `hasAccessToFilesViaAgent` function from the File model to permission services, adjusting imports accordingly
- Enhanced tests to ensure proper access control and filtering behavior for files associated with agents.

fix: make support_contact field a nested object rather than a sub-document

refactor: Update support_contact field initialization in agent model

- Removed handling for empty support_contact object in createAgent function.
- Changed default value of support_contact in agent schema to undefined.

test: Add comprehensive tests for support_contact field handling and versioning

refactor: remove unused avatar upload mutation field and add informational toast for success

chore: add missing SidePanelProvider for AgentMarketplace and organize imports

fix: resolve agent selection race condition in marketplace HandleStartChat
- Set agent in localStorage before newConversation to prevent useSelectorEffects from auto-selecting previous agent

fix: resolve agent dropdown showing raw ID instead of agent info from URL

  - Add proactive agent fetching when agent_id is present in URL parameters
  - Inject fetched agent into agents cache so dropdowns display proper name/avatar
  - Use useAgentsMap dependency to ensure proper cache initialization timing
  - Prevents raw agent IDs from showing in UI when visiting shared agent links

Fix: Agents endpoint renamed to "My Agent" to reduce confusion with the Marketplace agents.

chore: fix ESLint issues and Test Mocks

ci: update permissions structure in loadDefaultInterface tests

- Refactored permissions for MEMORY and added new permissions for MARKETPLACE and PEOPLE_PICKER.
- Ensured consistent structure for permissions across different types.

feat:  support_contact validation to allow empty email strings
2025-08-13 16:24:18 -04:00
Danny Avila
66bd419baa 🔐 feat: Granular Role-based Permissions + Entra ID Group Discovery (#7804)
WIP: pre-granular-permissions commit

feat: Add category and support contact fields to Agent schema and UI components

Revert "feat: Add category and support contact fields to Agent schema and UI components"

This reverts commit c43a52b4c9.

Fix: Update import for renderHook in useAgentCategories.spec.tsx

fix: Update icon rendering in AgentCategoryDisplay tests to use empty spans

refactor: Improve category synchronization logic and clean up AgentConfig component

refactor: Remove unused UI flow translations from translation.json

feat: agent marketplace features

🔐 feat: Granular Role-based Permissions + Entra ID Group Discovery (#7804)
2025-08-13 16:24:17 -04:00
Jordi Higuera
aa42759ffd 🍃 feat: Add MongoDB Connection Pool Configuration Options (#8537)
* 🔧 Feat: Add MongoDB connection pool configuration options to environment variables

* 🔧 feat: Add environment variables for automatic index creation and collection creation in MongoDB connection

---------

Co-authored-by: Atef Bellaaj <slalom.bellaaj@external.daimlertruck.com>
2025-08-13 16:24:17 -04:00
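
The options added in #8537 above map environment variables onto the MongoDB driver's connection-pool and index-creation settings. A hedged sketch with mongoose; the exact variable names in LibreChat's .env may differ from the ones assumed here:

    import mongoose from 'mongoose';

    export async function connectDb(): Promise<void> {
      await mongoose.connect(process.env.MONGO_URI ?? 'mongodb://127.0.0.1:27017/LibreChat', {
        maxPoolSize: Number(process.env.MONGO_MAX_POOL_SIZE ?? 100), // upper bound on pooled connections
        minPoolSize: Number(process.env.MONGO_MIN_POOL_SIZE ?? 0),   // connections kept warm
        autoIndex: process.env.MONGO_AUTO_INDEX !== 'false',         // automatic index creation
        autoCreate: process.env.MONGO_AUTO_CREATE !== 'false',       // automatic collection creation
      });
    }
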
Danny Avila
52e59e40be 📚 feat: Add Source Citations for File Search in Agents (#8652)
* feat: Source Citations for file_search in Agents

* Fix: Added citation limits and relevance score to app service. Removed duplicate tests

*  feat: implement Role-level toggle to optionally disable file Source Citation in Agents

* 🐛 fix: update mock for librechat-data-provider to include PermissionTypes and SystemRoles

---------

Co-authored-by: Praneeth <praneeth.goparaju@slalom.com>
2025-08-13 16:24:16 -04:00
Danny Avila
a955097faf 📁 feat: Integrate SharePoint File Picker and Download Workflow (#8651)
* feat(sharepoint): integrate SharePoint file picker and download workflow
Introduces end‑to‑end SharePoint import support:
* Token exchange with Microsoft Graph and scope management (`useSharePointToken`)
* Re‑usable hooks: `useSharePointPicker`, `useSharePointDownload`,
  `useSharePointFileHandling`
* FileSearch dropdown now offers **From Local Machine** / **From SharePoint**
  sources and gracefully falls back when SharePoint is disabled
* Agent upload model, `AttachFileMenu`, and `DropdownPopup` extended for
  SharePoint files and sub‑menus
* Blurry overlay with progress indicator and `maxSelectionCount` limit during
  downloads
* Cache‑flush utility (`config/flush-cache.js`) supporting Redis & filesystem,
  with dry‑run and npm script
* Updated `SharePointIcon` (uses `currentColor`) and new i18n keys
* Bug fixes: placeholder syntax in progress message, picker event‑listener
  cleanup
* Misc style and performance optimizations

* Fix ESLint warnings

---------

Co-authored-by: Atef Bellaaj <slalom.bellaaj@external.daimlertruck.com>
2025-08-13 16:24:16 -04:00
Theo N. Truong
b6413b06bc 🐞 fix: Update MCP server initialization to skip non-startup and oauth servers (#9049) 2025-08-13 16:19:55 -04:00
Danny Avila
e6cebdf2b6 🚌 fix: MCP Runtime Errors while Initializing (#9046)
* chore: Remove eslint-plugin-perfectionist from dependencies

* 🚌 fix: MCP Runtime Errors while Initializing

* chore: Bump @librechat/api version to 1.3.1

* chore: import order

* chore: import order
2025-08-13 14:41:38 -04:00
github-actions[bot]
3eb6debe6a 🌍 i18n: Update translation.json with latest translations (#9020)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-13 11:47:59 -04:00
Theo N. Truong
8780a78165 ♻️ refactor: MCPManager for Scalability, Fix App-Level Detection, Add Lazy Connections (#8930)
* feat: MCP Connection management overhaul - Making MCPManager manageable

Refactor the monolithic MCPManager into focused, single-responsibility classes:

• MCPServersRegistry: Server configuration discovery and metadata management
• UserConnectionManager: Manages user-level connections
• ConnectionsRepository: Low-level connection pool with lazy loading
• MCPConnectionFactory: Handles MCP connection creation with OAuth support

New Features:
• Lazy loading of app-level connections for horizontal scaling
• Automatic reconnection for app-level connections
• Enhanced OAuth detection with explicit requiresOAuth flag
• Centralized MCP configuration management

Bug Fixes:
• App-level connection detection in MCPManager.callTool
• MCP Connection Reinitialization route behavior

Optimizations:
• MCPConnection.isConnected() caching to reduce overhead
• Concurrent server metadata retrieval instead of sequential

This refactoring addresses scalability bottlenecks and improves reliability
while maintaining backward compatibility with existing configurations.

* feat: Enabled import order in eslint.

* # Moved tests to __tests__ folder
# added tests for MCPServersRegistry.ts

* # Add unit tests for ConnectionsRepository functionality

* # Add unit tests for MCPConnectionFactory functionality

* # Reorganize MCP connection tests and improve error handling

* # reordering imports

* # Update testPathIgnorePatterns in jest.config.mjs to exclude development TypeScript files

* # removed mcp/manager.ts
2025-08-13 11:45:06 -04:00
Joseph Licata
9dbf153489 🐞 fix: Prevent Type Error in Successful Bookmark Deletion (#9014) 2025-08-12 22:38:20 -04:00
Theo N. Truong
4799593e1a 🔧 fix: Redis cluster connection errors and configuration (#9016)
- Fix ioredis cluster initialization to use proper node array format
- Fix reconnectOnError to return numeric delay instead of boolean
- Fix keyv redis cluster configuration to use proper URL objects
- Resolves "undefinedms" reconnection delay errors
2025-08-12 22:23:29 -04:00
faustoFF
a199b87478 🐋 ci: Optimize Dockerfile Caching (#8480)
* 🐳 chore(docker): optimize Docker npm dependency caching

* Update Dockerfile

---------

Co-authored-by: Danny Avila <danacordially@gmail.com>
2025-08-12 20:00:59 -04:00
Danny Avila
007570b5c6 🎨 style: Add missing markdown font size variable to CSS (#9011) 2025-08-12 10:19:29 -04:00
Jeffrey Bulanadi
5943d5346c 📑 docs: Fix Typos in JSDoc and Doc Files (#8998)
- Fix grammar in translations README: 'if has not been ran' → 'if it has not been run'
- Fix spacing in JSDoc comments: 'at theend' → 'at the end' (2 instances)
2025-08-12 10:18:55 -04:00
Danny Avila
052e61b735 v0.8.0-rc2 (#9000) 2025-08-11 18:58:21 -04:00
Danny Avila
1ccac58403 🔒 fix: Provider Validation for Social, OpenID, SAML, and LDAP Logins (#8999)
* fix: social login provider crossover

* feat: Enhance OpenID login handling and add tests for provider validation

* refactor: authentication error handling to use ErrorTypes.AUTH_FAILED enum

* refactor: update authentication error handling in LDAP and SAML strategies to use ErrorTypes.AUTH_FAILED enum

* ci: Add validation for login with existing email and different provider in SAML strategy

chore: Add logging for existing users with different providers in LDAP, SAML, and Social Login strategies
2025-08-11 18:51:46 -04:00
Clay Rosenthal
04d74a7e07 🪖 ci: Helm OCI Publishing (#7256)
* Adding helm oci publishing (#3)

Signed-off-by: Clay Rosenthal <clayros@amazon.com>

* Update chart version

Signed-off-by: Clay Rosenthal <clayros@amazon.com>

* Update helm release

Signed-off-by: Clay Rosenthal <clayros@amazon.com>

* Update helm chart package path

Signed-off-by: Clay Rosenthal <clayros@amazon.com>

---------

Signed-off-by: Clay Rosenthal <clayros@amazon.com>
2025-08-11 16:21:05 -04:00
github-actions[bot]
0fdca8ddbd 🌍 i18n: Update translation.json with latest translations (#8996)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-11 16:19:37 -04:00
Danny Avila
c5ca621efd 🧑‍💼 feat: Add Agent Model Validation (#8995)
* fix: Update logger import to use data-schemas module

* feat: agent model validation

* fix: Remove invalid error messages from translation file
2025-08-11 14:26:28 -04:00
Danny Avila
8cefa566da 📸 fix: Avatar Handling for Social Login (#8993)
- Updated `handleExistingUser` function to improve avatar handling logic, including checks for manual flags and null/undefined avatars.
- Introduced a new test suite for `handleExistingUser` covering various scenarios, ensuring robust functionality for avatar updates in both local and non-local storage contexts.
2025-08-11 11:50:46 -04:00
Danny Avila
7e4c8a5d0d 🛡️ fix: OTP Verification For 2FA Disable Operation (#8975) 2025-08-10 15:05:16 -04:00
Danny Avila
edf33bedcb 🛂 feat: Payload limits and Validation for User-created Memories (#8974) 2025-08-10 14:46:16 -04:00
Dustin Healy
21e00168b1 🪙 fix: Max Output Tokens Refactor for Responses API (#8972)
🪙 fix: Max Output Tokens Refactor for Responses API (#8972)

chore: Remove `max_output_tokens` from model kwargs in `titleConvo` if provided
2025-08-10 14:08:35 -04:00
github-actions[bot]
da3730b7d6 🌍 i18n: Update translation.json with latest translations (#8957)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-09 12:04:56 -04:00
Danny Avila
770c766d50 🔧 refactor: Move Plugin-related Helpers to TS API and Add Tests (#8961) 2025-08-09 12:02:44 -04:00
alkshmir
5eb6926464 🪖 chore: Fix Typo in helm/librechat/values.yaml (#8960) 2025-08-09 12:00:37 -04:00
Danny Avila
e478ae1c28 📦 chore: Bump @librechat/agents to v2.4.75 (#8956) 2025-08-09 00:01:21 -04:00
Danny Avila
0c9284c8ae 🧠 refactor: Memory Timeout after Completion and Guarantee Stream Final Event (#8955) 2025-08-09 00:01:10 -04:00
Danny Avila
4eeadddfe6 🔮 fix: Artifacts readOnly to Re-render when Expected (#8954) 2025-08-08 22:44:58 -04:00
Dustin Healy
9ca1847535 🔧 refactor: customUserVar Error Normalization (#8950)
* fix: localization string had unused template var

* fix: add normalizeHttpError to hopefully stop UI hangs when an error is returned in UserController

- Ensures updateUserPluginsController always returns valid HTTP status codes instead of undefined
- Add normalizeHttpError() helper to safely extract status/message from errors
- Default to 400 status code when Error.status is undefined/invalid

* refactor: move normalizeHttpError to packages/api
2025-08-08 15:53:04 -04:00
Danny Avila
5d0bc95193 🧪 fix: Editor Styling, Incomplete Artifact Editing, Optimize Artifact Context (#8953)
* refactor: optimize artifacts context for improved performance

* fix: layout classes for artifacts editor

* chore: linting

* fix: enhance artifact mutation handling in CodeEditor to prevent infinite retries

* fix: handle incomplete artifacts in replaceArtifactContent and add regression tests
2025-08-08 15:49:58 -04:00
github-actions[bot]
e7d6100fe4 🌍 i18n: Update translation.json with latest translations (#8934)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-08 12:21:27 -04:00
Danny Avila
01a95229f2 📦 chore: Bump @librechat/agents to v2.4.73 (#8949) 2025-08-08 12:19:36 -04:00
Danny Avila
0939250f07 🛣️ fix: Remove Title Tokens Limit for GPT-5 Models (#8948)
* 🛣️ fix: Remove Title Tokens Limit for GPT-5 Models

* 🛣️ fix: Remove max_completion_tokens from modelKwargs when maxTokens is disabled

* chore: Add test-image* to .gitignore for CI/CD data
2025-08-08 11:15:29 -04:00
Danny Avila
7147bce3c3 feat: Add OpenAI Verbosity Parameter (#8929)
* WIP: Verbosity OpenAI Parameter

* 🔧 chore: remove unused import of extractEnvVariable from parsers.ts

*  feat: add comprehensive tests for getOpenAIConfig and enhance verbosity handling

* fix: Handling for maxTokens in GPT-5+ models and add corresponding tests

* feat: Implement GPT-5+ model handling in processMemory function
2025-08-07 20:49:40 -04:00
github-actions[bot]
486fe34a2b 🌍 i18n: Update translation.json with latest translations (#8924)
* 🌍 i18n: Update translation.json with latest translations

* Update translation.json

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-08-07 20:42:44 -04:00
github-actions[bot]
922f43f520 🌍 i18n: Update translation.json with latest translations (#8907)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-07 16:25:24 -04:00
Marco Beretta
e6fa01d514 💬 style: Enhance Tooltip with HTML support and Improve Styling (#8915)
*  feat: Enhance Tooltip component with HTML support and styling improvements

*  feat: Integrate DOMPurify for HTML sanitization in Tooltip component
2025-08-07 16:24:42 -04:00
Danny Avila
8238fb49e0 📦 chore: bump @librechat/agents to v2.4.72 2025-08-07 16:23:12 -04:00
Danny Avila
430557676d feat: GPT-5 Token Limits, Rates, Icon, Reasoning Support 2025-08-07 16:23:11 -04:00
Danny Avila
8a5047c456 📦 chore: bump @librechat/agents to v2.4.71 2025-08-07 16:23:11 -04:00
Danny Avila
c787515894 🧠 feat: Add minimal Reasoning Effort option 2025-08-07 16:23:11 -04:00
Danny Avila
d95d8032cc feat: GPT-OSS models Token Limits & Rates 2025-08-07 16:23:10 -04:00
Joseph Licata
b9f72f4869 🎚️ refactor: Update Min. Values for OpenAI Parameters (#8922) 2025-08-07 14:38:08 -04:00
Danny Avila
429bb6653a 📦 chore: bump @librechat/agents to v2.4.70 (#8923) 2025-08-07 14:36:10 -04:00
Dustin Healy
47caafa8f8 🔧 fix: MCP Queries and Connections (#8870)
* fix: add refetchQueries on connection success so ToolSelectDialog doesn't require hard refresh

* fix: change hook so we only query connection status when mcpServers are configured

* fix: change refetchQueries to invalidateQueries for tools after server connection update

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-08-07 02:31:05 -04:00
Dustin Healy
8530594f37 🟢 fix: Incorrect customUserVars Set States (#8905) 2025-08-07 02:19:06 -04:00
Sebastien Bruel
0b071c06f6 🥞 refactor: Duplicate Agent Versions as Informational Instead of Errors (#8881)
* Fix error when updating an agent with no changes

* Add tests

* Revert translation file changes
2025-08-07 02:12:05 -04:00
Danny Avila
1092392ed8 📂 fix: File Cleanup for Uploaded "Agent" Files (#8900) 2025-08-06 19:45:57 -04:00
Danny Avila
36c8947029 🔄 refactor: Select OpenRouter LLM Class Dynamically by baseURL (#8898) 2025-08-06 19:26:40 -04:00
Danny Avila
4175a3ea19 v0.8.0-rc1 (#8846) 2025-08-04 15:25:07 -04:00
github-actions[bot]
02dc71f4b7 🌍 i18n: Update translation.json with latest translations (#8845)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-04 15:24:24 -04:00
github-actions[bot]
a6c99a3267 🌍 i18n: Update translation.json with latest translations (#8828)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-04 14:54:07 -04:00
SollalF
fcefc6eedf feat: Add OpenID Audience Parameter (#8837)
*  feat: Add OpenID audience parameter support in authorization requests

* Updated .env.example to include OPENID_AUDIENCE variable for configuration.
* Enhanced openidStrategy to set the audience parameter in authorization requests if specified, improving OpenID integration.

* Update .env.example

* Update openidStrategy.js

---------

Co-authored-by: Danny Avila <danacordially@gmail.com>
2025-08-04 14:49:36 -04:00
Marco Beretta
dfdafdbd09 🖌️ feat: add animation styles for popovers and tooltips (#8831)
* feat: Update client version to 0.2.2 and add animation styles for popovers and tooltips

* refactor: Remove focus outline styles from Dropdown component

* feat: Update client version to 0.2.3 and add Select component export

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-08-04 14:44:00 -04:00
1090 changed files with 66572 additions and 19540 deletions

View File

@@ -40,6 +40,13 @@ NO_INDEX=true
# Defaulted to 1.
TRUST_PROXY=1
# Minimum password length for user authentication
# Default: 8
# Note: When using LDAP authentication, you may want to set this to 1
# to bypass local password validation, as LDAP servers handle their own
# password policies.
# MIN_PASSWORD_LENGTH=8
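# Example (hypothetical value) when LDAP enforces the password policy:
# MIN_PASSWORD_LENGTH=1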
#===============#
# JSON Logging #
#===============#
@@ -156,10 +163,10 @@ GOOGLE_KEY=user_provided
# GOOGLE_AUTH_HEADER=true
# Gemini API (AI Studio)
# GOOGLE_MODELS=gemini-2.5-pro,gemini-2.5-flash,gemini-2.5-flash-lite-preview-06-17,gemini-2.0-flash,gemini-2.0-flash-lite
# GOOGLE_MODELS=gemini-2.5-pro,gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.0-flash,gemini-2.0-flash-lite
# Vertex AI
# GOOGLE_MODELS=gemini-2.5-pro,gemini-2.5-flash,gemini-2.5-flash-lite-preview-06-17,gemini-2.0-flash-001,gemini-2.0-flash-lite-001
# GOOGLE_MODELS=gemini-2.5-pro,gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.0-flash-001,gemini-2.0-flash-lite-001
# GOOGLE_TITLE_MODEL=gemini-2.0-flash-lite-001
@@ -189,7 +196,7 @@ GOOGLE_KEY=user_provided
#============#
OPENAI_API_KEY=user_provided
# OPENAI_MODELS=o1,o1-mini,o1-preview,gpt-4o,gpt-4.5-preview,chatgpt-4o-latest,gpt-4o-mini,gpt-3.5-turbo-0125,gpt-3.5-turbo-0301,gpt-3.5-turbo,gpt-4,gpt-4-0613,gpt-4-vision-preview,gpt-3.5-turbo-0613,gpt-3.5-turbo-16k-0613,gpt-4-0125-preview,gpt-4-turbo-preview,gpt-4-1106-preview,gpt-3.5-turbo-1106,gpt-3.5-turbo-instruct,gpt-3.5-turbo-instruct-0914,gpt-3.5-turbo-16k
# OPENAI_MODELS=gpt-5,gpt-5-codex,gpt-5-mini,gpt-5-nano,o3-pro,o3,o4-mini,gpt-4.1,gpt-4.1-mini,gpt-4.1-nano,o3-mini,o1-pro,o1,gpt-4o,gpt-4o-mini
DEBUG_OPENAI=false
@@ -247,6 +254,10 @@ AZURE_AI_SEARCH_SEARCH_OPTION_SELECT=
# OpenAI Image Tools Customization
#----------------
# IMAGE_GEN_OAI_API_KEY= # Create or reuse OpenAI API key for image generation tool
# IMAGE_GEN_OAI_BASEURL= # Custom OpenAI base URL for image generation tool
# IMAGE_GEN_OAI_AZURE_API_VERSION= # Custom Azure OpenAI deployments
# IMAGE_GEN_OAI_DESCRIPTION=
# IMAGE_GEN_OAI_DESCRIPTION_WITH_FILES=Custom description for image generation tool when files are present
# IMAGE_GEN_OAI_DESCRIPTION_NO_FILES=Custom description for image generation tool when no files are present
# IMAGE_EDIT_OAI_DESCRIPTION=Custom description for image editing tool
@@ -452,10 +463,15 @@ OPENID_CALLBACK_URL=/oauth/openid/callback
OPENID_REQUIRED_ROLE=
OPENID_REQUIRED_ROLE_TOKEN_KIND=
OPENID_REQUIRED_ROLE_PARAMETER_PATH=
OPENID_ADMIN_ROLE=
OPENID_ADMIN_ROLE_PARAMETER_PATH=
OPENID_ADMIN_ROLE_TOKEN_KIND=
# Set to determine which user info property returned from OpenID Provider to store as the User's username
OPENID_USERNAME_CLAIM=
# Set to determine which user info property returned from OpenID Provider to store as the User's name
OPENID_NAME_CLAIM=
# Optional audience parameter for OpenID authorization requests
OPENID_AUDIENCE=
OPENID_BUTTON_LABEL=
OPENID_IMAGE_URL=
@@ -641,6 +657,12 @@ HELP_AND_FAQ_URL=https://librechat.ai
# Google tag manager id
#ANALYTICS_GTM_ID=user provided google tag manager id
# Limit conversation file imports to a maximum size in bytes so the container
# does not exhaust its memory limits. Uncomment the line below and supply a
# file size in bytes, e.g., 250 MiB:
# CONVERSATION_IMPORT_MAX_FILE_SIZE_BYTES=262144000
#===============#
# REDIS Options #
#===============#
@@ -658,6 +680,10 @@ HELP_AND_FAQ_URL=https://librechat.ai
# REDIS_URI=rediss://127.0.0.1:6380
# REDIS_CA=/path/to/ca-cert.pem
# ElastiCache may need an alternate dnsLookup for TLS connections. See "Special Note: AWS ElastiCache Clusters with TLS" at https://www.npmjs.com/package/ioredis
# Enable alternative dnsLookup for redis
# REDIS_USE_ALTERNATIVE_DNS_LOOKUP=true
# Redis authentication (if required)
# REDIS_USERNAME=your_redis_username
# REDIS_PASSWORD=your_redis_password
@@ -677,8 +703,18 @@ HELP_AND_FAQ_URL=https://librechat.ai
# REDIS_PING_INTERVAL=300
# Force specific cache namespaces to use in-memory storage even when Redis is enabled
# Comma-separated list of CacheKeys (e.g., STATIC_CONFIG,ROLES,MESSAGES)
# FORCED_IN_MEMORY_CACHE_NAMESPACES=STATIC_CONFIG,ROLES
# Comma-separated list of CacheKeys (e.g., ROLES,MESSAGES)
# FORCED_IN_MEMORY_CACHE_NAMESPACES=ROLES,MESSAGES
# Leader Election Configuration (for multi-instance deployments with Redis)
# Duration in seconds that the leader lease is valid before it expires (default: 25)
# LEADER_LEASE_DURATION=25
# Interval in seconds at which the leader renews its lease (default: 10)
# LEADER_RENEW_INTERVAL=10
# Maximum number of retry attempts when renewing the lease fails (default: 3)
# LEADER_RENEW_ATTEMPTS=3
# Delay in seconds between retry attempts when renewing the lease (default: 0.5)
# LEADER_RENEW_RETRY_DELAY=0.5
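# Note: keep LEADER_RENEW_INTERVAL comfortably below LEADER_LEASE_DURATION (10 vs. 25 seconds by default) so a leader can renew before its lease expires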
#==================================================#
# Others #
@@ -740,3 +776,16 @@ OPENWEATHER_API_KEY=
# JINA_API_KEY=your_jina_api_key
# or
# COHERE_API_KEY=your_cohere_api_key
#======================#
# MCP Configuration #
#======================#
# Treat 401/403 responses as an OAuth requirement when no OAuth metadata is found
# MCP_OAUTH_ON_AUTH_ERROR=true
# Timeout for OAuth detection requests in milliseconds
# MCP_OAUTH_DETECTION_TIMEOUT=5000
# Cache connection status checks for this many milliseconds to avoid expensive verification
# MCP_CONNECTION_CHECK_TTL=60000

View File

@@ -147,7 +147,7 @@ Apply the following naming conventions to branches, labels, and other Git-relate
## 8. Module Import Conventions
- `npm` packages first,
- from shortest line (top) to longest (bottom)
- from longest line (top) to shortest (bottom)
- Followed by typescript types (pertains to data-provider and client workspaces)
- longest line (top) to shortest (bottom)
@@ -157,6 +157,8 @@ Apply the following naming conventions to branches, labels, and other Git-relate
- longest line (top) to shortest (bottom)
- imports with alias `~` treated the same as relative import with respect to line length
**Note:** ESLint will automatically enforce these import conventions when you run `npm run lint --fix` or through pre-commit hooks.
---
Please ensure that you adapt this summary to fit the specific context and nuances of your project.
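For illustration, a require block following these conventions might look like the sketch below (module paths are borrowed from files elsewhere in this changeset; within each group, lines are ordered by length):

// npm packages first
const axios = require('axios');
const { logger } = require('@librechat/data-schemas');
const { Constants } = require('librechat-data-provider');
// then local imports; the `~` alias is treated like a relative path
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { spendTokens } = require('~/models/spendTokens');
const BaseClient = require('./BaseClient');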

View File

@@ -0,0 +1,96 @@
name: Cache Integration Tests
on:
pull_request:
branches:
- main
- dev
- release/*
paths:
- 'packages/api/src/cache/**'
- 'packages/api/src/cluster/**'
- 'packages/api/src/mcp/**'
- 'redis-config/**'
- '.github/workflows/cache-integration-tests.yml'
jobs:
cache_integration_tests:
name: Integration Tests that use actual Redis Cache
timeout-minutes: 30
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Use Node.js 20.x
uses: actions/setup-node@v4
with:
node-version: 20
cache: 'npm'
- name: Install Redis tools
run: |
sudo apt-get update
sudo apt-get install -y redis-server redis-tools
- name: Start Single Redis Instance
run: |
redis-server --daemonize yes --port 6379
sleep 2
# Verify single Redis is running
redis-cli -p 6379 ping || exit 1
- name: Start Redis Cluster
working-directory: redis-config
run: |
chmod +x start-cluster.sh stop-cluster.sh
./start-cluster.sh
sleep 10
# Verify cluster is running
redis-cli -p 7001 cluster info || exit 1
redis-cli -p 7002 cluster info || exit 1
redis-cli -p 7003 cluster info || exit 1
- name: Install dependencies
run: npm ci
- name: Build packages
run: |
npm run build:data-provider
npm run build:data-schemas
npm run build:api
- name: Run cache integration tests
working-directory: packages/api
env:
NODE_ENV: test
USE_REDIS: true
REDIS_URI: redis://127.0.0.1:6379
REDIS_CLUSTER_URI: redis://127.0.0.1:7001,redis://127.0.0.1:7002,redis://127.0.0.1:7003
run: npm run test:cache-integration:core
- name: Run cluster integration tests
working-directory: packages/api
env:
NODE_ENV: test
USE_REDIS: true
REDIS_URI: redis://127.0.0.1:6379
run: npm run test:cache-integration:cluster
- name: Run mcp integration tests
working-directory: packages/api
env:
NODE_ENV: test
USE_REDIS: true
REDIS_URI: redis://127.0.0.1:6379
run: npm run test:cache-integration:mcp
- name: Stop Redis Cluster
if: always()
working-directory: redis-config
run: ./stop-cluster.sh || true
- name: Stop Single Redis Instance
if: always()
run: redis-cli -p 6379 shutdown || true

View File

@@ -1,4 +1,4 @@
name: Node.js Package
name: Publish `librechat-data-provider` to NPM
on:
push:
@@ -6,6 +6,12 @@ on:
- main
paths:
- 'packages/data-provider/package.json'
workflow_dispatch:
inputs:
reason:
description: 'Reason for manual trigger'
required: false
default: 'Manual publish requested'
jobs:
build:
@@ -14,7 +20,7 @@ jobs:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 16
node-version: 20
- run: cd packages/data-provider && npm ci
- run: cd packages/data-provider && npm run build
@@ -25,7 +31,7 @@ jobs:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 16
node-version: 20
registry-url: 'https://registry.npmjs.org'
- run: cd packages/data-provider && npm ci
- run: cd packages/data-provider && npm run build

View File

@@ -4,12 +4,13 @@ name: Build Helm Charts on Tag
on:
push:
tags:
- "*"
- "chart-*"
jobs:
release:
permissions:
contents: write
packages: write
runs-on: ubuntu-latest
steps:
- name: Checkout
@@ -26,15 +27,49 @@ jobs:
uses: azure/setup-helm@v4
env:
GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
- name: Build Subchart Deps
run: |
cd helm/librechat-rag-api
helm dependency build
cd helm/librechat
helm dependency build
cd ../librechat-rag-api
helm dependency build
- name: Run chart-releaser
uses: helm/chart-releaser-action@v1.6.0
- name: Get Chart Version
id: chart-version
run: |
CHART_VERSION=$(echo "${{ github.ref_name }}" | cut -d'-' -f2)
echo "CHART_VERSION=${CHART_VERSION}" >> "$GITHUB_OUTPUT"
# Log in to GitHub Container Registry
- name: Log in to GitHub Container Registry
uses: docker/login-action@v3
with:
charts_dir: helm
skip_existing: true
env:
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# Run Helm OCI Charts Releaser
# This is for the librechat chart
- name: Release Helm OCI Charts for librechat
uses: appany/helm-oci-chart-releaser@v0.4.2
with:
name: librechat
repository: ${{ github.actor }}/librechat-chart
tag: ${{ steps.chart-version.outputs.CHART_VERSION }}
path: helm/librechat
registry: ghcr.io
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}
# this is for the librechat-rag-api chart
- name: Release Helm OCI Charts for librechat-rag-api
uses: appany/helm-oci-chart-releaser@v0.4.2
with:
name: librechat-rag-api
repository: ${{ github.actor }}/librechat-chart
tag: ${{ steps.chart-version.outputs.CHART_VERSION }}
path: helm/librechat-rag-api
registry: ghcr.io
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -1,5 +1,10 @@
name: Detect Unused i18next Strings
# This workflow checks for unused i18n keys in translation files.
# It has special handling for:
# - com_ui_special_var_* keys that are dynamically constructed
# - com_agents_category_* keys that are stored in the database and used dynamically
on:
pull_request:
paths:
@@ -7,6 +12,7 @@ on:
- "api/**"
- "packages/data-provider/src/**"
- "packages/client/**"
- "packages/data-schemas/src/**"
jobs:
detect-unused-i18n-keys:
@@ -24,7 +30,7 @@ jobs:
# Define paths
I18N_FILE="client/src/locales/en/translation.json"
SOURCE_DIRS=("client/src" "api" "packages/data-provider/src" "packages/client")
SOURCE_DIRS=("client/src" "api" "packages/data-provider/src" "packages/client" "packages/data-schemas/src")
# Check if translation file exists
if [[ ! -f "$I18N_FILE" ]]; then
@@ -52,6 +58,31 @@ jobs:
fi
done
# Also check if the key is directly used somewhere
if [[ "$FOUND" == false ]]; then
for DIR in "${SOURCE_DIRS[@]}"; do
if grep -r --include=\*.{js,jsx,ts,tsx} -q "$KEY" "$DIR"; then
FOUND=true
break
fi
done
fi
# Special case for agent category keys that are dynamically used from database
elif [[ "$KEY" == com_agents_category_* ]]; then
# Check if agent category localization is being used
for DIR in "${SOURCE_DIRS[@]}"; do
# Check for dynamic category label/description usage
if grep -r --include=\*.{js,jsx,ts,tsx} -E "category\.(label|description).*startsWith.*['\"]com_" "$DIR" > /dev/null 2>&1 || \
# Check for the method that defines these keys
grep -r --include=\*.{js,jsx,ts,tsx} "ensureDefaultCategories" "$DIR" > /dev/null 2>&1 || \
# Check for direct usage in agentCategory.ts
grep -r --include=\*.ts -E "label:.*['\"]$KEY['\"]" "$DIR" > /dev/null 2>&1 || \
grep -r --include=\*.ts -E "description:.*['\"]$KEY['\"]" "$DIR" > /dev/null 2>&1; then
FOUND=true
break
fi
done
# Also check if the key is directly used somewhere
if [[ "$FOUND" == false ]]; then
for DIR in "${SOURCE_DIRS[@]}"; do

.gitignore
View File

@@ -13,6 +13,9 @@ pids
*.seed
.git
# CI/CD data
test-image*
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
@@ -134,3 +137,4 @@ helm/**/.values.yaml
/.openai/
/.tabnine/
/.codeium
*.local.md

View File

@@ -1,5 +1,2 @@
#!/usr/bin/env sh
set -e
. "$(dirname -- "$0")/_/husky.sh"
[ -n "$CI" ] && exit 0
npx lint-staged --config ./.husky/lint-staged.config.js

View File

@@ -1,4 +1,4 @@
# v0.7.9
# v0.8.1-rc1
# Base node image
FROM node:20-alpine AS node
@@ -19,24 +19,31 @@ WORKDIR /app
USER node
COPY --chown=node:node . .
COPY --chown=node:node package.json package-lock.json ./
COPY --chown=node:node api/package.json ./api/package.json
COPY --chown=node:node client/package.json ./client/package.json
COPY --chown=node:node packages/data-provider/package.json ./packages/data-provider/package.json
COPY --chown=node:node packages/data-schemas/package.json ./packages/data-schemas/package.json
COPY --chown=node:node packages/api/package.json ./packages/api/package.json
RUN \
# Allow mounting of these files, which have no default
touch .env ; \
# Create directories for the volumes to inherit the correct permissions
mkdir -p /app/client/public/images /app/api/logs ; \
mkdir -p /app/client/public/images /app/api/logs /app/uploads ; \
npm config set fetch-retry-maxtimeout 600000 ; \
npm config set fetch-retries 5 ; \
npm config set fetch-retry-mintimeout 15000 ; \
npm install --no-audit; \
npm ci --no-audit
COPY --chown=node:node . .
RUN \
# React client build
NODE_OPTIONS="--max-old-space-size=2048" npm run frontend; \
npm prune --production; \
npm cache clean --force
RUN mkdir -p /app/client/public/images /app/api/logs
# Node API setup
EXPOSE 3080
ENV HOST=0.0.0.0
@@ -47,4 +54,4 @@ CMD ["npm", "run", "backend"]
# WORKDIR /usr/share/nginx/html
# COPY --from=node /app/client/dist /usr/share/nginx/html
# COPY client/nginx.conf /etc/nginx/conf.d/default.conf
# ENTRYPOINT ["nginx", "-g", "daemon off;"]
# ENTRYPOINT ["nginx", "-g", "daemon off;"]

View File

@@ -1,5 +1,5 @@
# Dockerfile.multi
# v0.7.9
# v0.8.1-rc1
# Base for all builds
FROM node:20-alpine AS base-min

View File

@@ -65,14 +65,17 @@
- 🔦 **Agents & Tools Integration**:
- **[LibreChat Agents](https://www.librechat.ai/docs/features/agents)**:
- No-Code Custom Assistants: Build specialized, AI-driven helpers without coding
- Flexible & Extensible: Use MCP Servers, tools, file search, code execution, and more
- No-Code Custom Assistants: Build specialized, AI-driven helpers
- Agent Marketplace: Discover and deploy community-built agents
- Collaborative Sharing: Share agents with specific users and groups
- Flexible & Extensible: Use MCP Servers, tools, file search, code execution, and more
- Compatible with Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, Google, Vertex AI, Responses API, and more
- [Model Context Protocol (MCP) Support](https://modelcontextprotocol.io/clients#librechat) for Tools
- 🔍 **Web Search**:
- Search the internet and retrieve relevant information to enhance your AI context
- Combines search providers, content scrapers, and result rerankers for optimal results
- **Customizable Jina Reranking**: Configure custom Jina API URLs for reranking services
- **[Learn More →](https://www.librechat.ai/docs/features/web_search)**
- 🪄 **Generative UI with Code Artifacts**:
@@ -87,15 +90,18 @@
- Create, Save, & Share Custom Presets
- Switch between AI Endpoints and Presets mid-chat
- Edit, Resubmit, and Continue Messages with Conversation branching
- Create and share prompts with specific users and groups
- [Fork Messages & Conversations](https://www.librechat.ai/docs/features/fork) for Advanced Context control
- 💬 **Multimodal & File Interactions**:
- Upload and analyze images with Claude 3, GPT-4.5, GPT-4o, o1, Llama-Vision, and Gemini 📸
- Chat with Files using Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, & Google 🗃️
- 🌎 **Multilingual UI**:
- English, 中文, Deutsch, Español, Français, Italiano, Polski, Português Brasileiro
- Русский, 日本語, Svenska, 한국어, Tiếng Việt, 繁體中文, العربية, Türkçe, Nederlands, עברית
- 🌎 **Multilingual UI**:
- English, 中文 (简体), 中文 (繁體), العربية, Deutsch, Español, Français, Italiano
- Polski, Português (PT), Português (BR), Русский, 日本語, Svenska, 한국어, Tiếng Việt
- Türkçe, Nederlands, עברית, Català, Čeština, Dansk, Eesti, فارسی
- Suomi, Magyar, Հայերեն, Bahasa Indonesia, ქართული, Latviešu, ไทย, ئۇيغۇرچە
- 🧠 **Reasoning UI**:
- Dynamic Reasoning UI for Chain-of-Thought/Reasoning AI models like DeepSeek-R1

View File

@@ -1,4 +1,5 @@
const Anthropic = require('@anthropic-ai/sdk');
const { logger } = require('@librechat/data-schemas');
const { HttpsProxyAgent } = require('https-proxy-agent');
const {
Constants,
@@ -9,8 +10,18 @@ const {
getResponseSender,
validateVisionModel,
} = require('librechat-data-provider');
const { SplitStreamHandler: _Handler } = require('@librechat/agents');
const { Tokenizer, createFetch, createStreamEventHandlers } = require('@librechat/api');
const { sleep, SplitStreamHandler: _Handler } = require('@librechat/agents');
const {
Tokenizer,
createFetch,
matchModelName,
getClaudeHeaders,
getModelMaxTokens,
configureReasoning,
checkPromptCacheSupport,
getModelMaxOutputTokens,
createStreamEventHandlers,
} = require('@librechat/api');
const {
truncateText,
formatMessage,
@@ -19,17 +30,9 @@ const {
parseParamFromPrompt,
createContextHandlers,
} = require('./prompts');
const {
getClaudeHeaders,
configureReasoning,
checkPromptCacheSupport,
} = require('~/server/services/Endpoints/anthropic/helpers');
const { getModelMaxTokens, getModelMaxOutputTokens, matchModelName } = require('~/utils');
const { spendTokens, spendStructuredTokens } = require('~/models/spendTokens');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const { sleep } = require('~/server/utils');
const BaseClient = require('./BaseClient');
const { logger } = require('~/config');
const HUMAN_PROMPT = '\n\nHuman:';
const AI_PROMPT = '\n\nAssistant:';

View File

@@ -1,21 +1,31 @@
const crypto = require('crypto');
const fetch = require('node-fetch');
const { logger } = require('@librechat/data-schemas');
const {
supportsBalanceCheck,
isAgentsEndpoint,
isParamEndpoint,
EModelEndpoint,
getBalanceConfig,
extractFileContext,
encodeAndFormatAudios,
encodeAndFormatVideos,
encodeAndFormatDocuments,
} = require('@librechat/api');
const {
Constants,
ErrorTypes,
FileSources,
ContentTypes,
excludedKeys,
ErrorTypes,
Constants,
EModelEndpoint,
isParamEndpoint,
isAgentsEndpoint,
supportsBalanceCheck,
} = require('librechat-data-provider');
const { getMessages, saveMessage, updateMessage, saveConvo, getConvo } = require('~/models');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { checkBalance } = require('~/models/balanceMethods');
const { truncateToolCallOutputs } = require('./prompts');
const countTokens = require('~/server/utils/countTokens');
const { getFiles } = require('~/models/File');
const TextStream = require('./TextStream');
const { logger } = require('~/config');
class BaseClient {
constructor(apiKey, options = {}) {
@@ -37,6 +47,8 @@ class BaseClient {
this.conversationId;
/** @type {string} */
this.responseMessageId;
/** @type {string} */
this.parentMessageId;
/** @type {TAttachment[]} */
this.attachments;
/** The key for the usage object's input tokens
@@ -110,13 +122,15 @@ class BaseClient {
* If a correction to the token usage is needed, the method should return an object with the corrected token counts.
* Should only be used if `recordCollectedUsage` was not used instead.
* @param {string} [model]
* @param {AppConfig['balance']} [balance]
* @param {number} promptTokens
* @param {number} completionTokens
* @returns {Promise<void>}
*/
async recordTokenUsage({ model, promptTokens, completionTokens }) {
async recordTokenUsage({ model, balance, promptTokens, completionTokens }) {
logger.debug('[BaseClient] `recordTokenUsage` not implemented.', {
model,
balance,
promptTokens,
completionTokens,
});
@@ -185,7 +199,8 @@ class BaseClient {
this.user = user;
const saveOptions = this.getSaveOptions();
this.abortController = opts.abortController ?? new AbortController();
const conversationId = overrideConvoId ?? opts.conversationId ?? crypto.randomUUID();
const requestConvoId = overrideConvoId ?? opts.conversationId;
const conversationId = requestConvoId ?? crypto.randomUUID();
const parentMessageId = opts.parentMessageId ?? Constants.NO_PARENT;
const userMessageId =
overrideUserMessageId ?? opts.overrideParentMessageId ?? crypto.randomUUID();
@@ -210,11 +225,12 @@ class BaseClient {
...opts,
user,
head,
saveOptions,
userMessageId,
requestConvoId,
conversationId,
parentMessageId,
userMessageId,
responseMessageId,
saveOptions,
};
}
@@ -233,11 +249,12 @@ class BaseClient {
const {
user,
head,
saveOptions,
userMessageId,
requestConvoId,
conversationId,
parentMessageId,
userMessageId,
responseMessageId,
saveOptions,
} = await this.setMessageOptions(opts);
const userMessage = opts.isEdited
@@ -259,7 +276,8 @@ class BaseClient {
}
if (typeof opts?.onStart === 'function') {
opts.onStart(userMessage, responseMessageId);
const isNewConvo = !requestConvoId && parentMessageId === Constants.NO_PARENT;
opts.onStart(userMessage, responseMessageId, isNewConvo);
}
return {
@@ -565,6 +583,7 @@ class BaseClient {
}
async sendMessage(message, opts = {}) {
const appConfig = this.options.req?.config;
/** @type {Promise<TMessage>} */
let userMessagePromise;
const { user, head, isEdited, conversationId, responseMessageId, saveOptions, userMessage } =
@@ -614,15 +633,19 @@ class BaseClient {
this.currentMessages.push(userMessage);
}
/**
* When the userMessage is pushed to currentMessages, the parentMessage is the userMessageId.
* this only matters when buildMessages is utilizing the parentMessageId, and may vary on implementation
*/
const parentMessageId = isEdited ? head : userMessage.messageId;
this.parentMessageId = parentMessageId;
let {
prompt: payload,
tokenCountMap,
promptTokens,
} = await this.buildMessages(
this.currentMessages,
// When the userMessage is pushed to currentMessages, the parentMessage is the userMessageId.
// this only matters when buildMessages is utilizing the parentMessageId, and may vary on implementation
isEdited ? head : userMessage.messageId,
parentMessageId,
this.getBuildMessagesOptions(opts),
opts,
);
@@ -647,9 +670,9 @@ class BaseClient {
}
}
const balance = this.options.req?.app?.locals?.balance;
const balanceConfig = getBalanceConfig(appConfig);
if (
balance?.enabled &&
balanceConfig?.enabled &&
supportsBalanceCheck[this.options.endpointType ?? this.options.endpoint]
) {
await checkBalance({
@@ -748,6 +771,7 @@ class BaseClient {
usage,
promptTokens,
completionTokens,
balance: balanceConfig,
model: responseMessage.model,
});
}
@@ -1183,8 +1207,135 @@ class BaseClient {
return await this.sendCompletion(payload, opts);
}
async addDocuments(message, attachments) {
const documentResult = await encodeAndFormatDocuments(
this.options.req,
attachments,
{
provider: this.options.agent?.provider,
useResponsesApi: this.options.agent?.model_parameters?.useResponsesApi,
},
getStrategyFunctions,
);
message.documents =
documentResult.documents && documentResult.documents.length
? documentResult.documents
: undefined;
return documentResult.files;
}
async addVideos(message, attachments) {
const videoResult = await encodeAndFormatVideos(
this.options.req,
attachments,
this.options.agent.provider,
getStrategyFunctions,
);
message.videos =
videoResult.videos && videoResult.videos.length ? videoResult.videos : undefined;
return videoResult.files;
}
async addAudios(message, attachments) {
const audioResult = await encodeAndFormatAudios(
this.options.req,
attachments,
this.options.agent.provider,
getStrategyFunctions,
);
message.audios =
audioResult.audios && audioResult.audios.length ? audioResult.audios : undefined;
return audioResult.files;
}
/**
* Extracts text context from attachments and sets it on the message.
* This handles text that was already extracted from files (OCR, transcriptions, document text, etc.)
* @param {TMessage} message - The message to add context to
* @param {MongoFile[]} attachments - Array of file attachments
* @returns {Promise<void>}
*/
async addFileContextToMessage(message, attachments) {
const fileContext = await extractFileContext({
attachments,
req: this.options?.req,
tokenCountFn: (text) => countTokens(text),
});
if (fileContext) {
message.fileContext = fileContext;
}
}
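/**
 * Categorizes attachments by type (images, PDFs, videos, audios), encodes each group onto
 * the message via the corresponding helper, and returns the processed files de-duplicated
 * by `file_id`. Text-sourced and embedded files are passed through without re-encoding.
 * @param {TMessage} message - The message to attach encoded media to
 * @param {MongoFile[]} attachments - Array of file attachments
 * @returns {Promise<MongoFile[]>} De-duplicated list of processed files
 */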
async processAttachments(message, attachments) {
const categorizedAttachments = {
images: [],
videos: [],
audios: [],
documents: [],
};
const allFiles = [];
for (const file of attachments) {
/** @type {FileSources} */
const source = file.source ?? FileSources.local;
if (source === FileSources.text) {
allFiles.push(file);
continue;
}
if (file.embedded === true || file.metadata?.fileIdentifier != null) {
allFiles.push(file);
continue;
}
if (file.type.startsWith('image/')) {
categorizedAttachments.images.push(file);
} else if (file.type === 'application/pdf') {
categorizedAttachments.documents.push(file);
allFiles.push(file);
} else if (file.type.startsWith('video/')) {
categorizedAttachments.videos.push(file);
allFiles.push(file);
} else if (file.type.startsWith('audio/')) {
categorizedAttachments.audios.push(file);
allFiles.push(file);
}
}
const [imageFiles] = await Promise.all([
categorizedAttachments.images.length > 0
? this.addImageURLs(message, categorizedAttachments.images)
: Promise.resolve([]),
categorizedAttachments.documents.length > 0
? this.addDocuments(message, categorizedAttachments.documents)
: Promise.resolve([]),
categorizedAttachments.videos.length > 0
? this.addVideos(message, categorizedAttachments.videos)
: Promise.resolve([]),
categorizedAttachments.audios.length > 0
? this.addAudios(message, categorizedAttachments.audios)
: Promise.resolve([]),
]);
allFiles.push(...imageFiles);
const seenFileIds = new Set();
const uniqueFiles = [];
for (const file of allFiles) {
if (file.file_id && !seenFileIds.has(file.file_id)) {
seenFileIds.add(file.file_id);
uniqueFiles.push(file);
} else if (!file.file_id) {
uniqueFiles.push(file);
}
}
return uniqueFiles;
}
/**
*
* @param {TMessage[]} _messages
* @returns {Promise<TMessage[]>}
*/
@@ -1233,7 +1384,8 @@ class BaseClient {
{},
);
await this.addImageURLs(message, files, this.visionMode);
await this.addFileContextToMessage(message, files);
await this.processAttachments(message, files);
this.message_file_map[message.messageId] = files;
return message;

View File

@@ -1,4 +1,7 @@
const { google } = require('googleapis');
const { sleep } = require('@librechat/agents');
const { logger } = require('@librechat/data-schemas');
const { getModelMaxTokens } = require('@librechat/api');
const { concat } = require('@langchain/core/utils/stream');
const { ChatVertexAI } = require('@langchain/google-vertexai');
const { Tokenizer, getSafetySettings } = require('@librechat/api');
@@ -21,9 +24,6 @@ const {
} = require('librechat-data-provider');
const { encodeAndFormat } = require('~/server/services/Files/images');
const { spendTokens } = require('~/models/spendTokens');
const { getModelMaxTokens } = require('~/utils');
const { sleep } = require('~/server/utils');
const { logger } = require('~/config');
const {
formatMessage,
createContextHandlers,

View File

@@ -2,7 +2,7 @@ const { z } = require('zod');
const axios = require('axios');
const { Ollama } = require('ollama');
const { sleep } = require('@librechat/agents');
const { logAxiosError } = require('@librechat/api');
const { resolveHeaders } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { Constants } = require('librechat-data-provider');
const { deriveBaseURL } = require('~/utils');
@@ -44,6 +44,7 @@ class OllamaClient {
constructor(options = {}) {
const host = deriveBaseURL(options.baseURL ?? 'http://localhost:11434');
this.streamRate = options.streamRate ?? Constants.DEFAULT_STREAM_RATE;
this.headers = options.headers ?? {};
/** @type {Ollama} */
this.client = new Ollama({ host });
}
@@ -51,27 +52,32 @@ class OllamaClient {
/**
* Fetches Ollama models from the specified base API path.
* @param {string} baseURL
* @param {Object} [options] - Optional configuration
* @param {Partial<IUser>} [options.user] - User object for header resolution
* @param {Record<string, string>} [options.headers] - Headers to include in the request
* @returns {Promise<string[]>} The Ollama models.
* @throws {Error} Throws if the Ollama API request fails
*/
static async fetchModels(baseURL) {
let models = [];
static async fetchModels(baseURL, options = {}) {
if (!baseURL) {
return models;
}
try {
const ollamaEndpoint = deriveBaseURL(baseURL);
/** @type {Promise<AxiosResponse<OllamaListResponse>>} */
const response = await axios.get(`${ollamaEndpoint}/api/tags`, {
timeout: 5000,
});
models = response.data.models.map((tag) => tag.name);
return models;
} catch (error) {
const logMessage =
"Failed to fetch models from Ollama API. If you are not using Ollama directly, and instead, through some aggregator or reverse proxy that handles fetching via OpenAI spec, ensure the name of the endpoint doesn't start with `ollama` (case-insensitive).";
logAxiosError({ message: logMessage, error });
return [];
}
const ollamaEndpoint = deriveBaseURL(baseURL);
const resolvedHeaders = resolveHeaders({
headers: options.headers,
user: options.user,
});
/** @type {Promise<AxiosResponse<OllamaListResponse>>} */
const response = await axios.get(`${ollamaEndpoint}/api/tags`, {
headers: resolvedHeaders,
timeout: 5000,
});
const models = response.data.models.map((tag) => tag.name);
return models;
}
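// Illustrative usage only (the baseURL and header values below are placeholders):
// const models = await OllamaClient.fetchModels('http://localhost:11434', {
//   user: req.user,
//   headers: { Authorization: 'Bearer example-token' },
// });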
/**

View File

@@ -1,13 +1,15 @@
const { OllamaClient } = require('./OllamaClient');
const { logger } = require('@librechat/data-schemas');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { SplitStreamHandler, CustomOpenAIClient: OpenAI } = require('@librechat/agents');
const { sleep, SplitStreamHandler, CustomOpenAIClient: OpenAI } = require('@librechat/agents');
const {
isEnabled,
Tokenizer,
createFetch,
resolveHeaders,
constructAzureURL,
getModelMaxTokens,
genAzureChatCompletion,
getModelMaxOutputTokens,
createStreamEventHandlers,
} = require('@librechat/api');
const {
@@ -31,17 +33,16 @@ const {
titleInstruction,
createContextHandlers,
} = require('./prompts');
const { extractBaseURL, getModelMaxTokens, getModelMaxOutputTokens } = require('~/utils');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const { addSpaceIfNeeded, sleep } = require('~/server/utils');
const { spendTokens } = require('~/models/spendTokens');
const { addSpaceIfNeeded } = require('~/server/utils');
const { handleOpenAIErrors } = require('./tools/util');
const { createLLM, RunManager } = require('./llm');
const { OllamaClient } = require('./OllamaClient');
const { summaryBuffer } = require('./memory');
const { runTitleChain } = require('./chains');
const { extractBaseURL } = require('~/utils');
const { tokenSplit } = require('./document');
const BaseClient = require('./BaseClient');
const { logger } = require('~/config');
class OpenAIClient extends BaseClient {
constructor(apiKey, options = {}) {
@@ -612,77 +613,8 @@ class OpenAIClient extends BaseClient {
return (reply ?? '').trim();
}
initializeLLM({
model = openAISettings.model.default,
modelName,
temperature = 0.2,
max_tokens,
streaming,
context,
tokenBuffer,
initialMessageCount,
conversationId,
}) {
const modelOptions = {
modelName: modelName ?? model,
temperature,
user: this.user,
};
if (max_tokens) {
modelOptions.max_tokens = max_tokens;
}
const configOptions = {};
if (this.langchainProxy) {
configOptions.basePath = this.langchainProxy;
}
if (this.useOpenRouter) {
configOptions.basePath = 'https://openrouter.ai/api/v1';
configOptions.baseOptions = {
headers: {
'HTTP-Referer': 'https://librechat.ai',
'X-Title': 'LibreChat',
},
};
}
const { headers } = this.options;
if (headers && typeof headers === 'object' && !Array.isArray(headers)) {
configOptions.baseOptions = {
headers: resolveHeaders({
...headers,
...configOptions?.baseOptions?.headers,
}),
};
}
if (this.options.proxy) {
configOptions.httpAgent = new HttpsProxyAgent(this.options.proxy);
configOptions.httpsAgent = new HttpsProxyAgent(this.options.proxy);
}
const { req, res, debug } = this.options;
const runManager = new RunManager({ req, res, debug, abortController: this.abortController });
this.runManager = runManager;
const llm = createLLM({
modelOptions,
configOptions,
openAIApiKey: this.apiKey,
azure: this.azure,
streaming,
callbacks: runManager.createCallbacks({
context,
tokenBuffer,
conversationId: this.conversationId ?? conversationId,
initialMessageCount,
}),
});
return llm;
initializeLLM() {
throw new Error('Deprecated');
}
/**
@@ -700,6 +632,7 @@ class OpenAIClient extends BaseClient {
* In case of failure, it will return the default title, "New Chat".
*/
async titleConvo({ text, conversationId, responseText = '' }) {
const appConfig = this.options.req?.config;
this.conversationId = conversationId;
if (this.options.attachments) {
@@ -728,8 +661,7 @@ class OpenAIClient extends BaseClient {
max_tokens: 16,
};
/** @type {TAzureConfig | undefined} */
const azureConfig = this.options?.req?.app?.locals?.[EModelEndpoint.azureOpenAI];
const azureConfig = appConfig?.endpoints?.[EModelEndpoint.azureOpenAI];
const resetTitleOptions = !!(
(this.azure && azureConfig) ||
@@ -749,7 +681,7 @@ class OpenAIClient extends BaseClient {
groupMap,
});
this.options.headers = resolveHeaders(headers);
this.options.headers = resolveHeaders({ headers });
this.options.reverseProxyUrl = baseURL ?? null;
this.langchainProxy = extractBaseURL(this.options.reverseProxyUrl);
this.apiKey = azureOptions.azureOpenAIApiKey;
@@ -1118,6 +1050,7 @@ ${convo}
}
async chatCompletion({ payload, onProgress, abortController = null }) {
const appConfig = this.options.req?.config;
let error = null;
let intermediateReply = [];
const errorCallback = (err) => (error = err);
@@ -1163,8 +1096,7 @@ ${convo}
opts.fetchOptions.agent = new HttpsProxyAgent(this.options.proxy);
}
/** @type {TAzureConfig | undefined} */
const azureConfig = this.options?.req?.app?.locals?.[EModelEndpoint.azureOpenAI];
const azureConfig = appConfig?.endpoints?.[EModelEndpoint.azureOpenAI];
if (
(this.azure && this.isVisionModel && azureConfig) ||
@@ -1181,7 +1113,7 @@ ${convo}
modelGroupMap,
groupMap,
});
opts.defaultHeaders = resolveHeaders(headers);
opts.defaultHeaders = resolveHeaders({ headers });
this.langchainProxy = extractBaseURL(baseURL);
this.apiKey = azureOptions.azureOpenAIApiKey;
@@ -1222,7 +1154,9 @@ ${convo}
}
if (this.isOmni === true && modelOptions.max_tokens != null) {
modelOptions.max_completion_tokens = modelOptions.max_tokens;
const paramName =
modelOptions.useResponsesApi === true ? 'max_output_tokens' : 'max_completion_tokens';
modelOptions[paramName] = modelOptions.max_tokens;
delete modelOptions.max_tokens;
}
if (this.isOmni === true && modelOptions.temperature != null) {

View File

@@ -1,5 +1,5 @@
const { Readable } = require('stream');
const { logger } = require('~/config');
const { logger } = require('@librechat/data-schemas');
class TextStream extends Readable {
constructor(text, options = {}) {

View File

@@ -1,5 +1,5 @@
const { logger } = require('@librechat/data-schemas');
const { ZeroShotAgentOutputParser } = require('langchain/agents');
const { logger } = require('~/config');
class CustomOutputParser extends ZeroShotAgentOutputParser {
constructor(fields) {

View File

@@ -1,95 +0,0 @@
const { promptTokensEstimate } = require('openai-chat-tokens');
const { EModelEndpoint, supportsBalanceCheck } = require('librechat-data-provider');
const { formatFromLangChain } = require('~/app/clients/prompts');
const { getBalanceConfig } = require('~/server/services/Config');
const { checkBalance } = require('~/models/balanceMethods');
const { logger } = require('~/config');
const createStartHandler = ({
context,
conversationId,
tokenBuffer = 0,
initialMessageCount,
manager,
}) => {
return async (_llm, _messages, runId, parentRunId, extraParams) => {
const { invocation_params } = extraParams;
const { model, functions, function_call } = invocation_params;
const messages = _messages[0].map(formatFromLangChain);
logger.debug(`[createStartHandler] handleChatModelStart: ${context}`, {
model,
function_call,
});
if (context !== 'title') {
logger.debug(`[createStartHandler] handleChatModelStart: ${context}`, {
functions,
});
}
const payload = { messages };
let prelimPromptTokens = 1;
if (functions) {
payload.functions = functions;
prelimPromptTokens += 2;
}
if (function_call) {
payload.function_call = function_call;
prelimPromptTokens -= 5;
}
prelimPromptTokens += promptTokensEstimate(payload);
logger.debug('[createStartHandler]', {
prelimPromptTokens,
tokenBuffer,
});
prelimPromptTokens += tokenBuffer;
try {
const balance = await getBalanceConfig();
if (balance?.enabled && supportsBalanceCheck[EModelEndpoint.openAI]) {
const generations =
initialMessageCount && messages.length > initialMessageCount
? messages.slice(initialMessageCount)
: null;
await checkBalance({
req: manager.req,
res: manager.res,
txData: {
user: manager.user,
tokenType: 'prompt',
amount: prelimPromptTokens,
debug: manager.debug,
generations,
model,
endpoint: EModelEndpoint.openAI,
},
});
}
} catch (err) {
logger.error(`[createStartHandler][${context}] checkBalance error`, err);
manager.abortController.abort();
if (context === 'summary' || context === 'plugins') {
manager.addRun(runId, { conversationId, error: err.message });
throw new Error(err);
}
return;
}
manager.addRun(runId, {
model,
messages,
functions,
function_call,
runId,
parentRunId,
conversationId,
prelimPromptTokens,
});
};
};
module.exports = createStartHandler;

View File

@@ -1,5 +0,0 @@
const createStartHandler = require('./createStartHandler');
module.exports = {
createStartHandler,
};

View File

@@ -1,7 +1,7 @@
const { z } = require('zod');
const { logger } = require('@librechat/data-schemas');
const { langPrompt, createTitlePrompt, escapeBraces, getSnippet } = require('../prompts');
const { createStructuredOutputChainFromZod } = require('langchain/chains/openai_functions');
const { logger } = require('~/config');
const langSchema = z.object({
language: z.string().describe('The language of the input text (full noun, no abbreviations).'),

View File

@@ -1,105 +0,0 @@
const { createStartHandler } = require('~/app/clients/callbacks');
const { spendTokens } = require('~/models/spendTokens');
const { logger } = require('~/config');
class RunManager {
constructor(fields) {
const { req, res, abortController, debug } = fields;
this.abortController = abortController;
this.user = req.user.id;
this.req = req;
this.res = res;
this.debug = debug;
this.runs = new Map();
this.convos = new Map();
}
addRun(runId, runData) {
if (!this.runs.has(runId)) {
this.runs.set(runId, runData);
if (runData.conversationId) {
this.convos.set(runData.conversationId, runId);
}
return runData;
} else {
const existingData = this.runs.get(runId);
const update = { ...existingData, ...runData };
this.runs.set(runId, update);
if (update.conversationId) {
this.convos.set(update.conversationId, runId);
}
return update;
}
}
removeRun(runId) {
if (this.runs.has(runId)) {
this.runs.delete(runId);
} else {
logger.error(`[api/app/clients/llm/RunManager] Run with ID ${runId} does not exist.`);
}
}
getAllRuns() {
return Array.from(this.runs.values());
}
getRunById(runId) {
return this.runs.get(runId);
}
getRunByConversationId(conversationId) {
const runId = this.convos.get(conversationId);
return { run: this.runs.get(runId), runId };
}
createCallbacks(metadata) {
return [
{
handleChatModelStart: createStartHandler({ ...metadata, manager: this }),
handleLLMEnd: async (output, runId, _parentRunId) => {
const { llmOutput, ..._output } = output;
logger.debug(`[RunManager] handleLLMEnd: ${JSON.stringify(metadata)}`, {
runId,
_parentRunId,
llmOutput,
});
if (metadata.context !== 'title') {
logger.debug('[RunManager] handleLLMEnd:', {
output: _output,
});
}
const { tokenUsage } = output.llmOutput;
const run = this.getRunById(runId);
this.removeRun(runId);
const txData = {
user: this.user,
model: run?.model ?? 'gpt-3.5-turbo',
...metadata,
};
await spendTokens(txData, tokenUsage);
},
handleLLMError: async (err) => {
logger.error(`[RunManager] handleLLMError: ${JSON.stringify(metadata)}`, err);
if (metadata.context === 'title') {
return;
} else if (metadata.context === 'plugins') {
throw new Error(err);
}
const { conversationId } = metadata;
const { run } = this.getRunByConversationId(conversationId);
if (run && run.error) {
const { error } = run;
throw new Error(error);
}
},
},
];
}
}
module.exports = RunManager;

View File

@@ -1,81 +0,0 @@
const { ChatOpenAI } = require('@langchain/openai');
const { isEnabled, sanitizeModelName, constructAzureURL } = require('@librechat/api');
/**
* Creates a new instance of a language model (LLM) for chat interactions.
*
* @param {Object} options - The options for creating the LLM.
* @param {ModelOptions} options.modelOptions - The options specific to the model, including modelName, temperature, presence_penalty, frequency_penalty, and other model-related settings.
* @param {ConfigOptions} options.configOptions - Configuration options for the API requests, including proxy settings and custom headers.
* @param {Callbacks} [options.callbacks] - Callback functions for managing the lifecycle of the LLM, including token buffers, context, and initial message count.
* @param {boolean} [options.streaming=false] - Determines if the LLM should operate in streaming mode.
* @param {string} options.openAIApiKey - The API key for OpenAI, used for authentication.
* @param {AzureOptions} [options.azure={}] - Optional Azure-specific configurations. If provided, Azure configurations take precedence over OpenAI configurations.
*
* @returns {ChatOpenAI} An instance of the ChatOpenAI class, configured with the provided options.
*
* @example
* const llm = createLLM({
* modelOptions: { modelName: 'gpt-4o-mini', temperature: 0.2 },
* configOptions: { basePath: 'https://example.api/path' },
* callbacks: { onMessage: handleMessage },
* openAIApiKey: 'your-api-key'
* });
*/
function createLLM({
modelOptions,
configOptions,
callbacks,
streaming = false,
openAIApiKey,
azure = {},
}) {
let credentials = { openAIApiKey };
let configuration = {
apiKey: openAIApiKey,
...(configOptions.basePath && { baseURL: configOptions.basePath }),
};
/** @type {AzureOptions} */
let azureOptions = {};
if (azure) {
const useModelName = isEnabled(process.env.AZURE_USE_MODEL_AS_DEPLOYMENT_NAME);
credentials = {};
configuration = {};
azureOptions = azure;
azureOptions.azureOpenAIApiDeploymentName = useModelName
? sanitizeModelName(modelOptions.modelName)
: azureOptions.azureOpenAIApiDeploymentName;
}
if (azure && process.env.AZURE_OPENAI_DEFAULT_MODEL) {
modelOptions.modelName = process.env.AZURE_OPENAI_DEFAULT_MODEL;
}
if (azure && configOptions.basePath) {
const azureURL = constructAzureURL({
baseURL: configOptions.basePath,
azureOptions,
});
azureOptions.azureOpenAIBasePath = azureURL.split(
`/${azureOptions.azureOpenAIApiDeploymentName}`,
)[0];
}
return new ChatOpenAI(
{
streaming,
credentials,
configuration,
...azureOptions,
...modelOptions,
...credentials,
callbacks,
},
configOptions,
);
}
module.exports = createLLM;

View File

@@ -1,9 +1,5 @@
const createLLM = require('./createLLM');
const RunManager = require('./RunManager');
const createCoherePayload = require('./createCoherePayload');
module.exports = {
createLLM,
RunManager,
createCoherePayload,
};

View File

@@ -1,31 +0,0 @@
require('dotenv').config();
const { ChatOpenAI } = require('@langchain/openai');
const { getBufferString, ConversationSummaryBufferMemory } = require('langchain/memory');
const chatPromptMemory = new ConversationSummaryBufferMemory({
llm: new ChatOpenAI({ modelName: 'gpt-4o-mini', temperature: 0 }),
maxTokenLimit: 10,
returnMessages: true,
});
(async () => {
await chatPromptMemory.saveContext({ input: 'hi my name\'s Danny' }, { output: 'whats up' });
await chatPromptMemory.saveContext({ input: 'not much you' }, { output: 'not much' });
await chatPromptMemory.saveContext(
{ input: 'are you excited for the olympics?' },
{ output: 'not really' },
);
// We can also utilize the predict_new_summary method directly.
const messages = await chatPromptMemory.chatHistory.getMessages();
console.log('MESSAGES\n\n');
console.log(JSON.stringify(messages));
const previous_summary = '';
const predictSummary = await chatPromptMemory.predictNewSummary(messages, previous_summary);
console.log('SUMMARY\n\n');
console.log(JSON.stringify(getBufferString([{ role: 'system', content: predictSummary }])));
// const { history } = await chatPromptMemory.loadMemoryVariables({});
// console.log('HISTORY\n\n');
// console.log(JSON.stringify(history));
})();

View File

@@ -1,7 +1,7 @@
const { logger } = require('@librechat/data-schemas');
const { ConversationSummaryBufferMemory, ChatMessageHistory } = require('langchain/memory');
const { formatLangChainMessages, SUMMARY_PROMPT } = require('../prompts');
const { predictNewSummary } = require('../chains');
const { logger } = require('~/config');
const createSummaryBufferMemory = ({ llm, prompt, messages, ...rest }) => {
const chatHistory = new ChatMessageHistory(messages);

View File

@@ -1,4 +1,4 @@
const { logger } = require('~/config');
const { logger } = require('@librechat/data-schemas');
/**
* The `addImages` function corrects any erroneous image URLs in the `responseMessage.text`

View File

@@ -3,6 +3,7 @@ const { EModelEndpoint, ArtifactModes } = require('librechat-data-provider');
const { generateShadcnPrompt } = require('~/app/clients/prompts/shadcn-docs/generate');
const { components } = require('~/app/clients/prompts/shadcn-docs/components');
/** @deprecated */
// eslint-disable-next-line no-unused-vars
const artifactsPromptV1 = dedent`The assistant can create and reference artifacts during conversations.
@@ -115,6 +116,7 @@ Here are some examples of correct usage of artifacts:
</assistant_response>
</example>
</examples>`;
const artifactsPrompt = dedent`The assistant can create and reference artifacts during conversations.
Artifacts are for substantial, self-contained content that users might modify or reuse, displayed in a separate UI window for clarity.
@@ -165,6 +167,10 @@ Artifacts are for substantial, self-contained content that users might modify or
- SVG: "image/svg+xml"
- The user interface will render the Scalable Vector Graphics (SVG) image within the artifact tags.
- The assistant should specify the viewbox of the SVG rather than defining a width/height
- Markdown: "text/markdown" or "text/md"
- The user interface will render Markdown content placed within the artifact tags.
- Supports standard Markdown syntax including headers, lists, links, images, code blocks, tables, and more.
- Both "text/markdown" and "text/md" are accepted as valid MIME types for Markdown content.
- Mermaid Diagrams: "application/vnd.mermaid"
- The user interface will render Mermaid diagrams placed within the artifact tags.
- React Components: "application/vnd.react"
@@ -366,6 +372,10 @@ Artifacts are for substantial, self-contained content that users might modify or
- SVG: "image/svg+xml"
- The user interface will render the Scalable Vector Graphics (SVG) image within the artifact tags.
- The assistant should specify the viewbox of the SVG rather than defining a width/height
- Markdown: "text/markdown" or "text/md"
- The user interface will render Markdown content placed within the artifact tags.
- Supports standard Markdown syntax including headers, lists, links, images, code blocks, tables, and more.
- Both "text/markdown" and "text/md" are accepted as valid MIME types for Markdown content.
- Mermaid Diagrams: "application/vnd.mermaid"
- The user interface will render Mermaid diagrams placed within the artifact tags.
- React Components: "application/vnd.react"

View File

@@ -1,7 +1,6 @@
const axios = require('axios');
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { generateShortLivedToken } = require('~/server/services/AuthService');
const { isEnabled, generateShortLivedToken } = require('@librechat/api');
const footer = `Use the context as your learned knowledge to better answer the user.

View File

@@ -245,7 +245,7 @@ describe('AnthropicClient', () => {
});
describe('Claude 4 model headers', () => {
it('should add "prompt-caching" beta header for claude-sonnet-4 model', () => {
it('should add "prompt-caching" and "context-1m" beta headers for claude-sonnet-4 model', () => {
const client = new AnthropicClient('test-api-key');
const modelOptions = {
model: 'claude-sonnet-4-20250514',
@@ -255,10 +255,30 @@ describe('AnthropicClient', () => {
expect(anthropicClient._options.defaultHeaders).toBeDefined();
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
'prompt-caching-2024-07-31',
'prompt-caching-2024-07-31,context-1m-2025-08-07',
);
});
it('should add "prompt-caching" and "context-1m" beta headers for claude-sonnet-4 model formats', () => {
const client = new AnthropicClient('test-api-key');
const modelVariations = [
'claude-sonnet-4-20250514',
'claude-sonnet-4-latest',
'anthropic/claude-sonnet-4-20250514',
];
modelVariations.forEach((model) => {
const modelOptions = { model };
client.setOptions({ modelOptions, promptCache: true });
const anthropicClient = client.getClient(modelOptions);
expect(anthropicClient._options.defaultHeaders).toBeDefined();
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
'prompt-caching-2024-07-31,context-1m-2025-08-07',
);
});
});
it('should add "prompt-caching" beta header for claude-opus-4 model', () => {
const client = new AnthropicClient('test-api-key');
const modelOptions = {
@@ -273,20 +293,6 @@ describe('AnthropicClient', () => {
);
});
it('should add "prompt-caching" beta header for claude-4-sonnet model', () => {
const client = new AnthropicClient('test-api-key');
const modelOptions = {
model: 'claude-4-sonnet-20250514',
};
client.setOptions({ modelOptions, promptCache: true });
const anthropicClient = client.getClient(modelOptions);
expect(anthropicClient._options.defaultHeaders).toBeDefined();
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
'prompt-caching-2024-07-31',
);
});
it('should add "prompt-caching" beta header for claude-4-opus model', () => {
const client = new AnthropicClient('test-api-key');
const modelOptions = {

View File

@@ -2,6 +2,14 @@ const { Constants } = require('librechat-data-provider');
const { initializeFakeClient } = require('./FakeClient');
jest.mock('~/db/connect');
jest.mock('~/server/services/Config', () => ({
getAppConfig: jest.fn().mockResolvedValue({
// Default app config for tests
paths: { uploads: '/tmp' },
fileStrategy: 'local',
memory: { disabled: false },
}),
}));
jest.mock('~/models', () => ({
User: jest.fn(),
Key: jest.fn(),
@@ -579,6 +587,8 @@ describe('BaseClient', () => {
expect(onStart).toHaveBeenCalledWith(
expect.objectContaining({ text: 'Hello, world!' }),
expect.any(String),
/** `isNewConvo` */
true,
);
});

View File

@@ -1,5 +1,5 @@
const { getModelMaxTokens } = require('@librechat/api');
const BaseClient = require('../BaseClient');
const { getModelMaxTokens } = require('../../../utils');
class FakeClient extends BaseClient {
constructor(apiKey, options = {}) {

View File

@@ -1,4 +1,4 @@
const availableTools = require('./manifest.json');
const manifest = require('./manifest');
// Structured Tools
const DALLE3 = require('./structured/DALLE3');
@@ -13,23 +13,8 @@ const TraversaalSearch = require('./structured/TraversaalSearch');
const createOpenAIImageTools = require('./structured/OpenAIImageTools');
const TavilySearchResults = require('./structured/TavilySearchResults');
/** @type {Record<string, TPlugin | undefined>} */
const manifestToolMap = {};
/** @type {Array<TPlugin>} */
const toolkits = [];
availableTools.forEach((tool) => {
manifestToolMap[tool.pluginKey] = tool;
if (tool.toolkit === true) {
toolkits.push(tool);
}
});
module.exports = {
toolkits,
availableTools,
manifestToolMap,
...manifest,
// Structured Tools
DALLE3,
FluxAPI,

View File

@@ -0,0 +1,20 @@
const availableTools = require('./manifest.json');
/** @type {Record<string, TPlugin | undefined>} */
const manifestToolMap = {};
/** @type {Array<TPlugin>} */
const toolkits = [];
availableTools.forEach((tool) => {
manifestToolMap[tool.pluginKey] = tool;
if (tool.toolkit === true) {
toolkits.push(tool);
}
});
module.exports = {
toolkits,
availableTools,
manifestToolMap,
};

View File

@@ -49,7 +49,7 @@
"pluginKey": "image_gen_oai",
"toolkit": true,
"description": "Image Generation and Editing using OpenAI's latest state-of-the-art models",
"icon": "/assets/image_gen_oai.png",
"icon": "assets/image_gen_oai.png",
"authConfig": [
{
"authField": "IMAGE_GEN_OAI_API_KEY",
@@ -75,7 +75,7 @@
"name": "Browser",
"pluginKey": "web-browser",
"description": "Scrape and summarize webpage data",
"icon": "/assets/web-browser.svg",
"icon": "assets/web-browser.svg",
"authConfig": [
{
"authField": "OPENAI_API_KEY",
@@ -170,7 +170,7 @@
"name": "OpenWeather",
"pluginKey": "open_weather",
"description": "Get weather forecasts and historical data from the OpenWeather API",
"icon": "/assets/openweather.png",
"icon": "assets/openweather.png",
"authConfig": [
{
"authField": "OPENWEATHER_API_KEY",

View File

@@ -1,7 +1,7 @@
const { z } = require('zod');
const { Tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { SearchClient, AzureKeyCredential } = require('@azure/search-documents');
const { logger } = require('~/config');
class AzureAISearch extends Tool {
// Constants for default values
@@ -18,7 +18,7 @@ class AzureAISearch extends Tool {
super();
this.name = 'azure-ai-search';
this.description =
'Use the \'azure-ai-search\' tool to retrieve search results relevant to your input';
"Use the 'azure-ai-search' tool to retrieve search results relevant to your input";
/* Used to initialize the Tool without necessary variables. */
this.override = fields.override ?? false;

View File

@@ -1,14 +1,13 @@
const { z } = require('zod');
const path = require('path');
const OpenAI = require('openai');
const fetch = require('node-fetch');
const { v4: uuidv4 } = require('uuid');
const { ProxyAgent } = require('undici');
const { ProxyAgent, fetch } = require('undici');
const { Tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { getImageBasename } = require('@librechat/api');
const { FileContext, ContentTypes } = require('librechat-data-provider');
const { getImageBasename } = require('~/server/services/Files/images');
const extractBaseURL = require('~/utils/extractBaseURL');
const logger = require('~/config/winston');
const displayMessage =
"DALL-E displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";

View File

@@ -3,12 +3,12 @@ const axios = require('axios');
const fetch = require('node-fetch');
const { v4: uuidv4 } = require('uuid');
const { Tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { FileContext, ContentTypes } = require('librechat-data-provider');
const { logger } = require('~/config');
const displayMessage =
'Flux displayed an image. All generated images are already plainly visible, so don\'t repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.';
"Flux displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";
/**
* FluxAPI - A tool for generating high-quality images from text prompts using the Flux API.

View File

@@ -1,69 +1,17 @@
const { z } = require('zod');
const axios = require('axios');
const { v4 } = require('uuid');
const OpenAI = require('openai');
const FormData = require('form-data');
const { ProxyAgent } = require('undici');
const { tool } = require('@langchain/core/tools');
const { logAxiosError } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { logAxiosError, oaiToolkit } = require('@librechat/api');
const { ContentTypes, EImageOutputType } = require('librechat-data-provider');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { extractBaseURL } = require('~/utils');
const extractBaseURL = require('~/utils/extractBaseURL');
const { getFiles } = require('~/models/File');
/** Default descriptions for image generation tool */
const DEFAULT_IMAGE_GEN_DESCRIPTION = `
Generates high-quality, original images based solely on text, not using any uploaded reference images.
When to use \`image_gen_oai\`:
- To create entirely new images from detailed text descriptions that do NOT reference any image files.
When NOT to use \`image_gen_oai\`:
- If the user has uploaded any images and requests modifications, enhancements, or remixing based on those uploads → use \`image_edit_oai\` instead.
Generated image IDs will be returned in the response, so you can refer to them in future requests made to \`image_edit_oai\`.
`.trim();
/** Default description for image editing tool */
const DEFAULT_IMAGE_EDIT_DESCRIPTION =
`Generates high-quality, original images based on text and one or more uploaded/referenced images.
When to use \`image_edit_oai\`:
- The user wants to modify, extend, or remix one **or more** uploaded images, either:
- Previously generated, or in the current request (both to be included in the \`image_ids\` array).
- Always when the user refers to uploaded images for editing, enhancement, remixing, style transfer, or combining elements.
- Any current or existing images are to be used as visual guides.
- If there are any files in the current request, they are more likely than not expected as references for image edit requests.
When NOT to use \`image_edit_oai\`:
- Brand-new generations that do not rely on an existing image → use \`image_gen_oai\` instead.
Both generated and referenced image IDs will be returned in the response, so you can refer to them in future requests made to \`image_edit_oai\`.
`.trim();
/** Default prompt descriptions */
const DEFAULT_IMAGE_GEN_PROMPT_DESCRIPTION = `Describe the image you want in detail.
Be highly specific—break your idea into layers:
(1) main concept and subject,
(2) composition and position,
(3) lighting and mood,
(4) style, medium, or camera details,
(5) important features (age, expression, clothing, etc.),
(6) background.
Use positive, descriptive language and specify what should be included, not what to avoid.
List number and characteristics of people/objects, and mention style/technical requirements (e.g., "DSLR photo, 85mm lens, golden hour").
Do not reference any uploaded images—use for new image creation from text only.`;
const DEFAULT_IMAGE_EDIT_PROMPT_DESCRIPTION = `Describe the changes, enhancements, or new ideas to apply to the uploaded image(s).
Be highly specific—break your request into layers:
(1) main concept or transformation,
(2) specific edits/replacements or composition guidance,
(3) desired style, mood, or technique,
(4) features/items to keep, change, or add (such as objects, people, clothing, lighting, etc.).
Use positive, descriptive language and clarify what should be included or changed, not what to avoid.
Always base this prompt on the most recently uploaded reference images.`;
const displayMessage =
"The tool displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";
@@ -91,22 +39,6 @@ function returnValue(value) {
return value;
}
const getImageGenDescription = () => {
return process.env.IMAGE_GEN_OAI_DESCRIPTION || DEFAULT_IMAGE_GEN_DESCRIPTION;
};
const getImageEditDescription = () => {
return process.env.IMAGE_EDIT_OAI_DESCRIPTION || DEFAULT_IMAGE_EDIT_DESCRIPTION;
};
const getImageGenPromptDescription = () => {
return process.env.IMAGE_GEN_OAI_PROMPT_DESCRIPTION || DEFAULT_IMAGE_GEN_PROMPT_DESCRIPTION;
};
const getImageEditPromptDescription = () => {
return process.env.IMAGE_EDIT_OAI_PROMPT_DESCRIPTION || DEFAULT_IMAGE_EDIT_PROMPT_DESCRIPTION;
};
function createAbortHandler() {
return function () {
logger.debug('[ImageGenOAI] Image generation aborted');
@@ -121,7 +53,9 @@ function createAbortHandler() {
* @param {string} fields.IMAGE_GEN_OAI_API_KEY - The OpenAI API key
* @param {boolean} [fields.override] - Whether to override the API key check, necessary for app initialization
* @param {MongoFile[]} [fields.imageFiles] - The images to be used for editing
* @returns {Array} - Array of image tools
* @param {string} [fields.imageOutputType] - The image output type configuration
* @param {string} [fields.fileStrategy] - The file storage strategy
* @returns {Array<ReturnType<tool>>} - Array of image tools
*/
function createOpenAIImageTools(fields = {}) {
/** @type {boolean} Used to initialize the Tool without necessary variables. */
@@ -131,8 +65,8 @@ function createOpenAIImageTools(fields = {}) {
throw new Error('This tool is only available for agents.');
}
const { req } = fields;
const imageOutputType = req?.app.locals.imageOutputType || EImageOutputType.PNG;
const appFileStrategy = req?.app.locals.fileStrategy;
const imageOutputType = fields.imageOutputType || EImageOutputType.PNG;
const appFileStrategy = fields.fileStrategy;
const getApiKey = () => {
const apiKey = process.env.IMAGE_GEN_OAI_API_KEY ?? '';
@@ -285,46 +219,7 @@ Error Message: ${error.message}`);
];
return [response, { content, file_ids }];
},
{
name: 'image_gen_oai',
description: getImageGenDescription(),
schema: z.object({
prompt: z.string().max(32000).describe(getImageGenPromptDescription()),
background: z
.enum(['transparent', 'opaque', 'auto'])
.optional()
.describe(
'Sets transparency for the background. Must be one of transparent, opaque or auto (default). When transparent, the output format should be png or webp.',
),
/*
n: z
.number()
.int()
.min(1)
.max(10)
.optional()
.describe('The number of images to generate. Must be between 1 and 10.'),
output_compression: z
.number()
.int()
.min(0)
.max(100)
.optional()
.describe('The compression level (0-100%) for webp or jpeg formats. Defaults to 100.'),
*/
quality: z
.enum(['auto', 'high', 'medium', 'low'])
.optional()
.describe('The quality of the image. One of auto (default), high, medium, or low.'),
size: z
.enum(['auto', '1024x1024', '1536x1024', '1024x1536'])
.optional()
.describe(
'The size of the generated image. One of 1024x1024, 1536x1024 (landscape), 1024x1536 (portrait), or auto (default).',
),
}),
responseFormat: 'content_and_artifact',
},
oaiToolkit.image_gen_oai,
);
/**
@@ -454,16 +349,7 @@ Error Message: ${error.message}`);
};
if (process.env.PROXY) {
try {
const url = new URL(process.env.PROXY);
axiosConfig.proxy = {
host: url.hostname.replace(/^\[|\]$/g, ''),
port: url.port ? parseInt(url.port, 10) : undefined,
protocol: url.protocol.replace(':', ''),
};
} catch (error) {
logger.error('Error parsing proxy URL:', error);
}
axiosConfig.httpsAgent = new HttpsProxyAgent(process.env.PROXY);
}
if (process.env.IMAGE_GEN_OAI_AZURE_API_VERSION && process.env.IMAGE_GEN_OAI_BASEURL) {
@@ -517,48 +403,7 @@ Error Message: ${error.message || 'Unknown error'}`);
}
}
},
{
name: 'image_edit_oai',
description: getImageEditDescription(),
schema: z.object({
image_ids: z
.array(z.string())
.min(1)
.describe(
`
IDs (image ID strings) of previously generated or uploaded images that should guide the edit.
Guidelines:
- If the user's request depends on any prior image(s), copy their image IDs into the \`image_ids\` array (in the same order the user refers to them).
- Never invent or hallucinate IDs; only use IDs that are still visible in the conversation context.
- If no earlier image is relevant, omit the field entirely.
`.trim(),
),
prompt: z.string().max(32000).describe(getImageEditPromptDescription()),
/*
n: z
.number()
.int()
.min(1)
.max(10)
.optional()
.describe('The number of images to generate. Must be between 1 and 10. Defaults to 1.'),
*/
quality: z
.enum(['auto', 'high', 'medium', 'low'])
.optional()
.describe(
'The quality of the image. One of auto (default), high, medium, or low. High/medium/low only supported for gpt-image-1.',
),
size: z
.enum(['auto', '1024x1024', '1536x1024', '1024x1536', '256x256', '512x512'])
.optional()
.describe(
'The size of the generated images. For gpt-image-1: auto (default), 1024x1024, 1536x1024, 1024x1536. For dall-e-2: 256x256, 512x512, 1024x1024.',
),
}),
responseFormat: 'content_and_artifact',
},
oaiToolkit.image_edit_oai,
);
return [imageGenTool, imageEditTool];

View File

@@ -6,19 +6,19 @@ const axios = require('axios');
const sharp = require('sharp');
const { v4: uuidv4 } = require('uuid');
const { Tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { FileContext, ContentTypes } = require('librechat-data-provider');
const paths = require('~/config/paths');
const { logger } = require('~/config');
const displayMessage =
'Stable Diffusion displayed an image. All generated images are already plainly visible, so don\'t repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.';
"Stable Diffusion displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";
class StableDiffusionAPI extends Tool {
constructor(fields) {
super();
/** @type {string} User ID */
this.userId = fields.userId;
/** @type {Express.Request | undefined} Express Request object, only provided by ToolService */
/** @type {ServerRequest | undefined} Express Request object, only provided by ToolService */
this.req = fields.req;
/** @type {boolean} Used to initialize the Tool without necessary variables. */
this.override = fields.override ?? false;
@@ -44,7 +44,7 @@ class StableDiffusionAPI extends Tool {
// "negative_prompt":"semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, out of frame, low quality, ugly, mutation, deformed"
// - Generate images only once per human query unless explicitly requested by the user`;
this.description =
'You can generate images using text with \'stable-diffusion\'. This tool is exclusively for visual content.';
"You can generate images using text with 'stable-diffusion'. This tool is exclusively for visual content.";
this.schema = z.object({
prompt: z
.string()

View File

@@ -1,7 +1,7 @@
const { z } = require('zod');
const { Tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { getEnvironmentVariable } = require('@langchain/core/utils/env');
const { logger } = require('~/config');
/**
* Tool for the Traversaal AI search API, Ares.
@@ -21,7 +21,7 @@ class TraversaalSearch extends Tool {
query: z
.string()
.describe(
'A properly written sentence to be interpreted by an AI to search the web according to the user\'s request.',
"A properly written sentence to be interpreted by an AI to search the web according to the user's request.",
),
});
@@ -38,7 +38,6 @@ class TraversaalSearch extends Tool {
return apiKey;
}
// eslint-disable-next-line no-unused-vars
async _call({ query }, _runManager) {
const body = {
query: [query],

View File

@@ -1,8 +1,8 @@
/* eslint-disable no-useless-escape */
const axios = require('axios');
const { z } = require('zod');
const axios = require('axios');
const { Tool } = require('@langchain/core/tools');
const { logger } = require('~/config');
const { logger } = require('@librechat/data-schemas');
class WolframAlphaAPI extends Tool {
constructor(fields) {

View File

@@ -1,9 +1,9 @@
const { z } = require('zod');
const { ytToolkit } = require('@librechat/api');
const { tool } = require('@langchain/core/tools');
const { youtube } = require('@googleapis/youtube');
const { logger } = require('@librechat/data-schemas');
const { YoutubeTranscript } = require('youtube-transcript');
const { getApiKey } = require('./credentials');
const { logger } = require('~/config');
function extractVideoId(url) {
const rawIdRegex = /^[a-zA-Z0-9_-]{11}$/;
@@ -29,7 +29,7 @@ function parseTranscript(transcriptResponse) {
.map((entry) => entry.text.trim())
.filter((text) => text)
.join(' ')
.replaceAll('&amp;#39;', '\'');
.replaceAll('&amp;#39;', "'");
}
function createYouTubeTools(fields = {}) {
@@ -42,160 +42,94 @@ function createYouTubeTools(fields = {}) {
auth: apiKey,
});
const searchTool = tool(
async ({ query, maxResults = 5 }) => {
const response = await youtubeClient.search.list({
part: 'snippet',
q: query,
type: 'video',
maxResults: maxResults || 5,
});
const result = response.data.items.map((item) => ({
title: item.snippet.title,
description: item.snippet.description,
url: `https://www.youtube.com/watch?v=${item.id.videoId}`,
}));
return JSON.stringify(result, null, 2);
},
{
name: 'youtube_search',
description: `Search for YouTube videos by keyword or phrase.
- Required: query (search terms to find videos)
- Optional: maxResults (number of videos to return, 1-50, default: 5)
- Returns: List of videos with titles, descriptions, and URLs
- Use for: Finding specific videos, exploring content, research
Example: query="cooking pasta tutorials" maxResults=3`,
schema: z.object({
query: z.string().describe('Search query terms'),
maxResults: z.number().int().min(1).max(50).optional().describe('Number of results (1-50)'),
}),
},
);
const searchTool = tool(async ({ query, maxResults = 5 }) => {
const response = await youtubeClient.search.list({
part: 'snippet',
q: query,
type: 'video',
maxResults: maxResults || 5,
});
const result = response.data.items.map((item) => ({
title: item.snippet.title,
description: item.snippet.description,
url: `https://www.youtube.com/watch?v=${item.id.videoId}`,
}));
return JSON.stringify(result, null, 2);
}, ytToolkit.youtube_search);
const infoTool = tool(
async ({ url }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
const infoTool = tool(async ({ url }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
const response = await youtubeClient.videos.list({
part: 'snippet,statistics',
id: videoId,
});
const response = await youtubeClient.videos.list({
part: 'snippet,statistics',
id: videoId,
});
if (!response.data.items?.length) {
throw new Error('Video not found');
}
const video = response.data.items[0];
if (!response.data.items?.length) {
throw new Error('Video not found');
}
const video = response.data.items[0];
const result = {
title: video.snippet.title,
description: video.snippet.description,
views: video.statistics.viewCount,
likes: video.statistics.likeCount,
comments: video.statistics.commentCount,
};
return JSON.stringify(result, null, 2);
},
{
name: 'youtube_info',
description: `Get detailed metadata and statistics for a specific YouTube video.
- Required: url (full YouTube URL or video ID)
- Returns: Video title, description, view count, like count, comment count
- Use for: Getting video metrics and basic metadata
- DO NOT USE FOR VIDEO SUMMARIES, USE TRANSCRIPTS FOR COMPREHENSIVE ANALYSIS
- Accepts both full URLs and video IDs
Example: url="https://youtube.com/watch?v=abc123" or url="abc123"`,
schema: z.object({
url: z.string().describe('YouTube video URL or ID'),
}),
},
);
const result = {
title: video.snippet.title,
description: video.snippet.description,
views: video.statistics.viewCount,
likes: video.statistics.likeCount,
comments: video.statistics.commentCount,
};
return JSON.stringify(result, null, 2);
}, ytToolkit.youtube_info);
const commentsTool = tool(
async ({ url, maxResults = 10 }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
const commentsTool = tool(async ({ url, maxResults = 10 }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
const response = await youtubeClient.commentThreads.list({
part: 'snippet',
videoId,
maxResults: maxResults || 10,
});
const response = await youtubeClient.commentThreads.list({
part: 'snippet',
videoId,
maxResults: maxResults || 10,
});
const result = response.data.items.map((item) => ({
author: item.snippet.topLevelComment.snippet.authorDisplayName,
text: item.snippet.topLevelComment.snippet.textDisplay,
likes: item.snippet.topLevelComment.snippet.likeCount,
}));
return JSON.stringify(result, null, 2);
},
{
name: 'youtube_comments',
description: `Retrieve top-level comments from a YouTube video.
- Required: url (full YouTube URL or video ID)
- Optional: maxResults (number of comments, 1-50, default: 10)
- Returns: Comment text, author names, like counts
- Use for: Sentiment analysis, audience feedback, engagement review
Example: url="abc123" maxResults=20`,
schema: z.object({
url: z.string().describe('YouTube video URL or ID'),
maxResults: z
.number()
.int()
.min(1)
.max(50)
.optional()
.describe('Number of comments to retrieve'),
}),
},
);
const result = response.data.items.map((item) => ({
author: item.snippet.topLevelComment.snippet.authorDisplayName,
text: item.snippet.topLevelComment.snippet.textDisplay,
likes: item.snippet.topLevelComment.snippet.likeCount,
}));
return JSON.stringify(result, null, 2);
}, ytToolkit.youtube_comments);
const transcriptTool = tool(
async ({ url }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
const transcriptTool = tool(async ({ url }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
try {
try {
const transcript = await YoutubeTranscript.fetchTranscript(videoId, { lang: 'en' });
return parseTranscript(transcript);
} catch (e) {
logger.error(e);
}
try {
try {
const transcript = await YoutubeTranscript.fetchTranscript(videoId, { lang: 'en' });
return parseTranscript(transcript);
} catch (e) {
logger.error(e);
}
try {
const transcript = await YoutubeTranscript.fetchTranscript(videoId, { lang: 'de' });
return parseTranscript(transcript);
} catch (e) {
logger.error(e);
}
const transcript = await YoutubeTranscript.fetchTranscript(videoId);
const transcript = await YoutubeTranscript.fetchTranscript(videoId, { lang: 'de' });
return parseTranscript(transcript);
} catch (error) {
throw new Error(`Failed to fetch transcript: ${error.message}`);
} catch (e) {
logger.error(e);
}
},
{
name: 'youtube_transcript',
description: `Fetch and parse the transcript/captions of a YouTube video.
- Required: url (full YouTube URL or video ID)
- Returns: Full video transcript as plain text
- Use for: Content analysis, summarization, translation reference
- This is the "Go-to" tool for analyzing actual video content
- Attempts to fetch English first, then German, then any available language
Example: url="https://youtube.com/watch?v=abc123"`,
schema: z.object({
url: z.string().describe('YouTube video URL or ID'),
}),
},
);
const transcript = await YoutubeTranscript.fetchTranscript(videoId);
return parseTranscript(transcript);
} catch (error) {
throw new Error(`Failed to fetch transcript: ${error.message}`);
}
}, ytToolkit.youtube_transcript);
return [searchTool, infoTool, commentsTool, transcriptTool];
}
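The refactor above moves each tool's metadata (name, description, zod schema) out of this file and into ytToolkit from @librechat/api, leaving only the handler logic inline. Purely as an assumption about its shape, based on the inline definition removed above, a ytToolkit entry presumably looks roughly like this:

const { z } = require('zod');

// Hypothetical sketch of a toolkit definition; the real object lives in
// @librechat/api and may differ in wording or fields.
const youtube_search = {
  name: 'youtube_search',
  description: 'Search for YouTube videos by keyword or phrase.',
  schema: z.object({
    query: z.string().describe('Search query terms'),
    maxResults: z.number().int().min(1).max(50).optional().describe('Number of results (1-50)'),
  }),
};

module.exports = { youtube_search };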

View File

@@ -1,43 +1,9 @@
const DALLE3 = require('../DALLE3');
const { ProxyAgent } = require('undici');
jest.mock('tiktoken');
const processFileURL = jest.fn();
jest.mock('~/server/services/Files/images', () => ({
getImageBasename: jest.fn().mockImplementation((url) => {
const parts = url.split('/');
const lastPart = parts.pop();
const imageExtensionRegex = /\.(jpg|jpeg|png|gif|bmp|tiff|svg)$/i;
if (imageExtensionRegex.test(lastPart)) {
return lastPart;
}
return '';
}),
}));
jest.mock('fs', () => {
return {
existsSync: jest.fn(),
mkdirSync: jest.fn(),
promises: {
writeFile: jest.fn(),
readFile: jest.fn(),
unlink: jest.fn(),
},
};
});
jest.mock('path', () => {
return {
resolve: jest.fn(),
join: jest.fn(),
relative: jest.fn(),
extname: jest.fn().mockImplementation((filename) => {
return filename.slice(filename.lastIndexOf('.'));
}),
};
});
describe('DALLE3 Proxy Configuration', () => {
let originalEnv;

View File

@@ -1,9 +1,8 @@
const OpenAI = require('openai');
const { logger } = require('@librechat/data-schemas');
const DALLE3 = require('../DALLE3');
const logger = require('~/config/winston');
jest.mock('openai');
jest.mock('@librechat/data-schemas', () => {
return {
logger: {
@@ -26,25 +25,6 @@ jest.mock('tiktoken', () => {
const processFileURL = jest.fn();
jest.mock('~/server/services/Files/images', () => ({
getImageBasename: jest.fn().mockImplementation((url) => {
// Split the URL by '/'
const parts = url.split('/');
// Get the last part of the URL
const lastPart = parts.pop();
// Check if the last part of the URL matches the image extension regex
const imageExtensionRegex = /\.(jpg|jpeg|png|gif|bmp|tiff|svg)$/i;
if (imageExtensionRegex.test(lastPart)) {
return lastPart;
}
// If the regex test fails, return an empty string
return '';
}),
}));
const generate = jest.fn();
OpenAI.mockImplementation(() => ({
images: {

View File

@@ -2,9 +2,9 @@ const { z } = require('zod');
const axios = require('axios');
const { tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { generateShortLivedToken } = require('@librechat/api');
const { Tools, EToolResources } = require('librechat-data-provider');
const { filterFilesByAgentAccess } = require('~/server/services/Files/permissions');
const { generateShortLivedToken } = require('~/server/services/AuthService');
const { getFiles } = require('~/models/File');
/**
@@ -68,18 +68,19 @@ const primeFiles = async (options) => {
/**
*
* @param {Object} options
* @param {ServerRequest} options.req
* @param {string} options.userId
* @param {Array<{ file_id: string; filename: string }>} options.files
* @param {string} [options.entity_id]
* @param {boolean} [options.fileCitations=false] - Whether to include citation instructions
* @returns
*/
const createFileSearchTool = async ({ req, files, entity_id }) => {
const createFileSearchTool = async ({ userId, files, entity_id, fileCitations = false }) => {
return tool(
async ({ query }) => {
if (files.length === 0) {
return 'No files to search. Instruct the user to add files for the search.';
}
const jwtToken = generateShortLivedToken(req.user.id);
const jwtToken = generateShortLivedToken(userId);
if (!jwtToken) {
return 'There was an error authenticating the file search request.';
}
@@ -142,9 +143,9 @@ const createFileSearchTool = async ({ req, files, entity_id }) => {
const formattedString = formattedResults
.map(
(result, index) =>
`File: ${result.filename}\nAnchor: \\ue202turn0file${index} (${result.filename})\nRelevance: ${(1.0 - result.distance).toFixed(4)}\nContent: ${
result.content
}\n`,
`File: ${result.filename}${
fileCitations ? `\nAnchor: \\ue202turn0file${index} (${result.filename})` : ''
}\nRelevance: ${(1.0 - result.distance).toFixed(4)}\nContent: ${result.content}\n`,
)
.join('\n---\n');
@@ -158,12 +159,14 @@ const createFileSearchTool = async ({ req, files, entity_id }) => {
pageRelevance: result.page ? { [result.page]: 1.0 - result.distance } : {},
}));
return [formattedString, { [Tools.file_search]: { sources } }];
return [formattedString, { [Tools.file_search]: { sources, fileCitations } }];
},
{
name: Tools.file_search,
responseFormat: 'content_and_artifact',
description: `Performs semantic search across attached "${Tools.file_search}" documents using natural language queries. This tool analyzes the content of uploaded files to find relevant information, quotes, and passages that best match your query. Use this to extract specific information or find relevant sections within the available documents.
description: `Performs semantic search across attached "${Tools.file_search}" documents using natural language queries. This tool analyzes the content of uploaded files to find relevant information, quotes, and passages that best match your query. Use this to extract specific information or find relevant sections within the available documents.${
fileCitations
? `
**CITE FILE SEARCH RESULTS:**
Use anchor markers immediately after statements derived from file content. Reference the filename in your text:
@@ -171,7 +174,9 @@ Use anchor markers immediately after statements derived from file content. Refer
- Page reference: "According to report.docx... \\ue202turn0file1"
- Multi-file: "Multiple sources confirm... \\ue200\\ue202turn0file0\\ue202turn0file1\\ue201"
**ALWAYS mention the filename in your text before the citation marker. NEVER use markdown links or footnotes.**`,
**ALWAYS mention the filename in your text before the citation marker. NEVER use markdown links or footnotes.**`
: ''
}`,
schema: z.object({
query: z
.string()
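For illustration, when fileCitations is enabled the per-result template above renders entries like the following (hypothetical filename, score, and content), which is what the added Anchor line and the citation instructions in the tool description refer to:

File: report.docx
Anchor: \ue202turn0file0 (report.docx)
Relevance: 0.8421
Content: ...matched passage from the document...

With fileCitations disabled, the Anchor line and the citation instructions are omitted, and only the filename, relevance score, and content are returned.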

View File

@@ -1,5 +1,5 @@
const OpenAI = require('openai');
const { logger } = require('~/config');
const { logger } = require('@librechat/data-schemas');
/**
* Handles errors that may occur when making requests to OpenAI's API.

View File

@@ -1,9 +1,21 @@
const { logger } = require('@librechat/data-schemas');
const { SerpAPI } = require('@langchain/community/tools/serpapi');
const { Calculator } = require('@langchain/community/tools/calculator');
const { mcpToolPattern, loadWebSearchAuth } = require('@librechat/api');
const { EnvVar, createCodeExecutionTool, createSearchTool } = require('@librechat/agents');
const { Tools, EToolResources, replaceSpecialVars } = require('librechat-data-provider');
const {
checkAccess,
createSafeUser,
mcpToolPattern,
loadWebSearchAuth,
} = require('@librechat/api');
const {
Tools,
Constants,
Permissions,
EToolResources,
PermissionTypes,
replaceSpecialVars,
} = require('librechat-data-provider');
const {
availableTools,
manifestToolMap,
@@ -24,9 +36,10 @@ const {
const { primeFiles: primeCodeFiles } = require('~/server/services/Files/Code/process');
const { createFileSearchTool, primeFiles: primeSearchFiles } = require('./fileSearch');
const { getUserPluginAuthValue } = require('~/server/services/PluginService');
const { createMCPTool, createMCPTools } = require('~/server/services/MCP');
const { loadAuthValues } = require('~/server/services/Tools/credentials');
const { getCachedTools } = require('~/server/services/Config');
const { createMCPTool } = require('~/server/services/MCP');
const { getMCPServerTools } = require('~/server/services/Config');
const { getRoleByName } = require('~/models/Role');
/**
* Validates the availability and authentication of tools for a user based on environment variables or user-specific plugin authentication values.
@@ -121,27 +134,37 @@ const getAuthFields = (toolKey) => {
/**
*
* @param {object} object
* @param {string} object.user
* @param {Pick<Agent, 'id' | 'provider' | 'model'>} [object.agent]
* @param {string} [object.model]
* @param {EModelEndpoint} [object.endpoint]
* @param {LoadToolOptions} [object.options]
* @param {boolean} [object.useSpecs]
* @param {Array<string>} object.tools
* @param {boolean} [object.functions]
* @param {boolean} [object.returnMap]
* @param {object} params
* @param {string} params.user
* @param {Record<string, Record<string, string>>} [object.userMCPAuthMap]
* @param {AbortSignal} [object.signal]
* @param {Pick<Agent, 'id' | 'provider' | 'model'>} [params.agent]
* @param {string} [params.model]
* @param {EModelEndpoint} [params.endpoint]
* @param {LoadToolOptions} [params.options]
* @param {boolean} [params.useSpecs]
* @param {Array<string>} params.tools
* @param {boolean} [params.functions]
* @param {boolean} [params.returnMap]
* @param {AppConfig['webSearch']} [params.webSearch]
* @param {AppConfig['fileStrategy']} [params.fileStrategy]
* @param {AppConfig['imageOutputType']} [params.imageOutputType]
* @returns {Promise<{ loadedTools: Tool[], toolContextMap: Object<string, any> } | Record<string,Tool>>}
*/
const loadTools = async ({
user,
agent,
model,
signal,
endpoint,
userMCPAuthMap,
tools = [],
options = {},
functions = true,
returnMap = false,
webSearch,
fileStrategy,
imageOutputType,
}) => {
const toolConstructors = {
flux: FluxAPI,
@@ -200,6 +223,8 @@ const loadTools = async ({
...authValues,
isAgent: !!agent,
req: options.req,
imageOutputType,
fileStrategy,
imageFiles,
});
},
@@ -215,7 +240,7 @@ const loadTools = async ({
const imageGenOptions = {
isAgent: !!agent,
req: options.req,
fileStrategy: options.fileStrategy,
fileStrategy,
processFileURL: options.processFileURL,
returnMetadata: options.returnMetadata,
uploadImageBuffer: options.uploadImageBuffer,
@@ -230,7 +255,7 @@ const loadTools = async ({
/** @type {Record<string, string>} */
const toolContextMap = {};
const cachedTools = (await getCachedTools({ userId: user, includeGlobal: true })) ?? {};
const requestedMCPTools = {};
for (const tool of tools) {
if (tool === Tools.execute_code) {
@@ -268,15 +293,36 @@ const loadTools = async ({
if (toolContext) {
toolContextMap[tool] = toolContext;
}
return createFileSearchTool({ req: options.req, files, entity_id: agent?.id });
/** @type {boolean | undefined} Check if user has FILE_CITATIONS permission */
let fileCitations;
if (fileCitations == null && options.req?.user != null) {
try {
fileCitations = await checkAccess({
user: options.req.user,
permissionType: PermissionTypes.FILE_CITATIONS,
permissions: [Permissions.USE],
getRoleByName,
});
} catch (error) {
logger.error('[handleTools] FILE_CITATIONS permission check failed:', error);
fileCitations = false;
}
}
return createFileSearchTool({
userId: user,
files,
entity_id: agent?.id,
fileCitations,
});
};
continue;
} else if (tool === Tools.web_search) {
const webSearchConfig = options?.req?.app?.locals?.webSearch;
const result = await loadWebSearchAuth({
userId: user,
loadAuthValues,
webSearchConfig,
webSearchConfig: webSearch,
});
const { onSearchResults, onGetHighlights } = options?.[Tools.web_search] ?? {};
requestedTools[tool] = async () => {
@@ -298,15 +344,34 @@ Current Date & Time: ${replaceSpecialVars({ text: '{{iso_datetime}}' })}
});
};
continue;
} else if (tool && cachedTools && mcpToolPattern.test(tool)) {
requestedTools[tool] = async () =>
createMCPTool({
req: options.req,
res: options.res,
toolKey: tool,
model: agent?.model ?? model,
provider: agent?.provider ?? endpoint,
});
} else if (tool && mcpToolPattern.test(tool)) {
const [toolName, serverName] = tool.split(Constants.mcp_delimiter);
if (toolName === Constants.mcp_server) {
/** Placeholder used for UI purposes */
continue;
}
if (serverName && options.req?.config?.mcpConfig?.[serverName] == null) {
logger.warn(
`MCP server "${serverName}" for "${toolName}" tool is not configured${agent?.id != null && agent.id ? ` but attached to "${agent.id}"` : ''}`,
);
continue;
}
if (toolName === Constants.mcp_all) {
requestedMCPTools[serverName] = [
{
type: 'all',
serverName,
},
];
continue;
}
requestedMCPTools[serverName] = requestedMCPTools[serverName] || [];
requestedMCPTools[serverName].push({
type: 'single',
toolKey: tool,
serverName,
});
continue;
}
@@ -346,6 +411,75 @@ Current Date & Time: ${replaceSpecialVars({ text: '{{iso_datetime}}' })}
}
const loadedTools = (await Promise.all(toolPromises)).flatMap((plugin) => plugin || []);
const mcpToolPromises = [];
/** MCP server tools are initialized sequentially by server */
let index = -1;
const failedMCPServers = new Set();
const safeUser = createSafeUser(options.req?.user);
for (const [serverName, toolConfigs] of Object.entries(requestedMCPTools)) {
index++;
/** @type {LCAvailableTools} */
let availableTools;
for (const config of toolConfigs) {
try {
if (failedMCPServers.has(serverName)) {
continue;
}
const mcpParams = {
index,
signal,
user: safeUser,
userMCPAuthMap,
res: options.res,
model: agent?.model ?? model,
serverName: config.serverName,
provider: agent?.provider ?? endpoint,
};
if (config.type === 'all' && toolConfigs.length === 1) {
/** Handle async loading for single 'all' tool config */
mcpToolPromises.push(
createMCPTools(mcpParams).catch((error) => {
logger.error(`Error loading ${serverName} tools:`, error);
return null;
}),
);
continue;
}
if (!availableTools) {
try {
availableTools = await getMCPServerTools(safeUser.id, serverName);
} catch (error) {
logger.error(`Error fetching available tools for MCP server ${serverName}:`, error);
}
}
/** Handle synchronous loading */
const mcpTool =
config.type === 'all'
? await createMCPTools(mcpParams)
: await createMCPTool({
...mcpParams,
availableTools,
toolKey: config.toolKey,
});
if (Array.isArray(mcpTool)) {
loadedTools.push(...mcpTool);
} else if (mcpTool) {
loadedTools.push(mcpTool);
} else {
failedMCPServers.add(serverName);
logger.warn(
`MCP tool creation failed for "${config.toolKey}", server may be unavailable or unauthenticated.`,
);
}
} catch (error) {
logger.error(`Error loading MCP tool for server ${serverName}:`, error);
}
}
}
loadedTools.push(...(await Promise.all(mcpToolPromises)).flatMap((plugin) => plugin || []));
return { loadedTools, toolContextMap };
};

View File

@@ -9,7 +9,27 @@ const mockPluginService = {
jest.mock('~/server/services/PluginService', () => mockPluginService);
const { BaseLLM } = require('@langchain/openai');
jest.mock('~/server/services/Config', () => ({
getAppConfig: jest.fn().mockResolvedValue({
// Default app config for tool tests
paths: { uploads: '/tmp' },
fileStrategy: 'local',
filteredTools: [],
includedTools: [],
}),
getCachedTools: jest.fn().mockResolvedValue({
// Default cached tools for tests
dalle: {
type: 'function',
function: {
name: 'dalle',
description: 'DALL-E image generation',
parameters: {},
},
},
}),
}));
const { Calculator } = require('@langchain/community/tools/calculator');
const { User } = require('~/db/models');
@@ -151,7 +171,6 @@ describe('Tool Handlers', () => {
beforeAll(async () => {
const toolMap = await loadTools({
user: fakeUser._id,
model: BaseLLM,
tools: sampleTools,
returnMap: true,
useSpecs: true,
@@ -245,7 +264,6 @@ describe('Tool Handlers', () => {
it('returns an empty object when no tools are requested', async () => {
toolFunctions = await loadTools({
user: fakeUser._id,
model: BaseLLM,
returnMap: true,
useSpecs: true,
});
@@ -255,7 +273,6 @@ describe('Tool Handlers', () => {
process.env.SD_WEBUI_URL = mockCredential;
toolFunctions = await loadTools({
user: fakeUser._id,
model: BaseLLM,
tools: ['stable-diffusion'],
functions: true,
returnMap: true,

View File

@@ -1,157 +0,0 @@
const fs = require('fs');
describe('cacheConfig', () => {
let originalEnv;
let originalReadFileSync;
beforeEach(() => {
originalEnv = { ...process.env };
originalReadFileSync = fs.readFileSync;
// Clear all related env vars first
delete process.env.REDIS_URI;
delete process.env.REDIS_CA;
delete process.env.REDIS_KEY_PREFIX_VAR;
delete process.env.REDIS_KEY_PREFIX;
delete process.env.USE_REDIS;
delete process.env.REDIS_PING_INTERVAL;
delete process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES;
// Clear require cache
jest.resetModules();
});
afterEach(() => {
process.env = originalEnv;
fs.readFileSync = originalReadFileSync;
jest.resetModules();
});
describe('REDIS_KEY_PREFIX validation and resolution', () => {
test('should throw error when both REDIS_KEY_PREFIX_VAR and REDIS_KEY_PREFIX are set', () => {
process.env.REDIS_KEY_PREFIX_VAR = 'DEPLOYMENT_ID';
process.env.REDIS_KEY_PREFIX = 'manual-prefix';
expect(() => {
require('./cacheConfig');
}).toThrow('Only either REDIS_KEY_PREFIX_VAR or REDIS_KEY_PREFIX can be set.');
});
test('should resolve REDIS_KEY_PREFIX from variable reference', () => {
process.env.REDIS_KEY_PREFIX_VAR = 'DEPLOYMENT_ID';
process.env.DEPLOYMENT_ID = 'test-deployment-123';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_KEY_PREFIX).toBe('test-deployment-123');
});
test('should use direct REDIS_KEY_PREFIX value', () => {
process.env.REDIS_KEY_PREFIX = 'direct-prefix';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_KEY_PREFIX).toBe('direct-prefix');
});
test('should default to empty string when no prefix is configured', () => {
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_KEY_PREFIX).toBe('');
});
test('should handle empty variable reference', () => {
process.env.REDIS_KEY_PREFIX_VAR = 'EMPTY_VAR';
process.env.EMPTY_VAR = '';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_KEY_PREFIX).toBe('');
});
test('should handle undefined variable reference', () => {
process.env.REDIS_KEY_PREFIX_VAR = 'UNDEFINED_VAR';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_KEY_PREFIX).toBe('');
});
});
describe('USE_REDIS and REDIS_URI validation', () => {
test('should throw error when USE_REDIS is enabled but REDIS_URI is not set', () => {
process.env.USE_REDIS = 'true';
expect(() => {
require('./cacheConfig');
}).toThrow('USE_REDIS is enabled but REDIS_URI is not set.');
});
test('should not throw error when USE_REDIS is enabled and REDIS_URI is set', () => {
process.env.USE_REDIS = 'true';
process.env.REDIS_URI = 'redis://localhost:6379';
expect(() => {
require('./cacheConfig');
}).not.toThrow();
});
test('should handle empty REDIS_URI when USE_REDIS is enabled', () => {
process.env.USE_REDIS = 'true';
process.env.REDIS_URI = '';
expect(() => {
require('./cacheConfig');
}).toThrow('USE_REDIS is enabled but REDIS_URI is not set.');
});
});
describe('REDIS_CA file reading', () => {
test('should be null when REDIS_CA is not set', () => {
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_CA).toBeNull();
});
});
describe('REDIS_PING_INTERVAL configuration', () => {
test('should default to 0 when REDIS_PING_INTERVAL is not set', () => {
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_PING_INTERVAL).toBe(0);
});
test('should use provided REDIS_PING_INTERVAL value', () => {
process.env.REDIS_PING_INTERVAL = '300';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_PING_INTERVAL).toBe(300);
});
});
describe('FORCED_IN_MEMORY_CACHE_NAMESPACES validation', () => {
test('should parse comma-separated cache keys correctly', () => {
process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES = ' ROLES, STATIC_CONFIG ,MESSAGES ';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES).toEqual([
'ROLES',
'STATIC_CONFIG',
'MESSAGES',
]);
});
test('should throw error for invalid cache keys', () => {
process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES = 'INVALID_KEY,ROLES';
expect(() => {
require('./cacheConfig');
}).toThrow('Invalid cache keys in FORCED_IN_MEMORY_CACHE_NAMESPACES: INVALID_KEY');
});
test('should handle empty string gracefully', () => {
process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES = '';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES).toEqual([]);
});
test('should handle undefined env var gracefully', () => {
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES).toEqual([]);
});
});
});

View File

@@ -1,108 +0,0 @@
const KeyvRedis = require('@keyv/redis').default;
const { Keyv } = require('keyv');
const { RedisStore } = require('rate-limit-redis');
const { Time } = require('librechat-data-provider');
const { logger } = require('@librechat/data-schemas');
const { RedisStore: ConnectRedis } = require('connect-redis');
const MemoryStore = require('memorystore')(require('express-session'));
const { keyvRedisClient, ioredisClient, GLOBAL_PREFIX_SEPARATOR } = require('./redisClients');
const { cacheConfig } = require('./cacheConfig');
const { violationFile } = require('./keyvFiles');
/**
* Creates a cache instance using Redis or a fallback store. Suitable for general caching needs.
* @param {string} namespace - The cache namespace.
* @param {number} [ttl] - Time to live for cache entries.
* @param {object} [fallbackStore] - Optional fallback store if Redis is not used.
* @returns {Keyv} Cache instance.
*/
const standardCache = (namespace, ttl = undefined, fallbackStore = undefined) => {
if (
cacheConfig.USE_REDIS &&
!cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES?.includes(namespace)
) {
try {
const keyvRedis = new KeyvRedis(keyvRedisClient);
const cache = new Keyv(keyvRedis, { namespace, ttl });
keyvRedis.namespace = cacheConfig.REDIS_KEY_PREFIX;
keyvRedis.keyPrefixSeparator = GLOBAL_PREFIX_SEPARATOR;
cache.on('error', (err) => {
logger.error(`Cache error in namespace ${namespace}:`, err);
});
return cache;
} catch (err) {
logger.error(`Failed to create Redis cache for namespace ${namespace}:`, err);
throw err;
}
}
if (fallbackStore) return new Keyv({ store: fallbackStore, namespace, ttl });
return new Keyv({ namespace, ttl });
};
/**
* Creates a cache instance for storing violation data.
* Uses a file-based fallback store if Redis is not enabled.
* @param {string} namespace - The cache namespace for violations.
* @param {number} [ttl] - Time to live for cache entries.
* @returns {Keyv} Cache instance for violations.
*/
const violationCache = (namespace, ttl = undefined) => {
return standardCache(`violations:${namespace}`, ttl, violationFile);
};
/**
* Creates a session cache instance using Redis or in-memory store.
* @param {string} namespace - The session namespace.
* @param {number} [ttl] - Time to live for session entries.
* @returns {MemoryStore | ConnectRedis} Session store instance.
*/
const sessionCache = (namespace, ttl = undefined) => {
namespace = namespace.endsWith(':') ? namespace : `${namespace}:`;
if (!cacheConfig.USE_REDIS) return new MemoryStore({ ttl, checkPeriod: Time.ONE_DAY });
const store = new ConnectRedis({ client: ioredisClient, ttl, prefix: namespace });
if (ioredisClient) {
ioredisClient.on('error', (err) => {
logger.error(`Session store Redis error for namespace ${namespace}:`, err);
});
}
return store;
};
/**
* Creates a rate limiter cache using Redis.
* @param {string} prefix - The key prefix for rate limiting.
* @returns {RedisStore|undefined} RedisStore instance or undefined if Redis is not used.
*/
const limiterCache = (prefix) => {
if (!prefix) throw new Error('prefix is required');
if (!cacheConfig.USE_REDIS) return undefined;
prefix = prefix.endsWith(':') ? prefix : `${prefix}:`;
try {
if (!ioredisClient) {
logger.warn(`Redis client not available for rate limiter with prefix ${prefix}`);
return undefined;
}
return new RedisStore({ sendCommand, prefix });
} catch (err) {
logger.error(`Failed to create Redis rate limiter for prefix ${prefix}:`, err);
return undefined;
}
};
const sendCommand = (...args) => {
if (!ioredisClient) {
logger.warn('Redis client not available for command execution');
return Promise.reject(new Error('Redis client not available'));
}
return ioredisClient.call(...args).catch((err) => {
logger.error('Redis command execution failed:', err);
throw err;
});
};
module.exports = { standardCache, sessionCache, violationCache, limiterCache };
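This factory file is removed here; per the getLogStores.js diff further below, equivalent helpers are now imported from @librechat/api. A minimal usage sketch under that assumption (namespaces and TTLs are illustrative):

// Illustrative consumers of the cache factory helpers after the move
// (import names match the updated getLogStores.js shown further below).
const { standardCache, violationCache } = require('@librechat/api');
const { Time, CacheKeys } = require('librechat-data-provider');

// General-purpose cache: Redis-backed when USE_REDIS is enabled,
// plain in-memory Keyv otherwise.
const rolesCache = standardCache(CacheKeys.ROLES, Time.ONE_DAY);

// Violation cache: same factory, but namespaced under "violations:" and
// backed by the file-based store when Redis is disabled.
const banViolations = violationCache('ban', Time.ONE_DAY);

// Values are read and written through the regular Keyv API.
async function example() {
  await rolesCache.set('ADMIN', { permissions: {} });
  return rolesCache.get('ADMIN');
}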

View File

@@ -1,432 +0,0 @@
const { Time } = require('librechat-data-provider');
// Mock dependencies first
const mockKeyvRedis = {
namespace: '',
keyPrefixSeparator: '',
};
const mockKeyv = jest.fn().mockReturnValue({
mock: 'keyv',
on: jest.fn(),
});
const mockConnectRedis = jest.fn().mockReturnValue({ mock: 'connectRedis' });
const mockMemoryStore = jest.fn().mockReturnValue({ mock: 'memoryStore' });
const mockRedisStore = jest.fn().mockReturnValue({ mock: 'redisStore' });
const mockIoredisClient = {
call: jest.fn(),
on: jest.fn(),
};
const mockKeyvRedisClient = {};
const mockViolationFile = {};
// Mock modules before requiring the main module
jest.mock('@keyv/redis', () => ({
default: jest.fn().mockImplementation(() => mockKeyvRedis),
}));
jest.mock('keyv', () => ({
Keyv: mockKeyv,
}));
jest.mock('./cacheConfig', () => ({
cacheConfig: {
USE_REDIS: false,
REDIS_KEY_PREFIX: 'test',
FORCED_IN_MEMORY_CACHE_NAMESPACES: [],
},
}));
jest.mock('./redisClients', () => ({
keyvRedisClient: mockKeyvRedisClient,
ioredisClient: mockIoredisClient,
GLOBAL_PREFIX_SEPARATOR: '::',
}));
jest.mock('./keyvFiles', () => ({
violationFile: mockViolationFile,
}));
jest.mock('connect-redis', () => ({ RedisStore: mockConnectRedis }));
jest.mock('memorystore', () => jest.fn(() => mockMemoryStore));
jest.mock('rate-limit-redis', () => ({
RedisStore: mockRedisStore,
}));
jest.mock('@librechat/data-schemas', () => ({
logger: {
error: jest.fn(),
warn: jest.fn(),
info: jest.fn(),
},
}));
// Import after mocking
const { standardCache, sessionCache, violationCache, limiterCache } = require('./cacheFactory');
const { cacheConfig } = require('./cacheConfig');
describe('cacheFactory', () => {
beforeEach(() => {
jest.clearAllMocks();
// Reset cache config mock
cacheConfig.USE_REDIS = false;
cacheConfig.REDIS_KEY_PREFIX = 'test';
cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES = [];
});
describe('redisCache', () => {
it('should create Redis cache when USE_REDIS is true', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'test-namespace';
const ttl = 3600;
standardCache(namespace, ttl);
expect(require('@keyv/redis').default).toHaveBeenCalledWith(mockKeyvRedisClient);
expect(mockKeyv).toHaveBeenCalledWith(mockKeyvRedis, { namespace, ttl });
expect(mockKeyvRedis.namespace).toBe(cacheConfig.REDIS_KEY_PREFIX);
expect(mockKeyvRedis.keyPrefixSeparator).toBe('::');
});
it('should create Redis cache with undefined ttl when not provided', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'test-namespace';
standardCache(namespace);
expect(mockKeyv).toHaveBeenCalledWith(mockKeyvRedis, { namespace, ttl: undefined });
});
it('should use fallback store when USE_REDIS is false and fallbackStore is provided', () => {
cacheConfig.USE_REDIS = false;
const namespace = 'test-namespace';
const ttl = 3600;
const fallbackStore = { some: 'store' };
standardCache(namespace, ttl, fallbackStore);
expect(mockKeyv).toHaveBeenCalledWith({ store: fallbackStore, namespace, ttl });
});
it('should create default Keyv instance when USE_REDIS is false and no fallbackStore', () => {
cacheConfig.USE_REDIS = false;
const namespace = 'test-namespace';
const ttl = 3600;
standardCache(namespace, ttl);
expect(mockKeyv).toHaveBeenCalledWith({ namespace, ttl });
});
it('should handle namespace and ttl as undefined', () => {
cacheConfig.USE_REDIS = false;
standardCache();
expect(mockKeyv).toHaveBeenCalledWith({ namespace: undefined, ttl: undefined });
});
it('should use fallback when namespace is in FORCED_IN_MEMORY_CACHE_NAMESPACES', () => {
cacheConfig.USE_REDIS = true;
cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES = ['forced-memory'];
const namespace = 'forced-memory';
const ttl = 3600;
standardCache(namespace, ttl);
expect(require('@keyv/redis').default).not.toHaveBeenCalled();
expect(mockKeyv).toHaveBeenCalledWith({ namespace, ttl });
});
it('should use Redis when namespace is not in FORCED_IN_MEMORY_CACHE_NAMESPACES', () => {
cacheConfig.USE_REDIS = true;
cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES = ['other-namespace'];
const namespace = 'test-namespace';
const ttl = 3600;
standardCache(namespace, ttl);
expect(require('@keyv/redis').default).toHaveBeenCalledWith(mockKeyvRedisClient);
expect(mockKeyv).toHaveBeenCalledWith(mockKeyvRedis, { namespace, ttl });
});
it('should throw error when Redis cache creation fails', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'test-namespace';
const ttl = 3600;
const testError = new Error('Redis connection failed');
const KeyvRedis = require('@keyv/redis').default;
KeyvRedis.mockImplementationOnce(() => {
throw testError;
});
expect(() => standardCache(namespace, ttl)).toThrow('Redis connection failed');
const { logger } = require('@librechat/data-schemas');
expect(logger.error).toHaveBeenCalledWith(
`Failed to create Redis cache for namespace ${namespace}:`,
testError,
);
expect(mockKeyv).not.toHaveBeenCalled();
});
});
describe('violationCache', () => {
it('should create violation cache with prefixed namespace', () => {
const namespace = 'test-violations';
const ttl = 7200;
// We can't easily mock the internal redisCache call since it's in the same module
// But we can test that the function executes without throwing
expect(() => violationCache(namespace, ttl)).not.toThrow();
});
it('should create violation cache with undefined ttl', () => {
const namespace = 'test-violations';
violationCache(namespace);
// The function should call redisCache with violations: prefixed namespace
// Since we can't easily mock the internal redisCache call, we test the behavior
expect(() => violationCache(namespace)).not.toThrow();
});
it('should handle undefined namespace', () => {
expect(() => violationCache(undefined)).not.toThrow();
});
});
describe('sessionCache', () => {
it('should return MemoryStore when USE_REDIS is false', () => {
cacheConfig.USE_REDIS = false;
const namespace = 'sessions';
const ttl = 86400;
const result = sessionCache(namespace, ttl);
expect(mockMemoryStore).toHaveBeenCalledWith({ ttl, checkPeriod: Time.ONE_DAY });
expect(result).toBe(mockMemoryStore());
});
it('should return ConnectRedis when USE_REDIS is true', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
const ttl = 86400;
const result = sessionCache(namespace, ttl);
expect(mockConnectRedis).toHaveBeenCalledWith({
client: mockIoredisClient,
ttl,
prefix: `${namespace}:`,
});
expect(result).toBe(mockConnectRedis());
});
it('should add colon to namespace if not present', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
sessionCache(namespace);
expect(mockConnectRedis).toHaveBeenCalledWith({
client: mockIoredisClient,
ttl: undefined,
prefix: 'sessions:',
});
});
it('should not add colon to namespace if already present', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions:';
sessionCache(namespace);
expect(mockConnectRedis).toHaveBeenCalledWith({
client: mockIoredisClient,
ttl: undefined,
prefix: 'sessions:',
});
});
it('should handle undefined ttl', () => {
cacheConfig.USE_REDIS = false;
const namespace = 'sessions';
sessionCache(namespace);
expect(mockMemoryStore).toHaveBeenCalledWith({
ttl: undefined,
checkPeriod: Time.ONE_DAY,
});
});
it('should throw error when ConnectRedis constructor fails', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
const ttl = 86400;
// Mock ConnectRedis to throw an error during construction
const redisError = new Error('Redis connection failed');
mockConnectRedis.mockImplementationOnce(() => {
throw redisError;
});
// The error should propagate up, not be caught
expect(() => sessionCache(namespace, ttl)).toThrow('Redis connection failed');
// Verify that MemoryStore was NOT used as fallback
expect(mockMemoryStore).not.toHaveBeenCalled();
});
it('should register error handler but let errors propagate to Express', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
// Create a mock session store with middleware methods
const mockSessionStore = {
get: jest.fn(),
set: jest.fn(),
destroy: jest.fn(),
};
mockConnectRedis.mockReturnValue(mockSessionStore);
const store = sessionCache(namespace);
// Verify error handler was registered
expect(mockIoredisClient.on).toHaveBeenCalledWith('error', expect.any(Function));
// Get the error handler
const errorHandler = mockIoredisClient.on.mock.calls.find((call) => call[0] === 'error')[1];
// Simulate an error from Redis during a session operation
const redisError = new Error('Socket closed unexpectedly');
// The error handler should log but not swallow the error
const { logger } = require('@librechat/data-schemas');
errorHandler(redisError);
expect(logger.error).toHaveBeenCalledWith(
`Session store Redis error for namespace ${namespace}::`,
redisError,
);
// Now simulate what happens when session middleware tries to use the store
const callback = jest.fn();
mockSessionStore.get.mockImplementation((sid, cb) => {
cb(new Error('Redis connection lost'));
});
// Call the store's get method (as Express session would)
store.get('test-session-id', callback);
// The error should be passed to the callback, not swallowed
expect(callback).toHaveBeenCalledWith(new Error('Redis connection lost'));
});
it('should handle null ioredisClient gracefully', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
// Temporarily set ioredisClient to null (simulating connection not established)
const originalClient = require('./redisClients').ioredisClient;
require('./redisClients').ioredisClient = null;
// ConnectRedis might accept null client but would fail on first use
// The important thing is it doesn't throw uncaught exceptions during construction
const store = sessionCache(namespace);
expect(store).toBeDefined();
// Restore original client
require('./redisClients').ioredisClient = originalClient;
});
});
describe('limiterCache', () => {
it('should return undefined when USE_REDIS is false', () => {
cacheConfig.USE_REDIS = false;
const result = limiterCache('prefix');
expect(result).toBeUndefined();
});
it('should return RedisStore when USE_REDIS is true', () => {
cacheConfig.USE_REDIS = true;
const result = limiterCache('rate-limit');
expect(mockRedisStore).toHaveBeenCalledWith({
sendCommand: expect.any(Function),
prefix: `rate-limit:`,
});
expect(result).toBe(mockRedisStore());
});
it('should add colon to prefix if not present', () => {
cacheConfig.USE_REDIS = true;
limiterCache('rate-limit');
expect(mockRedisStore).toHaveBeenCalledWith({
sendCommand: expect.any(Function),
prefix: 'rate-limit:',
});
});
it('should not add colon to prefix if already present', () => {
cacheConfig.USE_REDIS = true;
limiterCache('rate-limit:');
expect(mockRedisStore).toHaveBeenCalledWith({
sendCommand: expect.any(Function),
prefix: 'rate-limit:',
});
});
it('should pass sendCommand function that calls ioredisClient.call', async () => {
cacheConfig.USE_REDIS = true;
mockIoredisClient.call.mockResolvedValue('test-value');
limiterCache('rate-limit');
const sendCommandCall = mockRedisStore.mock.calls[0][0];
const sendCommand = sendCommandCall.sendCommand;
// Test that sendCommand properly delegates to ioredisClient.call
const args = ['GET', 'test-key'];
const result = await sendCommand(...args);
expect(mockIoredisClient.call).toHaveBeenCalledWith(...args);
expect(result).toBe('test-value');
});
it('should handle sendCommand errors properly', async () => {
cacheConfig.USE_REDIS = true;
// Mock the call method to reject with an error
const testError = new Error('Redis error');
mockIoredisClient.call.mockRejectedValue(testError);
limiterCache('rate-limit');
const sendCommandCall = mockRedisStore.mock.calls[0][0];
const sendCommand = sendCommandCall.sendCommand;
// Test that sendCommand properly handles errors
const args = ['GET', 'test-key'];
await expect(sendCommand(...args)).rejects.toThrow('Redis error');
expect(mockIoredisClient.call).toHaveBeenCalledWith(...args);
});
it('should handle undefined prefix', () => {
cacheConfig.USE_REDIS = true;
expect(() => limiterCache()).toThrow('prefix is required');
});
});
});
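Note: for orientation, a minimal, hypothetical wiring sketch for the two factories exercised by the tests above. The '@librechat/api' export surface, secrets, and limits are assumptions based on these tests, not the project's actual server setup.

// Hypothetical wiring; module paths, secret, and limits are placeholders.
const session = require('express-session');
const rateLimit = require('express-rate-limit');
const { sessionCache, limiterCache } = require('@librechat/api');

// Session store: ConnectRedis-backed when USE_REDIS is true, MemoryStore otherwise.
const sessionMiddleware = session({
  secret: process.env.SESSION_SECRET ?? 'change-me',
  resave: false,
  saveUninitialized: false,
  store: sessionCache('sessions', 86400),
});

// Rate-limit store: a RedisStore when USE_REDIS is true, otherwise undefined,
// which lets express-rate-limit fall back to its in-memory default.
const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 20,
  store: limiterCache('login_limit'),
});

module.exports = { sessionMiddleware, loginLimiter };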

View File

@@ -1,5 +1,5 @@
const { isEnabled } = require('@librechat/api');
const { Time, CacheKeys } = require('librechat-data-provider');
const { isEnabled } = require('~/server/utils');
const getLogStores = require('./getLogStores');
const { USE_REDIS, LIMIT_CONCURRENT_MESSAGES } = process.env ?? {};

View File

@@ -1,9 +1,13 @@
const { cacheConfig } = require('./cacheConfig');
const { Keyv } = require('keyv');
const { CacheKeys, ViolationTypes, Time } = require('librechat-data-provider');
const { logFile } = require('./keyvFiles');
const keyvMongo = require('./keyvMongo');
const { standardCache, sessionCache, violationCache } = require('./cacheFactory');
const { Time, CacheKeys, ViolationTypes } = require('librechat-data-provider');
const {
logFile,
keyvMongo,
cacheConfig,
sessionCache,
standardCache,
violationCache,
} = require('@librechat/api');
const namespaces = {
[ViolationTypes.GENERAL]: new Keyv({ store: logFile, namespace: 'violations' }),
@@ -31,9 +35,8 @@ const namespaces = {
[CacheKeys.SAML_SESSION]: sessionCache(CacheKeys.SAML_SESSION),
[CacheKeys.ROLES]: standardCache(CacheKeys.ROLES),
[CacheKeys.MCP_TOOLS]: standardCache(CacheKeys.MCP_TOOLS),
[CacheKeys.APP_CONFIG]: standardCache(CacheKeys.APP_CONFIG),
[CacheKeys.CONFIG_STORE]: standardCache(CacheKeys.CONFIG_STORE),
[CacheKeys.STATIC_CONFIG]: standardCache(CacheKeys.STATIC_CONFIG),
[CacheKeys.PENDING_REQ]: standardCache(CacheKeys.PENDING_REQ),
[CacheKeys.ENCODED_DOMAINS]: new Keyv({ store: keyvMongo, namespace: CacheKeys.ENCODED_DOMAINS }),
[CacheKeys.ABORT_KEYS]: standardCache(CacheKeys.ABORT_KEYS, Time.TEN_MINUTES),

api/cache/index.js
View File

@@ -1,5 +1,4 @@
const keyvFiles = require('./keyvFiles');
const getLogStores = require('./getLogStores');
const logViolation = require('./logViolation');
module.exports = { ...keyvFiles, getLogStores, logViolation };
module.exports = { getLogStores, logViolation };

View File

@@ -1,9 +0,0 @@
const { KeyvFile } = require('keyv-file');
const logFile = new KeyvFile({ filename: './data/logs.json' }).setMaxListeners(20);
const violationFile = new KeyvFile({ filename: './data/violations.json' }).setMaxListeners(20);
module.exports = {
logFile,
violationFile,
};

View File

@@ -1,4 +1,4 @@
const { isEnabled } = require('~/server/utils');
const { isEnabled } = require('@librechat/api');
const { ViolationTypes } = require('librechat-data-provider');
const getLogStores = require('./getLogStores');
const banViolation = require('./banViolation');

View File

@@ -1,27 +1,13 @@
const { EventSource } = require('eventsource');
const { Time } = require('librechat-data-provider');
const { MCPManager, FlowStateManager } = require('@librechat/api');
const { MCPManager, FlowStateManager, OAuthReconnectionManager } = require('@librechat/api');
const logger = require('./winston');
global.EventSource = EventSource;
/** @type {MCPManager} */
let mcpManager = null;
let flowManager = null;
/**
* @param {string} [userId] - Optional user ID, to avoid disconnecting the current user.
* @returns {MCPManager}
*/
function getMCPManager(userId) {
if (!mcpManager) {
mcpManager = MCPManager.getInstance();
} else {
mcpManager.checkIdleConnections(userId);
}
return mcpManager;
}
/**
* @param {Keyv} flowsCache
* @returns {FlowStateManager}
@@ -37,6 +23,9 @@ function getFlowStateManager(flowsCache) {
module.exports = {
logger,
getMCPManager,
createMCPManager: MCPManager.createInstance,
getMCPManager: MCPManager.getInstance,
getFlowStateManager,
createOAuthReconnectionManager: OAuthReconnectionManager.createInstance,
getOAuthReconnectionManager: OAuthReconnectionManager.getInstance,
};
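Note: a rough, hypothetical boot sequence illustrating the create/get split introduced above. The configuration argument and the async shape of createInstance are assumptions; only the exported names come from this diff.

// Hypothetical usage of the re-exported singleton helpers above.
const { createMCPManager, getMCPManager } = require('~/config');

async function bootMCP(mcpServersConfig) {
  // Create the singleton exactly once at startup (argument shape assumed).
  await createMCPManager(mcpServersConfig);
}

function useMCP() {
  // Later callers retrieve the same instance; there is no lazy construction,
  // unlike the removed getMCPManager wrapper.
  return getMCPManager();
}

module.exports = { bootMCP, useMCP };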

View File

@@ -1,10 +1,8 @@
const mongoose = require('mongoose');
const { MeiliSearch } = require('meilisearch');
const { logger } = require('@librechat/data-schemas');
const { FlowStateManager } = require('@librechat/api');
const { CacheKeys } = require('librechat-data-provider');
const { isEnabled } = require('~/server/utils');
const { isEnabled, FlowStateManager } = require('@librechat/api');
const { getLogStores } = require('~/cache');
const Conversation = mongoose.models.Conversation;
@@ -32,78 +30,264 @@ class MeiliSearchClient {
}
/**
* Performs the actual sync operations for messages and conversations
* Deletes documents from MeiliSearch index that are missing the user field
* @param {import('meilisearch').Index} index - MeiliSearch index instance
* @param {string} indexName - Name of the index for logging
* @returns {Promise<number>} - Number of documents deleted
*/
async function performSync() {
const client = MeiliSearchClient.getInstance();
async function deleteDocumentsWithoutUserField(index, indexName) {
let deletedCount = 0;
let offset = 0;
const batchSize = 1000;
const { status } = await client.health();
if (status !== 'available') {
throw new Error('Meilisearch not available');
}
try {
while (true) {
const searchResult = await index.search('', {
limit: batchSize,
offset: offset,
});
if (indexingDisabled === true) {
logger.info('[indexSync] Indexing is disabled, skipping...');
return { messagesSync: false, convosSync: false };
}
if (searchResult.hits.length === 0) {
break;
}
let messagesSync = false;
let convosSync = false;
const idsToDelete = searchResult.hits.filter((hit) => !hit.user).map((hit) => hit.id);
// Check if we need to sync messages
const messageProgress = await Message.getSyncProgress();
if (!messageProgress.isComplete) {
logger.info(
`[indexSync] Messages need syncing: ${messageProgress.totalProcessed}/${messageProgress.totalDocuments} indexed`,
);
if (idsToDelete.length > 0) {
logger.info(
`[indexSync] Deleting ${idsToDelete.length} documents without user field from ${indexName} index`,
);
await index.deleteDocuments(idsToDelete);
deletedCount += idsToDelete.length;
}
// Check if we should do a full sync or incremental
const messageCount = await Message.countDocuments();
const messagesIndexed = messageProgress.totalProcessed;
const syncThreshold = parseInt(process.env.MEILI_SYNC_THRESHOLD || '1000', 10);
if (searchResult.hits.length < batchSize) {
break;
}
if (messageCount - messagesIndexed > syncThreshold) {
logger.info('[indexSync] Starting full message sync due to large difference');
await Message.syncWithMeili();
messagesSync = true;
} else if (messageCount !== messagesIndexed) {
logger.warn('[indexSync] Messages out of sync, performing incremental sync');
await Message.syncWithMeili();
messagesSync = true;
offset += batchSize;
}
} else {
logger.info(
`[indexSync] Messages are fully synced: ${messageProgress.totalProcessed}/${messageProgress.totalDocuments}`,
);
}
// Check if we need to sync conversations
const convoProgress = await Conversation.getSyncProgress();
if (!convoProgress.isComplete) {
logger.info(
`[indexSync] Conversations need syncing: ${convoProgress.totalProcessed}/${convoProgress.totalDocuments} indexed`,
);
const convoCount = await Conversation.countDocuments();
const convosIndexed = convoProgress.totalProcessed;
const syncThreshold = parseInt(process.env.MEILI_SYNC_THRESHOLD || '1000', 10);
if (convoCount - convosIndexed > syncThreshold) {
logger.info('[indexSync] Starting full conversation sync due to large difference');
await Conversation.syncWithMeili();
convosSync = true;
} else if (convoCount !== convosIndexed) {
logger.warn('[indexSync] Convos out of sync, performing incremental sync');
await Conversation.syncWithMeili();
convosSync = true;
if (deletedCount > 0) {
logger.info(`[indexSync] Deleted ${deletedCount} orphaned documents from ${indexName} index`);
}
} else {
logger.info(
`[indexSync] Conversations are fully synced: ${convoProgress.totalProcessed}/${convoProgress.totalDocuments}`,
);
} catch (error) {
logger.error(`[indexSync] Error deleting documents from ${indexName}:`, error);
}
return { messagesSync, convosSync };
return deletedCount;
}
/**
* Ensures indexes have proper filterable attributes configured and checks if documents have user field
* @param {MeiliSearch} client - MeiliSearch client instance
* @returns {Promise<{settingsUpdated: boolean, orphanedDocsFound: boolean}>} - Status of what was done
*/
async function ensureFilterableAttributes(client) {
let settingsUpdated = false;
let hasOrphanedDocs = false;
try {
// Check and update messages index
try {
const messagesIndex = client.index('messages');
const settings = await messagesIndex.getSettings();
if (!settings.filterableAttributes || !settings.filterableAttributes.includes('user')) {
logger.info('[indexSync] Configuring messages index to filter by user...');
await messagesIndex.updateSettings({
filterableAttributes: ['user'],
});
logger.info('[indexSync] Messages index configured for user filtering');
settingsUpdated = true;
}
// Check if existing documents have user field indexed
try {
const searchResult = await messagesIndex.search('', { limit: 1 });
if (searchResult.hits.length > 0 && !searchResult.hits[0].user) {
logger.info(
'[indexSync] Existing messages missing user field, will clean up orphaned documents...',
);
hasOrphanedDocs = true;
}
} catch (searchError) {
logger.debug('[indexSync] Could not check message documents:', searchError.message);
}
} catch (error) {
if (error.code !== 'index_not_found') {
logger.warn('[indexSync] Could not check/update messages index settings:', error.message);
}
}
// Check and update conversations index
try {
const convosIndex = client.index('convos');
const settings = await convosIndex.getSettings();
if (!settings.filterableAttributes || !settings.filterableAttributes.includes('user')) {
logger.info('[indexSync] Configuring convos index to filter by user...');
await convosIndex.updateSettings({
filterableAttributes: ['user'],
});
logger.info('[indexSync] Convos index configured for user filtering');
settingsUpdated = true;
}
// Check if existing documents have user field indexed
try {
const searchResult = await convosIndex.search('', { limit: 1 });
if (searchResult.hits.length > 0 && !searchResult.hits[0].user) {
logger.info(
'[indexSync] Existing conversations missing user field, will clean up orphaned documents...',
);
hasOrphanedDocs = true;
}
} catch (searchError) {
logger.debug('[indexSync] Could not check conversation documents:', searchError.message);
}
} catch (error) {
if (error.code !== 'index_not_found') {
logger.warn('[indexSync] Could not check/update convos index settings:', error.message);
}
}
// If either index has orphaned documents, clean them up (but don't force resync)
if (hasOrphanedDocs) {
try {
const messagesIndex = client.index('messages');
await deleteDocumentsWithoutUserField(messagesIndex, 'messages');
} catch (error) {
logger.debug('[indexSync] Could not clean up messages:', error.message);
}
try {
const convosIndex = client.index('convos');
await deleteDocumentsWithoutUserField(convosIndex, 'convos');
} catch (error) {
logger.debug('[indexSync] Could not clean up convos:', error.message);
}
logger.info('[indexSync] Orphaned documents cleaned up without forcing resync.');
}
if (settingsUpdated) {
logger.info('[indexSync] Index settings updated. Full re-sync will be triggered.');
}
} catch (error) {
logger.error('[indexSync] Error ensuring filterable attributes:', error);
}
return { settingsUpdated, orphanedDocsFound: hasOrphanedDocs };
}
/**
* Performs the actual sync operations for messages and conversations
* @param {FlowStateManager} flowManager - Flow state manager instance
* @param {string} flowId - Flow identifier
* @param {string} flowType - Flow type
*/
async function performSync(flowManager, flowId, flowType) {
try {
const client = MeiliSearchClient.getInstance();
const { status } = await client.health();
if (status !== 'available') {
throw new Error('Meilisearch not available');
}
if (indexingDisabled === true) {
logger.info('[indexSync] Indexing is disabled, skipping...');
return { messagesSync: false, convosSync: false };
}
/** Ensures indexes have proper filterable attributes configured */
const { settingsUpdated, orphanedDocsFound: _orphanedDocsFound } =
await ensureFilterableAttributes(client);
let messagesSync = false;
let convosSync = false;
// Only reset flags if settings were actually updated (not just for orphaned doc cleanup)
if (settingsUpdated) {
logger.info(
'[indexSync] Settings updated. Forcing full re-sync to reindex with new configuration...',
);
// Reset sync flags to force full re-sync
await Message.collection.updateMany({ _meiliIndex: true }, { $set: { _meiliIndex: false } });
await Conversation.collection.updateMany(
{ _meiliIndex: true },
{ $set: { _meiliIndex: false } },
);
}
// Check if we need to sync messages
const messageProgress = await Message.getSyncProgress();
if (!messageProgress.isComplete || settingsUpdated) {
logger.info(
`[indexSync] Messages need syncing: ${messageProgress.totalProcessed}/${messageProgress.totalDocuments} indexed`,
);
// Check if we should do a full sync or incremental
const messageCount = await Message.countDocuments();
const messagesIndexed = messageProgress.totalProcessed;
const syncThreshold = parseInt(process.env.MEILI_SYNC_THRESHOLD || '1000', 10);
if (messageCount - messagesIndexed > syncThreshold) {
logger.info('[indexSync] Starting full message sync due to large difference');
await Message.syncWithMeili();
messagesSync = true;
} else if (messageCount !== messagesIndexed) {
logger.warn('[indexSync] Messages out of sync, performing incremental sync');
await Message.syncWithMeili();
messagesSync = true;
}
} else {
logger.info(
`[indexSync] Messages are fully synced: ${messageProgress.totalProcessed}/${messageProgress.totalDocuments}`,
);
}
// Check if we need to sync conversations
const convoProgress = await Conversation.getSyncProgress();
if (!convoProgress.isComplete || settingsUpdated) {
logger.info(
`[indexSync] Conversations need syncing: ${convoProgress.totalProcessed}/${convoProgress.totalDocuments} indexed`,
);
const convoCount = await Conversation.countDocuments();
const convosIndexed = convoProgress.totalProcessed;
const syncThreshold = parseInt(process.env.MEILI_SYNC_THRESHOLD || '1000', 10);
if (convoCount - convosIndexed > syncThreshold) {
logger.info('[indexSync] Starting full conversation sync due to large difference');
await Conversation.syncWithMeili();
convosSync = true;
} else if (convoCount !== convosIndexed) {
logger.warn('[indexSync] Convos out of sync, performing incremental sync');
await Conversation.syncWithMeili();
convosSync = true;
}
} else {
logger.info(
`[indexSync] Conversations are fully synced: ${convoProgress.totalProcessed}/${convoProgress.totalDocuments}`,
);
}
return { messagesSync, convosSync };
} finally {
if (indexingDisabled === true) {
logger.info('[indexSync] Indexing is disabled, skipping cleanup...');
} else if (flowManager && flowId && flowType) {
try {
await flowManager.deleteFlow(flowId, flowType);
logger.debug('[indexSync] Flow state cleaned up');
} catch (cleanupErr) {
logger.debug('[indexSync] Could not clean up flow state:', cleanupErr.message);
}
}
}
}
/**
@@ -116,24 +300,26 @@ async function indexSync() {
logger.info('[indexSync] Starting index synchronization check...');
// Get or create FlowStateManager instance
const flowsCache = getLogStores(CacheKeys.FLOWS);
if (!flowsCache) {
logger.warn('[indexSync] Flows cache not available, falling back to direct sync');
return await performSync(null, null, null);
}
const flowManager = new FlowStateManager(flowsCache, {
ttl: 60000 * 10, // 10 minutes TTL for sync operations
});
// Use a unique flow ID for the sync operation
const flowId = 'meili-index-sync';
const flowType = 'MEILI_SYNC';
try {
// Get or create FlowStateManager instance
const flowsCache = getLogStores(CacheKeys.FLOWS);
if (!flowsCache) {
logger.warn('[indexSync] Flows cache not available, falling back to direct sync');
return await performSync();
}
const flowManager = new FlowStateManager(flowsCache, {
ttl: 60000 * 10, // 10 minutes TTL for sync operations
});
// Use a unique flow ID for the sync operation
const flowId = 'meili-index-sync';
const flowType = 'MEILI_SYNC';
// This will only execute the handler if no other instance is running the sync
const result = await flowManager.createFlowWithHandler(flowId, flowType, performSync);
const result = await flowManager.createFlowWithHandler(flowId, flowType, () =>
performSync(flowManager, flowId, flowType),
);
if (result.messagesSync || result.convosSync) {
logger.info('[indexSync] Sync completed successfully');
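Note: a condensed sketch of the single-runner pattern used by indexSync above. The FlowStateManager method names are taken from this diff; the wrapper itself is illustrative only.

// Illustrative only: run `work` on at most one instance at a time, then clean
// up the flow state so the next run is not blocked until the TTL expires.
async function runExclusively(flowManager, flowId, flowType, work) {
  return flowManager.createFlowWithHandler(flowId, flowType, async () => {
    try {
      return await work();
    } finally {
      await flowManager.deleteFlow(flowId, flowType);
    }
  });
}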

View File

@@ -2,7 +2,7 @@ const mongoose = require('mongoose');
const crypto = require('node:crypto');
const { logger } = require('@librechat/data-schemas');
const { ResourceType, SystemRoles, Tools, actionDelimiter } = require('librechat-data-provider');
const { GLOBAL_PROJECT_NAME, EPHEMERAL_AGENT_ID, mcp_delimiter } =
const { GLOBAL_PROJECT_NAME, EPHEMERAL_AGENT_ID, mcp_all, mcp_delimiter } =
require('librechat-data-provider').Constants;
const {
removeAgentFromAllProjects,
@@ -11,7 +11,7 @@ const {
getProjectByName,
} = require('./Project');
const { removeAllPermissions } = require('~/server/services/PermissionService');
const { getCachedTools } = require('~/server/services/Config');
const { getMCPServerTools } = require('~/server/services/Config');
const { getActions } = require('./Action');
const { Agent } = require('~/db/models');
@@ -49,44 +49,68 @@ const createAgent = async (agentData) => {
*/
const getAgent = async (searchParameter) => await Agent.findOne(searchParameter).lean();
/**
* Get multiple agent documents based on the provided search parameters.
*
* @param {Object} searchParameter - The search parameters to find agents.
* @returns {Promise<Agent[]>} Array of agent documents as plain objects.
*/
const getAgents = async (searchParameter) => await Agent.find(searchParameter).lean();
/**
* Load an agent based on the provided ID
*
* @param {Object} params
* @param {ServerRequest} params.req
* @param {string} params.spec
* @param {string} params.agent_id
* @param {string} params.endpoint
* @param {import('@librechat/agents').ClientOptions} [params.model_parameters]
* @returns {Promise<Agent|null>} The agent document as a plain object, or null if not found.
*/
const loadEphemeralAgent = async ({ req, agent_id, endpoint, model_parameters: _m }) => {
const loadEphemeralAgent = async ({ req, spec, agent_id, endpoint, model_parameters: _m }) => {
const { model, ...model_parameters } = _m;
/** @type {Record<string, FunctionTool>} */
const availableTools = await getCachedTools({ userId: req.user.id, includeGlobal: true });
const modelSpecs = req.config?.modelSpecs?.list;
/** @type {TModelSpec | null} */
let modelSpec = null;
if (spec != null && spec !== '') {
modelSpec = modelSpecs?.find((s) => s.name === spec) || null;
}
/** @type {TEphemeralAgent | null} */
const ephemeralAgent = req.body.ephemeralAgent;
const mcpServers = new Set(ephemeralAgent?.mcp);
const userId = req.user?.id; // note: userId cannot be undefined at runtime
if (modelSpec?.mcpServers) {
for (const mcpServer of modelSpec.mcpServers) {
mcpServers.add(mcpServer);
}
}
/** @type {string[]} */
const tools = [];
if (ephemeralAgent?.execute_code === true) {
if (ephemeralAgent?.execute_code === true || modelSpec?.executeCode === true) {
tools.push(Tools.execute_code);
}
if (ephemeralAgent?.file_search === true) {
if (ephemeralAgent?.file_search === true || modelSpec?.fileSearch === true) {
tools.push(Tools.file_search);
}
if (ephemeralAgent?.web_search === true) {
if (ephemeralAgent?.web_search === true || modelSpec?.webSearch === true) {
tools.push(Tools.web_search);
}
const addedServers = new Set();
if (mcpServers.size > 0) {
for (const toolName of Object.keys(availableTools)) {
if (!toolName.includes(mcp_delimiter)) {
for (const mcpServer of mcpServers) {
if (addedServers.has(mcpServer)) {
continue;
}
const mcpServer = toolName.split(mcp_delimiter)?.[1];
if (mcpServer && mcpServers.has(mcpServer)) {
tools.push(toolName);
const serverTools = await getMCPServerTools(userId, mcpServer);
if (!serverTools) {
tools.push(`${mcp_all}${mcp_delimiter}${mcpServer}`);
addedServers.add(mcpServer);
continue;
}
tools.push(...Object.keys(serverTools));
addedServers.add(mcpServer);
}
}
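Note: a small standalone sketch of the tool-name fallback above. The concrete values of mcp_all and mcp_delimiter here are placeholders (the real constants live in librechat-data-provider); only the resolution pattern is taken from the diff.

// Placeholder constant values; the real ones come from librechat-data-provider.
const mcp_delimiter = '_mcp_';
const mcp_all = 'all';

function resolveServerTools(serverTools, mcpServer) {
  if (!serverTools) {
    // No cached per-server tool map: register a single "all tools" entry.
    return [`${mcp_all}${mcp_delimiter}${mcpServer}`];
  }
  // Otherwise register each cached tool key individually,
  // e.g. 'tool1_mcp_server1' for server 'server1'.
  return Object.keys(serverTools);
}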
@@ -111,17 +135,18 @@ const loadEphemeralAgent = async ({ req, agent_id, endpoint, model_parameters: _
*
* @param {Object} params
* @param {ServerRequest} params.req
* @param {string} params.spec
* @param {string} params.agent_id
* @param {string} params.endpoint
* @param {import('@librechat/agents').ClientOptions} [params.model_parameters]
* @returns {Promise<Agent|null>} The agent document as a plain object, or null if not found.
*/
const loadAgent = async ({ req, agent_id, endpoint, model_parameters }) => {
const loadAgent = async ({ req, spec, agent_id, endpoint, model_parameters }) => {
if (!agent_id) {
return null;
}
if (agent_id === EPHEMERAL_AGENT_ID) {
return await loadEphemeralAgent({ req, agent_id, endpoint, model_parameters });
return await loadEphemeralAgent({ req, spec, agent_id, endpoint, model_parameters });
}
const agent = await getAgent({
id: agent_id,
@@ -364,17 +389,10 @@ const updateAgent = async (searchParameter, updateData, options = {}) => {
if (shouldCreateVersion) {
const duplicateVersion = isDuplicateVersion(updateData, versionData, versions, actionsHash);
if (duplicateVersion && !forceVersion) {
const error = new Error(
'Duplicate version: This would create a version identical to an existing one',
);
error.statusCode = 409;
error.details = {
duplicateVersion,
versionIndex: versions.findIndex(
(v) => JSON.stringify(duplicateVersion) === JSON.stringify(v),
),
};
throw error;
// No changes detected, return the current agent without creating a new version
const agentObj = currentAgent.toObject();
agentObj.version = versions.length;
return agentObj;
}
}
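Note: with the change above, callers of updateAgent no longer need to treat a no-op update as an error. A hypothetical call site (the wrapper function is illustrative; updateAgent refers to the function in this file):

// Hypothetical caller: a duplicate update now resolves with the current agent
// (same versions array, no new entry) instead of throwing a 409 error.
async function applyAgentUpdate(agentId, updateData) {
  const agent = await updateAgent({ id: agentId }, updateData);
  return { id: agent.id, version: agent.version, versionCount: agent.versions.length };
}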
@@ -540,11 +558,7 @@ const getListAgentsByAccess = async ({
const normalizedLimit = isPaginated ? Math.min(Math.max(1, parseInt(limit) || 20), 100) : null;
// Build base query combining ACL accessible agents with other filters
const baseQuery = { ...otherParams };
if (accessibleIds.length > 0) {
baseQuery._id = { $in: accessibleIds };
}
const baseQuery = { ...otherParams, _id: { $in: accessibleIds } };
// Add cursor condition
if (after) {
@@ -683,7 +697,7 @@ const getListAgents = async (searchParameter) => {
* This function also updates the corresponding projects to include or exclude the agent ID.
*
* @param {Object} params - Parameters for updating the agent's projects.
* @param {MongoUser} params.user - Parameters for updating the agent's projects.
* @param {IUser} params.user - Parameters for updating the agent's projects.
* @param {string} params.agentId - The ID of the agent to update.
* @param {string[]} [params.projectIds] - Array of project IDs to add to the agent.
* @param {string[]} [params.removeProjectIds] - Array of project IDs to remove from the agent.
@@ -837,6 +851,7 @@ const countPromotedAgents = async () => {
module.exports = {
getAgent,
getAgents,
loadAgent,
createAgent,
updateAgent,

View File

@@ -8,6 +8,7 @@ process.env.CREDS_IV = '0123456789abcdef';
jest.mock('~/server/services/Config', () => ({
getCachedTools: jest.fn(),
getMCPServerTools: jest.fn(),
}));
const mongoose = require('mongoose');
@@ -22,6 +23,7 @@ const {
updateAgent,
deleteAgent,
getListAgents,
getListAgentsByAccess,
revertAgentVersion,
updateAgentProjects,
addAgentResourceFile,
@@ -29,7 +31,7 @@ const {
generateActionMetadataHash,
} = require('./Agent');
const permissionService = require('~/server/services/PermissionService');
const { getCachedTools } = require('~/server/services/Config');
const { getCachedTools, getMCPServerTools } = require('~/server/services/Config');
const { AclEntry } = require('~/db/models');
/**
@@ -941,45 +943,31 @@ describe('models/Agent', () => {
expect(emptyParamsAgent.model_parameters).toEqual({});
});
test('should detect duplicate versions and reject updates', async () => {
const originalConsoleError = console.error;
console.error = jest.fn();
test('should not create new version for duplicate updates', async () => {
const authorId = new mongoose.Types.ObjectId();
const testCases = generateVersionTestCases();
try {
const authorId = new mongoose.Types.ObjectId();
const testCases = generateVersionTestCases();
for (const testCase of testCases) {
const testAgentId = `agent_${uuidv4()}`;
for (const testCase of testCases) {
const testAgentId = `agent_${uuidv4()}`;
await createAgent({
id: testAgentId,
provider: 'test',
model: 'test-model',
author: authorId,
...testCase.initial,
});
await createAgent({
id: testAgentId,
provider: 'test',
model: 'test-model',
author: authorId,
...testCase.initial,
});
const updatedAgent = await updateAgent({ id: testAgentId }, testCase.update);
expect(updatedAgent.versions).toHaveLength(2); // No new version created
await updateAgent({ id: testAgentId }, testCase.update);
// Update with duplicate data should succeed but not create a new version
const duplicateUpdate = await updateAgent({ id: testAgentId }, testCase.duplicate);
let error;
try {
await updateAgent({ id: testAgentId }, testCase.duplicate);
} catch (e) {
error = e;
}
expect(duplicateUpdate.versions).toHaveLength(2); // No new version created
expect(error).toBeDefined();
expect(error.message).toContain('Duplicate version');
expect(error.statusCode).toBe(409);
expect(error.details).toBeDefined();
expect(error.details.duplicateVersion).toBeDefined();
const agent = await getAgent({ id: testAgentId });
expect(agent.versions).toHaveLength(2);
}
} finally {
console.error = originalConsoleError;
const agent = await getAgent({ id: testAgentId });
expect(agent.versions).toHaveLength(2);
}
});
@@ -1155,20 +1143,13 @@ describe('models/Agent', () => {
expect(secondUpdate.versions).toHaveLength(3);
// Update without forceVersion and no changes should not create a version
let error;
try {
await updateAgent(
{ id: agentId },
{ tools: ['listEvents_action_test.com', 'createEvent_action_test.com'] },
{ updatingUserId: authorId.toString(), forceVersion: false },
);
} catch (e) {
error = e;
}
const duplicateUpdate = await updateAgent(
{ id: agentId },
{ tools: ['listEvents_action_test.com', 'createEvent_action_test.com'] },
{ updatingUserId: authorId.toString(), forceVersion: false },
);
expect(error).toBeDefined();
expect(error.message).toContain('Duplicate version');
expect(error.statusCode).toBe(409);
expect(duplicateUpdate.versions).toHaveLength(3); // No new version created
});
test('should handle isDuplicateVersion with arrays containing null/undefined values', async () => {
@@ -1366,18 +1347,21 @@ describe('models/Agent', () => {
expect(secondUpdate.versions).toHaveLength(3);
expect(secondUpdate.support_contact.email).toBe('updated@support.com');
// Try to update with same support_contact - should be detected as duplicate
await expect(
updateAgent(
{ id: agentId },
{
support_contact: {
name: 'Updated Support',
email: 'updated@support.com',
},
// Try to update with same support_contact - should be detected as duplicate but return successfully
const duplicateUpdate = await updateAgent(
{ id: agentId },
{
support_contact: {
name: 'Updated Support',
email: 'updated@support.com',
},
),
).rejects.toThrow('Duplicate version');
},
);
// Should not create a new version
expect(duplicateUpdate.versions).toHaveLength(3);
expect(duplicateUpdate.version).toBe(3);
expect(duplicateUpdate.support_contact.email).toBe('updated@support.com');
});
test('should handle support_contact from empty to populated', async () => {
@@ -1561,18 +1545,22 @@ describe('models/Agent', () => {
expect(updated.support_contact.name).toBe('New Name');
expect(updated.support_contact.email).toBe('');
// Verify isDuplicateVersion works with partial changes
await expect(
updateAgent(
{ id: agentId },
{
support_contact: {
name: 'New Name',
email: '',
},
// Verify isDuplicateVersion works with partial changes - should return successfully without creating new version
const duplicateUpdate = await updateAgent(
{ id: agentId },
{
support_contact: {
name: 'New Name',
email: '',
},
),
).rejects.toThrow('Duplicate version');
},
);
// Should not create a new version since content is the same
expect(duplicateUpdate.versions).toHaveLength(2);
expect(duplicateUpdate.version).toBe(2);
expect(duplicateUpdate.support_contact.name).toBe('New Name');
expect(duplicateUpdate.support_contact.email).toBe('');
});
// Edge Cases
@@ -1942,6 +1930,16 @@ describe('models/Agent', () => {
another_tool: {},
});
// Mock getMCPServerTools to return tools for each server
getMCPServerTools.mockImplementation(async (_userId, server) => {
if (server === 'server1') {
return { tool1_mcp_server1: {} };
} else if (server === 'server2') {
return { tool2_mcp_server2: {} };
}
return null;
});
const mockReq = {
user: { id: 'user123' },
body: {
@@ -2126,6 +2124,14 @@ describe('models/Agent', () => {
getCachedTools.mockResolvedValue(availableTools);
// Mock getMCPServerTools to return all tools for server1
getMCPServerTools.mockImplementation(async (_userId, server) => {
if (server === 'server1') {
return availableTools; // All 100 tools belong to server1
}
return null;
});
const mockReq = {
user: { id: 'user123' },
body: {
@@ -2667,6 +2673,17 @@ describe('models/Agent', () => {
tool_mcp_server2: {}, // Different server
});
// Mock getMCPServerTools to return only tools matching the server
getMCPServerTools.mockImplementation(async (_userId, server) => {
if (server === 'server1') {
// Only return tool that correctly matches server1 format
return { tool_mcp_server1: {} };
} else if (server === 'server2') {
return { tool_mcp_server2: {} };
}
return null;
});
const mockReq = {
user: { id: 'user123' },
body: {
@@ -2792,11 +2809,18 @@ describe('models/Agent', () => {
agent_ids: ['agent1', 'agent2'],
});
await updateAgent({ id: agentId }, { agent_ids: ['agent1', 'agent2', 'agent3'] });
const updatedAgent = await updateAgent(
{ id: agentId },
{ agent_ids: ['agent1', 'agent2', 'agent3'] },
);
expect(updatedAgent.versions).toHaveLength(2);
await expect(
updateAgent({ id: agentId }, { agent_ids: ['agent1', 'agent2', 'agent3'] }),
).rejects.toThrow('Duplicate version');
// Update with same agent_ids should succeed but not create a new version
const duplicateUpdate = await updateAgent(
{ id: agentId },
{ agent_ids: ['agent1', 'agent2', 'agent3'] },
);
expect(duplicateUpdate.versions).toHaveLength(2); // No new version created
});
test('should handle agent_ids field alongside other fields', async () => {
@@ -2935,9 +2959,10 @@ describe('models/Agent', () => {
expect(updated.versions).toHaveLength(2);
expect(updated.agent_ids).toEqual([]);
await expect(updateAgent({ id: agentId }, { agent_ids: [] })).rejects.toThrow(
'Duplicate version',
);
// Update with same empty agent_ids should succeed but not create a new version
const duplicateUpdate = await updateAgent({ id: agentId }, { agent_ids: [] });
expect(duplicateUpdate.versions).toHaveLength(2); // No new version created
expect(duplicateUpdate.agent_ids).toEqual([]);
});
test('should handle agent without agent_ids field', async () => {
@@ -3047,6 +3072,212 @@ describe('Support Contact Field', () => {
// Verify support_contact is undefined when not provided
expect(agent.support_contact).toBeUndefined();
});
describe('getListAgentsByAccess - Security Tests', () => {
let userA, userB;
let agentA1, agentA2, agentA3;
beforeEach(async () => {
Agent = mongoose.models.Agent || mongoose.model('Agent', agentSchema);
await Agent.deleteMany({});
await AclEntry.deleteMany({});
// Create two users
userA = new mongoose.Types.ObjectId();
userB = new mongoose.Types.ObjectId();
// Create agents for user A
agentA1 = await createAgent({
id: `agent_${uuidv4().slice(0, 12)}`,
name: 'Agent A1',
description: 'User A agent 1',
provider: 'openai',
model: 'gpt-4',
author: userA,
});
agentA2 = await createAgent({
id: `agent_${uuidv4().slice(0, 12)}`,
name: 'Agent A2',
description: 'User A agent 2',
provider: 'openai',
model: 'gpt-4',
author: userA,
});
agentA3 = await createAgent({
id: `agent_${uuidv4().slice(0, 12)}`,
name: 'Agent A3',
description: 'User A agent 3',
provider: 'openai',
model: 'gpt-4',
author: userA,
});
});
test('should return empty list when user has no accessible agents (empty accessibleIds)', async () => {
// User B has no agents and no shared agents
const result = await getListAgentsByAccess({
accessibleIds: [],
otherParams: {},
});
expect(result.data).toHaveLength(0);
expect(result.has_more).toBe(false);
expect(result.first_id).toBeNull();
expect(result.last_id).toBeNull();
});
test('should not return other users agents when accessibleIds is empty', async () => {
// User B trying to list agents with empty accessibleIds should not see User A's agents
const result = await getListAgentsByAccess({
accessibleIds: [],
otherParams: { author: userB },
});
expect(result.data).toHaveLength(0);
expect(result.has_more).toBe(false);
});
test('should only return agents in accessibleIds list', async () => {
// Give User B access to only one of User A's agents
const accessibleIds = [agentA1._id];
const result = await getListAgentsByAccess({
accessibleIds,
otherParams: {},
});
expect(result.data).toHaveLength(1);
expect(result.data[0].id).toBe(agentA1.id);
expect(result.data[0].name).toBe('Agent A1');
});
test('should return multiple accessible agents when provided', async () => {
// Give User B access to two of User A's agents
const accessibleIds = [agentA1._id, agentA3._id];
const result = await getListAgentsByAccess({
accessibleIds,
otherParams: {},
});
expect(result.data).toHaveLength(2);
const returnedIds = result.data.map((agent) => agent.id);
expect(returnedIds).toContain(agentA1.id);
expect(returnedIds).toContain(agentA3.id);
expect(returnedIds).not.toContain(agentA2.id);
});
test('should respect other query parameters while enforcing accessibleIds', async () => {
// Give access to all agents but filter by name
const accessibleIds = [agentA1._id, agentA2._id, agentA3._id];
const result = await getListAgentsByAccess({
accessibleIds,
otherParams: { name: 'Agent A2' },
});
expect(result.data).toHaveLength(1);
expect(result.data[0].id).toBe(agentA2.id);
});
test('should handle pagination correctly with accessibleIds filter', async () => {
// Create more agents
const moreAgents = [];
for (let i = 4; i <= 10; i++) {
const agent = await createAgent({
id: `agent_${uuidv4().slice(0, 12)}`,
name: `Agent A${i}`,
description: `User A agent ${i}`,
provider: 'openai',
model: 'gpt-4',
author: userA,
});
moreAgents.push(agent);
}
// Give access to all agents
const allAgentIds = [agentA1, agentA2, agentA3, ...moreAgents].map((a) => a._id);
// First page
const page1 = await getListAgentsByAccess({
accessibleIds: allAgentIds,
otherParams: {},
limit: 5,
});
expect(page1.data).toHaveLength(5);
expect(page1.has_more).toBe(true);
expect(page1.after).toBeTruthy();
// Second page
const page2 = await getListAgentsByAccess({
accessibleIds: allAgentIds,
otherParams: {},
limit: 5,
after: page1.after,
});
expect(page2.data).toHaveLength(5);
expect(page2.has_more).toBe(false);
// Verify no overlap between pages
const page1Ids = page1.data.map((a) => a.id);
const page2Ids = page2.data.map((a) => a.id);
const intersection = page1Ids.filter((id) => page2Ids.includes(id));
expect(intersection).toHaveLength(0);
});
test('should return empty list when accessibleIds contains non-existent IDs', async () => {
// Try with non-existent agent IDs
const fakeIds = [new mongoose.Types.ObjectId(), new mongoose.Types.ObjectId()];
const result = await getListAgentsByAccess({
accessibleIds: fakeIds,
otherParams: {},
});
expect(result.data).toHaveLength(0);
expect(result.has_more).toBe(false);
});
test('should handle undefined accessibleIds as empty array', async () => {
// When accessibleIds is undefined, it should be treated as empty array
const result = await getListAgentsByAccess({
accessibleIds: undefined,
otherParams: {},
});
expect(result.data).toHaveLength(0);
expect(result.has_more).toBe(false);
});
test('should combine accessibleIds with author filter correctly', async () => {
// Create an agent for User B
const agentB1 = await createAgent({
id: `agent_${uuidv4().slice(0, 12)}`,
name: 'Agent B1',
description: 'User B agent 1',
provider: 'openai',
model: 'gpt-4',
author: userB,
});
// Give User B access to one of User A's agents
const accessibleIds = [agentA1._id, agentB1._id];
// Filter by author should further restrict the results
const result = await getListAgentsByAccess({
accessibleIds,
otherParams: { author: userB },
});
expect(result.data).toHaveLength(1);
expect(result.data[0].id).toBe(agentB1.id);
expect(result.data[0].author).toBe(userB.toString());
});
});
});
function createBasicAgent(overrides = {}) {

View File

@@ -1,4 +1,4 @@
const { logger } = require('~/config');
const { logger } = require('@librechat/data-schemas');
const options = [
{

View File

@@ -1,6 +1,5 @@
const { logger } = require('@librechat/data-schemas');
const { createTempChatExpirationDate } = require('@librechat/api');
const { getCustomConfig } = require('~/server/services/Config/getCustomConfig');
const { getMessages, deleteMessages } = require('./Message');
const { Conversation } = require('~/db/models');
@@ -102,8 +101,8 @@ module.exports = {
if (req?.body?.isTemporary) {
try {
const customConfig = await getCustomConfig();
update.expiredAt = createTempChatExpirationDate(customConfig);
const appConfig = req.config;
update.expiredAt = createTempChatExpirationDate(appConfig?.interfaceConfig);
} catch (err) {
logger.error('Error creating temporary chat expiration date:', err);
logger.info(`---\`saveConvo\` context: ${metadata?.context}`);
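Note: a minimal sketch of what createTempChatExpirationDate appears to compute, inferred from the temporary-chat tests later in this changeset; the 720-hour default and the 1–8760 hour clamp are assumptions based on those tests, not the actual @librechat/api implementation.

// Inferred behavior only, for reference while reading the call above.
function createTempChatExpirationDate(interfaceConfig) {
  const DEFAULT_HOURS = 720; // 30 days when no config is provided
  const hours = interfaceConfig?.temporaryChatRetention ?? DEFAULT_HOURS;
  const clamped = Math.min(Math.max(hours, 1), 8760); // clamp to 1 hour .. 1 year
  return new Date(Date.now() + clamped * 60 * 60 * 1000);
}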
@@ -175,7 +174,7 @@ module.exports = {
if (search) {
try {
const meiliResults = await Conversation.meiliSearch(search);
const meiliResults = await Conversation.meiliSearch(search, { filter: `user = "${user}"` });
const matchingIds = Array.isArray(meiliResults.hits)
? meiliResults.hits.map((result) => result.conversationId)
: [];

View File

@@ -13,9 +13,8 @@ const {
saveConvo,
getConvo,
} = require('./Conversation');
jest.mock('~/server/services/Config/getCustomConfig');
jest.mock('~/server/services/Config/app');
jest.mock('./Message');
const { getCustomConfig } = require('~/server/services/Config/getCustomConfig');
const { getMessages, deleteMessages } = require('./Message');
const { Conversation } = require('~/db/models');
@@ -50,6 +49,11 @@ describe('Conversation Operations', () => {
mockReq = {
user: { id: 'user123' },
body: {},
config: {
interfaceConfig: {
temporaryChatRetention: 24, // Default 24 hours
},
},
};
mockConversationData = {
@@ -118,12 +122,8 @@ describe('Conversation Operations', () => {
describe('isTemporary conversation handling', () => {
it('should save a conversation with expiredAt when isTemporary is true', async () => {
// Mock custom config with 24 hour retention
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 24,
},
});
// Mock app config with 24 hour retention
mockReq.config.interfaceConfig.temporaryChatRetention = 24;
mockReq.body = { isTemporary: true };
@@ -167,12 +167,8 @@ describe('Conversation Operations', () => {
});
it('should use custom retention period from config', async () => {
// Mock custom config with 48 hour retention
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 48,
},
});
// Mock app config with 48 hour retention
mockReq.config.interfaceConfig.temporaryChatRetention = 48;
mockReq.body = { isTemporary: true };
@@ -194,12 +190,8 @@ describe('Conversation Operations', () => {
});
it('should handle minimum retention period (1 hour)', async () => {
// Mock custom config with less than minimum retention
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 0.5, // Half hour - should be clamped to 1 hour
},
});
// Mock app config with less than minimum retention
mockReq.config.interfaceConfig.temporaryChatRetention = 0.5; // Half hour - should be clamped to 1 hour
mockReq.body = { isTemporary: true };
@@ -221,12 +213,8 @@ describe('Conversation Operations', () => {
});
it('should handle maximum retention period (8760 hours)', async () => {
// Mock custom config with more than maximum retention
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 10000, // Should be clamped to 8760 hours
},
});
// Mock app config with more than maximum retention
mockReq.config.interfaceConfig.temporaryChatRetention = 10000; // Should be clamped to 8760 hours
mockReq.body = { isTemporary: true };
@@ -247,22 +235,36 @@ describe('Conversation Operations', () => {
);
});
it('should handle getCustomConfig errors gracefully', async () => {
// Mock getCustomConfig to throw an error
getCustomConfig.mockRejectedValue(new Error('Config service unavailable'));
it('should handle missing config gracefully', async () => {
// Simulate missing config - should use default retention period
delete mockReq.config;
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveConvo(mockReq, mockConversationData);
const afterSave = new Date();
// Should still save the conversation but with expiredAt as null
// Should still save the conversation with default retention period (30 days)
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.expiredAt).toBeNull();
expect(result.expiredAt).toBeDefined();
expect(result.expiredAt).toBeInstanceOf(Date);
// Verify expiredAt is approximately 30 days in the future (720 hours)
const expectedExpirationTime = new Date(beforeSave.getTime() + 720 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
new Date(afterSave.getTime() + 720 * 60 * 60 * 1000 + 1000).getTime(),
);
});
it('should use default retention when config is not provided', async () => {
// Mock getCustomConfig to return empty config
getCustomConfig.mockResolvedValue({});
// Mock getAppConfig to return empty config
mockReq.config = {}; // Empty config
mockReq.body = { isTemporary: true };
@@ -285,11 +287,7 @@ describe('Conversation Operations', () => {
it('should update expiredAt when saving existing temporary conversation', async () => {
// First save a temporary conversation
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 24,
},
});
mockReq.config.interfaceConfig.temporaryChatRetention = 24;
mockReq.body = { isTemporary: true };
const firstSave = await saveConvo(mockReq, mockConversationData);

View File

@@ -239,10 +239,46 @@ const updateTagsForConversation = async (user, conversationId, tags) => {
}
};
/**
* Increments tag counts for existing tags only.
* @param {string} user - The user ID.
* @param {string[]} tags - Array of tag names to increment
* @returns {Promise<void>}
*/
const bulkIncrementTagCounts = async (user, tags) => {
if (!tags || tags.length === 0) {
return;
}
try {
const uniqueTags = [...new Set(tags.filter(Boolean))];
if (uniqueTags.length === 0) {
return;
}
const bulkOps = uniqueTags.map((tag) => ({
updateOne: {
filter: { user, tag },
update: { $inc: { count: 1 } },
},
}));
const result = await ConversationTag.bulkWrite(bulkOps);
if (result && result.modifiedCount > 0) {
logger.debug(
`user: ${user} | Incremented tag counts - modified ${result.modifiedCount} tags`,
);
}
} catch (error) {
logger.error('[bulkIncrementTagCounts] Error incrementing tag counts', error);
}
};
module.exports = {
getConversationTags,
createConversationTag,
updateConversationTag,
deleteConversationTag,
bulkIncrementTagCounts,
updateTagsForConversation,
};
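Note: a hypothetical call site for the new helper above. The module path and the duplication scenario are assumptions; the helper's contract (increment existing tag counts only) is from this diff.

// Hypothetical usage: when a tagged conversation is duplicated, each of its
// tags now applies to one more conversation, so bump the counts once each.
const { bulkIncrementTagCounts } = require('./ConversationTag');

async function onConversationDuplicated(userId, duplicatedConvo) {
  await bulkIncrementTagCounts(userId, duplicatedConvo.tags ?? []);
}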

View File

@@ -42,7 +42,7 @@ const getToolFilesByIds = async (fileIds, toolResourceSet) => {
$or: [],
};
if (toolResourceSet.has(EToolResources.ocr)) {
if (toolResourceSet.has(EToolResources.context)) {
filter.$or.push({ text: { $exists: true, $ne: null }, context: FileContext.agents });
}
if (toolResourceSet.has(EToolResources.file_search)) {

View File

@@ -211,7 +211,67 @@ describe('File Access Control', () => {
expect(accessMap.get(fileIds[1])).toBe(false);
});
it('should deny access when user only has VIEW permission', async () => {
it('should deny access when user only has VIEW permission and needs access for deletion', async () => {
const userId = new mongoose.Types.ObjectId();
const authorId = new mongoose.Types.ObjectId();
const agentId = uuidv4();
const fileIds = [uuidv4(), uuidv4()];
// Create users
await User.create({
_id: userId,
email: 'user@example.com',
emailVerified: true,
provider: 'local',
});
await User.create({
_id: authorId,
email: 'author@example.com',
emailVerified: true,
provider: 'local',
});
// Create agent with files
const agent = await createAgent({
id: agentId,
name: 'View-Only Agent',
author: authorId,
model: 'gpt-4',
provider: 'openai',
tool_resources: {
file_search: {
file_ids: fileIds,
},
},
});
// Grant only VIEW permission to user on the agent
await grantPermission({
principalType: PrincipalType.USER,
principalId: userId,
resourceType: ResourceType.AGENT,
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_VIEWER,
grantedBy: authorId,
});
// Check access for files
const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
const accessMap = await hasAccessToFilesViaAgent({
userId: userId,
role: SystemRoles.USER,
fileIds,
agentId,
isDelete: true,
});
// Should have no access to any files when only VIEW permission
expect(accessMap.get(fileIds[0])).toBe(false);
expect(accessMap.get(fileIds[1])).toBe(false);
});
it('should grant access when user has VIEW permission', async () => {
const userId = new mongoose.Types.ObjectId();
const authorId = new mongoose.Types.ObjectId();
const agentId = uuidv4();
@@ -265,9 +325,8 @@ describe('File Access Control', () => {
agentId,
});
// Should have no access to any files when only VIEW permission
expect(accessMap.get(fileIds[0])).toBe(false);
expect(accessMap.get(fileIds[1])).toBe(false);
expect(accessMap.get(fileIds[0])).toBe(true);
expect(accessMap.get(fileIds[1])).toBe(true);
});
});
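Note: a hypothetical wrapper summarizing the permission split these tests exercise. The parameter names are taken from the tests above; the surrounding function is illustrative only.

// Illustrative wrapper around the permission check used in the tests above.
const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');

async function checkAgentFileAccess({ userId, role, fileIds, agentId, forDeletion }) {
  // VIEW permission on the agent is enough for reads, but not for deletion,
  // which is signalled via the isDelete flag.
  return hasAccessToFilesViaAgent({
    userId,
    role,
    fileIds,
    agentId,
    isDelete: forDeletion === true,
  });
}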

View File

@@ -1,7 +1,6 @@
const { z } = require('zod');
const { logger } = require('@librechat/data-schemas');
const { createTempChatExpirationDate } = require('@librechat/api');
const { getCustomConfig } = require('~/server/services/Config/getCustomConfig');
const { Message } = require('~/db/models');
const idSchema = z.string().uuid();
@@ -11,7 +10,7 @@ const idSchema = z.string().uuid();
*
* @async
* @function saveMessage
* @param {Express.Request} req - The request object containing user information.
* @param {ServerRequest} req - The request object containing user information.
* @param {Object} params - The message data object.
* @param {string} params.endpoint - The endpoint where the message originated.
* @param {string} params.iconURL - The URL of the sender's icon.
@@ -57,8 +56,8 @@ async function saveMessage(req, params, metadata) {
if (req?.body?.isTemporary) {
try {
const customConfig = await getCustomConfig();
update.expiredAt = createTempChatExpirationDate(customConfig);
const appConfig = req.config;
update.expiredAt = createTempChatExpirationDate(appConfig?.interfaceConfig);
} catch (err) {
logger.error('Error creating temporary chat expiration date:', err);
logger.info(`---\`saveMessage\` context: ${metadata?.context}`);

View File

@@ -13,8 +13,7 @@ const {
deleteMessagesSince,
} = require('./Message');
jest.mock('~/server/services/Config/getCustomConfig');
const { getCustomConfig } = require('~/server/services/Config/getCustomConfig');
jest.mock('~/server/services/Config/app');
/**
* @type {import('mongoose').Model<import('@librechat/data-schemas').IMessage>}
@@ -44,6 +43,11 @@ describe('Message Operations', () => {
mockReq = {
user: { id: 'user123' },
config: {
interfaceConfig: {
temporaryChatRetention: 24, // Default 24 hours
},
},
};
mockMessageData = {
@@ -326,12 +330,8 @@ describe('Message Operations', () => {
});
it('should save a message with expiredAt when isTemporary is true', async () => {
// Mock custom config with 24 hour retention
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 24,
},
});
// Mock app config with 24 hour retention
mockReq.config.interfaceConfig.temporaryChatRetention = 24;
mockReq.body = { isTemporary: true };
@@ -375,12 +375,8 @@ describe('Message Operations', () => {
});
it('should use custom retention period from config', async () => {
// Mock custom config with 48 hour retention
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 48,
},
});
// Mock app config with 48 hour retention
mockReq.config.interfaceConfig.temporaryChatRetention = 48;
mockReq.body = { isTemporary: true };
@@ -402,12 +398,8 @@ describe('Message Operations', () => {
});
it('should handle minimum retention period (1 hour)', async () => {
// Mock custom config with less than minimum retention
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 0.5, // Half hour - should be clamped to 1 hour
},
});
// Mock app config with less than minimum retention
mockReq.config.interfaceConfig.temporaryChatRetention = 0.5; // Half hour - should be clamped to 1 hour
mockReq.body = { isTemporary: true };
@@ -429,12 +421,8 @@ describe('Message Operations', () => {
});
it('should handle maximum retention period (8760 hours)', async () => {
// Mock custom config with more than maximum retention
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 10000, // Should be clamped to 8760 hours
},
});
// Mock app config with more than maximum retention
mockReq.config.interfaceConfig.temporaryChatRetention = 10000; // Should be clamped to 8760 hours
mockReq.body = { isTemporary: true };
@@ -455,22 +443,36 @@ describe('Message Operations', () => {
);
});
it('should handle getCustomConfig errors gracefully', async () => {
// Mock getCustomConfig to throw an error
getCustomConfig.mockRejectedValue(new Error('Config service unavailable'));
it('should handle missing config gracefully', async () => {
// Simulate missing config - should use default retention period
delete mockReq.config;
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveMessage(mockReq, mockMessageData);
const afterSave = new Date();
// Should still save the message but with expiredAt as null
// Should still save the message with default retention period (30 days)
expect(result.messageId).toBe('msg123');
expect(result.expiredAt).toBeNull();
expect(result.expiredAt).toBeDefined();
expect(result.expiredAt).toBeInstanceOf(Date);
// Verify expiredAt is approximately 30 days in the future (720 hours)
const expectedExpirationTime = new Date(beforeSave.getTime() + 720 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
new Date(afterSave.getTime() + 720 * 60 * 60 * 1000 + 1000).getTime(),
);
});
it('should use default retention when config is not provided', async () => {
// Mock getCustomConfig to return empty config
getCustomConfig.mockResolvedValue({});
// Mock getAppConfig to return empty config
mockReq.config = {}; // Empty config
mockReq.body = { isTemporary: true };
@@ -493,11 +495,7 @@ describe('Message Operations', () => {
it('should not update expiredAt on message update', async () => {
// First save a temporary message
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 24,
},
});
mockReq.config.interfaceConfig.temporaryChatRetention = 24;
mockReq.body = { isTemporary: true };
const savedMessage = await saveMessage(mockReq, mockMessageData);
@@ -520,11 +518,7 @@ describe('Message Operations', () => {
it('should preserve expiredAt when saving existing temporary message', async () => {
// First save a temporary message
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 24,
},
});
mockReq.config.interfaceConfig.temporaryChatRetention = 24;
mockReq.body = { isTemporary: true };
const firstSave = await saveMessage(mockReq, mockMessageData);

View File

@@ -247,10 +247,130 @@ const deletePromptGroup = async ({ _id, author, role }) => {
return { message: 'Prompt group deleted successfully' };
};
/**
* Get prompt groups by accessible IDs with optional cursor-based pagination.
* @param {Object} params - The parameters for getting accessible prompt groups.
* @param {Array} [params.accessibleIds] - Array of prompt group ObjectIds the user has ACL access to.
* @param {Object} [params.otherParams] - Additional query parameters (including author filter).
* @param {number} [params.limit] - Number of prompt groups to return (max 100). If not provided, returns all prompt groups.
* @param {string} [params.after] - Cursor for pagination - get prompt groups after this cursor. // base64 encoded JSON string with updatedAt and _id.
* @returns {Promise<Object>} A promise that resolves to an object containing the prompt groups data and pagination info.
*/
async function getListPromptGroupsByAccess({
accessibleIds = [],
otherParams = {},
limit = null,
after = null,
}) {
const isPaginated = limit !== null && limit !== undefined;
const normalizedLimit = isPaginated ? Math.min(Math.max(1, parseInt(limit) || 20), 100) : null;
// Build base query combining ACL accessible prompt groups with other filters
const baseQuery = { ...otherParams, _id: { $in: accessibleIds } };
// Add cursor condition
if (after && typeof after === 'string' && after !== 'undefined' && after !== 'null') {
try {
const cursor = JSON.parse(Buffer.from(after, 'base64').toString('utf8'));
const { updatedAt, _id } = cursor;
const cursorCondition = {
$or: [
{ updatedAt: { $lt: new Date(updatedAt) } },
{ updatedAt: new Date(updatedAt), _id: { $gt: new ObjectId(_id) } },
],
};
// Merge cursor condition with base query
if (Object.keys(baseQuery).length > 0) {
baseQuery.$and = [{ ...baseQuery }, cursorCondition];
// Remove the original conditions from baseQuery to avoid duplication
Object.keys(baseQuery).forEach((key) => {
if (key !== '$and') delete baseQuery[key];
});
} else {
Object.assign(baseQuery, cursorCondition);
}
} catch (error) {
logger.warn('Invalid cursor:', error.message);
}
}
// Build aggregation pipeline
const pipeline = [{ $match: baseQuery }, { $sort: { updatedAt: -1, _id: 1 } }];
// Only apply limit if pagination is requested
if (isPaginated) {
pipeline.push({ $limit: normalizedLimit + 1 });
}
// Add lookup for production prompt
pipeline.push(
{
$lookup: {
from: 'prompts',
localField: 'productionId',
foreignField: '_id',
as: 'productionPrompt',
},
},
{ $unwind: { path: '$productionPrompt', preserveNullAndEmptyArrays: true } },
{
$project: {
name: 1,
numberOfGenerations: 1,
oneliner: 1,
category: 1,
projectIds: 1,
productionId: 1,
author: 1,
authorName: 1,
createdAt: 1,
updatedAt: 1,
'productionPrompt.prompt': 1,
},
},
);
const promptGroups = await PromptGroup.aggregate(pipeline).exec();
const hasMore = isPaginated ? promptGroups.length > normalizedLimit : false;
const data = (isPaginated ? promptGroups.slice(0, normalizedLimit) : promptGroups).map(
(group) => {
if (group.author) {
group.author = group.author.toString();
}
return group;
},
);
// Generate next cursor only if paginated
let nextCursor = null;
if (isPaginated && hasMore && data.length > 0) {
const lastGroup = promptGroups[normalizedLimit - 1];
nextCursor = Buffer.from(
JSON.stringify({
updatedAt: lastGroup.updatedAt.toISOString(),
_id: lastGroup._id.toString(),
}),
).toString('base64');
}
return {
object: 'list',
data,
first_id: data.length > 0 ? data[0]._id.toString() : null,
last_id: data.length > 0 ? data[data.length - 1]._id.toString() : null,
has_more: hasMore,
after: nextCursor,
};
}
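For reference, the 'after' cursor consumed above is simply a base64-encoded JSON object with updatedAt and _id, the same shape the function emits as nextCursor. A minimal caller-side sketch, illustrative only and not part of this diff:

// Sketch: build and consume the pagination cursor used by getListPromptGroupsByAccess.
// 'lastGroup' is assumed to be the final item of the previous page.
const encodeCursor = (lastGroup) =>
  Buffer.from(
    JSON.stringify({
      updatedAt: lastGroup.updatedAt.toISOString(),
      _id: lastGroup._id.toString(),
    }),
  ).toString('base64');

// On the next request, pass it back as 'after':
// await getListPromptGroupsByAccess({ accessibleIds, limit: 20, after: encodeCursor(lastGroup) });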
module.exports = {
getPromptGroups,
deletePromptGroup,
getAllPromptGroups,
getListPromptGroupsByAccess,
/**
* Create a prompt and its respective group
* @param {TCreatePromptRecord} saveData

View File

@@ -16,7 +16,7 @@ const { Role } = require('~/db/models');
*
* @param {string} roleName - The name of the role to find or create.
* @param {string|string[]} [fieldsToSelect] - The fields to include or exclude in the returned document.
* @returns {Promise<Object>} A plain object representing the role document.
* @returns {Promise<IRole>} Role document.
*/
const getRoleByName = async function (roleName, fieldsToSelect = null) {
const cache = getLogStores(CacheKeys.ROLES);
@@ -72,8 +72,9 @@ const updateRoleByName = async function (roleName, updates) {
* Updates access permissions for a specific role and multiple permission types.
* @param {string} roleName - The role to update.
* @param {Object.<PermissionTypes, Object.<Permissions, boolean>>} permissionsUpdate - Permissions to update and their values.
* @param {IRole} [roleData] - Optional role data to use instead of fetching from the database.
*/
async function updateAccessPermissions(roleName, permissionsUpdate) {
async function updateAccessPermissions(roleName, permissionsUpdate, roleData) {
// Filter and clean the permission updates based on our schema definition.
const updates = {};
for (const [permissionType, permissions] of Object.entries(permissionsUpdate)) {
@@ -86,7 +87,7 @@ async function updateAccessPermissions(roleName, permissionsUpdate) {
}
try {
const role = await getRoleByName(roleName);
const role = roleData ?? (await getRoleByName(roleName));
if (!role) {
return;
}
@@ -113,7 +114,6 @@ async function updateAccessPermissions(roleName, permissionsUpdate) {
}
}
// Process the current updates
for (const [permissionType, permissions] of Object.entries(updates)) {
const currentTypePermissions = currentPermissions[permissionType] || {};
updatedPermissions[permissionType] = { ...currentTypePermissions };
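With the new optional roleData parameter, callers that already hold the role document can skip the internal getRoleByName lookup. A hedged usage sketch; the role name and permission keys below are illustrative, following the JSDoc shape above:

// Sketch: reuse an already-fetched role instead of triggering a second lookup.
const role = await getRoleByName('USER');
await updateAccessPermissions(
  'USER',
  { PROMPTS: { USE: true } }, // shape per the JSDoc: { [PermissionTypes]: { [Permissions]: boolean } }
  role, // optional third argument; when omitted, the function fetches the role itself
);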

View File

@@ -1,5 +1,4 @@
const { logger } = require('@librechat/data-schemas');
const { getBalanceConfig } = require('~/server/services/Config');
const { getMultiplier, getCacheMultiplier } = require('./tx');
const { Transaction, Balance } = require('~/db/models');
@@ -187,20 +186,23 @@ async function createAutoRefillTransaction(txData) {
/**
* Static method to create a transaction and update the balance
* @param {txData} txData - Transaction data.
* @param {txData} _txData - Transaction data.
*/
async function createTransaction(txData) {
async function createTransaction(_txData) {
const { balance, transactions, ...txData } = _txData;
if (txData.rawAmount != null && isNaN(txData.rawAmount)) {
return;
}
if (transactions?.enabled === false) {
return;
}
const transaction = new Transaction(txData);
transaction.endpointTokenConfig = txData.endpointTokenConfig;
calculateTokenValue(transaction);
await transaction.save();
const balance = await getBalanceConfig();
if (!balance?.enabled) {
return;
}
@@ -221,9 +223,14 @@ async function createTransaction(txData) {
/**
* Static method to create a structured transaction and update the balance
* @param {txData} txData - Transaction data.
* @param {txData} _txData - Transaction data.
*/
async function createStructuredTransaction(txData) {
async function createStructuredTransaction(_txData) {
const { balance, transactions, ...txData } = _txData;
if (transactions?.enabled === false) {
return;
}
const transaction = new Transaction({
...txData,
endpointTokenConfig: txData.endpointTokenConfig,
@@ -233,7 +240,6 @@ async function createStructuredTransaction(txData) {
await transaction.save();
const balance = await getBalanceConfig();
if (!balance?.enabled) {
return;
}
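After this change the balance and transactions feature flags travel inside txData itself rather than being read through getBalanceConfig. A minimal caller sketch (field values illustrative):

// Sketch: feature flags are now carried in the transaction data.
await createTransaction({
  user: userId,
  conversationId: 'convo-id',
  model: 'gpt-4o',
  context: 'test',
  tokenType: 'prompt',
  rawAmount: -100,
  transactions: { enabled: true }, // false: skip saving the transaction entirely
  balance: { enabled: true },      // false: save the transaction but leave the balance untouched
});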

View File

@@ -1,13 +1,9 @@
const mongoose = require('mongoose');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { spendTokens, spendStructuredTokens } = require('./spendTokens');
const { getBalanceConfig } = require('~/server/services/Config');
const { getMultiplier, getCacheMultiplier } = require('./tx');
const { createTransaction } = require('./Transaction');
const { Balance } = require('~/db/models');
// Mock the custom config module so we can control the balance flag.
jest.mock('~/server/services/Config');
const { createTransaction, createStructuredTransaction } = require('./Transaction');
const { Balance, Transaction } = require('~/db/models');
let mongoServer;
beforeAll(async () => {
@@ -23,8 +19,6 @@ afterAll(async () => {
beforeEach(async () => {
await mongoose.connection.dropDatabase();
// Default: enable balance updates in tests.
getBalanceConfig.mockResolvedValue({ enabled: true });
});
describe('Regular Token Spending Tests', () => {
@@ -41,6 +35,7 @@ describe('Regular Token Spending Tests', () => {
model,
context: 'test',
endpointTokenConfig: null,
balance: { enabled: true },
};
const tokenUsage = {
@@ -74,6 +69,7 @@ describe('Regular Token Spending Tests', () => {
model,
context: 'test',
endpointTokenConfig: null,
balance: { enabled: true },
};
const tokenUsage = {
@@ -104,6 +100,7 @@ describe('Regular Token Spending Tests', () => {
model,
context: 'test',
endpointTokenConfig: null,
balance: { enabled: true },
};
const tokenUsage = {};
@@ -128,6 +125,7 @@ describe('Regular Token Spending Tests', () => {
model,
context: 'test',
endpointTokenConfig: null,
balance: { enabled: true },
};
const tokenUsage = { promptTokens: 100 };
@@ -143,8 +141,7 @@ describe('Regular Token Spending Tests', () => {
});
test('spendTokens should not update balance when balance feature is disabled', async () => {
// Arrange: Override the config to disable balance updates.
getBalanceConfig.mockResolvedValue({ balance: { enabled: false } });
// Arrange: Balance config is now passed directly in txData
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000;
await Balance.create({ user: userId, tokenCredits: initialBalance });
@@ -156,6 +153,7 @@ describe('Regular Token Spending Tests', () => {
model,
context: 'test',
endpointTokenConfig: null,
balance: { enabled: false },
};
const tokenUsage = {
@@ -186,6 +184,7 @@ describe('Structured Token Spending Tests', () => {
model,
context: 'message',
endpointTokenConfig: null,
balance: { enabled: true },
};
const tokenUsage = {
@@ -239,6 +238,7 @@ describe('Structured Token Spending Tests', () => {
conversationId: 'test-convo',
model,
context: 'message',
balance: { enabled: true },
};
const tokenUsage = {
@@ -271,6 +271,7 @@ describe('Structured Token Spending Tests', () => {
conversationId: 'test-convo',
model,
context: 'message',
balance: { enabled: true },
};
const tokenUsage = {
@@ -302,6 +303,7 @@ describe('Structured Token Spending Tests', () => {
conversationId: 'test-convo',
model,
context: 'message',
balance: { enabled: true },
};
const tokenUsage = {};
@@ -328,6 +330,7 @@ describe('Structured Token Spending Tests', () => {
conversationId: 'test-convo',
model,
context: 'incomplete',
balance: { enabled: true },
};
const tokenUsage = {
@@ -364,6 +367,7 @@ describe('NaN Handling Tests', () => {
endpointTokenConfig: null,
rawAmount: NaN,
tokenType: 'prompt',
balance: { enabled: true },
};
// Act
@@ -375,3 +379,188 @@ describe('NaN Handling Tests', () => {
expect(balance.tokenCredits).toBe(initialBalance);
});
});
describe('Transactions Config Tests', () => {
test('createTransaction should not save when transactions.enabled is false', async () => {
// Arrange
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000;
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'gpt-3.5-turbo';
const txData = {
user: userId,
conversationId: 'test-conversation-id',
model,
context: 'test',
endpointTokenConfig: null,
rawAmount: -100,
tokenType: 'prompt',
transactions: { enabled: false },
};
// Act
const result = await createTransaction(txData);
// Assert: No transaction should be created
expect(result).toBeUndefined();
const transactions = await Transaction.find({ user: userId });
expect(transactions).toHaveLength(0);
const balance = await Balance.findOne({ user: userId });
expect(balance.tokenCredits).toBe(initialBalance);
});
test('createTransaction should save when transactions.enabled is true', async () => {
// Arrange
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000;
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'gpt-3.5-turbo';
const txData = {
user: userId,
conversationId: 'test-conversation-id',
model,
context: 'test',
endpointTokenConfig: null,
rawAmount: -100,
tokenType: 'prompt',
transactions: { enabled: true },
balance: { enabled: true },
};
// Act
const result = await createTransaction(txData);
// Assert: Transaction should be created
expect(result).toBeDefined();
expect(result.balance).toBeLessThan(initialBalance);
const transactions = await Transaction.find({ user: userId });
expect(transactions).toHaveLength(1);
expect(transactions[0].rawAmount).toBe(-100);
});
test('createTransaction should save when balance.enabled is true even if transactions config is missing', async () => {
// Arrange
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000;
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'gpt-3.5-turbo';
const txData = {
user: userId,
conversationId: 'test-conversation-id',
model,
context: 'test',
endpointTokenConfig: null,
rawAmount: -100,
tokenType: 'prompt',
balance: { enabled: true },
// No transactions config provided
};
// Act
const result = await createTransaction(txData);
// Assert: Transaction should be created (backward compatibility)
expect(result).toBeDefined();
expect(result.balance).toBeLessThan(initialBalance);
const transactions = await Transaction.find({ user: userId });
expect(transactions).toHaveLength(1);
});
test('createTransaction should save transaction but not update balance when balance is disabled but transactions enabled', async () => {
// Arrange
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000;
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'gpt-3.5-turbo';
const txData = {
user: userId,
conversationId: 'test-conversation-id',
model,
context: 'test',
endpointTokenConfig: null,
rawAmount: -100,
tokenType: 'prompt',
transactions: { enabled: true },
balance: { enabled: false },
};
// Act
const result = await createTransaction(txData);
// Assert: Transaction should be created but balance unchanged
expect(result).toBeUndefined();
const transactions = await Transaction.find({ user: userId });
expect(transactions).toHaveLength(1);
expect(transactions[0].rawAmount).toBe(-100);
const balance = await Balance.findOne({ user: userId });
expect(balance.tokenCredits).toBe(initialBalance);
});
test('createStructuredTransaction should not save when transactions.enabled is false', async () => {
// Arrange
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000;
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'claude-3-5-sonnet';
const txData = {
user: userId,
conversationId: 'test-conversation-id',
model,
context: 'message',
tokenType: 'prompt',
inputTokens: -10,
writeTokens: -100,
readTokens: -5,
transactions: { enabled: false },
};
// Act
const result = await createStructuredTransaction(txData);
// Assert: No transaction should be created
expect(result).toBeUndefined();
const transactions = await Transaction.find({ user: userId });
expect(transactions).toHaveLength(0);
const balance = await Balance.findOne({ user: userId });
expect(balance.tokenCredits).toBe(initialBalance);
});
test('createStructuredTransaction should save transaction but not update balance when balance is disabled but transactions enabled', async () => {
// Arrange
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000;
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'claude-3-5-sonnet';
const txData = {
user: userId,
conversationId: 'test-conversation-id',
model,
context: 'message',
tokenType: 'prompt',
inputTokens: -10,
writeTokens: -100,
readTokens: -5,
transactions: { enabled: true },
balance: { enabled: false },
};
// Act
const result = await createStructuredTransaction(txData);
// Assert: Transaction should be created but balance unchanged
expect(result).toBeUndefined();
const transactions = await Transaction.find({ user: userId });
expect(transactions).toHaveLength(1);
expect(transactions[0].inputTokens).toBe(-10);
expect(transactions[0].writeTokens).toBe(-100);
expect(transactions[0].readTokens).toBe(-5);
const balance = await Balance.findOne({ user: userId });
expect(balance.tokenCredits).toBe(initialBalance);
});
});

View File

@@ -118,7 +118,7 @@ const addIntervalToDate = (date, value, unit) => {
* @async
* @function
* @param {Object} params - The function parameters.
* @param {Express.Request} params.req - The Express request object.
* @param {ServerRequest} params.req - The Express request object.
* @param {Express.Response} params.res - The Express response object.
* @param {Object} params.txData - The transaction data.
* @param {string} params.txData.user - The user ID or identifier.

View File

@@ -1,47 +1,9 @@
const mongoose = require('mongoose');
const { buildTree } = require('librechat-data-provider');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { getMessages, bulkSaveMessages } = require('./Message');
const { Message } = require('~/db/models');
// Original version of buildTree function
function buildTree({ messages, fileMap }) {
if (messages === null) {
return null;
}
const messageMap = {};
const rootMessages = [];
const childrenCount = {};
messages.forEach((message) => {
const parentId = message.parentMessageId ?? '';
childrenCount[parentId] = (childrenCount[parentId] || 0) + 1;
const extendedMessage = {
...message,
children: [],
depth: 0,
siblingIndex: childrenCount[parentId] - 1,
};
if (message.files && fileMap) {
extendedMessage.files = message.files.map((file) => fileMap[file.file_id ?? ''] ?? file);
}
messageMap[message.messageId] = extendedMessage;
const parentMessage = messageMap[parentId];
if (parentMessage) {
parentMessage.children.push(extendedMessage);
extendedMessage.depth = parentMessage.depth + 1;
} else {
rootMessages.push(extendedMessage);
}
});
return rootMessages;
}
let mongod;
beforeAll(async () => {
mongod = await MongoMemoryServer.create();

View File

@@ -24,8 +24,15 @@ const { getConvoTitle, getConvo, saveConvo, deleteConvos } = require('./Conversa
const { getPreset, getPresets, savePreset, deletePresets } = require('./Preset');
const { File } = require('~/db/models');
const seedDatabase = async () => {
await methods.initializeRoles();
await methods.seedDefaultRoles();
await methods.ensureDefaultCategories();
};
module.exports = {
...methods,
seedDatabase,
comparePassword,
findFileById,
createFile,

api/models/interface.js (new file, 24 lines)
View File

@@ -0,0 +1,24 @@
const { logger } = require('@librechat/data-schemas');
const { updateInterfacePermissions: updateInterfacePerms } = require('@librechat/api');
const { getRoleByName, updateAccessPermissions } = require('./Role');
/**
* Update interface permissions based on app configuration.
* Must be done independently of loading the app config.
* @param {AppConfig} appConfig
*/
async function updateInterfacePermissions(appConfig) {
try {
await updateInterfacePerms({
appConfig,
getRoleByName,
updateAccessPermissions,
});
} catch (error) {
logger.error('Error updating interface permissions:', error);
}
}
module.exports = {
updateInterfacePermissions,
};
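The new wrapper is intended to run once an AppConfig object is available, e.g. during server startup. A brief usage sketch; how the config is obtained here is assumed, not shown in this diff:

// Sketch: call after the app configuration has been loaded.
const { updateInterfacePermissions } = require('~/models/interface');

const appConfig = await loadAppConfig(); // assumed helper returning the AppConfig object
await updateInterfacePermissions(appConfig);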

View File

@@ -1,17 +1,11 @@
const { logger } = require('~/config');
const { logger } = require('@librechat/data-schemas');
const { createTransaction, createStructuredTransaction } = require('./Transaction');
/**
* Creates up to two transactions to record the spending of tokens.
*
* @function
* @async
* @param {Object} txData - Transaction data.
* @param {mongoose.Schema.Types.ObjectId} txData.user - The user ID.
* @param {String} txData.conversationId - The ID of the conversation.
* @param {String} txData.model - The model name.
* @param {String} txData.context - The context in which the transaction is made.
* @param {EndpointTokenConfig} [txData.endpointTokenConfig] - The current endpoint token config.
* @param {String} [txData.valueKey] - The value key (optional).
* @param {txData} txData - Transaction data.
* @param {Object} tokenUsage - The number of tokens used.
* @param {Number} tokenUsage.promptTokens - The number of prompt tokens used.
* @param {Number} tokenUsage.completionTokens - The number of completion tokens used.
@@ -69,13 +63,7 @@ const spendTokens = async (txData, tokenUsage) => {
*
* @function
* @async
* @param {Object} txData - Transaction data.
* @param {mongoose.Schema.Types.ObjectId} txData.user - The user ID.
* @param {String} txData.conversationId - The ID of the conversation.
* @param {String} txData.model - The model name.
* @param {String} txData.context - The context in which the transaction is made.
* @param {EndpointTokenConfig} [txData.endpointTokenConfig] - The current endpoint token config.
* @param {String} [txData.valueKey] - The value key (optional).
* @param {txData} txData - Transaction data.
* @param {Object} tokenUsage - The number of tokens used.
* @param {Object} tokenUsage.promptTokens - The number of prompt tokens used.
* @param {Number} tokenUsage.promptTokens.input - The number of input tokens.

View File

@@ -5,7 +5,6 @@ const { createTransaction, createAutoRefillTransaction } = require('./Transactio
require('~/db/models');
// Mock the logger to prevent console output during tests
jest.mock('~/config', () => ({
logger: {
debug: jest.fn(),
@@ -13,10 +12,6 @@ jest.mock('~/config', () => ({
},
}));
// Mock the Config service
const { getBalanceConfig } = require('~/server/services/Config');
jest.mock('~/server/services/Config');
describe('spendTokens', () => {
let mongoServer;
let userId;
@@ -44,8 +39,7 @@ describe('spendTokens', () => {
// Create a new user ID for each test
userId = new mongoose.Types.ObjectId();
// Mock the balance config to be enabled by default
getBalanceConfig.mockResolvedValue({ enabled: true });
// Balance config is now passed directly in txData
});
it('should create transactions for both prompt and completion tokens', async () => {
@@ -60,6 +54,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'gpt-3.5-turbo',
context: 'test',
balance: { enabled: true },
};
const tokenUsage = {
promptTokens: 100,
@@ -98,6 +93,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'gpt-3.5-turbo',
context: 'test',
balance: { enabled: true },
};
const tokenUsage = {
promptTokens: 100,
@@ -127,6 +123,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'gpt-3.5-turbo',
context: 'test',
balance: { enabled: true },
};
const tokenUsage = {};
@@ -138,8 +135,7 @@ describe('spendTokens', () => {
});
it('should not update balance when the balance feature is disabled', async () => {
// Override configuration: disable balance updates
getBalanceConfig.mockResolvedValue({ enabled: false });
// Balance is now passed directly in txData
// Create a balance for the user
await Balance.create({
user: userId,
@@ -151,6 +147,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'gpt-3.5-turbo',
context: 'test',
balance: { enabled: false },
};
const tokenUsage = {
promptTokens: 100,
@@ -180,6 +177,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'gpt-4', // Using a more expensive model
context: 'test',
balance: { enabled: true },
};
// Spending more tokens than the user has balance for
@@ -233,6 +231,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo-1',
model: 'gpt-4',
context: 'test',
balance: { enabled: true },
};
const tokenUsage1 = {
@@ -252,6 +251,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo-2',
model: 'gpt-4',
context: 'test',
balance: { enabled: true },
};
const tokenUsage2 = {
@@ -292,6 +292,7 @@ describe('spendTokens', () => {
tokenType: 'completion',
rawAmount: -100,
context: 'test',
balance: { enabled: true },
});
console.log('Direct Transaction.create result:', directResult);
@@ -316,6 +317,7 @@ describe('spendTokens', () => {
conversationId: `test-convo-${model}`,
model,
context: 'test',
balance: { enabled: true },
};
const tokenUsage = {
@@ -352,6 +354,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo-1',
model: 'claude-3-5-sonnet',
context: 'test',
balance: { enabled: true },
};
const tokenUsage1 = {
@@ -375,6 +378,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo-2',
model: 'claude-3-5-sonnet',
context: 'test',
balance: { enabled: true },
};
const tokenUsage2 = {
@@ -426,6 +430,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'claude-3-5-sonnet', // Using a model that supports structured tokens
context: 'test',
balance: { enabled: true },
};
// Spending more tokens than the user has balance for
@@ -505,6 +510,7 @@ describe('spendTokens', () => {
conversationId,
user: userId,
model: usage.model,
balance: { enabled: true },
};
// Calculate expected spend for this transaction
@@ -617,6 +623,7 @@ describe('spendTokens', () => {
tokenType: 'credits',
context: 'concurrent-refill-test',
rawAmount: refillAmount,
balance: { enabled: true },
}),
);
}
@@ -683,6 +690,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'claude-3-5-sonnet',
context: 'test',
balance: { enabled: true },
};
const tokenUsage = {
promptTokens: {

View File

@@ -1,4 +1,4 @@
const { matchModelName } = require('../utils');
const { matchModelName, findMatchingPattern } = require('@librechat/api');
const defaultRate = 6;
/**
@@ -6,44 +6,58 @@ const defaultRate = 6;
* source: https://aws.amazon.com/bedrock/pricing/
* */
const bedrockValues = {
// Basic llama2 patterns
// Basic llama2 patterns (base defaults to smallest variant)
llama2: { prompt: 0.75, completion: 1.0 },
'llama-2': { prompt: 0.75, completion: 1.0 },
'llama2-13b': { prompt: 0.75, completion: 1.0 },
'llama2:13b': { prompt: 0.75, completion: 1.0 },
'llama2:70b': { prompt: 1.95, completion: 2.56 },
'llama2-70b': { prompt: 1.95, completion: 2.56 },
// Basic llama3 patterns
// Basic llama3 patterns (base defaults to smallest variant)
llama3: { prompt: 0.3, completion: 0.6 },
'llama-3': { prompt: 0.3, completion: 0.6 },
'llama3-8b': { prompt: 0.3, completion: 0.6 },
'llama3:8b': { prompt: 0.3, completion: 0.6 },
'llama3-70b': { prompt: 2.65, completion: 3.5 },
'llama3:70b': { prompt: 2.65, completion: 3.5 },
// llama3-x-Nb pattern
// llama3-x-Nb pattern (base defaults to smallest variant)
'llama3-1': { prompt: 0.22, completion: 0.22 },
'llama3-1-8b': { prompt: 0.22, completion: 0.22 },
'llama3-1-70b': { prompt: 0.72, completion: 0.72 },
'llama3-1-405b': { prompt: 2.4, completion: 2.4 },
'llama3-2': { prompt: 0.1, completion: 0.1 },
'llama3-2-1b': { prompt: 0.1, completion: 0.1 },
'llama3-2-3b': { prompt: 0.15, completion: 0.15 },
'llama3-2-11b': { prompt: 0.16, completion: 0.16 },
'llama3-2-90b': { prompt: 0.72, completion: 0.72 },
'llama3-3': { prompt: 2.65, completion: 3.5 },
'llama3-3-70b': { prompt: 2.65, completion: 3.5 },
// llama3.x:Nb pattern
// llama3.x:Nb pattern (base defaults to smallest variant)
'llama3.1': { prompt: 0.22, completion: 0.22 },
'llama3.1:8b': { prompt: 0.22, completion: 0.22 },
'llama3.1:70b': { prompt: 0.72, completion: 0.72 },
'llama3.1:405b': { prompt: 2.4, completion: 2.4 },
'llama3.2': { prompt: 0.1, completion: 0.1 },
'llama3.2:1b': { prompt: 0.1, completion: 0.1 },
'llama3.2:3b': { prompt: 0.15, completion: 0.15 },
'llama3.2:11b': { prompt: 0.16, completion: 0.16 },
'llama3.2:90b': { prompt: 0.72, completion: 0.72 },
'llama3.3': { prompt: 2.65, completion: 3.5 },
'llama3.3:70b': { prompt: 2.65, completion: 3.5 },
// llama-3.x-Nb pattern
// llama-3.x-Nb pattern (base defaults to smallest variant)
'llama-3.1': { prompt: 0.22, completion: 0.22 },
'llama-3.1-8b': { prompt: 0.22, completion: 0.22 },
'llama-3.1-70b': { prompt: 0.72, completion: 0.72 },
'llama-3.1-405b': { prompt: 2.4, completion: 2.4 },
'llama-3.2': { prompt: 0.1, completion: 0.1 },
'llama-3.2-1b': { prompt: 0.1, completion: 0.1 },
'llama-3.2-3b': { prompt: 0.15, completion: 0.15 },
'llama-3.2-11b': { prompt: 0.16, completion: 0.16 },
'llama-3.2-90b': { prompt: 0.72, completion: 0.72 },
'llama-3.3': { prompt: 2.65, completion: 3.5 },
'llama-3.3-70b': { prompt: 2.65, completion: 3.5 },
'mistral-7b': { prompt: 0.15, completion: 0.2 },
'mistral-small': { prompt: 0.15, completion: 0.2 },
@@ -52,15 +66,19 @@ const bedrockValues = {
'mistral-large-2407': { prompt: 3.0, completion: 9.0 },
'command-text': { prompt: 1.5, completion: 2.0 },
'command-light': { prompt: 0.3, completion: 0.6 },
'ai21.j2-mid-v1': { prompt: 12.5, completion: 12.5 },
'ai21.j2-ultra-v1': { prompt: 18.8, completion: 18.8 },
'ai21.jamba-instruct-v1:0': { prompt: 0.5, completion: 0.7 },
'amazon.titan-text-lite-v1': { prompt: 0.15, completion: 0.2 },
'amazon.titan-text-express-v1': { prompt: 0.2, completion: 0.6 },
'amazon.titan-text-premier-v1:0': { prompt: 0.5, completion: 1.5 },
'amazon.nova-micro-v1:0': { prompt: 0.035, completion: 0.14 },
'amazon.nova-lite-v1:0': { prompt: 0.06, completion: 0.24 },
'amazon.nova-pro-v1:0': { prompt: 0.8, completion: 3.2 },
// AI21 models
'j2-mid': { prompt: 12.5, completion: 12.5 },
'j2-ultra': { prompt: 18.8, completion: 18.8 },
'jamba-instruct': { prompt: 0.5, completion: 0.7 },
// Amazon Titan models
'titan-text-lite': { prompt: 0.15, completion: 0.2 },
'titan-text-express': { prompt: 0.2, completion: 0.6 },
'titan-text-premier': { prompt: 0.5, completion: 1.5 },
// Amazon Nova models
'nova-micro': { prompt: 0.035, completion: 0.14 },
'nova-lite': { prompt: 0.06, completion: 0.24 },
'nova-pro': { prompt: 0.8, completion: 3.2 },
'nova-premier': { prompt: 2.5, completion: 12.5 },
'deepseek.r1': { prompt: 1.35, completion: 5.4 },
};
@@ -71,82 +89,136 @@ const bedrockValues = {
*/
const tokenValues = Object.assign(
{
// Legacy token size mappings (generic patterns - check LAST)
'8k': { prompt: 30, completion: 60 },
'32k': { prompt: 60, completion: 120 },
'4k': { prompt: 1.5, completion: 2 },
'16k': { prompt: 3, completion: 4 },
// Generic fallback patterns (check LAST)
'claude-': { prompt: 0.8, completion: 2.4 },
deepseek: { prompt: 0.28, completion: 0.42 },
command: { prompt: 0.38, completion: 0.38 },
gemma: { prompt: 0.02, completion: 0.04 }, // Base pattern (using gemma-3n-e4b pricing)
gemini: { prompt: 0.5, completion: 1.5 },
'gpt-oss': { prompt: 0.05, completion: 0.2 },
// Specific model variants (check FIRST - more specific patterns at end)
'gpt-3.5-turbo-1106': { prompt: 1, completion: 2 },
'o4-mini': { prompt: 1.1, completion: 4.4 },
'o3-mini': { prompt: 1.1, completion: 4.4 },
o3: { prompt: 2, completion: 8 },
'o1-mini': { prompt: 1.1, completion: 4.4 },
'o1-preview': { prompt: 15, completion: 60 },
o1: { prompt: 15, completion: 60 },
'gpt-3.5-turbo-0125': { prompt: 0.5, completion: 1.5 },
'gpt-4-1106': { prompt: 10, completion: 30 },
'gpt-4.1': { prompt: 2, completion: 8 },
'gpt-4.1-nano': { prompt: 0.1, completion: 0.4 },
'gpt-4.1-mini': { prompt: 0.4, completion: 1.6 },
'gpt-4.1': { prompt: 2, completion: 8 },
'gpt-4.5': { prompt: 75, completion: 150 },
'gpt-4o-mini': { prompt: 0.15, completion: 0.6 },
'gpt-4o': { prompt: 2.5, completion: 10 },
'gpt-4o-2024-05-13': { prompt: 5, completion: 15 },
'gpt-4-1106': { prompt: 10, completion: 30 },
'gpt-3.5-turbo-0125': { prompt: 0.5, completion: 1.5 },
'claude-3-opus': { prompt: 15, completion: 75 },
'gpt-4o-mini': { prompt: 0.15, completion: 0.6 },
'gpt-5': { prompt: 1.25, completion: 10 },
'gpt-5-nano': { prompt: 0.05, completion: 0.4 },
'gpt-5-mini': { prompt: 0.25, completion: 2 },
'gpt-5-pro': { prompt: 15, completion: 120 },
o1: { prompt: 15, completion: 60 },
'o1-mini': { prompt: 1.1, completion: 4.4 },
'o1-preview': { prompt: 15, completion: 60 },
o3: { prompt: 2, completion: 8 },
'o3-mini': { prompt: 1.1, completion: 4.4 },
'o4-mini': { prompt: 1.1, completion: 4.4 },
'claude-instant': { prompt: 0.8, completion: 2.4 },
'claude-2': { prompt: 8, completion: 24 },
'claude-2.1': { prompt: 8, completion: 24 },
'claude-3-haiku': { prompt: 0.25, completion: 1.25 },
'claude-3-sonnet': { prompt: 3, completion: 15 },
'claude-3-opus': { prompt: 15, completion: 75 },
'claude-3-5-haiku': { prompt: 0.8, completion: 4 },
'claude-3.5-haiku': { prompt: 0.8, completion: 4 },
'claude-3-5-sonnet': { prompt: 3, completion: 15 },
'claude-3.5-sonnet': { prompt: 3, completion: 15 },
'claude-3-7-sonnet': { prompt: 3, completion: 15 },
'claude-3.7-sonnet': { prompt: 3, completion: 15 },
'claude-3-5-haiku': { prompt: 0.8, completion: 4 },
'claude-3.5-haiku': { prompt: 0.8, completion: 4 },
'claude-3-haiku': { prompt: 0.25, completion: 1.25 },
'claude-sonnet-4': { prompt: 3, completion: 15 },
'claude-haiku-4-5': { prompt: 1, completion: 5 },
'claude-opus-4': { prompt: 15, completion: 75 },
'claude-2.1': { prompt: 8, completion: 24 },
'claude-2': { prompt: 8, completion: 24 },
'claude-instant': { prompt: 0.8, completion: 2.4 },
'claude-': { prompt: 0.8, completion: 2.4 },
'command-r-plus': { prompt: 3, completion: 15 },
'claude-sonnet-4': { prompt: 3, completion: 15 },
'command-r': { prompt: 0.5, completion: 1.5 },
'deepseek-reasoner': { prompt: 0.55, completion: 2.19 },
deepseek: { prompt: 0.14, completion: 0.28 },
/* cohere doesn't have rates for the older command models,
so this was from https://artificialanalysis.ai/models/command-light/providers */
command: { prompt: 0.38, completion: 0.38 },
gemma: { prompt: 0, completion: 0 }, // https://ai.google.dev/pricing
'gemma-2': { prompt: 0, completion: 0 }, // https://ai.google.dev/pricing
'gemma-3': { prompt: 0, completion: 0 }, // https://ai.google.dev/pricing
'gemma-3-27b': { prompt: 0, completion: 0 }, // https://ai.google.dev/pricing
'gemini-2.0-flash-lite': { prompt: 0.075, completion: 0.3 },
'gemini-2.0-flash': { prompt: 0.1, completion: 0.4 },
'gemini-2.0': { prompt: 0, completion: 0 }, // https://ai.google.dev/pricing
'gemini-2.5-pro': { prompt: 1.25, completion: 10 },
'gemini-2.5-flash': { prompt: 0.15, completion: 3.5 },
'gemini-2.5': { prompt: 0, completion: 0 }, // Free for a period of time
'gemini-1.5-flash-8b': { prompt: 0.075, completion: 0.3 },
'gemini-1.5-flash': { prompt: 0.15, completion: 0.6 },
'command-r-plus': { prompt: 3, completion: 15 },
'command-text': { prompt: 1.5, completion: 2.0 },
'deepseek-reasoner': { prompt: 0.28, completion: 0.42 },
'deepseek-r1': { prompt: 0.4, completion: 2.0 },
'deepseek-v3': { prompt: 0.2, completion: 0.8 },
'gemma-2': { prompt: 0.01, completion: 0.03 }, // Base pattern (using gemma-2-9b pricing)
'gemma-3': { prompt: 0.02, completion: 0.04 }, // Base pattern (using gemma-3n-e4b pricing)
'gemma-3-27b': { prompt: 0.09, completion: 0.16 },
'gemini-1.5': { prompt: 2.5, completion: 10 },
'gemini-1.5-flash': { prompt: 0.15, completion: 0.6 },
'gemini-1.5-flash-8b': { prompt: 0.075, completion: 0.3 },
'gemini-2.0': { prompt: 0.1, completion: 0.4 }, // Base pattern (using 2.0-flash pricing)
'gemini-2.0-flash': { prompt: 0.1, completion: 0.4 },
'gemini-2.0-flash-lite': { prompt: 0.075, completion: 0.3 },
'gemini-2.5': { prompt: 0.3, completion: 2.5 }, // Base pattern (using 2.5-flash pricing)
'gemini-2.5-flash': { prompt: 0.3, completion: 2.5 },
'gemini-2.5-flash-lite': { prompt: 0.1, completion: 0.4 },
'gemini-2.5-pro': { prompt: 1.25, completion: 10 },
'gemini-pro-vision': { prompt: 0.5, completion: 1.5 },
gemini: { prompt: 0.5, completion: 1.5 },
'grok-2-vision-1212': { prompt: 2.0, completion: 10.0 },
'grok-2-vision-latest': { prompt: 2.0, completion: 10.0 },
'grok-2-vision': { prompt: 2.0, completion: 10.0 },
grok: { prompt: 2.0, completion: 10.0 }, // Base pattern defaults to grok-2
'grok-beta': { prompt: 5.0, completion: 15.0 },
'grok-vision-beta': { prompt: 5.0, completion: 15.0 },
'grok-2': { prompt: 2.0, completion: 10.0 },
'grok-2-1212': { prompt: 2.0, completion: 10.0 },
'grok-2-latest': { prompt: 2.0, completion: 10.0 },
'grok-2': { prompt: 2.0, completion: 10.0 },
'grok-3-mini-fast': { prompt: 0.6, completion: 4 },
'grok-3-mini': { prompt: 0.3, completion: 0.5 },
'grok-3-fast': { prompt: 5.0, completion: 25.0 },
'grok-2-vision': { prompt: 2.0, completion: 10.0 },
'grok-2-vision-1212': { prompt: 2.0, completion: 10.0 },
'grok-2-vision-latest': { prompt: 2.0, completion: 10.0 },
'grok-3': { prompt: 3.0, completion: 15.0 },
'grok-3-fast': { prompt: 5.0, completion: 25.0 },
'grok-3-mini': { prompt: 0.3, completion: 0.5 },
'grok-3-mini-fast': { prompt: 0.6, completion: 4 },
'grok-4': { prompt: 3.0, completion: 15.0 },
'grok-beta': { prompt: 5.0, completion: 15.0 },
'mistral-large': { prompt: 2.0, completion: 6.0 },
'pixtral-large': { prompt: 2.0, completion: 6.0 },
'mistral-saba': { prompt: 0.2, completion: 0.6 },
codestral: { prompt: 0.3, completion: 0.9 },
'ministral-8b': { prompt: 0.1, completion: 0.1 },
'ministral-3b': { prompt: 0.04, completion: 0.04 },
'ministral-8b': { prompt: 0.1, completion: 0.1 },
'mistral-nemo': { prompt: 0.15, completion: 0.15 },
'mistral-saba': { prompt: 0.2, completion: 0.6 },
'pixtral-large': { prompt: 2.0, completion: 6.0 },
'mistral-large': { prompt: 2.0, completion: 6.0 },
'mixtral-8x22b': { prompt: 0.65, completion: 0.65 },
kimi: { prompt: 0.14, completion: 2.49 }, // Base pattern (using kimi-k2 pricing)
// GPT-OSS models (specific sizes)
'gpt-oss:20b': { prompt: 0.05, completion: 0.2 },
'gpt-oss-20b': { prompt: 0.05, completion: 0.2 },
'gpt-oss:120b': { prompt: 0.15, completion: 0.6 },
'gpt-oss-120b': { prompt: 0.15, completion: 0.6 },
// GLM models (Zhipu AI) - general to specific
glm4: { prompt: 0.1, completion: 0.1 },
'glm-4': { prompt: 0.1, completion: 0.1 },
'glm-4-32b': { prompt: 0.1, completion: 0.1 },
'glm-4.5': { prompt: 0.35, completion: 1.55 },
'glm-4.5-air': { prompt: 0.14, completion: 0.86 },
'glm-4.5v': { prompt: 0.6, completion: 1.8 },
'glm-4.6': { prompt: 0.5, completion: 1.75 },
// Qwen models
qwen: { prompt: 0.08, completion: 0.33 }, // Qwen base pattern (using qwen2.5-72b pricing)
'qwen2.5': { prompt: 0.08, completion: 0.33 }, // Qwen 2.5 base pattern
'qwen-turbo': { prompt: 0.05, completion: 0.2 },
'qwen-plus': { prompt: 0.4, completion: 1.2 },
'qwen-max': { prompt: 1.6, completion: 6.4 },
'qwq-32b': { prompt: 0.15, completion: 0.4 },
// Qwen3 models
qwen3: { prompt: 0.035, completion: 0.138 }, // Qwen3 base pattern (using qwen3-4b pricing)
'qwen3-8b': { prompt: 0.035, completion: 0.138 },
'qwen3-14b': { prompt: 0.05, completion: 0.22 },
'qwen3-30b-a3b': { prompt: 0.06, completion: 0.22 },
'qwen3-32b': { prompt: 0.05, completion: 0.2 },
'qwen3-235b-a22b': { prompt: 0.08, completion: 0.55 },
// Qwen3 VL (Vision-Language) models
'qwen3-vl-8b-thinking': { prompt: 0.18, completion: 2.1 },
'qwen3-vl-8b-instruct': { prompt: 0.18, completion: 0.69 },
'qwen3-vl-30b-a3b': { prompt: 0.29, completion: 1.0 },
'qwen3-vl-235b-a22b': { prompt: 0.3, completion: 1.2 },
// Qwen3 specialized models
'qwen3-max': { prompt: 1.2, completion: 6 },
'qwen3-coder': { prompt: 0.22, completion: 0.95 },
'qwen3-coder-30b-a3b': { prompt: 0.06, completion: 0.25 },
'qwen3-coder-plus': { prompt: 1, completion: 5 },
'qwen3-coder-flash': { prompt: 0.3, completion: 1.5 },
'qwen3-next-80b-a3b': { prompt: 0.1, completion: 0.8 },
},
bedrockValues,
);
@@ -177,61 +249,39 @@ const cacheTokenValues = {
* @returns {string|undefined} The key corresponding to the model name, or undefined if no match is found.
*/
const getValueKey = (model, endpoint) => {
if (!model || typeof model !== 'string') {
return undefined;
}
// Use findMatchingPattern directly against tokenValues for efficient lookup
if (!endpoint || (typeof endpoint === 'string' && !tokenValues[endpoint])) {
const matchedKey = findMatchingPattern(model, tokenValues);
if (matchedKey) {
return matchedKey;
}
}
// Fallback: use matchModelName for edge cases and legacy handling
const modelName = matchModelName(model, endpoint);
if (!modelName) {
return undefined;
}
// Legacy token size mappings and aliases for older models
if (modelName.includes('gpt-3.5-turbo-16k')) {
return '16k';
} else if (modelName.includes('gpt-3.5-turbo-0125')) {
return 'gpt-3.5-turbo-0125';
} else if (modelName.includes('gpt-3.5-turbo-1106')) {
return 'gpt-3.5-turbo-1106';
} else if (modelName.includes('gpt-3.5')) {
return '4k';
} else if (modelName.includes('o4-mini')) {
return 'o4-mini';
} else if (modelName.includes('o4')) {
return 'o4';
} else if (modelName.includes('o3-mini')) {
return 'o3-mini';
} else if (modelName.includes('o3')) {
return 'o3';
} else if (modelName.includes('o1-preview')) {
return 'o1-preview';
} else if (modelName.includes('o1-mini')) {
return 'o1-mini';
} else if (modelName.includes('o1')) {
return 'o1';
} else if (modelName.includes('gpt-4.5')) {
return 'gpt-4.5';
} else if (modelName.includes('gpt-4.1-nano')) {
return 'gpt-4.1-nano';
} else if (modelName.includes('gpt-4.1-mini')) {
return 'gpt-4.1-mini';
} else if (modelName.includes('gpt-4.1')) {
return 'gpt-4.1';
} else if (modelName.includes('gpt-4o-2024-05-13')) {
return 'gpt-4o-2024-05-13';
} else if (modelName.includes('gpt-4o-mini')) {
return 'gpt-4o-mini';
} else if (modelName.includes('gpt-4o')) {
return 'gpt-4o';
} else if (modelName.includes('gpt-4-vision')) {
return 'gpt-4-1106';
} else if (modelName.includes('gpt-4-1106')) {
return 'gpt-4-1106';
return 'gpt-4-1106'; // Alias for gpt-4-vision
} else if (modelName.includes('gpt-4-0125')) {
return 'gpt-4-1106';
return 'gpt-4-1106'; // Alias for gpt-4-0125
} else if (modelName.includes('gpt-4-turbo')) {
return 'gpt-4-1106';
return 'gpt-4-1106'; // Alias for gpt-4-turbo
} else if (modelName.includes('gpt-4-32k')) {
return '32k';
} else if (modelName.includes('gpt-4')) {
return '8k';
} else if (tokenValues[modelName]) {
return modelName;
}
return undefined;
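To make the lookup order concrete: findMatchingPattern is tried against the tokenValues keys first, and matchModelName only serves as a legacy fallback. A few illustrative lookups, with expected keys taken from the tests below:

// Illustrative lookups against the tokenValues map above:
getValueKey('gpt-4o-2024-08-06');   // -> 'gpt-4o'
getValueKey('openai/gpt-5-mini');   // -> 'gpt-5-mini'
getValueKey('openai/gpt-oss-570b'); // -> 'gpt-oss' (generic fallback pattern)
getMultiplier({ model: 'amazon.nova-pro-v1:0', tokenType: 'prompt' }); // -> tokenValues['nova-pro'].prompt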

View File

@@ -1,3 +1,4 @@
const { maxTokensMap } = require('@librechat/api');
const { EModelEndpoint } = require('librechat-data-provider');
const {
defaultRate,
@@ -25,8 +26,14 @@ describe('getValueKey', () => {
expect(getValueKey('gpt-4-some-other-info')).toBe('8k');
});
it('should return undefined for model names that do not match any known patterns', () => {
expect(getValueKey('gpt-5-some-other-info')).toBeUndefined();
it('should return "gpt-5" for model name containing "gpt-5"', () => {
expect(getValueKey('gpt-5-some-other-info')).toBe('gpt-5');
expect(getValueKey('gpt-5-2025-01-30')).toBe('gpt-5');
expect(getValueKey('gpt-5-2025-01-30-0130')).toBe('gpt-5');
expect(getValueKey('openai/gpt-5')).toBe('gpt-5');
expect(getValueKey('openai/gpt-5-2025-01-30')).toBe('gpt-5');
expect(getValueKey('gpt-5-turbo')).toBe('gpt-5');
expect(getValueKey('gpt-5-0130')).toBe('gpt-5');
});
it('should return "gpt-3.5-turbo-1106" for model name containing "gpt-3.5-turbo-1106"', () => {
@@ -84,6 +91,37 @@ describe('getValueKey', () => {
expect(getValueKey('gpt-4.1-nano-0125')).toBe('gpt-4.1-nano');
});
it('should return "gpt-5" for model type of "gpt-5"', () => {
expect(getValueKey('gpt-5-2025-01-30')).toBe('gpt-5');
expect(getValueKey('gpt-5-2025-01-30-0130')).toBe('gpt-5');
expect(getValueKey('openai/gpt-5')).toBe('gpt-5');
expect(getValueKey('openai/gpt-5-2025-01-30')).toBe('gpt-5');
expect(getValueKey('gpt-5-turbo')).toBe('gpt-5');
expect(getValueKey('gpt-5-0130')).toBe('gpt-5');
});
it('should return "gpt-5-mini" for model type of "gpt-5-mini"', () => {
expect(getValueKey('gpt-5-mini-2025-01-30')).toBe('gpt-5-mini');
expect(getValueKey('openai/gpt-5-mini')).toBe('gpt-5-mini');
expect(getValueKey('gpt-5-mini-0130')).toBe('gpt-5-mini');
expect(getValueKey('gpt-5-mini-2025-01-30-0130')).toBe('gpt-5-mini');
});
it('should return "gpt-5-nano" for model type of "gpt-5-nano"', () => {
expect(getValueKey('gpt-5-nano-2025-01-30')).toBe('gpt-5-nano');
expect(getValueKey('openai/gpt-5-nano')).toBe('gpt-5-nano');
expect(getValueKey('gpt-5-nano-0130')).toBe('gpt-5-nano');
expect(getValueKey('gpt-5-nano-2025-01-30-0130')).toBe('gpt-5-nano');
});
it('should return "gpt-5-pro" for model type of "gpt-5-pro"', () => {
expect(getValueKey('gpt-5-pro-2025-01-30')).toBe('gpt-5-pro');
expect(getValueKey('openai/gpt-5-pro')).toBe('gpt-5-pro');
expect(getValueKey('gpt-5-pro-0130')).toBe('gpt-5-pro');
expect(getValueKey('gpt-5-pro-2025-01-30-0130')).toBe('gpt-5-pro');
expect(getValueKey('gpt-5-pro-preview')).toBe('gpt-5-pro');
});
it('should return "gpt-4o" for model type of "gpt-4o"', () => {
expect(getValueKey('gpt-4o-2024-08-06')).toBe('gpt-4o');
expect(getValueKey('gpt-4o-2024-08-06-0718')).toBe('gpt-4o');
@@ -155,6 +193,16 @@ describe('getValueKey', () => {
expect(getValueKey('claude-3.5-haiku-turbo')).toBe('claude-3.5-haiku');
expect(getValueKey('claude-3.5-haiku-0125')).toBe('claude-3.5-haiku');
});
it('should return expected value keys for "gpt-oss" models', () => {
expect(getValueKey('openai/gpt-oss-120b')).toBe('gpt-oss-120b');
expect(getValueKey('openai/gpt-oss:120b')).toBe('gpt-oss:120b');
expect(getValueKey('openai/gpt-oss-570b')).toBe('gpt-oss');
expect(getValueKey('gpt-oss-570b')).toBe('gpt-oss');
expect(getValueKey('groq/gpt-oss-1080b')).toBe('gpt-oss');
expect(getValueKey('gpt-oss-20b')).toBe('gpt-oss-20b');
expect(getValueKey('oai/gpt-oss:20b')).toBe('gpt-oss:20b');
});
});
describe('getMultiplier', () => {
@@ -207,6 +255,62 @@ describe('getMultiplier', () => {
);
});
it('should return the correct multiplier for gpt-5', () => {
const valueKey = getValueKey('gpt-5-2025-01-30');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-5'].prompt);
expect(getMultiplier({ valueKey, tokenType: 'completion' })).toBe(
tokenValues['gpt-5'].completion,
);
expect(getMultiplier({ model: 'gpt-5-preview', tokenType: 'prompt' })).toBe(
tokenValues['gpt-5'].prompt,
);
expect(getMultiplier({ model: 'openai/gpt-5', tokenType: 'completion' })).toBe(
tokenValues['gpt-5'].completion,
);
});
it('should return the correct multiplier for gpt-5-mini', () => {
const valueKey = getValueKey('gpt-5-mini-2025-01-30');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-5-mini'].prompt);
expect(getMultiplier({ valueKey, tokenType: 'completion' })).toBe(
tokenValues['gpt-5-mini'].completion,
);
expect(getMultiplier({ model: 'gpt-5-mini-preview', tokenType: 'prompt' })).toBe(
tokenValues['gpt-5-mini'].prompt,
);
expect(getMultiplier({ model: 'openai/gpt-5-mini', tokenType: 'completion' })).toBe(
tokenValues['gpt-5-mini'].completion,
);
});
it('should return the correct multiplier for gpt-5-nano', () => {
const valueKey = getValueKey('gpt-5-nano-2025-01-30');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-5-nano'].prompt);
expect(getMultiplier({ valueKey, tokenType: 'completion' })).toBe(
tokenValues['gpt-5-nano'].completion,
);
expect(getMultiplier({ model: 'gpt-5-nano-preview', tokenType: 'prompt' })).toBe(
tokenValues['gpt-5-nano'].prompt,
);
expect(getMultiplier({ model: 'openai/gpt-5-nano', tokenType: 'completion' })).toBe(
tokenValues['gpt-5-nano'].completion,
);
});
it('should return the correct multiplier for gpt-5-pro', () => {
const valueKey = getValueKey('gpt-5-pro-2025-01-30');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-5-pro'].prompt);
expect(getMultiplier({ valueKey, tokenType: 'completion' })).toBe(
tokenValues['gpt-5-pro'].completion,
);
expect(getMultiplier({ model: 'gpt-5-pro-preview', tokenType: 'prompt' })).toBe(
tokenValues['gpt-5-pro'].prompt,
);
expect(getMultiplier({ model: 'openai/gpt-5-pro', tokenType: 'completion' })).toBe(
tokenValues['gpt-5-pro'].completion,
);
});
it('should return the correct multiplier for gpt-4o', () => {
const valueKey = getValueKey('gpt-4o-2024-08-06');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-4o'].prompt);
@@ -307,10 +411,34 @@ describe('getMultiplier', () => {
});
it('should return defaultRate if derived valueKey does not match any known patterns', () => {
expect(getMultiplier({ tokenType: 'prompt', model: 'gpt-5-some-other-info' })).toBe(
expect(getMultiplier({ tokenType: 'prompt', model: 'gpt-10-some-other-info' })).toBe(
defaultRate,
);
});
it('should return correct multipliers for GPT-OSS models', () => {
const models = ['gpt-oss-20b', 'gpt-oss-120b'];
models.forEach((key) => {
const expectedPrompt = tokenValues[key].prompt;
const expectedCompletion = tokenValues[key].completion;
expect(getMultiplier({ valueKey: key, tokenType: 'prompt' })).toBe(expectedPrompt);
expect(getMultiplier({ valueKey: key, tokenType: 'completion' })).toBe(expectedCompletion);
expect(getMultiplier({ model: key, tokenType: 'prompt' })).toBe(expectedPrompt);
expect(getMultiplier({ model: key, tokenType: 'completion' })).toBe(expectedCompletion);
});
});
it('should return correct multipliers for GLM models', () => {
const models = ['glm-4.6', 'glm-4.5v', 'glm-4.5-air', 'glm-4.5', 'glm-4-32b', 'glm-4', 'glm4'];
models.forEach((key) => {
const expectedPrompt = tokenValues[key].prompt;
const expectedCompletion = tokenValues[key].completion;
expect(getMultiplier({ valueKey: key, tokenType: 'prompt' })).toBe(expectedPrompt);
expect(getMultiplier({ valueKey: key, tokenType: 'completion' })).toBe(expectedCompletion);
expect(getMultiplier({ model: key, tokenType: 'prompt' })).toBe(expectedPrompt);
expect(getMultiplier({ model: key, tokenType: 'completion' })).toBe(expectedCompletion);
});
});
});
describe('AWS Bedrock Model Tests', () => {
@@ -366,6 +494,249 @@ describe('AWS Bedrock Model Tests', () => {
});
});
describe('Amazon Model Tests', () => {
describe('Amazon Nova Models', () => {
it('should return correct pricing for nova-premier', () => {
expect(getMultiplier({ model: 'nova-premier', tokenType: 'prompt' })).toBe(
tokenValues['nova-premier'].prompt,
);
expect(getMultiplier({ model: 'nova-premier', tokenType: 'completion' })).toBe(
tokenValues['nova-premier'].completion,
);
expect(getMultiplier({ model: 'amazon.nova-premier-v1:0', tokenType: 'prompt' })).toBe(
tokenValues['nova-premier'].prompt,
);
expect(getMultiplier({ model: 'amazon.nova-premier-v1:0', tokenType: 'completion' })).toBe(
tokenValues['nova-premier'].completion,
);
});
it('should return correct pricing for nova-pro', () => {
expect(getMultiplier({ model: 'nova-pro', tokenType: 'prompt' })).toBe(
tokenValues['nova-pro'].prompt,
);
expect(getMultiplier({ model: 'nova-pro', tokenType: 'completion' })).toBe(
tokenValues['nova-pro'].completion,
);
expect(getMultiplier({ model: 'amazon.nova-pro-v1:0', tokenType: 'prompt' })).toBe(
tokenValues['nova-pro'].prompt,
);
expect(getMultiplier({ model: 'amazon.nova-pro-v1:0', tokenType: 'completion' })).toBe(
tokenValues['nova-pro'].completion,
);
});
it('should return correct pricing for nova-lite', () => {
expect(getMultiplier({ model: 'nova-lite', tokenType: 'prompt' })).toBe(
tokenValues['nova-lite'].prompt,
);
expect(getMultiplier({ model: 'nova-lite', tokenType: 'completion' })).toBe(
tokenValues['nova-lite'].completion,
);
expect(getMultiplier({ model: 'amazon.nova-lite-v1:0', tokenType: 'prompt' })).toBe(
tokenValues['nova-lite'].prompt,
);
expect(getMultiplier({ model: 'amazon.nova-lite-v1:0', tokenType: 'completion' })).toBe(
tokenValues['nova-lite'].completion,
);
});
it('should return correct pricing for nova-micro', () => {
expect(getMultiplier({ model: 'nova-micro', tokenType: 'prompt' })).toBe(
tokenValues['nova-micro'].prompt,
);
expect(getMultiplier({ model: 'nova-micro', tokenType: 'completion' })).toBe(
tokenValues['nova-micro'].completion,
);
expect(getMultiplier({ model: 'amazon.nova-micro-v1:0', tokenType: 'prompt' })).toBe(
tokenValues['nova-micro'].prompt,
);
expect(getMultiplier({ model: 'amazon.nova-micro-v1:0', tokenType: 'completion' })).toBe(
tokenValues['nova-micro'].completion,
);
});
it('should match both short and full model names to the same pricing', () => {
const models = ['nova-micro', 'nova-lite', 'nova-pro', 'nova-premier'];
const fullModels = [
'amazon.nova-micro-v1:0',
'amazon.nova-lite-v1:0',
'amazon.nova-pro-v1:0',
'amazon.nova-premier-v1:0',
];
models.forEach((shortModel, i) => {
const fullModel = fullModels[i];
const shortPrompt = getMultiplier({ model: shortModel, tokenType: 'prompt' });
const fullPrompt = getMultiplier({ model: fullModel, tokenType: 'prompt' });
const shortCompletion = getMultiplier({ model: shortModel, tokenType: 'completion' });
const fullCompletion = getMultiplier({ model: fullModel, tokenType: 'completion' });
expect(shortPrompt).toBe(fullPrompt);
expect(shortCompletion).toBe(fullCompletion);
expect(shortPrompt).toBe(tokenValues[shortModel].prompt);
expect(shortCompletion).toBe(tokenValues[shortModel].completion);
});
});
});
describe('Amazon Titan Models', () => {
it('should return correct pricing for titan-text-premier', () => {
expect(getMultiplier({ model: 'titan-text-premier', tokenType: 'prompt' })).toBe(
tokenValues['titan-text-premier'].prompt,
);
expect(getMultiplier({ model: 'titan-text-premier', tokenType: 'completion' })).toBe(
tokenValues['titan-text-premier'].completion,
);
expect(getMultiplier({ model: 'amazon.titan-text-premier-v1:0', tokenType: 'prompt' })).toBe(
tokenValues['titan-text-premier'].prompt,
);
expect(
getMultiplier({ model: 'amazon.titan-text-premier-v1:0', tokenType: 'completion' }),
).toBe(tokenValues['titan-text-premier'].completion);
});
it('should return correct pricing for titan-text-express', () => {
expect(getMultiplier({ model: 'titan-text-express', tokenType: 'prompt' })).toBe(
tokenValues['titan-text-express'].prompt,
);
expect(getMultiplier({ model: 'titan-text-express', tokenType: 'completion' })).toBe(
tokenValues['titan-text-express'].completion,
);
expect(getMultiplier({ model: 'amazon.titan-text-express-v1', tokenType: 'prompt' })).toBe(
tokenValues['titan-text-express'].prompt,
);
expect(
getMultiplier({ model: 'amazon.titan-text-express-v1', tokenType: 'completion' }),
).toBe(tokenValues['titan-text-express'].completion);
});
it('should return correct pricing for titan-text-lite', () => {
expect(getMultiplier({ model: 'titan-text-lite', tokenType: 'prompt' })).toBe(
tokenValues['titan-text-lite'].prompt,
);
expect(getMultiplier({ model: 'titan-text-lite', tokenType: 'completion' })).toBe(
tokenValues['titan-text-lite'].completion,
);
expect(getMultiplier({ model: 'amazon.titan-text-lite-v1', tokenType: 'prompt' })).toBe(
tokenValues['titan-text-lite'].prompt,
);
expect(getMultiplier({ model: 'amazon.titan-text-lite-v1', tokenType: 'completion' })).toBe(
tokenValues['titan-text-lite'].completion,
);
});
it('should match both short and full model names to the same pricing', () => {
const models = ['titan-text-lite', 'titan-text-express', 'titan-text-premier'];
const fullModels = [
'amazon.titan-text-lite-v1',
'amazon.titan-text-express-v1',
'amazon.titan-text-premier-v1:0',
];
models.forEach((shortModel, i) => {
const fullModel = fullModels[i];
const shortPrompt = getMultiplier({ model: shortModel, tokenType: 'prompt' });
const fullPrompt = getMultiplier({ model: fullModel, tokenType: 'prompt' });
const shortCompletion = getMultiplier({ model: shortModel, tokenType: 'completion' });
const fullCompletion = getMultiplier({ model: fullModel, tokenType: 'completion' });
expect(shortPrompt).toBe(fullPrompt);
expect(shortCompletion).toBe(fullCompletion);
expect(shortPrompt).toBe(tokenValues[shortModel].prompt);
expect(shortCompletion).toBe(tokenValues[shortModel].completion);
});
});
});
});
describe('AI21 Model Tests', () => {
describe('AI21 J2 Models', () => {
it('should return correct pricing for j2-mid', () => {
expect(getMultiplier({ model: 'j2-mid', tokenType: 'prompt' })).toBe(
tokenValues['j2-mid'].prompt,
);
expect(getMultiplier({ model: 'j2-mid', tokenType: 'completion' })).toBe(
tokenValues['j2-mid'].completion,
);
expect(getMultiplier({ model: 'ai21.j2-mid-v1', tokenType: 'prompt' })).toBe(
tokenValues['j2-mid'].prompt,
);
expect(getMultiplier({ model: 'ai21.j2-mid-v1', tokenType: 'completion' })).toBe(
tokenValues['j2-mid'].completion,
);
});
it('should return correct pricing for j2-ultra', () => {
expect(getMultiplier({ model: 'j2-ultra', tokenType: 'prompt' })).toBe(
tokenValues['j2-ultra'].prompt,
);
expect(getMultiplier({ model: 'j2-ultra', tokenType: 'completion' })).toBe(
tokenValues['j2-ultra'].completion,
);
expect(getMultiplier({ model: 'ai21.j2-ultra-v1', tokenType: 'prompt' })).toBe(
tokenValues['j2-ultra'].prompt,
);
expect(getMultiplier({ model: 'ai21.j2-ultra-v1', tokenType: 'completion' })).toBe(
tokenValues['j2-ultra'].completion,
);
});
it('should match both short and full model names to the same pricing', () => {
const models = ['j2-mid', 'j2-ultra'];
const fullModels = ['ai21.j2-mid-v1', 'ai21.j2-ultra-v1'];
models.forEach((shortModel, i) => {
const fullModel = fullModels[i];
const shortPrompt = getMultiplier({ model: shortModel, tokenType: 'prompt' });
const fullPrompt = getMultiplier({ model: fullModel, tokenType: 'prompt' });
const shortCompletion = getMultiplier({ model: shortModel, tokenType: 'completion' });
const fullCompletion = getMultiplier({ model: fullModel, tokenType: 'completion' });
expect(shortPrompt).toBe(fullPrompt);
expect(shortCompletion).toBe(fullCompletion);
expect(shortPrompt).toBe(tokenValues[shortModel].prompt);
expect(shortCompletion).toBe(tokenValues[shortModel].completion);
});
});
});
describe('AI21 Jamba Models', () => {
it('should return correct pricing for jamba-instruct', () => {
expect(getMultiplier({ model: 'jamba-instruct', tokenType: 'prompt' })).toBe(
tokenValues['jamba-instruct'].prompt,
);
expect(getMultiplier({ model: 'jamba-instruct', tokenType: 'completion' })).toBe(
tokenValues['jamba-instruct'].completion,
);
expect(getMultiplier({ model: 'ai21.jamba-instruct-v1:0', tokenType: 'prompt' })).toBe(
tokenValues['jamba-instruct'].prompt,
);
expect(getMultiplier({ model: 'ai21.jamba-instruct-v1:0', tokenType: 'completion' })).toBe(
tokenValues['jamba-instruct'].completion,
);
});
it('should match both short and full model names to the same pricing', () => {
const shortPrompt = getMultiplier({ model: 'jamba-instruct', tokenType: 'prompt' });
const fullPrompt = getMultiplier({
model: 'ai21.jamba-instruct-v1:0',
tokenType: 'prompt',
});
const shortCompletion = getMultiplier({ model: 'jamba-instruct', tokenType: 'completion' });
const fullCompletion = getMultiplier({
model: 'ai21.jamba-instruct-v1:0',
tokenType: 'completion',
});
expect(shortPrompt).toBe(fullPrompt);
expect(shortCompletion).toBe(fullCompletion);
expect(shortPrompt).toBe(tokenValues['jamba-instruct'].prompt);
expect(shortCompletion).toBe(tokenValues['jamba-instruct'].completion);
});
});
});
describe('Deepseek Model Tests', () => {
const deepseekModels = ['deepseek-chat', 'deepseek-coder', 'deepseek-reasoner', 'deepseek.r1'];
@@ -397,6 +768,187 @@ describe('Deepseek Model Tests', () => {
});
});
describe('Qwen3 Model Tests', () => {
describe('Qwen3 Base Models', () => {
it('should return correct pricing for qwen3 base pattern', () => {
expect(getMultiplier({ model: 'qwen3', tokenType: 'prompt' })).toBe(
tokenValues['qwen3'].prompt,
);
expect(getMultiplier({ model: 'qwen3', tokenType: 'completion' })).toBe(
tokenValues['qwen3'].completion,
);
});
it('should return correct pricing for qwen3-4b (falls back to qwen3)', () => {
expect(getMultiplier({ model: 'qwen3-4b', tokenType: 'prompt' })).toBe(
tokenValues['qwen3'].prompt,
);
expect(getMultiplier({ model: 'qwen3-4b', tokenType: 'completion' })).toBe(
tokenValues['qwen3'].completion,
);
});
it('should return correct pricing for qwen3-8b', () => {
expect(getMultiplier({ model: 'qwen3-8b', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-8b'].prompt,
);
expect(getMultiplier({ model: 'qwen3-8b', tokenType: 'completion' })).toBe(
tokenValues['qwen3-8b'].completion,
);
});
it('should return correct pricing for qwen3-14b', () => {
expect(getMultiplier({ model: 'qwen3-14b', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-14b'].prompt,
);
expect(getMultiplier({ model: 'qwen3-14b', tokenType: 'completion' })).toBe(
tokenValues['qwen3-14b'].completion,
);
});
it('should return correct pricing for qwen3-235b-a22b', () => {
expect(getMultiplier({ model: 'qwen3-235b-a22b', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-235b-a22b'].prompt,
);
expect(getMultiplier({ model: 'qwen3-235b-a22b', tokenType: 'completion' })).toBe(
tokenValues['qwen3-235b-a22b'].completion,
);
});
it('should handle model name variations with provider prefixes', () => {
const models = [
{ input: 'qwen3', expected: 'qwen3' },
{ input: 'qwen3-4b', expected: 'qwen3' },
{ input: 'qwen3-8b', expected: 'qwen3-8b' },
{ input: 'qwen3-32b', expected: 'qwen3-32b' },
];
models.forEach(({ input, expected }) => {
const withPrefix = `alibaba/${input}`;
expect(getMultiplier({ model: withPrefix, tokenType: 'prompt' })).toBe(
tokenValues[expected].prompt,
);
expect(getMultiplier({ model: withPrefix, tokenType: 'completion' })).toBe(
tokenValues[expected].completion,
);
});
});
});
describe('Qwen3 VL (Vision-Language) Models', () => {
it('should return correct pricing for qwen3-vl-8b-thinking', () => {
expect(getMultiplier({ model: 'qwen3-vl-8b-thinking', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-vl-8b-thinking'].prompt,
);
expect(getMultiplier({ model: 'qwen3-vl-8b-thinking', tokenType: 'completion' })).toBe(
tokenValues['qwen3-vl-8b-thinking'].completion,
);
});
it('should return correct pricing for qwen3-vl-8b-instruct', () => {
expect(getMultiplier({ model: 'qwen3-vl-8b-instruct', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-vl-8b-instruct'].prompt,
);
expect(getMultiplier({ model: 'qwen3-vl-8b-instruct', tokenType: 'completion' })).toBe(
tokenValues['qwen3-vl-8b-instruct'].completion,
);
});
it('should return correct pricing for qwen3-vl-30b-a3b', () => {
expect(getMultiplier({ model: 'qwen3-vl-30b-a3b', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-vl-30b-a3b'].prompt,
);
expect(getMultiplier({ model: 'qwen3-vl-30b-a3b', tokenType: 'completion' })).toBe(
tokenValues['qwen3-vl-30b-a3b'].completion,
);
});
it('should return correct pricing for qwen3-vl-235b-a22b', () => {
expect(getMultiplier({ model: 'qwen3-vl-235b-a22b', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-vl-235b-a22b'].prompt,
);
expect(getMultiplier({ model: 'qwen3-vl-235b-a22b', tokenType: 'completion' })).toBe(
tokenValues['qwen3-vl-235b-a22b'].completion,
);
});
});
describe('Qwen3 Specialized Models', () => {
it('should return correct pricing for qwen3-max', () => {
expect(getMultiplier({ model: 'qwen3-max', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-max'].prompt,
);
expect(getMultiplier({ model: 'qwen3-max', tokenType: 'completion' })).toBe(
tokenValues['qwen3-max'].completion,
);
});
it('should return correct pricing for qwen3-coder', () => {
expect(getMultiplier({ model: 'qwen3-coder', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-coder'].prompt,
);
expect(getMultiplier({ model: 'qwen3-coder', tokenType: 'completion' })).toBe(
tokenValues['qwen3-coder'].completion,
);
});
it('should return correct pricing for qwen3-coder-plus', () => {
expect(getMultiplier({ model: 'qwen3-coder-plus', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-coder-plus'].prompt,
);
expect(getMultiplier({ model: 'qwen3-coder-plus', tokenType: 'completion' })).toBe(
tokenValues['qwen3-coder-plus'].completion,
);
});
it('should return correct pricing for qwen3-coder-flash', () => {
expect(getMultiplier({ model: 'qwen3-coder-flash', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-coder-flash'].prompt,
);
expect(getMultiplier({ model: 'qwen3-coder-flash', tokenType: 'completion' })).toBe(
tokenValues['qwen3-coder-flash'].completion,
);
});
it('should return correct pricing for qwen3-next-80b-a3b', () => {
expect(getMultiplier({ model: 'qwen3-next-80b-a3b', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-next-80b-a3b'].prompt,
);
expect(getMultiplier({ model: 'qwen3-next-80b-a3b', tokenType: 'completion' })).toBe(
tokenValues['qwen3-next-80b-a3b'].completion,
);
});
});
describe('Qwen3 Model Variations', () => {
it('should handle all qwen3 models with provider prefixes', () => {
const models = ['qwen3', 'qwen3-8b', 'qwen3-max', 'qwen3-coder', 'qwen3-vl-8b-instruct'];
const prefixes = ['alibaba', 'qwen', 'openrouter'];
models.forEach((model) => {
prefixes.forEach((prefix) => {
const fullModel = `${prefix}/${model}`;
expect(getMultiplier({ model: fullModel, tokenType: 'prompt' })).toBe(
tokenValues[model].prompt,
);
expect(getMultiplier({ model: fullModel, tokenType: 'completion' })).toBe(
tokenValues[model].completion,
);
});
});
});
it('should handle qwen3-4b falling back to qwen3 base pattern', () => {
const testCases = ['qwen3-4b', 'alibaba/qwen3-4b', 'qwen/qwen3-4b-preview'];
testCases.forEach((model) => {
expect(getMultiplier({ model, tokenType: 'prompt' })).toBe(tokenValues['qwen3'].prompt);
expect(getMultiplier({ model, tokenType: 'completion' })).toBe(
tokenValues['qwen3'].completion,
);
});
});
});
});
describe('getCacheMultiplier', () => {
it('should return the correct cache multiplier for a given valueKey and cacheType', () => {
expect(getCacheMultiplier({ valueKey: 'claude-3-5-sonnet', cacheType: 'write' })).toBe(
@@ -488,6 +1040,9 @@ describe('getCacheMultiplier', () => {
describe('Google Model Tests', () => {
const googleModels = [
'gemini-2.5-pro',
'gemini-2.5-flash',
'gemini-2.5-flash-lite',
'gemini-2.5-pro-preview-05-06',
'gemini-2.5-flash-preview-04-17',
'gemini-2.5-exp',
@@ -528,6 +1083,9 @@ describe('Google Model Tests', () => {
it('should map to the correct model keys', () => {
const expected = {
'gemini-2.5-pro': 'gemini-2.5-pro',
'gemini-2.5-flash': 'gemini-2.5-flash',
'gemini-2.5-flash-lite': 'gemini-2.5-flash-lite',
'gemini-2.5-pro-preview-05-06': 'gemini-2.5-pro',
'gemini-2.5-flash-preview-04-17': 'gemini-2.5-flash',
'gemini-2.5-exp': 'gemini-2.5',
@@ -683,6 +1241,110 @@ describe('Grok Model Tests - Pricing', () => {
});
});
describe('GLM Model Tests', () => {
it('should return expected value keys for GLM models', () => {
expect(getValueKey('glm-4.6')).toBe('glm-4.6');
expect(getValueKey('glm-4.5')).toBe('glm-4.5');
expect(getValueKey('glm-4.5v')).toBe('glm-4.5v');
expect(getValueKey('glm-4.5-air')).toBe('glm-4.5-air');
expect(getValueKey('glm-4-32b')).toBe('glm-4-32b');
expect(getValueKey('glm-4')).toBe('glm-4');
expect(getValueKey('glm4')).toBe('glm4');
});
it('should match GLM model variations with provider prefixes', () => {
expect(getValueKey('z-ai/glm-4.6')).toBe('glm-4.6');
expect(getValueKey('z-ai/glm-4.5')).toBe('glm-4.5');
expect(getValueKey('z-ai/glm-4.5-air')).toBe('glm-4.5-air');
expect(getValueKey('z-ai/glm-4.5v')).toBe('glm-4.5v');
expect(getValueKey('z-ai/glm-4-32b')).toBe('glm-4-32b');
expect(getValueKey('zai/glm-4.6')).toBe('glm-4.6');
expect(getValueKey('zai/glm-4.5')).toBe('glm-4.5');
expect(getValueKey('zai/glm-4.5-air')).toBe('glm-4.5-air');
expect(getValueKey('zai/glm-4.5v')).toBe('glm-4.5v');
expect(getValueKey('zai-org/GLM-4.6')).toBe('glm-4.6');
expect(getValueKey('zai-org/GLM-4.5')).toBe('glm-4.5');
expect(getValueKey('zai-org/GLM-4.5-Air')).toBe('glm-4.5-air');
expect(getValueKey('zai-org/GLM-4.5V')).toBe('glm-4.5v');
expect(getValueKey('zai-org/GLM-4-32B-0414')).toBe('glm-4-32b');
});
it('should match GLM model variations with suffixes', () => {
expect(getValueKey('glm-4.6-fp8')).toBe('glm-4.6');
expect(getValueKey('zai-org/GLM-4.6-FP8')).toBe('glm-4.6');
expect(getValueKey('zai-org/GLM-4.5-Air-FP8')).toBe('glm-4.5-air');
});
it('should prioritize more specific GLM model patterns', () => {
expect(getValueKey('glm-4.5-air-something')).toBe('glm-4.5-air');
expect(getValueKey('glm-4.5-something')).toBe('glm-4.5');
expect(getValueKey('glm-4.5v-something')).toBe('glm-4.5v');
});
it('should return correct multipliers for all GLM models', () => {
expect(getMultiplier({ model: 'glm-4.6', tokenType: 'prompt' })).toBe(
tokenValues['glm-4.6'].prompt,
);
expect(getMultiplier({ model: 'glm-4.6', tokenType: 'completion' })).toBe(
tokenValues['glm-4.6'].completion,
);
expect(getMultiplier({ model: 'glm-4.5v', tokenType: 'prompt' })).toBe(
tokenValues['glm-4.5v'].prompt,
);
expect(getMultiplier({ model: 'glm-4.5v', tokenType: 'completion' })).toBe(
tokenValues['glm-4.5v'].completion,
);
expect(getMultiplier({ model: 'glm-4.5-air', tokenType: 'prompt' })).toBe(
tokenValues['glm-4.5-air'].prompt,
);
expect(getMultiplier({ model: 'glm-4.5-air', tokenType: 'completion' })).toBe(
tokenValues['glm-4.5-air'].completion,
);
expect(getMultiplier({ model: 'glm-4.5', tokenType: 'prompt' })).toBe(
tokenValues['glm-4.5'].prompt,
);
expect(getMultiplier({ model: 'glm-4.5', tokenType: 'completion' })).toBe(
tokenValues['glm-4.5'].completion,
);
expect(getMultiplier({ model: 'glm-4-32b', tokenType: 'prompt' })).toBe(
tokenValues['glm-4-32b'].prompt,
);
expect(getMultiplier({ model: 'glm-4-32b', tokenType: 'completion' })).toBe(
tokenValues['glm-4-32b'].completion,
);
expect(getMultiplier({ model: 'glm-4', tokenType: 'prompt' })).toBe(
tokenValues['glm-4'].prompt,
);
expect(getMultiplier({ model: 'glm-4', tokenType: 'completion' })).toBe(
tokenValues['glm-4'].completion,
);
expect(getMultiplier({ model: 'glm4', tokenType: 'prompt' })).toBe(tokenValues['glm4'].prompt);
expect(getMultiplier({ model: 'glm4', tokenType: 'completion' })).toBe(
tokenValues['glm4'].completion,
);
});
it('should return correct multipliers for GLM models with provider prefixes', () => {
expect(getMultiplier({ model: 'z-ai/glm-4.6', tokenType: 'prompt' })).toBe(
tokenValues['glm-4.6'].prompt,
);
expect(getMultiplier({ model: 'zai/glm-4.5-air', tokenType: 'completion' })).toBe(
tokenValues['glm-4.5-air'].completion,
);
expect(getMultiplier({ model: 'zai-org/GLM-4.5V', tokenType: 'prompt' })).toBe(
tokenValues['glm-4.5v'].prompt,
);
});
});
describe('Claude Model Tests', () => {
it('should return correct prompt and completion rates for Claude 4 models', () => {
expect(getMultiplier({ model: 'claude-sonnet-4', tokenType: 'prompt' })).toBe(
@@ -699,6 +1361,37 @@ describe('Claude Model Tests', () => {
);
});
it('should return correct prompt and completion rates for Claude Haiku 4.5', () => {
expect(getMultiplier({ model: 'claude-haiku-4-5', tokenType: 'prompt' })).toBe(
tokenValues['claude-haiku-4-5'].prompt,
);
expect(getMultiplier({ model: 'claude-haiku-4-5', tokenType: 'completion' })).toBe(
tokenValues['claude-haiku-4-5'].completion,
);
});
it('should handle Claude Haiku 4.5 model name variations', () => {
const modelVariations = [
'claude-haiku-4-5',
'claude-haiku-4-5-20250420',
'claude-haiku-4-5-latest',
'anthropic/claude-haiku-4-5',
'claude-haiku-4-5/anthropic',
'claude-haiku-4-5-preview',
];
modelVariations.forEach((model) => {
const valueKey = getValueKey(model);
expect(valueKey).toBe('claude-haiku-4-5');
expect(getMultiplier({ model, tokenType: 'prompt' })).toBe(
tokenValues['claude-haiku-4-5'].prompt,
);
expect(getMultiplier({ model, tokenType: 'completion' })).toBe(
tokenValues['claude-haiku-4-5'].completion,
);
});
});
it('should handle Claude 4 model name variations with different prefixes and suffixes', () => {
const modelVariations = [
'claude-sonnet-4',
@@ -776,3 +1469,119 @@ describe('Claude Model Tests', () => {
});
});
});
describe('tokens.ts and tx.js sync validation', () => {
it('should resolve all models in maxTokensMap to pricing via getValueKey', () => {
const tokensKeys = Object.keys(maxTokensMap[EModelEndpoint.openAI]);
const txKeys = Object.keys(tokenValues);
const unresolved = [];
tokensKeys.forEach((key) => {
// Skip legacy token size mappings (e.g., '4k', '8k', '16k', '32k')
if (/^\d+k$/.test(key)) return;
// Skip generic pattern keys (end with '-' or ':')
if (key.endsWith('-') || key.endsWith(':')) return;
// Try to resolve via getValueKey
const resolvedKey = getValueKey(key);
// If it resolves and the resolved key has pricing, success
if (resolvedKey && txKeys.includes(resolvedKey)) return;
// If it resolves to a legacy key (4k, 8k, etc), also OK
if (resolvedKey && /^\d+k$/.test(resolvedKey)) return;
// If we get here, this model can't get pricing - flag it
unresolved.push({
key,
resolvedKey: resolvedKey || 'undefined',
context: maxTokensMap[EModelEndpoint.openAI][key],
});
});
if (unresolved.length > 0) {
console.log('\nModels that cannot resolve to pricing via getValueKey:');
unresolved.forEach(({ key, resolvedKey, context }) => {
console.log(` - '${key}' → '${resolvedKey}' (context: ${context})`);
});
}
expect(unresolved).toEqual([]);
});
it('should not have redundant dated variants with same pricing and context as base model', () => {
const txKeys = Object.keys(tokenValues);
const redundant = [];
txKeys.forEach((key) => {
// Check if this is a dated variant (ends with -YYYY-MM-DD)
if (key.match(/.*-\d{4}-\d{2}-\d{2}$/)) {
const baseKey = key.replace(/-\d{4}-\d{2}-\d{2}$/, '');
if (txKeys.includes(baseKey)) {
const variantPricing = tokenValues[key];
const basePricing = tokenValues[baseKey];
const variantContext = maxTokensMap[EModelEndpoint.openAI][key];
const baseContext = maxTokensMap[EModelEndpoint.openAI][baseKey];
const samePricing =
variantPricing.prompt === basePricing.prompt &&
variantPricing.completion === basePricing.completion;
const sameContext = variantContext === baseContext;
if (samePricing && sameContext) {
redundant.push({
key,
baseKey,
pricing: `${variantPricing.prompt}/${variantPricing.completion}`,
context: variantContext,
});
}
}
}
});
if (redundant.length > 0) {
console.log('\nRedundant dated variants found (same pricing and context as base):');
redundant.forEach(({ key, baseKey, pricing, context }) => {
console.log(` - '${key}' → '${baseKey}' (pricing: ${pricing}, context: ${context})`);
console.log(` Can be removed - pattern matching will handle it`);
});
}
expect(redundant).toEqual([]);
});
it('should have context windows in tokens.ts for all models with pricing in tx.js (openAI catch-all)', () => {
const txKeys = Object.keys(tokenValues);
const missingContext = [];
txKeys.forEach((key) => {
// Skip legacy token size mappings (4k, 8k, 16k, 32k)
if (/^\d+k$/.test(key)) return;
// Check if this model has a context window defined
const context = maxTokensMap[EModelEndpoint.openAI][key];
if (!context) {
const pricing = tokenValues[key];
missingContext.push({
key,
pricing: `${pricing.prompt}/${pricing.completion}`,
});
}
});
if (missingContext.length > 0) {
console.log('\nModels with pricing but missing context in tokens.ts:');
missingContext.forEach(({ key, pricing }) => {
console.log(` - '${key}' (pricing: ${pricing})`);
console.log(` Add to tokens.ts openAIModels/bedrockModels/etc.`);
});
}
expect(missingContext).toEqual([]);
});
});
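The two validation suites above rely on a naming convention: a dated key such as 'gpt-4o-2024-08-06' should only be kept in tokenValues when its pricing or context window differs from the undated base key, since getValueKey's pattern matching already maps dated variants onto the base entry. A minimal sketch of that redundancy check, using the same date-suffix regex as the test; the tokenValues/contextMap shapes are assumptions for illustration:

// Sketch: flag dated keys whose pricing and context duplicate the undated base entry.
const DATED_SUFFIX = /-\d{4}-\d{2}-\d{2}$/;

function findRedundantDatedKeys(tokenValues, contextMap) {
  return Object.keys(tokenValues)
    .filter((key) => DATED_SUFFIX.test(key))
    .filter((key) => {
      const baseKey = key.replace(DATED_SUFFIX, '');
      const base = tokenValues[baseKey];
      if (!base) {
        return false; // no undated base entry, so the dated key carries real data
      }
      const variant = tokenValues[key];
      return (
        variant.prompt === base.prompt &&
        variant.completion === base.completion &&
        contextMap[key] === contextMap[baseKey]
      );
    });
}

Any key this returns can be dropped from tx.js and will still resolve to the same pricing through the base pattern.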

View File

@@ -3,7 +3,7 @@ const bcrypt = require('bcryptjs');
/**
* Compares the provided password with the user's password.
*
* @param {MongoUser} user - The user to compare the password for.
* @param {IUser} user - The user to compare the password for.
* @param {string} candidatePassword - The password to test against the user's password.
* @returns {Promise<boolean>} A promise that resolves to a boolean indicating if the password matches.
*/

View File

@@ -1,6 +1,6 @@
{
"name": "@librechat/backend",
"version": "v0.7.9",
"version": "v0.8.1-rc1",
"description": "",
"scripts": {
"start": "echo 'please run this from the root directory'",
@@ -47,16 +47,15 @@
"@langchain/core": "^0.3.62",
"@langchain/google-genai": "^0.2.13",
"@langchain/google-vertexai": "^0.2.13",
"@langchain/openai": "^0.5.18",
"@langchain/textsplitters": "^0.1.0",
"@librechat/agents": "^2.4.69",
"@librechat/agents": "^2.4.90",
"@librechat/api": "*",
"@librechat/data-schemas": "*",
"@microsoft/microsoft-graph-client": "^3.0.7",
"@modelcontextprotocol/sdk": "^1.17.1",
"@node-saml/passport-saml": "^5.1.0",
"@microsoft/microsoft-graph-client": "^3.0.7",
"@waylaidwanderer/fetch-event-source": "^3.0.1",
"axios": "^1.8.2",
"axios": "^1.12.1",
"bcryptjs": "^2.4.3",
"compression": "^1.8.1",
"connect-redis": "^8.1.0",
@@ -94,10 +93,9 @@
"multer": "^2.0.2",
"nanoid": "^3.3.7",
"node-fetch": "^2.7.0",
"nodemailer": "^6.9.15",
"nodemailer": "^7.0.9",
"ollama": "^0.5.0",
"openai": "^5.10.1",
"openai-chat-tokens": "^0.2.8",
"openid-client": "^6.5.0",
"passport": "^0.6.0",
"passport-apple": "^2.0.2",

View File

@@ -1,6 +1,6 @@
const { logger } = require('~/config');
const { logger } = require('@librechat/data-schemas');
// WeakMap to hold temporary data associated with requests
/** WeakMap to hold temporary data associated with requests */
const requestDataMap = new WeakMap();
const FinalizationRegistry = global.FinalizationRegistry || null;
@@ -23,7 +23,7 @@ const clientRegistry = FinalizationRegistry
} else {
logger.debug('[FinalizationRegistry] Cleaning up client');
}
} catch (e) {
} catch {
// Ignore errors
}
})
@@ -55,6 +55,9 @@ function disposeClient(client) {
if (client.responseMessageId) {
client.responseMessageId = null;
}
if (client.parentMessageId) {
client.parentMessageId = null;
}
if (client.message_file_map) {
client.message_file_map = null;
}
@@ -334,7 +337,7 @@ function disposeClient(client) {
}
}
client.options = null;
} catch (e) {
} catch {
// Ignore errors during disposal
}
}
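For context on the disposeClient changes above: the module keeps per-request data in a WeakMap keyed by the request object and optionally registers clients with a FinalizationRegistry, so a debug message fires once they are garbage-collected. A minimal, self-contained sketch of that pattern; the names here are illustrative, not the project's actual API:

// Sketch: GC-friendly bookkeeping with WeakMap + FinalizationRegistry (illustrative names).
const requestData = new WeakMap();

const registry =
  typeof FinalizationRegistry !== 'undefined'
    ? new FinalizationRegistry((label) => {
        // Runs some time after the registered object becomes unreachable.
        console.debug(`[FinalizationRegistry] cleaned up ${label}`);
      })
    : null;

function trackClient(req, client) {
  requestData.set(req, { createdAt: Date.now() }); // collected together with req
  registry?.register(client, 'client');
}

The new `client.parentMessageId = null` line in the diff follows the same convention as the surrounding fields: clear everything the client held before releasing it.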

View File

@@ -1,8 +1,8 @@
const cookies = require('cookie');
const jwt = require('jsonwebtoken');
const openIdClient = require('openid-client');
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { isEnabled, findOpenIDUser } = require('@librechat/api');
const {
requestPasswordReset,
setOpenIDAuthTokens,
@@ -11,8 +11,9 @@ const {
registerUser,
} = require('~/server/services/AuthService');
const { findUser, getUserById, deleteAllUserSessions, findSession } = require('~/models');
const { getOpenIdConfig } = require('~/strategies');
const { getGraphApiToken } = require('~/server/services/GraphTokenService');
const { getOAuthReconnectionManager } = require('~/config');
const { getOpenIdConfig } = require('~/strategies');
const registrationController = async (req, res) => {
try {
@@ -71,11 +72,17 @@ const refreshController = async (req, res) => {
const openIdConfig = getOpenIdConfig();
const tokenset = await openIdClient.refreshTokenGrant(openIdConfig, refreshToken);
const claims = tokenset.claims();
const user = await findUser({ email: claims.email });
if (!user) {
const { user, error } = await findOpenIDUser({
findUser,
email: claims.email,
openidId: claims.sub,
idOnTheSource: claims.oid,
strategyName: 'refreshController',
});
if (error || !user) {
return res.status(401).redirect('/login');
}
const token = setOpenIDAuthTokens(tokenset, res);
const token = setOpenIDAuthTokens(tokenset, res, user._id.toString());
return res.status(200).send({ token, user });
} catch (error) {
logger.error('[refreshController] OpenID token refresh error', error);
@@ -84,7 +91,7 @@ const refreshController = async (req, res) => {
}
try {
const payload = jwt.verify(refreshToken, process.env.JWT_REFRESH_SECRET);
const user = await getUserById(payload.id, '-password -__v -totpSecret');
const user = await getUserById(payload.id, '-password -__v -totpSecret -backupCodes');
if (!user) {
return res.status(401).redirect('/login');
}
@@ -96,14 +103,29 @@ const refreshController = async (req, res) => {
return res.status(200).send({ token, user });
}
// Find the session with the hashed refresh token
const session = await findSession({
userId: userId,
refreshToken: refreshToken,
});
/** Session with the hashed refresh token */
const session = await findSession(
{
userId: userId,
refreshToken: refreshToken,
},
{ lean: false },
);
if (session && session.expiration > new Date()) {
const token = await setAuthTokens(userId, res, session._id);
const token = await setAuthTokens(userId, res, session);
// trigger OAuth MCP server reconnection asynchronously (best effort)
try {
void getOAuthReconnectionManager()
.reconnectServers(userId)
.catch((err) => {
logger.error('[refreshController] Error reconnecting OAuth MCP servers:', err);
});
} catch (err) {
logger.warn(`[refreshController] Cannot attempt OAuth MCP servers reconnection:`, err);
}
res.status(200).send({ token, user });
} else if (req?.query?.retry) {
// Retrying from a refresh token request that failed (401)
@@ -114,7 +136,7 @@ const refreshController = async (req, res) => {
res.status(401).send('Refresh token expired or not found for this user');
}
} catch (err) {
logger.error(`[refreshController] Refresh token: ${refreshToken}`, err);
logger.error(`[refreshController] Invalid refresh token:`, err);
res.status(403).send('Invalid refresh token');
}
};
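Note how the refresh flow treats the OAuth MCP reconnection as best effort: the promise is started with `void` and its errors are only logged, so a slow or failing reconnect can neither delay nor fail the token refresh response. Reduced to its essentials, the pattern looks like this (names simplified; not the project's exact helper):

// Sketch: fire-and-forget side effect that never blocks or fails the HTTP response.
function reconnectBestEffort(reconnect, userId, logger) {
  try {
    // `void` makes the intent explicit: we deliberately do not await this promise.
    void reconnect(userId).catch((err) => {
      logger.error('MCP reconnection failed:', err);
    });
  } catch (err) {
    // Synchronous failures (e.g. the manager is not initialized) are only warned about.
    logger.warn('Cannot attempt MCP reconnection:', err);
  }
}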

View File

@@ -1,46 +0,0 @@
const { logger } = require('~/config');
//handle duplicates
const handleDuplicateKeyError = (err, res) => {
logger.error('Duplicate key error:', err.keyValue);
const field = `${JSON.stringify(Object.keys(err.keyValue))}`;
const code = 409;
res
.status(code)
.send({ messages: `An document with that ${field} already exists.`, fields: field });
};
//handle validation errors
const handleValidationError = (err, res) => {
logger.error('Validation error:', err.errors);
let errors = Object.values(err.errors).map((el) => el.message);
let fields = `${JSON.stringify(Object.values(err.errors).map((el) => el.path))}`;
let code = 400;
if (errors.length > 1) {
errors = errors.join(' ');
res.status(code).send({ messages: `${JSON.stringify(errors)}`, fields: fields });
} else {
res.status(code).send({ messages: `${JSON.stringify(errors)}`, fields: fields });
}
};
module.exports = (err, _req, res, _next) => {
try {
if (err.name === 'ValidationError') {
return handleValidationError(err, res);
}
if (err.code && err.code == 11000) {
return handleDuplicateKeyError(err, res);
}
// Special handling for errors like SyntaxError
if (err.statusCode && err.body) {
return res.status(err.statusCode).send(err.body);
}
logger.error('ErrorController => error', err);
return res.status(500).send('An unknown error occurred.');
} catch (err) {
logger.error('ErrorController => processing error', err);
return res.status(500).send('Processing error in ErrorController.');
}
};

View File

@@ -1,10 +1,11 @@
const { logger } = require('@librechat/data-schemas');
const { CacheKeys } = require('librechat-data-provider');
const { loadDefaultModels, loadConfigModels } = require('~/server/services/Config');
const { getLogStores } = require('~/cache');
const { logger } = require('~/config');
/**
* @param {ServerRequest} req
* @returns {Promise<TModelsConfig>} The models config.
*/
const getModelsConfig = async (req) => {
const cache = getLogStores(CacheKeys.CONFIG_STORE);

View File

@@ -1,27 +0,0 @@
const { CacheKeys } = require('librechat-data-provider');
const { loadOverrideConfig } = require('~/server/services/Config');
const { getLogStores } = require('~/cache');
async function overrideController(req, res) {
const cache = getLogStores(CacheKeys.CONFIG_STORE);
let overrideConfig = await cache.get(CacheKeys.OVERRIDE_CONFIG);
if (overrideConfig) {
res.send(overrideConfig);
return;
} else if (overrideConfig === false) {
res.send(false);
return;
}
overrideConfig = await loadOverrideConfig();
const { endpointsConfig, modelsConfig } = overrideConfig;
if (endpointsConfig) {
await cache.set(CacheKeys.ENDPOINT_CONFIG, endpointsConfig);
}
if (modelsConfig) {
await cache.set(CacheKeys.MODELS_CONFIG, modelsConfig);
}
await cache.set(CacheKeys.OVERRIDE_CONFIG, overrideConfig);
res.send(JSON.stringify(overrideConfig));
}
module.exports = overrideController;

View File

@@ -364,7 +364,7 @@ const getUserEffectivePermissions = async (req, res) => {
*/
const searchPrincipals = async (req, res) => {
try {
const { q: query, limit = 20, type } = req.query;
const { q: query, limit = 20, types } = req.query;
if (!query || query.trim().length === 0) {
return res.status(400).json({
@@ -379,22 +379,34 @@ const searchPrincipals = async (req, res) => {
}
const searchLimit = Math.min(Math.max(1, parseInt(limit) || 10), 50);
const typeFilter = [PrincipalType.USER, PrincipalType.GROUP, PrincipalType.ROLE].includes(type)
? type
: null;
const localResults = await searchLocalPrincipals(query.trim(), searchLimit, typeFilter);
let typeFilters = null;
if (types) {
const typesArray = Array.isArray(types) ? types : types.split(',');
const validTypes = typesArray.filter((t) =>
[PrincipalType.USER, PrincipalType.GROUP, PrincipalType.ROLE].includes(t),
);
typeFilters = validTypes.length > 0 ? validTypes : null;
}
const localResults = await searchLocalPrincipals(query.trim(), searchLimit, typeFilters);
let allPrincipals = [...localResults];
const useEntraId = entraIdPrincipalFeatureEnabled(req.user);
if (useEntraId && localResults.length < searchLimit) {
try {
const graphTypeMap = {
user: 'users',
group: 'groups',
null: 'all',
};
let graphType = 'all';
if (typeFilters && typeFilters.length === 1) {
const graphTypeMap = {
[PrincipalType.USER]: 'users',
[PrincipalType.GROUP]: 'groups',
};
const mappedType = graphTypeMap[typeFilters[0]];
if (mappedType) {
graphType = mappedType;
}
}
const authHeader = req.headers.authorization;
const accessToken =
@@ -405,7 +417,7 @@ const searchPrincipals = async (req, res) => {
accessToken,
req.user.openidId,
query.trim(),
graphTypeMap[typeFilter],
graphType,
searchLimit - localResults.length,
);
@@ -436,21 +448,22 @@ const searchPrincipals = async (req, res) => {
_searchScore: calculateRelevanceScore(item, query.trim()),
}));
allPrincipals = sortPrincipalsByRelevance(scoredResults)
const finalResults = sortPrincipalsByRelevance(scoredResults)
.slice(0, searchLimit)
.map((result) => {
const { _searchScore, ...resultWithoutScore } = result;
return resultWithoutScore;
});
res.status(200).json({
query: query.trim(),
limit: searchLimit,
type: typeFilter,
results: allPrincipals,
count: allPrincipals.length,
types: typeFilters,
results: finalResults,
count: finalResults.length,
sources: {
local: allPrincipals.filter((r) => r.source === 'local').length,
entra: allPrincipals.filter((r) => r.source === 'entra').length,
local: finalResults.filter((r) => r.source === 'local').length,
entra: finalResults.filter((r) => r.source === 'entra').length,
},
});
} catch (error) {
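The searchPrincipals change above replaces the single `type` filter with a `types` parameter that can arrive either as a repeated query parameter (an array) or as a comma-separated string, keeping only whitelisted principal types. A small sketch of that normalization; the 'user'/'group'/'role' literals stand in for the PrincipalType enum values and are assumptions here:

// Sketch: normalize a `types` query param that may be an array or a 'user,group' string.
const ALLOWED_TYPES = ['user', 'group', 'role']; // placeholder for PrincipalType values

function parseTypeFilters(types) {
  if (!types) {
    return null; // no filter: search every principal type
  }
  const list = Array.isArray(types) ? types : types.split(',');
  const valid = list.filter((t) => ALLOWED_TYPES.includes(t));
  return valid.length > 0 ? valid : null;
}

With a single valid entry the controller can still narrow the Entra ID graph search to 'users' or 'groups'; with several entries (or none) it falls back to searching all sources.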

View File

@@ -1,54 +1,11 @@
const { logger } = require('@librechat/data-schemas');
const { CacheKeys, AuthType, Constants } = require('librechat-data-provider');
const { getCustomConfig, getCachedTools } = require('~/server/services/Config');
const { getToolkitKey } = require('~/server/services/ToolService');
const { getMCPManager, getFlowStateManager } = require('~/config');
const { availableTools } = require('~/app/clients/tools');
const { CacheKeys } = require('librechat-data-provider');
const { getToolkitKey, checkPluginAuth, filterUniquePlugins } = require('@librechat/api');
const { getCachedTools, setCachedTools } = require('~/server/services/Config');
const { availableTools, toolkits } = require('~/app/clients/tools');
const { getAppConfig } = require('~/server/services/Config');
const { getLogStores } = require('~/cache');
/**
* Filters out duplicate plugins from the list of plugins.
*
* @param {TPlugin[]} plugins The list of plugins to filter.
* @returns {TPlugin[]} The list of plugins with duplicates removed.
*/
const filterUniquePlugins = (plugins) => {
const seen = new Set();
return plugins.filter((plugin) => {
const duplicate = seen.has(plugin.pluginKey);
seen.add(plugin.pluginKey);
return !duplicate;
});
};
/**
* Determines if a plugin is authenticated by checking if all required authentication fields have non-empty values.
* Supports alternate authentication fields, allowing validation against multiple possible environment variables.
*
* @param {TPlugin} plugin The plugin object containing the authentication configuration.
* @returns {boolean} True if the plugin is authenticated for all required fields, false otherwise.
*/
const checkPluginAuth = (plugin) => {
if (!plugin.authConfig || plugin.authConfig.length === 0) {
return false;
}
return plugin.authConfig.every((authFieldObj) => {
const authFieldOptions = authFieldObj.authField.split('||');
let isFieldAuthenticated = false;
for (const fieldOption of authFieldOptions) {
const envValue = process.env[fieldOption];
if (envValue && envValue.trim() !== '' && envValue !== AuthType.USER_PROVIDED) {
isFieldAuthenticated = true;
break;
}
}
return isFieldAuthenticated;
});
};
const getAvailablePluginsController = async (req, res) => {
try {
const cache = getLogStores(CacheKeys.CONFIG_STORE);
@@ -58,8 +15,10 @@ const getAvailablePluginsController = async (req, res) => {
return;
}
const appConfig = await getAppConfig({ role: req.user?.role });
/** @type {{ filteredTools: string[], includedTools: string[] }} */
const { filteredTools = [], includedTools = [] } = req.app.locals;
const { filteredTools = [], includedTools = [] } = appConfig;
/** @type {import('@librechat/api').LCManifestTool[]} */
const pluginManifest = availableTools;
const uniquePlugins = filterUniquePlugins(pluginManifest);
@@ -85,45 +44,6 @@ const getAvailablePluginsController = async (req, res) => {
}
};
function createServerToolsCallback() {
/**
* @param {string} serverName
* @param {TPlugin[] | null} serverTools
*/
return async function (serverName, serverTools) {
try {
const mcpToolsCache = getLogStores(CacheKeys.MCP_TOOLS);
if (!serverName || !mcpToolsCache) {
return;
}
await mcpToolsCache.set(serverName, serverTools);
logger.debug(`MCP tools for ${serverName} added to cache.`);
} catch (error) {
logger.error('Error retrieving MCP tools from cache:', error);
}
};
}
function createGetServerTools() {
/**
* Retrieves cached server tools
* @param {string} serverName
* @returns {Promise<TPlugin[] | null>}
*/
return async function (serverName) {
try {
const mcpToolsCache = getLogStores(CacheKeys.MCP_TOOLS);
if (!mcpToolsCache) {
return null;
}
return await mcpToolsCache.get(serverName);
} catch (error) {
logger.error('Error retrieving MCP tools from cache:', error);
return null;
}
};
}
/**
* Retrieves and returns a list of available tools, either from a cache or by reading a plugin manifest file.
*
@@ -139,37 +59,35 @@ function createGetServerTools() {
const getAvailableTools = async (req, res) => {
try {
const userId = req.user?.id;
const customConfig = await getCustomConfig();
if (!userId) {
logger.warn('[getAvailableTools] User ID not found in request');
return res.status(401).json({ message: 'Unauthorized' });
}
const cache = getLogStores(CacheKeys.CONFIG_STORE);
const cachedToolsArray = await cache.get(CacheKeys.TOOLS);
const cachedUserTools = await getCachedTools({ userId });
const userPlugins = convertMCPToolsToPlugins(cachedUserTools, customConfig);
if (cachedToolsArray && userPlugins) {
const dedupedTools = filterUniquePlugins([...userPlugins, ...cachedToolsArray]);
res.status(200).json(dedupedTools);
const appConfig = req.config ?? (await getAppConfig({ role: req.user?.role }));
// Return early if we have cached tools
if (cachedToolsArray != null) {
res.status(200).json(cachedToolsArray);
return;
}
// If not in cache, build from manifest
let pluginManifest = availableTools;
if (customConfig?.mcpServers != null) {
const mcpManager = getMCPManager();
const flowsCache = getLogStores(CacheKeys.FLOWS);
const flowManager = flowsCache ? getFlowStateManager(flowsCache) : null;
const serverToolsCallback = createServerToolsCallback();
const getServerTools = createGetServerTools();
const mcpTools = await mcpManager.loadManifestTools({
flowManager,
serverToolsCallback,
getServerTools,
});
pluginManifest = [...mcpTools, ...pluginManifest];
/** @type {Record<string, FunctionTool> | null} Get tool definitions to filter which tools are actually available */
let toolDefinitions = await getCachedTools();
if (toolDefinitions == null && appConfig?.availableTools != null) {
logger.warn('[getAvailableTools] Tool cache was empty, re-initializing from app config');
await setCachedTools(appConfig.availableTools);
toolDefinitions = appConfig.availableTools;
}
/** @type {TPlugin[]} */
const uniquePlugins = filterUniquePlugins(pluginManifest);
/** @type {import('@librechat/api').LCManifestTool[]} */
let pluginManifest = availableTools;
/** @type {TPlugin[]} Deduplicate and authenticate plugins */
const uniquePlugins = filterUniquePlugins(pluginManifest);
const authenticatedPlugins = uniquePlugins.map((plugin) => {
if (checkPluginAuth(plugin)) {
return { ...plugin, authenticated: true };
@@ -178,115 +96,33 @@ const getAvailableTools = async (req, res) => {
}
});
const toolDefinitions = (await getCachedTools({ includeGlobal: true })) || {};
/** Filter plugins based on availability */
const toolsOutput = [];
for (const plugin of authenticatedPlugins) {
const isToolDefined = toolDefinitions[plugin.pluginKey] !== undefined;
const isToolDefined = toolDefinitions?.[plugin.pluginKey] !== undefined;
const isToolkit =
plugin.toolkit === true &&
Object.keys(toolDefinitions).some((key) => getToolkitKey(key) === plugin.pluginKey);
Object.keys(toolDefinitions ?? {}).some(
(key) => getToolkitKey({ toolkits, toolName: key }) === plugin.pluginKey,
);
if (!isToolDefined && !isToolkit) {
continue;
}
const toolToAdd = { ...plugin };
if (!plugin.pluginKey.includes(Constants.mcp_delimiter)) {
toolsOutput.push(toolToAdd);
continue;
}
const parts = plugin.pluginKey.split(Constants.mcp_delimiter);
const serverName = parts[parts.length - 1];
const serverConfig = customConfig?.mcpServers?.[serverName];
if (!serverConfig?.customUserVars) {
toolsOutput.push(toolToAdd);
continue;
}
const customVarKeys = Object.keys(serverConfig.customUserVars);
if (customVarKeys.length === 0) {
toolToAdd.authConfig = [];
toolToAdd.authenticated = true;
} else {
toolToAdd.authConfig = Object.entries(serverConfig.customUserVars).map(([key, value]) => ({
authField: key,
label: value.title || key,
description: value.description || '',
}));
toolToAdd.authenticated = false;
}
toolsOutput.push(toolToAdd);
toolsOutput.push(plugin);
}
const finalTools = filterUniquePlugins(toolsOutput);
await cache.set(CacheKeys.TOOLS, finalTools);
const dedupedTools = filterUniquePlugins([...userPlugins, ...finalTools]);
res.status(200).json(dedupedTools);
res.status(200).json(finalTools);
} catch (error) {
logger.error('[getAvailableTools]', error);
res.status(500).json({ message: error.message });
}
};
/**
* Converts MCP function format tools to plugin format
* @param {Object} functionTools - Object with function format tools
* @param {Object} customConfig - Custom configuration for MCP servers
* @returns {Array} Array of plugin objects
*/
function convertMCPToolsToPlugins(functionTools, customConfig) {
const plugins = [];
for (const [toolKey, toolData] of Object.entries(functionTools)) {
if (!toolData.function || !toolKey.includes(Constants.mcp_delimiter)) {
continue;
}
const functionData = toolData.function;
const parts = toolKey.split(Constants.mcp_delimiter);
const serverName = parts[parts.length - 1];
const serverConfig = customConfig?.mcpServers?.[serverName];
const plugin = {
name: parts[0], // Use the tool name without server suffix
pluginKey: toolKey,
description: functionData.description || '',
authenticated: true,
icon: serverConfig?.iconPath,
};
// Build authConfig for MCP tools
if (!serverConfig?.customUserVars) {
plugin.authConfig = [];
plugins.push(plugin);
continue;
}
const customVarKeys = Object.keys(serverConfig.customUserVars);
if (customVarKeys.length === 0) {
plugin.authConfig = [];
} else {
plugin.authConfig = Object.entries(serverConfig.customUserVars).map(([key, value]) => ({
authField: key,
label: value.title || key,
description: value.description || '',
}));
}
plugins.push(plugin);
}
return plugins;
}
module.exports = {
getAvailableTools,
getAvailablePluginsController,

View File

@@ -1,89 +1,493 @@
const { Constants } = require('librechat-data-provider');
const { getCustomConfig, getCachedTools } = require('~/server/services/Config');
const { getCachedTools, getAppConfig } = require('~/server/services/Config');
const { getLogStores } = require('~/cache');
// Mock the dependencies
jest.mock('@librechat/data-schemas', () => ({
logger: {
debug: jest.fn(),
error: jest.fn(),
warn: jest.fn(),
},
}));
jest.mock('~/server/services/Config', () => ({
getCustomConfig: jest.fn(),
getCachedTools: jest.fn(),
getAppConfig: jest.fn().mockResolvedValue({
filteredTools: [],
includedTools: [],
}),
setCachedTools: jest.fn(),
}));
jest.mock('~/server/services/ToolService', () => ({
getToolkitKey: jest.fn(),
}));
jest.mock('~/config', () => ({
getMCPManager: jest.fn(() => ({
loadManifestTools: jest.fn().mockResolvedValue([]),
})),
getFlowStateManager: jest.fn(),
}));
// loadAndFormatTools mock removed - no longer used in PluginController
// getMCPManager mock removed - no longer used in PluginController
jest.mock('~/app/clients/tools', () => ({
availableTools: [],
toolkits: [],
}));
jest.mock('~/cache', () => ({
getLogStores: jest.fn(),
}));
// Import the actual module with the function we want to test
const { getAvailableTools } = require('./PluginController');
const { getAvailableTools, getAvailablePluginsController } = require('./PluginController');
describe('PluginController', () => {
describe('plugin.icon behavior', () => {
let mockReq, mockRes, mockCache;
let mockReq, mockRes, mockCache;
beforeEach(() => {
jest.clearAllMocks();
mockReq = {
user: { id: 'test-user-id' },
config: {
filteredTools: [],
includedTools: [],
},
};
mockRes = { status: jest.fn().mockReturnThis(), json: jest.fn() };
mockCache = { get: jest.fn(), set: jest.fn() };
getLogStores.mockReturnValue(mockCache);
// Clear availableTools and toolkits arrays before each test
require('~/app/clients/tools').availableTools.length = 0;
require('~/app/clients/tools').toolkits.length = 0;
// Reset getCachedTools mock to ensure clean state
getCachedTools.mockReset();
// Reset getAppConfig mock to ensure clean state with default values
getAppConfig.mockReset();
getAppConfig.mockResolvedValue({
filteredTools: [],
includedTools: [],
});
});
describe('getAvailablePluginsController', () => {
it('should use filterUniquePlugins to remove duplicate plugins', async () => {
// Add plugins with duplicates to availableTools
const mockPlugins = [
{ name: 'Plugin1', pluginKey: 'key1', description: 'First' },
{ name: 'Plugin1', pluginKey: 'key1', description: 'First duplicate' },
{ name: 'Plugin2', pluginKey: 'key2', description: 'Second' },
];
require('~/app/clients/tools').availableTools.push(...mockPlugins);
const callGetAvailableToolsWithMCPServer = async (mcpServers) => {
mockCache.get.mockResolvedValue(null);
getCustomConfig.mockResolvedValue({ mcpServers });
const functionTools = {
[`test-tool${Constants.mcp_delimiter}test-server`]: {
function: { name: 'test-tool', description: 'A test tool' },
},
};
getCachedTools.mockResolvedValueOnce(functionTools);
getCachedTools.mockResolvedValueOnce({
[`test-tool${Constants.mcp_delimiter}test-server`]: true,
// Configure getAppConfig to return the expected config
getAppConfig.mockResolvedValueOnce({
filteredTools: [],
includedTools: [],
});
await getAvailableTools(mockReq, mockRes);
const responseData = mockRes.json.mock.calls[0][0];
return responseData.find((tool) => tool.name === 'test-tool');
};
await getAvailablePluginsController(mockReq, mockRes);
beforeEach(() => {
jest.clearAllMocks();
mockReq = { user: { id: 'test-user-id' } };
mockRes = { status: jest.fn().mockReturnThis(), json: jest.fn() };
mockCache = { get: jest.fn(), set: jest.fn() };
getLogStores.mockReturnValue(mockCache);
expect(mockRes.status).toHaveBeenCalledWith(200);
const responseData = mockRes.json.mock.calls[0][0];
// The real filterUniquePlugins should have removed the duplicate
expect(responseData).toHaveLength(2);
expect(responseData[0].pluginKey).toBe('key1');
expect(responseData[1].pluginKey).toBe('key2');
});
it('should set plugin.icon when iconPath is defined', async () => {
const mcpServers = {
'test-server': {
iconPath: '/path/to/icon.png',
it('should use checkPluginAuth to verify plugin authentication', async () => {
// checkPluginAuth returns false for plugins without authConfig
// so authenticated property won't be added
const mockPlugin = { name: 'Plugin1', pluginKey: 'key1', description: 'First' };
require('~/app/clients/tools').availableTools.push(mockPlugin);
mockCache.get.mockResolvedValue(null);
// Configure getAppConfig to return the expected config
getAppConfig.mockResolvedValueOnce({
filteredTools: [],
includedTools: [],
});
await getAvailablePluginsController(mockReq, mockRes);
const responseData = mockRes.json.mock.calls[0][0];
// The real checkPluginAuth returns false for plugins without authConfig, so authenticated property is not added
expect(responseData[0].authenticated).toBeUndefined();
});
it('should return cached plugins when available', async () => {
const cachedPlugins = [
{ name: 'CachedPlugin', pluginKey: 'cached', description: 'Cached plugin' },
];
mockCache.get.mockResolvedValue(cachedPlugins);
await getAvailablePluginsController(mockReq, mockRes);
// When cache is hit, we return immediately without processing
expect(mockRes.json).toHaveBeenCalledWith(cachedPlugins);
});
it('should filter plugins based on includedTools', async () => {
const mockPlugins = [
{ name: 'Plugin1', pluginKey: 'key1', description: 'First' },
{ name: 'Plugin2', pluginKey: 'key2', description: 'Second' },
];
require('~/app/clients/tools').availableTools.push(...mockPlugins);
mockCache.get.mockResolvedValue(null);
// Configure getAppConfig to return config with includedTools
getAppConfig.mockResolvedValueOnce({
filteredTools: [],
includedTools: ['key1'],
});
await getAvailablePluginsController(mockReq, mockRes);
const responseData = mockRes.json.mock.calls[0][0];
expect(responseData).toHaveLength(1);
expect(responseData[0].pluginKey).toBe('key1');
});
});
describe('getAvailableTools', () => {
it('should use filterUniquePlugins to deduplicate combined tools', async () => {
const mockUserTools = {
'user-tool': {
type: 'function',
function: {
name: 'user-tool',
description: 'User tool',
parameters: { type: 'object', properties: {} },
},
},
};
const testTool = await callGetAvailableToolsWithMCPServer(mcpServers);
expect(testTool.icon).toBe('/path/to/icon.png');
const mockCachedPlugins = [
{ name: 'user-tool', pluginKey: 'user-tool', description: 'Duplicate user tool' },
{ name: 'ManifestTool', pluginKey: 'manifest-tool', description: 'Manifest tool' },
];
mockCache.get.mockResolvedValue(mockCachedPlugins);
getCachedTools.mockResolvedValueOnce(mockUserTools);
mockReq.config = {
mcpConfig: null,
paths: { structuredTools: '/mock/path' },
};
await getAvailableTools(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(200);
const responseData = mockRes.json.mock.calls[0][0];
expect(Array.isArray(responseData)).toBe(true);
// The real filterUniquePlugins should have deduplicated tools with same pluginKey
const userToolCount = responseData.filter((tool) => tool.pluginKey === 'user-tool').length;
expect(userToolCount).toBe(1);
});
it('should set plugin.icon to undefined when iconPath is not defined', async () => {
const mcpServers = {
'test-server': {},
it('should use checkPluginAuth to verify authentication status', async () => {
// Add a plugin to availableTools that will be checked
const mockPlugin = {
name: 'Tool1',
pluginKey: 'tool1',
description: 'Tool 1',
// No authConfig means checkPluginAuth returns false
};
const testTool = await callGetAvailableToolsWithMCPServer(mcpServers);
expect(testTool.icon).toBeUndefined();
require('~/app/clients/tools').availableTools.push(mockPlugin);
mockCache.get.mockResolvedValue(null);
// getCachedTools returns the tool definitions
getCachedTools.mockResolvedValueOnce({
tool1: {
type: 'function',
function: {
name: 'tool1',
description: 'Tool 1',
parameters: {},
},
},
});
mockReq.config = {
mcpConfig: null,
paths: { structuredTools: '/mock/path' },
};
await getAvailableTools(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(200);
const responseData = mockRes.json.mock.calls[0][0];
expect(Array.isArray(responseData)).toBe(true);
const tool = responseData.find((t) => t.pluginKey === 'tool1');
expect(tool).toBeDefined();
// The real checkPluginAuth returns false for plugins without authConfig, so authenticated property is not added
expect(tool.authenticated).toBeUndefined();
});
it('should use getToolkitKey for toolkit validation', async () => {
const mockToolkit = {
name: 'Toolkit1',
pluginKey: 'toolkit1',
description: 'Toolkit 1',
toolkit: true,
};
require('~/app/clients/tools').availableTools.push(mockToolkit);
// Mock toolkits to have a mapping
require('~/app/clients/tools').toolkits.push({
name: 'Toolkit1',
pluginKey: 'toolkit1',
tools: ['toolkit1_function'],
});
mockCache.get.mockResolvedValue(null);
// getCachedTools returns the tool definitions
getCachedTools.mockResolvedValueOnce({
toolkit1_function: {
type: 'function',
function: {
name: 'toolkit1_function',
description: 'Toolkit function',
parameters: {},
},
},
});
mockReq.config = {
mcpConfig: null,
paths: { structuredTools: '/mock/path' },
};
await getAvailableTools(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(200);
const responseData = mockRes.json.mock.calls[0][0];
expect(Array.isArray(responseData)).toBe(true);
const toolkit = responseData.find((t) => t.pluginKey === 'toolkit1');
expect(toolkit).toBeDefined();
});
});
describe('helper function integration', () => {
it('should handle error cases gracefully', async () => {
mockCache.get.mockRejectedValue(new Error('Cache error'));
await getAvailableTools(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(500);
expect(mockRes.json).toHaveBeenCalledWith({ message: 'Cache error' });
});
});
describe('edge cases with undefined/null values', () => {
it('should handle undefined cache gracefully', async () => {
getLogStores.mockReturnValue(undefined);
await getAvailableTools(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(500);
});
it('should handle null cachedTools and cachedUserTools', async () => {
mockCache.get.mockResolvedValue(null);
// getCachedTools returns empty object instead of null
getCachedTools.mockResolvedValueOnce({});
mockReq.config = {
mcpConfig: null,
paths: { structuredTools: '/mock/path' },
};
await getAvailableTools(mockReq, mockRes);
// Should handle null values gracefully
expect(mockRes.status).toHaveBeenCalledWith(200);
expect(mockRes.json).toHaveBeenCalledWith([]);
});
it('should handle when getCachedTools returns undefined', async () => {
mockCache.get.mockResolvedValue(null);
mockReq.config = {
mcpConfig: null,
paths: { structuredTools: '/mock/path' },
};
// Mock getCachedTools to return undefined
getCachedTools.mockReset();
getCachedTools.mockResolvedValueOnce(undefined);
await getAvailableTools(mockReq, mockRes);
// Should handle undefined values gracefully
expect(mockRes.status).toHaveBeenCalledWith(200);
expect(mockRes.json).toHaveBeenCalledWith([]);
});
it('should handle empty toolDefinitions object', async () => {
mockCache.get.mockResolvedValue(null);
// Reset getCachedTools to ensure clean state
getCachedTools.mockReset();
getCachedTools.mockResolvedValue({});
mockReq.config = {}; // No mcpConfig at all
// Ensure no plugins are available
require('~/app/clients/tools').availableTools.length = 0;
await getAvailableTools(mockReq, mockRes);
// With empty tool definitions, no tools should be in the final output
expect(mockRes.json).toHaveBeenCalledWith([]);
});
it('should handle undefined filteredTools and includedTools', async () => {
mockReq.config = {};
mockCache.get.mockResolvedValue(null);
// Configure getAppConfig to return config with undefined properties
// The controller will use default values [] for filteredTools and includedTools
getAppConfig.mockResolvedValueOnce({});
await getAvailablePluginsController(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(200);
expect(mockRes.json).toHaveBeenCalledWith([]);
});
it('should handle toolkit with undefined toolDefinitions keys', async () => {
const mockToolkit = {
name: 'Toolkit1',
pluginKey: 'toolkit1',
description: 'Toolkit 1',
toolkit: true,
};
// No need to mock app.locals anymore as it's not used
// Add the toolkit to availableTools
require('~/app/clients/tools').availableTools.push(mockToolkit);
mockCache.get.mockResolvedValue(null);
// getCachedTools returns empty object to avoid null reference error
getCachedTools.mockResolvedValueOnce({});
mockReq.config = {
mcpConfig: null,
paths: { structuredTools: '/mock/path' },
};
await getAvailableTools(mockReq, mockRes);
// Should handle null toolDefinitions gracefully
expect(mockRes.status).toHaveBeenCalledWith(200);
});
it('should handle undefined toolDefinitions when checking isToolDefined (traversaal_search bug)', async () => {
// This test reproduces the bug where toolDefinitions is undefined
// and accessing toolDefinitions[plugin.pluginKey] causes a TypeError
const mockPlugin = {
name: 'Traversaal Search',
pluginKey: 'traversaal_search',
description: 'Search plugin',
};
// Add the plugin to availableTools
require('~/app/clients/tools').availableTools.push(mockPlugin);
mockCache.get.mockResolvedValue(null);
mockReq.config = {
mcpConfig: null,
paths: { structuredTools: '/mock/path' },
};
// CRITICAL: getCachedTools returns undefined
// This is what causes the bug when trying to access toolDefinitions[plugin.pluginKey]
getCachedTools.mockResolvedValueOnce(undefined);
// This should not throw an error with the optional chaining fix
await getAvailableTools(mockReq, mockRes);
// Should handle undefined toolDefinitions gracefully and return empty array
expect(mockRes.status).toHaveBeenCalledWith(200);
expect(mockRes.json).toHaveBeenCalledWith([]);
});
it('should re-initialize tools from appConfig when cache returns null', async () => {
// Setup: Initial state with tools in appConfig
const mockAppTools = {
tool1: {
type: 'function',
function: {
name: 'tool1',
description: 'Tool 1',
parameters: {},
},
},
tool2: {
type: 'function',
function: {
name: 'tool2',
description: 'Tool 2',
parameters: {},
},
},
};
// Add matching plugins to availableTools
require('~/app/clients/tools').availableTools.push(
{ name: 'Tool 1', pluginKey: 'tool1', description: 'Tool 1' },
{ name: 'Tool 2', pluginKey: 'tool2', description: 'Tool 2' },
);
// Simulate cache cleared state (returns null)
mockCache.get.mockResolvedValue(null);
getCachedTools.mockResolvedValueOnce(null); // Global tools (cache cleared)
mockReq.config = {
filteredTools: [],
includedTools: [],
availableTools: mockAppTools,
};
// Mock setCachedTools to verify it's called to re-initialize
const { setCachedTools } = require('~/server/services/Config');
await getAvailableTools(mockReq, mockRes);
// Should have re-initialized the cache with tools from appConfig
expect(setCachedTools).toHaveBeenCalledWith(mockAppTools);
// Should still return tools successfully
expect(mockRes.status).toHaveBeenCalledWith(200);
const responseData = mockRes.json.mock.calls[0][0];
expect(responseData).toHaveLength(2);
expect(responseData.find((t) => t.pluginKey === 'tool1')).toBeDefined();
expect(responseData.find((t) => t.pluginKey === 'tool2')).toBeDefined();
});
it('should handle cache clear without appConfig.availableTools gracefully', async () => {
// Setup: appConfig without availableTools
getAppConfig.mockResolvedValue({
filteredTools: [],
includedTools: [],
// No availableTools property
});
// Clear availableTools array
require('~/app/clients/tools').availableTools.length = 0;
// Cache returns null (cleared state)
mockCache.get.mockResolvedValue(null);
getCachedTools.mockResolvedValueOnce(null); // Global tools (cache cleared)
mockReq.config = {
filteredTools: [],
includedTools: [],
// No availableTools
};
await getAvailableTools(mockReq, mockRes);
// Should handle gracefully without crashing
expect(mockRes.status).toHaveBeenCalledWith(200);
expect(mockRes.json).toHaveBeenCalledWith([]);
});
});
});

View File

@@ -47,7 +47,7 @@ const verify2FA = async (req, res) => {
try {
const userId = req.user.id;
const { token, backupCode } = req.body;
const user = await getUserById(userId);
const user = await getUserById(userId, '_id totpSecret backupCodes');
if (!user || !user.totpSecret) {
return res.status(400).json({ message: '2FA not initiated' });
@@ -79,7 +79,7 @@ const confirm2FA = async (req, res) => {
try {
const userId = req.user.id;
const { token } = req.body;
const user = await getUserById(userId);
const user = await getUserById(userId, '_id totpSecret');
if (!user || !user.totpSecret) {
return res.status(400).json({ message: '2FA not initiated' });
@@ -99,10 +99,36 @@ const confirm2FA = async (req, res) => {
/**
* Disable 2FA by clearing the stored secret and backup codes.
* Requires verification with either TOTP token or backup code if 2FA is fully enabled.
*/
const disable2FA = async (req, res) => {
try {
const userId = req.user.id;
const { token, backupCode } = req.body;
const user = await getUserById(userId, '_id totpSecret backupCodes');
if (!user || !user.totpSecret) {
return res.status(400).json({ message: '2FA is not setup for this user' });
}
if (user.twoFactorEnabled) {
const secret = await getTOTPSecret(user.totpSecret);
let isVerified = false;
if (token) {
isVerified = await verifyTOTP(secret, token);
} else if (backupCode) {
isVerified = await verifyBackupCode({ user, backupCode });
} else {
return res
.status(400)
.json({ message: 'Either token or backup code is required to disable 2FA' });
}
if (!isVerified) {
return res.status(401).json({ message: 'Invalid token or backup code' });
}
}
await updateUser(userId, { totpSecret: null, backupCodes: [], twoFactorEnabled: false });
return res.status(200).json();
} catch (err) {
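The new disable2FA handler above requires proof of possession while 2FA is enabled: the request body must carry either a current TOTP token or an unused backup code, and only then are totpSecret, backupCodes, and twoFactorEnabled cleared. A hypothetical client call; the URL and HTTP method are placeholders, not the project's documented route:

// Sketch: client-side request to disable 2FA (endpoint shape assumed for illustration).
async function disableTwoFactor({ token, backupCode }) {
  const res = await fetch('/api/auth/2fa/disable', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // Supply exactly one of the two fields; the server rejects the request if both are missing.
    body: JSON.stringify(token ? { token } : { backupCode }),
  });
  if (!res.ok) {
    throw new Error(`Disabling 2FA failed with status ${res.status}`);
  }
}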

View File

@@ -1,31 +1,47 @@
const { logger } = require('@librechat/data-schemas');
const { webSearchKeys, extractWebSearchEnvVars } = require('@librechat/api');
const { logger, webSearchKeys } = require('@librechat/data-schemas');
const { Tools, CacheKeys, Constants, FileSources } = require('librechat-data-provider');
const {
MCPOAuthHandler,
MCPTokenStorage,
normalizeHttpError,
extractWebSearchEnvVars,
} = require('@librechat/api');
const {
getFiles,
findToken,
updateUser,
deleteFiles,
deleteConvos,
deletePresets,
deleteMessages,
deleteUserById,
deleteAllSharedLinks,
deleteAllUserSessions,
} = require('~/models');
const { updateUserPluginAuth, deleteUserPluginAuth } = require('~/server/services/PluginService');
const { updateUserPluginsService, deleteUserKey } = require('~/server/services/UserService');
const { verifyEmail, resendVerificationEmail } = require('~/server/services/AuthService');
const { needsRefresh, getNewS3URL } = require('~/server/services/Files/S3/crud');
const { Tools, Constants, FileSources } = require('librechat-data-provider');
const { processDeleteRequest } = require('~/server/services/Files/process');
const { Transaction, Balance, User } = require('~/db/models');
const { Transaction, Balance, User, Token } = require('~/db/models');
const { getMCPManager, getFlowStateManager } = require('~/config');
const { getAppConfig } = require('~/server/services/Config');
const { deleteToolCalls } = require('~/models/ToolCall');
const { deleteAllSharedLinks } = require('~/models');
const { getMCPManager } = require('~/config');
const { getLogStores } = require('~/cache');
const { mcpServersRegistry } = require('@librechat/api');
const getUserController = async (req, res) => {
/** @type {MongoUser} */
const appConfig = await getAppConfig({ role: req.user?.role });
/** @type {IUser} */
const userData = req.user.toObject != null ? req.user.toObject() : { ...req.user };
/**
* These fields should not exist due to secure field selection, but deletion
* is done in case of alternate database incompatibility with Mongo API
* */
delete userData.password;
delete userData.totpSecret;
if (req.app.locals.fileStrategy === FileSources.s3 && userData.avatar) {
delete userData.backupCodes;
if (appConfig.fileStrategy === FileSources.s3 && userData.avatar) {
const avatarNeedsRefresh = needsRefresh(userData.avatar, 3600);
if (!avatarNeedsRefresh) {
return res.status(200).send(userData);
@@ -81,6 +97,7 @@ const deleteUserFiles = async (req) => {
};
const updateUserPluginsController = async (req, res) => {
const appConfig = await getAppConfig({ role: req.user?.role });
const { user } = req;
const { pluginKey, action, auth, isEntityTool } = req.body;
try {
@@ -89,8 +106,8 @@ const updateUserPluginsController = async (req, res) => {
if (userPluginsService instanceof Error) {
logger.error('[userPluginsService]', userPluginsService);
const { status, message } = userPluginsService;
res.status(status).send({ message });
const { status, message } = normalizeHttpError(userPluginsService);
return res.status(status).send({ message });
}
}
@@ -125,7 +142,7 @@ const updateUserPluginsController = async (req, res) => {
if (pluginKey === Tools.web_search) {
/** @type {TCustomConfig['webSearch']} */
const webSearchConfig = req.app.locals?.webSearch;
const webSearchConfig = appConfig?.webSearch;
keys = extractWebSearchEnvVars({
keys: action === 'install' ? keys : webSearchKeys,
config: webSearchConfig,
@@ -137,7 +154,7 @@ const updateUserPluginsController = async (req, res) => {
authService = await updateUserPluginAuth(user.id, keys[i], pluginKey, values[i]);
if (authService instanceof Error) {
logger.error('[authService]', authService);
({ status, message } = authService);
({ status, message } = normalizeHttpError(authService));
}
}
} else if (action === 'uninstall') {
@@ -151,7 +168,16 @@ const updateUserPluginsController = async (req, res) => {
`[authService] Error deleting all auth for MCP tool ${pluginKey}:`,
authService,
);
({ status, message } = authService);
({ status, message } = normalizeHttpError(authService));
}
try {
// if the MCP server uses OAuth, perform a full cleanup and token revocation
await maybeUninstallOAuthMCP(user.id, pluginKey, appConfig);
} catch (error) {
logger.error(
`[updateUserPluginsController] Error uninstalling OAuth MCP for ${pluginKey}:`,
error,
);
}
} else {
// This handles:
@@ -163,7 +189,7 @@ const updateUserPluginsController = async (req, res) => {
authService = await deleteUserPluginAuth(user.id, keys[i]); // Deletes by authField name
if (authService instanceof Error) {
logger.error('[authService] Error deleting specific auth key:', authService);
({ status, message } = authService);
({ status, message } = normalizeHttpError(authService));
}
}
}
@@ -173,12 +199,12 @@ const updateUserPluginsController = async (req, res) => {
// If auth was updated successfully, disconnect MCP sessions as they might use these credentials
if (pluginKey.startsWith(Constants.mcp_prefix)) {
try {
const mcpManager = getMCPManager(user.id);
const mcpManager = getMCPManager();
if (mcpManager) {
// Extract server name from pluginKey (format: "mcp_<serverName>")
const serverName = pluginKey.replace(Constants.mcp_prefix, '');
logger.info(
`[updateUserPluginsController] Disconnecting MCP server ${serverName} for user ${user.id} after plugin auth update for ${pluginKey}.`,
`[updateUserPluginsController] Attempting disconnect of MCP server "${serverName}" for user ${user.id} after plugin auth update.`,
);
await mcpManager.disconnectUserConnection(user.id, serverName);
}
@@ -193,7 +219,8 @@ const updateUserPluginsController = async (req, res) => {
return res.status(status).send();
}
res.status(status).send({ message });
const normalized = normalizeHttpError({ status, message });
return res.status(normalized.status).send({ message: normalized.message });
} catch (err) {
logger.error('[updateUserPluginsController]', err);
return res.status(500).json({ message: 'Something went wrong.' });
@@ -259,6 +286,108 @@ const resendVerificationController = async (req, res) => {
}
};
/**
* OAuth MCP specific uninstall logic
*/
const maybeUninstallOAuthMCP = async (userId, pluginKey, appConfig) => {
if (!pluginKey.startsWith(Constants.mcp_prefix)) {
// this is not an MCP server, so nothing to do here
return;
}
const serverName = pluginKey.replace(Constants.mcp_prefix, '');
const serverConfig =
(await mcpServersRegistry.getServerConfig(serverName, userId)) ??
appConfig?.mcpServers?.[serverName];
const oauthServers = await mcpServersRegistry.getOAuthServers();
if (!oauthServers.has(serverName)) {
// this server does not use OAuth, so there is nothing to do here either
return;
}
// 1. get client info used for revocation (client id, secret)
const clientTokenData = await MCPTokenStorage.getClientInfoAndMetadata({
userId,
serverName,
findToken,
});
if (clientTokenData == null) {
return;
}
const { clientInfo, clientMetadata } = clientTokenData;
// 2. get decrypted tokens before deletion
const tokens = await MCPTokenStorage.getTokens({
userId,
serverName,
findToken,
});
// 3. revoke OAuth tokens at the provider
const revocationEndpoint =
serverConfig.oauth?.revocation_endpoint ?? clientMetadata.revocation_endpoint;
const revocationEndpointAuthMethodsSupported =
serverConfig.oauth?.revocation_endpoint_auth_methods_supported ??
clientMetadata.revocation_endpoint_auth_methods_supported;
const oauthHeaders = serverConfig.oauth_headers ?? {};
if (tokens?.access_token) {
try {
await MCPOAuthHandler.revokeOAuthToken(
serverName,
tokens.access_token,
'access',
{
serverUrl: serverConfig.url,
clientId: clientInfo.client_id,
clientSecret: clientInfo.client_secret ?? '',
revocationEndpoint,
revocationEndpointAuthMethodsSupported,
},
oauthHeaders,
);
} catch (error) {
logger.error(`Error revoking OAuth access token for ${serverName}:`, error);
}
}
if (tokens?.refresh_token) {
try {
await MCPOAuthHandler.revokeOAuthToken(
serverName,
tokens.refresh_token,
'refresh',
{
serverUrl: serverConfig.url,
clientId: clientInfo.client_id,
clientSecret: clientInfo.client_secret ?? '',
revocationEndpoint,
revocationEndpointAuthMethodsSupported,
},
oauthHeaders,
);
} catch (error) {
logger.error(`Error revoking OAuth refresh token for ${serverName}:`, error);
}
}
// 4. delete tokens from the DB after revocation attempts
await MCPTokenStorage.deleteUserTokens({
userId,
serverName,
deleteToken: async (filter) => {
await Token.deleteOne(filter);
},
});
// 5. clear the flow state for the OAuth tokens
const flowsCache = getLogStores(CacheKeys.FLOWS);
const flowManager = getFlowStateManager(flowsCache);
const flowId = MCPOAuthHandler.generateFlowId(userId, serverName);
await flowManager.deleteFlow(flowId, 'mcp_get_tokens');
await flowManager.deleteFlow(flowId, 'mcp_oauth');
};
module.exports = {
getUserController,
getTermsStatusController,

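The uninstall helper above follows the standard OAuth 2.0 token revocation flow (RFC 7009): look up the registered client, revoke the access and refresh tokens at the provider, then delete the stored tokens and flow state. As a rough illustration of what a single revocation call looks like on the wire, here is a minimal sketch assuming Node 18+ (global fetch) and a provider that accepts client_secret_basic or client_secret_post authentication; it is not the actual MCPOAuthHandler.revokeOAuthToken implementation.

// Minimal RFC 7009 sketch (illustrative; names and auth handling are assumptions).
async function revokeTokenSketch({
  revocationEndpoint,
  token,
  tokenTypeHint, // 'access_token' or 'refresh_token'
  clientId,
  clientSecret,
  authMethod = 'client_secret_basic',
  extraHeaders = {},
}) {
  const body = new URLSearchParams({ token, token_type_hint: tokenTypeHint });
  const headers = { 'Content-Type': 'application/x-www-form-urlencoded', ...extraHeaders };
  if (authMethod === 'client_secret_basic') {
    headers.Authorization = 'Basic ' + Buffer.from(`${clientId}:${clientSecret}`).toString('base64');
  } else {
    body.set('client_id', clientId);
    body.set('client_secret', clientSecret);
  }
  const res = await fetch(revocationEndpoint, { method: 'POST', headers, body });
  // Per RFC 7009, a 200 response means the token was revoked or was already invalid.
  if (!res.ok) {
    throw new Error(`Token revocation failed with status ${res.status}`);
  }
}

As in the controller above, a failed revocation should be logged rather than fatal, so the local token deletion and flow-state cleanup can still run.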
View File

@@ -0,0 +1,342 @@
const { Tools } = require('librechat-data-provider');
// Mock all dependencies before requiring the module
jest.mock('nanoid', () => ({
nanoid: jest.fn(() => 'mock-id'),
}));
jest.mock('@librechat/api', () => ({
sendEvent: jest.fn(),
}));
jest.mock('@librechat/data-schemas', () => ({
logger: {
error: jest.fn(),
},
}));
jest.mock('@librechat/agents', () => ({
EnvVar: { CODE_API_KEY: 'CODE_API_KEY' },
Providers: { GOOGLE: 'google' },
GraphEvents: {},
getMessageId: jest.fn(),
ToolEndHandler: jest.fn(),
handleToolCalls: jest.fn(),
ChatModelStreamHandler: jest.fn(),
}));
jest.mock('~/server/services/Files/Citations', () => ({
processFileCitations: jest.fn(),
}));
jest.mock('~/server/services/Files/Code/process', () => ({
processCodeOutput: jest.fn(),
}));
jest.mock('~/server/services/Tools/credentials', () => ({
loadAuthValues: jest.fn(),
}));
jest.mock('~/server/services/Files/process', () => ({
saveBase64Image: jest.fn(),
}));
describe('createToolEndCallback', () => {
let req, res, artifactPromises, createToolEndCallback;
let logger;
beforeEach(() => {
jest.clearAllMocks();
// Get the mocked logger
logger = require('@librechat/data-schemas').logger;
// Now require the module after all mocks are set up
const callbacks = require('../callbacks');
createToolEndCallback = callbacks.createToolEndCallback;
req = {
user: { id: 'user123' },
};
res = {
headersSent: false,
write: jest.fn(),
};
artifactPromises = [];
});
describe('ui_resources artifact handling', () => {
it('should process ui_resources artifact and return attachment when headers not sent', async () => {
const toolEndCallback = createToolEndCallback({ req, res, artifactPromises });
const output = {
tool_call_id: 'tool123',
artifact: {
[Tools.ui_resources]: {
data: {
0: { type: 'button', label: 'Click me' },
1: { type: 'input', placeholder: 'Enter text' },
},
},
},
};
const metadata = {
run_id: 'run456',
thread_id: 'thread789',
};
await toolEndCallback({ output }, metadata);
// Wait for all promises to resolve
const results = await Promise.all(artifactPromises);
// When headers are not sent, it returns attachment without writing
expect(res.write).not.toHaveBeenCalled();
const attachment = results[0];
expect(attachment).toEqual({
type: Tools.ui_resources,
messageId: 'run456',
toolCallId: 'tool123',
conversationId: 'thread789',
[Tools.ui_resources]: {
0: { type: 'button', label: 'Click me' },
1: { type: 'input', placeholder: 'Enter text' },
},
});
});
it('should write to response when headers are already sent', async () => {
res.headersSent = true;
const toolEndCallback = createToolEndCallback({ req, res, artifactPromises });
const output = {
tool_call_id: 'tool123',
artifact: {
[Tools.ui_resources]: {
data: {
0: { type: 'carousel', items: [] },
},
},
},
};
const metadata = {
run_id: 'run456',
thread_id: 'thread789',
};
await toolEndCallback({ output }, metadata);
const results = await Promise.all(artifactPromises);
expect(res.write).toHaveBeenCalled();
expect(results[0]).toEqual({
type: Tools.ui_resources,
messageId: 'run456',
toolCallId: 'tool123',
conversationId: 'thread789',
[Tools.ui_resources]: {
0: { type: 'carousel', items: [] },
},
});
});
it('should handle errors when processing ui_resources', async () => {
const toolEndCallback = createToolEndCallback({ req, res, artifactPromises });
// Mock res.write to throw an error
res.headersSent = true;
res.write.mockImplementation(() => {
throw new Error('Write failed');
});
const output = {
tool_call_id: 'tool123',
artifact: {
[Tools.ui_resources]: {
data: {
0: { type: 'test' },
},
},
},
};
const metadata = {
run_id: 'run456',
thread_id: 'thread789',
};
await toolEndCallback({ output }, metadata);
const results = await Promise.all(artifactPromises);
expect(logger.error).toHaveBeenCalledWith(
'Error processing artifact content:',
expect.any(Error),
);
expect(results[0]).toBeNull();
});
it('should handle multiple artifacts including ui_resources', async () => {
const toolEndCallback = createToolEndCallback({ req, res, artifactPromises });
const output = {
tool_call_id: 'tool123',
artifact: {
[Tools.ui_resources]: {
data: {
0: { type: 'chart', data: [] },
},
},
[Tools.web_search]: {
results: ['result1', 'result2'],
},
},
};
const metadata = {
run_id: 'run456',
thread_id: 'thread789',
};
await toolEndCallback({ output }, metadata);
const results = await Promise.all(artifactPromises);
// Both ui_resources and web_search should be processed
expect(artifactPromises).toHaveLength(2);
expect(results).toHaveLength(2);
// Check ui_resources attachment
const uiResourceAttachment = results.find((r) => r?.type === Tools.ui_resources);
expect(uiResourceAttachment).toBeTruthy();
expect(uiResourceAttachment[Tools.ui_resources]).toEqual({
0: { type: 'chart', data: [] },
});
// Check web_search attachment
const webSearchAttachment = results.find((r) => r?.type === Tools.web_search);
expect(webSearchAttachment).toBeTruthy();
expect(webSearchAttachment[Tools.web_search]).toEqual({
results: ['result1', 'result2'],
});
});
it('should not process artifacts when output has no artifacts', async () => {
const toolEndCallback = createToolEndCallback({ req, res, artifactPromises });
const output = {
tool_call_id: 'tool123',
content: 'Some regular content',
// No artifact property
};
const metadata = {
run_id: 'run456',
thread_id: 'thread789',
};
await toolEndCallback({ output }, metadata);
expect(artifactPromises).toHaveLength(0);
expect(res.write).not.toHaveBeenCalled();
});
});
describe('edge cases', () => {
it('should handle empty ui_resources data object', async () => {
const toolEndCallback = createToolEndCallback({ req, res, artifactPromises });
const output = {
tool_call_id: 'tool123',
artifact: {
[Tools.ui_resources]: {
data: {},
},
},
};
const metadata = {
run_id: 'run456',
thread_id: 'thread789',
};
await toolEndCallback({ output }, metadata);
const results = await Promise.all(artifactPromises);
expect(results[0]).toEqual({
type: Tools.ui_resources,
messageId: 'run456',
toolCallId: 'tool123',
conversationId: 'thread789',
[Tools.ui_resources]: {},
});
});
it('should handle ui_resources with complex nested data', async () => {
const toolEndCallback = createToolEndCallback({ req, res, artifactPromises });
const complexData = {
0: {
type: 'form',
fields: [
{ name: 'field1', type: 'text', required: true },
{ name: 'field2', type: 'select', options: ['a', 'b', 'c'] },
],
nested: {
deep: {
value: 123,
array: [1, 2, 3],
},
},
},
};
const output = {
tool_call_id: 'tool123',
artifact: {
[Tools.ui_resources]: {
data: complexData,
},
},
};
const metadata = {
run_id: 'run456',
thread_id: 'thread789',
};
await toolEndCallback({ output }, metadata);
const results = await Promise.all(artifactPromises);
expect(results[0][Tools.ui_resources]).toEqual(complexData);
});
it('should handle when output is undefined', async () => {
const toolEndCallback = createToolEndCallback({ req, res, artifactPromises });
const metadata = {
run_id: 'run456',
thread_id: 'thread789',
};
await toolEndCallback({ output: undefined }, metadata);
expect(artifactPromises).toHaveLength(0);
expect(res.write).not.toHaveBeenCalled();
});
it('should handle when data parameter is undefined', async () => {
const toolEndCallback = createToolEndCallback({ req, res, artifactPromises });
const metadata = {
run_id: 'run456',
thread_id: 'thread789',
};
await toolEndCallback(undefined, metadata);
expect(artifactPromises).toHaveLength(0);
expect(res.write).not.toHaveBeenCalled();
});
});
});

View File

@@ -158,7 +158,7 @@ describe('duplicateAgent', () => {
});
});
it('should handle tool_resources.ocr correctly', async () => {
it('should convert `tool_resources.ocr` to `tool_resources.context`', async () => {
const mockAgent = {
id: 'agent_123',
name: 'Test Agent',
@@ -178,7 +178,7 @@ describe('duplicateAgent', () => {
expect(createAgent).toHaveBeenCalledWith(
expect.objectContaining({
tool_resources: {
ocr: { enabled: true, config: 'test' },
context: { enabled: true, config: 'test' },
},
}),
);

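The assertion above captures a one-key rename: a legacy tool_resources.ocr entry is carried over as tool_resources.context when an agent is duplicated. A minimal sketch of that mapping, assuming tool_resources is a plain object with no existing context key (the helper name is illustrative, not from the codebase):

// Illustrative only: move a legacy `ocr` resource under the newer `context` key.
function migrateToolResources(toolResources = {}) {
  const { ocr, ...rest } = toolResources;
  return ocr != null ? { ...rest, context: ocr } : rest;
}

// migrateToolResources({ ocr: { enabled: true, config: 'test' } })
// -> { context: { enabled: true, config: 'test' } }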
View File

@@ -246,6 +246,7 @@ function createToolEndCallback({ req, res, artifactPromises }) {
const attachment = await processFileCitations({
user,
metadata,
appConfig: req.config,
toolArtifact: output.artifact,
toolCallId: output.tool_call_id,
});
@@ -264,6 +265,30 @@ function createToolEndCallback({ req, res, artifactPromises }) {
);
}
// TODO: a lot of duplicated code in createToolEndCallback
// we should refactor this to use a helper function in a follow-up PR
if (output.artifact[Tools.ui_resources]) {
artifactPromises.push(
(async () => {
const attachment = {
type: Tools.ui_resources,
messageId: metadata.run_id,
toolCallId: output.tool_call_id,
conversationId: metadata.thread_id,
[Tools.ui_resources]: output.artifact[Tools.ui_resources].data,
};
if (!res.headersSent) {
return attachment;
}
res.write(`event: attachment\ndata: ${JSON.stringify(attachment)}\n\n`);
return attachment;
})().catch((error) => {
logger.error('Error processing artifact content:', error);
return null;
}),
);
}
if (output.artifact[Tools.web_search]) {
artifactPromises.push(
(async () => {

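When headers are already sent, the ui_resources branch above streams the attachment as a server-sent event named "attachment", with the JSON payload on a data: line. A minimal consumer-side sketch under that assumption (illustrative; this is not LibreChat's client code, and 'ui_resources' is assumed to be the string value of Tools.ui_resources):

// Parse one SSE frame of the shape written by `res.write` above and hand
// ui_resources attachments to a callback.
function handleAttachmentFrame(frame, onUIResources) {
  if (!frame.startsWith('event: attachment')) {
    return;
  }
  const dataLine = frame.split('\n').find((line) => line.startsWith('data: '));
  if (!dataLine) {
    return;
  }
  const attachment = JSON.parse(dataLine.slice('data: '.length));
  if (attachment.type === 'ui_resources') {
    onUIResources(attachment.ui_resources);
  }
}

// Example frame matching the shape asserted in the tests above:
handleAttachmentFrame(
  'event: attachment\ndata: {"type":"ui_resources","ui_resources":{"0":{"type":"button","label":"Click me"}}}',
  (resources) => console.log(resources['0'].label), // "Click me"
);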
View File

@@ -7,8 +7,13 @@ const {
createRun,
Tokenizer,
checkAccess,
logAxiosError,
sanitizeTitle,
resolveHeaders,
getBalanceConfig,
memoryInstructions,
formatContentStrings,
getTransactionsConfig,
createMemoryProcessor,
} = require('@librechat/api');
const {
@@ -33,18 +38,13 @@ const {
bedrockInputSchema,
removeNullishValues,
} = require('librechat-data-provider');
const {
findPluginAuthsByKeys,
getFormattedMemories,
deleteMemory,
setMemory,
} = require('~/models');
const { getMCPAuthMap, checkCapability, hasCustomUserVars } = require('~/server/services/Config');
const { addCacheControl, createContextHandlers } = require('~/app/clients/prompts');
const { initializeAgent } = require('~/server/services/Endpoints/agents/agent');
const { spendTokens, spendStructuredTokens } = require('~/models/spendTokens');
const { getFormattedMemories, deleteMemory, setMemory } = require('~/models');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const { getProviderConfig } = require('~/server/services/Endpoints');
const { checkCapability } = require('~/server/services/Config');
const BaseClient = require('~/app/clients/BaseClient');
const { getRoleByName } = require('~/models/Role');
const { loadAgent } = require('~/models/Agent');
@@ -90,11 +90,10 @@ function createTokenCounter(encoding) {
}
function logToolError(graph, error, toolId) {
logger.error(
'[api/server/controllers/agents/client.js #chatCompletion] Tool Error',
logAxiosError({
error,
toolId,
);
message: `[api/server/controllers/agents/client.js #chatCompletion] Tool Error "${toolId}"`,
});
}
class AgentClient extends BaseClient {
@@ -213,16 +212,13 @@ class AgentClient extends BaseClient {
* @returns {Promise<Array<Partial<MongoFile>>>}
*/
async addImageURLs(message, attachments) {
const { files, text, image_urls } = await encodeAndFormat(
const { files, image_urls } = await encodeAndFormat(
this.options.req,
attachments,
this.options.agent.provider,
VisionModes.agents,
);
message.image_urls = image_urls.length ? image_urls : undefined;
if (text && text.length) {
message.ocr = text;
}
return files;
}
@@ -250,19 +246,18 @@ class AgentClient extends BaseClient {
if (this.options.attachments) {
const attachments = await this.options.attachments;
const latestMessage = orderedMessages[orderedMessages.length - 1];
if (this.message_file_map) {
this.message_file_map[orderedMessages[orderedMessages.length - 1].messageId] = attachments;
this.message_file_map[latestMessage.messageId] = attachments;
} else {
this.message_file_map = {
[orderedMessages[orderedMessages.length - 1].messageId]: attachments,
[latestMessage.messageId]: attachments,
};
}
const files = await this.addImageURLs(
orderedMessages[orderedMessages.length - 1],
attachments,
);
await this.addFileContextToMessage(latestMessage, attachments);
const files = await this.processAttachments(latestMessage, attachments);
this.options.attachments = files;
}
@@ -282,21 +277,21 @@ class AgentClient extends BaseClient {
assistantName: this.options?.modelLabel,
});
if (message.ocr && i !== orderedMessages.length - 1) {
if (message.fileContext && i !== orderedMessages.length - 1) {
if (typeof formattedMessage.content === 'string') {
formattedMessage.content = message.ocr + '\n' + formattedMessage.content;
formattedMessage.content = message.fileContext + '\n' + formattedMessage.content;
} else {
const textPart = formattedMessage.content.find((part) => part.type === 'text');
textPart
? (textPart.text = message.ocr + '\n' + textPart.text)
: formattedMessage.content.unshift({ type: 'text', text: message.ocr });
? (textPart.text = message.fileContext + '\n' + textPart.text)
: formattedMessage.content.unshift({ type: 'text', text: message.fileContext });
}
} else if (message.ocr && i === orderedMessages.length - 1) {
systemContent = [systemContent, message.ocr].join('\n');
} else if (message.fileContext && i === orderedMessages.length - 1) {
systemContent = [systemContent, message.fileContext].join('\n');
}
const needsTokenCount =
(this.contextStrategy && !orderedMessages[i].tokenCount) || message.ocr;
(this.contextStrategy && !orderedMessages[i].tokenCount) || message.fileContext;
/* If tokens were never counted, or, is a Vision request and the message has files, count again */
if (needsTokenCount || (this.isVisionModel && (message.image_urls || message.files))) {
@@ -402,6 +397,34 @@ class AgentClient extends BaseClient {
return result;
}
/**
* Creates a promise that resolves with the memory promise result or undefined after a timeout
* @param {Promise<(TAttachment | null)[] | undefined>} memoryPromise - The memory promise to await
* @param {number} timeoutMs - Timeout in milliseconds (default: 3000)
* @returns {Promise<(TAttachment | null)[] | undefined>}
*/
async awaitMemoryWithTimeout(memoryPromise, timeoutMs = 3000) {
if (!memoryPromise) {
return;
}
try {
const timeoutPromise = new Promise((_, reject) =>
setTimeout(() => reject(new Error('Memory processing timeout')), timeoutMs),
);
const attachments = await Promise.race([memoryPromise, timeoutPromise]);
return attachments;
} catch (error) {
if (error.message === 'Memory processing timeout') {
logger.warn(`[AgentClient] Memory processing timed out after ${timeoutMs}ms`);
} else {
logger.error('[AgentClient] Error processing memory:', error);
}
return;
}
}
/**
* @returns {Promise<string | undefined>}
*/
@@ -423,8 +446,8 @@ class AgentClient extends BaseClient {
);
return;
}
/** @type {TCustomConfig['memory']} */
const memoryConfig = this.options.req?.app?.locals?.memory;
const appConfig = this.options.req.config;
const memoryConfig = appConfig.memory;
if (!memoryConfig || memoryConfig.disabled === true) {
return;
}
@@ -432,7 +455,7 @@ class AgentClient extends BaseClient {
/** @type {Agent} */
let prelimAgent;
const allowedProviders = new Set(
this.options.req?.app?.locals?.[EModelEndpoint.agents]?.allowedProviders,
appConfig?.endpoints?.[EModelEndpoint.agents]?.allowedProviders,
);
try {
if (memoryConfig.agent?.id != null && memoryConfig.agent.id !== this.options.agent.id) {
@@ -554,8 +577,8 @@ class AgentClient extends BaseClient {
if (this.processMemory == null) {
return;
}
/** @type {TCustomConfig['memory']} */
const memoryConfig = this.options.req?.app?.locals?.memory;
const appConfig = this.options.req.config;
const memoryConfig = appConfig.memory;
const messageWindowSize = memoryConfig?.messageWindowSize ?? 5;
let messagesToProcess = [...messages];
@@ -587,6 +610,7 @@ class AgentClient extends BaseClient {
await this.chatCompletion({
payload,
onProgress: opts.onProgress,
userMCPAuthMap: opts.userMCPAuthMap,
abortController: opts.abortController,
});
return this.contentParts;
@@ -596,9 +620,17 @@ class AgentClient extends BaseClient {
* @param {Object} params
* @param {string} [params.model]
* @param {string} [params.context='message']
* @param {AppConfig['balance']} [params.balance]
* @param {AppConfig['transactions']} [params.transactions]
* @param {UsageMetadata[]} [params.collectedUsage=this.collectedUsage]
*/
async recordCollectedUsage({ model, context = 'message', collectedUsage = this.collectedUsage }) {
async recordCollectedUsage({
model,
balance,
transactions,
context = 'message',
collectedUsage = this.collectedUsage,
}) {
if (!collectedUsage || !collectedUsage.length) {
return;
}
@@ -620,6 +652,8 @@ class AgentClient extends BaseClient {
const txMetadata = {
context,
balance,
transactions,
conversationId: this.conversationId,
user: this.user ?? this.options.req.user?.id,
endpointTokenConfig: this.options.endpointTokenConfig,
@@ -719,7 +753,13 @@ class AgentClient extends BaseClient {
return currentMessageTokens > 0 ? currentMessageTokens : originalEstimate;
}
async chatCompletion({ payload, abortController = null }) {
/**
* @param {object} params
* @param {string | ChatCompletionMessageParam[]} params.payload
* @param {Record<string, Record<string, string>>} [params.userMCPAuthMap]
* @param {AbortController} [params.abortController]
*/
async chatCompletion({ payload, userMCPAuthMap, abortController = null }) {
/** @type {Partial<GraphRunnableConfig>} */
let config;
/** @type {ReturnType<createRun>} */
@@ -731,15 +771,22 @@ class AgentClient extends BaseClient {
abortController = new AbortController();
}
/** @type {TCustomConfig['endpoints']['agents']} */
const agentsEConfig = this.options.req.app.locals[EModelEndpoint.agents];
const appConfig = this.options.req.config;
/** @type {AppConfig['endpoints']['agents']} */
const agentsEConfig = appConfig.endpoints?.[EModelEndpoint.agents];
config = {
runName: 'AgentRun',
configurable: {
thread_id: this.conversationId,
last_agent_index: this.agentConfigs?.size ?? 0,
user_id: this.user ?? this.options.req.user?.id,
hide_sequential_outputs: this.options.agent.hide_sequential_outputs,
requestBody: {
messageId: this.responseMessageId,
conversationId: this.conversationId,
parentMessageId: this.parentMessageId,
},
user: this.options.req.user,
},
recursionLimit: agentsEConfig?.recursionLimit ?? 25,
@@ -823,11 +870,10 @@ class AgentClient extends BaseClient {
if (agent.useLegacyContent === true) {
messages = formatContentStrings(messages);
}
if (
agent.model_parameters?.clientOptions?.defaultHeaders?.['anthropic-beta']?.includes(
'prompt-caching',
)
) {
const defaultHeaders =
agent.model_parameters?.clientOptions?.defaultHeaders ??
agent.model_parameters?.configuration?.defaultHeaders;
if (defaultHeaders?.['anthropic-beta']?.includes('prompt-caching')) {
messages = addCacheControl(messages);
}
@@ -835,6 +881,16 @@ class AgentClient extends BaseClient {
memoryPromise = this.runMemory(messages);
}
/** Resolve request-based headers for Custom Endpoints. Note: if this is added to
* non-custom endpoints, it will need to account for varying provider header configs.
*/
if (agent.model_parameters?.configuration?.defaultHeaders != null) {
agent.model_parameters.configuration.defaultHeaders = resolveHeaders({
headers: agent.model_parameters.configuration.defaultHeaders,
body: config.configurable.requestBody,
});
}
run = await createRun({
agent,
req: this.options.req,
@@ -870,21 +926,9 @@ class AgentClient extends BaseClient {
run.Graph.contentData = contentData;
}
try {
if (await hasCustomUserVars()) {
config.configurable.userMCPAuthMap = await getMCPAuthMap({
tools: agent.tools,
userId: this.options.req.user.id,
findPluginAuthsByKeys,
});
}
} catch (err) {
logger.error(
`[api/server/controllers/agents/client.js #chatCompletion] Error getting custom user vars for agent ${agent.id}`,
err,
);
if (userMCPAuthMap != null) {
config.configurable.userMCPAuthMap = userMCPAuthMap;
}
await run.processStream({ messages }, config, {
keepContent: i !== 0,
tokenCounter: createTokenCounter(this.getEncoding()),
@@ -1002,14 +1046,18 @@ class AgentClient extends BaseClient {
});
try {
if (memoryPromise) {
const attachments = await memoryPromise;
if (attachments && attachments.length > 0) {
this.artifactPromises.push(...attachments);
}
const attachments = await this.awaitMemoryWithTimeout(memoryPromise);
if (attachments && attachments.length > 0) {
this.artifactPromises.push(...attachments);
}
await this.recordCollectedUsage({ context: 'message' });
const balanceConfig = getBalanceConfig(appConfig);
const transactionsConfig = getTransactionsConfig(appConfig);
await this.recordCollectedUsage({
context: 'message',
balance: balanceConfig,
transactions: transactionsConfig,
});
} catch (err) {
logger.error(
'[api/server/controllers/agents/client.js #chatCompletion] Error recording collected usage',
@@ -1017,11 +1065,9 @@ class AgentClient extends BaseClient {
);
}
} catch (err) {
if (memoryPromise) {
const attachments = await memoryPromise;
if (attachments && attachments.length > 0) {
this.artifactPromises.push(...attachments);
}
const attachments = await this.awaitMemoryWithTimeout(memoryPromise);
if (attachments && attachments.length > 0) {
this.artifactPromises.push(...attachments);
}
logger.error(
'[api/server/controllers/agents/client.js #sendCompletion] Operation aborted',
@@ -1052,37 +1098,49 @@ class AgentClient extends BaseClient {
}
const { handleLLMEnd, collected: collectedMetadata } = createMetadataAggregator();
const { req, res, agent } = this.options;
const appConfig = req.config;
let endpoint = agent.endpoint;
/** @type {import('@librechat/agents').ClientOptions} */
let clientOptions = {
maxTokens: 75,
model: agent.model || agent.model_parameters.model,
};
let titleProviderConfig = await getProviderConfig(endpoint);
let titleProviderConfig = getProviderConfig({ provider: endpoint, appConfig });
/** @type {TEndpoint | undefined} */
const endpointConfig =
req.app.locals.all ?? req.app.locals[endpoint] ?? titleProviderConfig.customEndpointConfig;
appConfig.endpoints?.all ??
appConfig.endpoints?.[endpoint] ??
titleProviderConfig.customEndpointConfig;
if (!endpointConfig) {
logger.warn(
'[api/server/controllers/agents/client.js #titleConvo] Error getting endpoint config',
logger.debug(
`[api/server/controllers/agents/client.js #titleConvo] No endpoint config for "${endpoint}"`,
);
}
if (endpointConfig?.titleConvo === false) {
logger.debug(
`[api/server/controllers/agents/client.js #titleConvo] Title generation disabled for endpoint "${endpoint}"`,
);
return;
}
if (endpointConfig?.titleEndpoint && endpointConfig.titleEndpoint !== endpoint) {
try {
titleProviderConfig = await getProviderConfig(endpointConfig.titleEndpoint);
titleProviderConfig = getProviderConfig({
provider: endpointConfig.titleEndpoint,
appConfig,
});
endpoint = endpointConfig.titleEndpoint;
} catch (error) {
logger.warn(
`[api/server/controllers/agents/client.js #titleConvo] Error getting title endpoint config for ${endpointConfig.titleEndpoint}, falling back to default`,
`[api/server/controllers/agents/client.js #titleConvo] Error getting title endpoint config for "${endpointConfig.titleEndpoint}", falling back to default`,
error,
);
// Fall back to original provider config
endpoint = agent.endpoint;
titleProviderConfig = await getProviderConfig(endpoint);
titleProviderConfig = getProviderConfig({ provider: endpoint, appConfig });
}
}
@@ -1123,12 +1181,15 @@ class AgentClient extends BaseClient {
clientOptions.configuration = options.configOptions;
}
// Ensure maxTokens is set for non-o1 models
if (!/\b(o\d)\b/i.test(clientOptions.model) && !clientOptions.maxTokens) {
clientOptions.maxTokens = 75;
} else if (/\b(o\d)\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
if (clientOptions.maxTokens != null) {
delete clientOptions.maxTokens;
}
if (clientOptions?.modelKwargs?.max_completion_tokens != null) {
delete clientOptions.modelKwargs.max_completion_tokens;
}
if (clientOptions?.modelKwargs?.max_output_tokens != null) {
delete clientOptions.modelKwargs.max_output_tokens;
}
clientOptions = Object.assign(
Object.fromEntries(
@@ -1144,6 +1205,20 @@ class AgentClient extends BaseClient {
clientOptions.json = true;
}
/** Resolve request-based headers for Custom Endpoints. Note: if this is added to
* non-custom endpoints, it will need to account for varying provider header configs.
*/
if (clientOptions?.configuration?.defaultHeaders != null) {
clientOptions.configuration.defaultHeaders = resolveHeaders({
headers: clientOptions.configuration.defaultHeaders,
body: {
messageId: this.responseMessageId,
conversationId: this.conversationId,
parentMessageId: this.parentMessageId,
},
});
}
try {
const titleResult = await this.run.generateTitle({
provider,
@@ -1160,6 +1235,10 @@ class AgentClient extends BaseClient {
handleLLMEnd,
},
],
configurable: {
thread_id: this.conversationId,
user_id: this.user ?? this.options.req.user?.id,
},
},
});
@@ -1182,10 +1261,14 @@ class AgentClient extends BaseClient {
};
});
const balanceConfig = getBalanceConfig(appConfig);
const transactionsConfig = getTransactionsConfig(appConfig);
await this.recordCollectedUsage({
model: clientOptions.model,
context: 'title',
collectedUsage,
context: 'title',
model: clientOptions.model,
balance: balanceConfig,
transactions: transactionsConfig,
}).catch((err) => {
logger.error(
'[api/server/controllers/agents/client.js #titleConvo] Error recording collected usage',
@@ -1193,7 +1276,7 @@ class AgentClient extends BaseClient {
);
});
return titleResult.title;
return sanitizeTitle(titleResult.title);
} catch (err) {
logger.error('[api/server/controllers/agents/client.js #titleConvo] Error', err);
return;
@@ -1204,17 +1287,26 @@ class AgentClient extends BaseClient {
* @param {object} params
* @param {number} params.promptTokens
* @param {number} params.completionTokens
* @param {OpenAIUsageMetadata} [params.usage]
* @param {string} [params.model]
* @param {OpenAIUsageMetadata} [params.usage]
* @param {AppConfig['balance']} [params.balance]
* @param {string} [params.context='message']
* @returns {Promise<void>}
*/
async recordTokenUsage({ model, promptTokens, completionTokens, usage, context = 'message' }) {
async recordTokenUsage({
model,
usage,
balance,
promptTokens,
completionTokens,
context = 'message',
}) {
try {
await spendTokens(
{
model,
context,
balance,
conversationId: this.conversationId,
user: this.user ?? this.options.req.user?.id,
endpointTokenConfig: this.options.endpointTokenConfig,
@@ -1231,6 +1323,7 @@ class AgentClient extends BaseClient {
await spendTokens(
{
model,
balance,
context: 'reasoning',
conversationId: this.conversationId,
user: this.user ?? this.options.req.user?.id,

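Both resolveHeaders call sites above pass a body containing messageId, conversationId, and parentMessageId, which suggests that default headers for Custom Endpoints can reference request fields through placeholders. A minimal placeholder-substitution sketch under that assumption (illustrative; the real resolveHeaders in @librechat/api may use a different syntax and support more sources):

// Replace {{field}} placeholders in header values with fields from the request body.
function resolveHeadersSketch({ headers = {}, body = {} }) {
  const resolved = {};
  for (const [name, value] of Object.entries(headers)) {
    resolved[name] = String(value).replace(/\{\{(\w+)\}\}/g, (match, key) =>
      body[key] != null ? String(body[key]) : match,
    );
  }
  return resolved;
}

// resolveHeadersSketch({
//   headers: { 'X-Conversation-Id': '{{conversationId}}' },
//   body: { conversationId: 'thread789' },
// }) -> { 'X-Conversation-Id': 'thread789' }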
View File

@@ -10,6 +10,10 @@ jest.mock('@librechat/agents', () => ({
}),
}));
jest.mock('@librechat/api', () => ({
...jest.requireActual('@librechat/api'),
}));
describe('AgentClient - titleConvo', () => {
let client;
let mockRun;
@@ -41,8 +45,16 @@ describe('AgentClient - titleConvo', () => {
// Mock request and response
mockReq = {
app: {
locals: {
user: {
id: 'user-123',
},
body: {
model: 'gpt-4',
endpoint: EModelEndpoint.openAI,
key: null,
},
config: {
endpoints: {
[EModelEndpoint.openAI]: {
// Match the agent endpoint
titleModel: 'gpt-3.5-turbo',
@@ -52,14 +64,6 @@ describe('AgentClient - titleConvo', () => {
},
},
},
user: {
id: 'user-123',
},
body: {
model: 'gpt-4',
endpoint: EModelEndpoint.openAI,
key: null,
},
};
mockRes = {};
@@ -143,7 +147,7 @@ describe('AgentClient - titleConvo', () => {
it('should handle missing endpoint config gracefully', async () => {
// Remove endpoint config
mockReq.app.locals[EModelEndpoint.openAI] = undefined;
mockReq.config = { endpoints: {} };
const text = 'Test conversation text';
const abortController = new AbortController();
@@ -161,7 +165,16 @@ describe('AgentClient - titleConvo', () => {
it('should use agent model when titleModel is not provided', async () => {
// Remove titleModel from config
delete mockReq.app.locals[EModelEndpoint.openAI].titleModel;
mockReq.config = {
endpoints: {
[EModelEndpoint.openAI]: {
titlePrompt: 'Custom title prompt',
titleMethod: 'structured',
titlePromptTemplate: 'Template: {{content}}',
// titleModel is omitted
},
},
};
const text = 'Test conversation text';
const abortController = new AbortController();
@@ -173,7 +186,16 @@ describe('AgentClient - titleConvo', () => {
});
it('should not use titleModel when it equals CURRENT_MODEL constant', async () => {
mockReq.app.locals[EModelEndpoint.openAI].titleModel = Constants.CURRENT_MODEL;
mockReq.config = {
endpoints: {
[EModelEndpoint.openAI]: {
titleModel: Constants.CURRENT_MODEL,
titlePrompt: 'Custom title prompt',
titleMethod: 'structured',
titlePromptTemplate: 'Template: {{content}}',
},
},
};
const text = 'Test conversation text';
const abortController = new AbortController();
@@ -216,6 +238,12 @@ describe('AgentClient - titleConvo', () => {
model: 'gpt-3.5-turbo',
context: 'title',
collectedUsage: expect.any(Array),
balance: {
enabled: false,
},
transactions: {
enabled: true,
},
});
});
@@ -228,6 +256,38 @@ describe('AgentClient - titleConvo', () => {
expect(result).toBe('Generated Title');
});
it('should sanitize the generated title by removing think blocks', async () => {
const titleWithThinkBlock = '<think>reasoning about the title</think> User Hi Greeting';
mockRun.generateTitle.mockResolvedValue({
title: titleWithThinkBlock,
});
const text = 'Test conversation text';
const abortController = new AbortController();
const result = await client.titleConvo({ text, abortController });
// Should remove the <think> block and return only the clean title
expect(result).toBe('User Hi Greeting');
expect(result).not.toContain('<think>');
expect(result).not.toContain('</think>');
});
it('should return fallback title when sanitization results in empty string', async () => {
const titleOnlyThinkBlock = '<think>only reasoning no actual title</think>';
mockRun.generateTitle.mockResolvedValue({
title: titleOnlyThinkBlock,
});
const text = 'Test conversation text';
const abortController = new AbortController();
const result = await client.titleConvo({ text, abortController });
// Should return the fallback title since sanitization would result in empty string
expect(result).toBe('Untitled Conversation');
});
it('should handle errors gracefully and return undefined', async () => {
mockRun.generateTitle.mockRejectedValue(new Error('Title generation failed'));
@@ -239,16 +299,142 @@ describe('AgentClient - titleConvo', () => {
expect(result).toBeUndefined();
});
it('should skip title generation when titleConvo is set to false', async () => {
// Set titleConvo to false in endpoint config
mockReq.config = {
endpoints: {
[EModelEndpoint.openAI]: {
titleConvo: false,
titleModel: 'gpt-3.5-turbo',
titlePrompt: 'Custom title prompt',
titleMethod: 'structured',
titlePromptTemplate: 'Template: {{content}}',
},
},
};
const text = 'Test conversation text';
const abortController = new AbortController();
const result = await client.titleConvo({ text, abortController });
// Should return undefined without generating title
expect(result).toBeUndefined();
// generateTitle should NOT have been called
expect(mockRun.generateTitle).not.toHaveBeenCalled();
// recordCollectedUsage should NOT have been called
expect(client.recordCollectedUsage).not.toHaveBeenCalled();
});
it('should skip title generation when titleConvo is false in all config', async () => {
// Set titleConvo to false in "all" config
mockReq.config = {
endpoints: {
all: {
titleConvo: false,
titleModel: 'gpt-4o-mini',
titlePrompt: 'All config title prompt',
titleMethod: 'completion',
titlePromptTemplate: 'All config template',
},
},
};
const text = 'Test conversation text';
const abortController = new AbortController();
const result = await client.titleConvo({ text, abortController });
// Should return undefined without generating title
expect(result).toBeUndefined();
// generateTitle should NOT have been called
expect(mockRun.generateTitle).not.toHaveBeenCalled();
// recordCollectedUsage should NOT have been called
expect(client.recordCollectedUsage).not.toHaveBeenCalled();
});
it('should skip title generation when titleConvo is false for custom endpoint scenario', async () => {
// This test validates the behavior when customEndpointConfig (retrieved via
// getProviderConfig for custom endpoints) has titleConvo: false.
//
// The code path is:
// 1. endpoints?.all is checked (undefined in this test)
// 2. endpoints?.[endpoint] is checked (our test config)
// 3. Would fall back to titleProviderConfig.customEndpointConfig (for real custom endpoints)
//
// We simulate a custom endpoint scenario using a dynamically named endpoint config
// Create a unique endpoint name that represents a custom endpoint
const customEndpointName = 'customEndpoint';
// Configure the endpoint to have titleConvo: false
// This simulates what would be in customEndpointConfig for a real custom endpoint
mockReq.config = {
endpoints: {
// No 'all' config - so it will check endpoints[endpoint]
// This config represents what customEndpointConfig would contain
[customEndpointName]: {
titleConvo: false,
titleModel: 'custom-model-v1',
titlePrompt: 'Custom endpoint title prompt',
titleMethod: 'completion',
titlePromptTemplate: 'Custom template: {{content}}',
baseURL: 'https://api.custom-llm.com/v1',
apiKey: 'test-custom-key',
// Additional custom endpoint properties
models: {
default: ['custom-model-v1', 'custom-model-v2'],
},
},
},
};
// Set up agent to use our custom endpoint
// Use openAI as base but override with custom endpoint name for this test
mockAgent.endpoint = EModelEndpoint.openAI;
mockAgent.provider = EModelEndpoint.openAI;
// Override the endpoint in the config to point to our custom config
mockReq.config.endpoints[EModelEndpoint.openAI] =
mockReq.config.endpoints[customEndpointName];
delete mockReq.config.endpoints[customEndpointName];
const text = 'Test custom endpoint conversation';
const abortController = new AbortController();
const result = await client.titleConvo({ text, abortController });
// Should return undefined without generating title because titleConvo is false
expect(result).toBeUndefined();
// generateTitle should NOT have been called
expect(mockRun.generateTitle).not.toHaveBeenCalled();
// recordCollectedUsage should NOT have been called
expect(client.recordCollectedUsage).not.toHaveBeenCalled();
});
it('should pass titleEndpoint configuration to generateTitle', async () => {
// Mock the API key just for this test
const originalApiKey = process.env.ANTHROPIC_API_KEY;
process.env.ANTHROPIC_API_KEY = 'test-api-key';
// Add titleEndpoint to the config
mockReq.app.locals[EModelEndpoint.openAI].titleEndpoint = EModelEndpoint.anthropic;
mockReq.app.locals[EModelEndpoint.openAI].titleMethod = 'structured';
mockReq.app.locals[EModelEndpoint.openAI].titlePrompt = 'Custom title prompt';
mockReq.app.locals[EModelEndpoint.openAI].titlePromptTemplate = 'Custom template';
mockReq.config = {
endpoints: {
[EModelEndpoint.openAI]: {
titleModel: 'gpt-3.5-turbo',
titleEndpoint: EModelEndpoint.anthropic,
titleMethod: 'structured',
titlePrompt: 'Custom title prompt',
titlePromptTemplate: 'Custom template',
},
},
};
const text = 'Test conversation text';
const abortController = new AbortController();
@@ -274,18 +460,16 @@ describe('AgentClient - titleConvo', () => {
});
it('should use all config when endpoint config is missing', async () => {
// Remove endpoint-specific config
delete mockReq.app.locals[EModelEndpoint.openAI].titleModel;
delete mockReq.app.locals[EModelEndpoint.openAI].titlePrompt;
delete mockReq.app.locals[EModelEndpoint.openAI].titleMethod;
delete mockReq.app.locals[EModelEndpoint.openAI].titlePromptTemplate;
// Set 'all' config
mockReq.app.locals.all = {
titleModel: 'gpt-4o-mini',
titlePrompt: 'All config title prompt',
titleMethod: 'completion',
titlePromptTemplate: 'All config template: {{content}}',
// Set 'all' config without endpoint-specific config
mockReq.config = {
endpoints: {
all: {
titleModel: 'gpt-4o-mini',
titlePrompt: 'All config title prompt',
titleMethod: 'completion',
titlePromptTemplate: 'All config template: {{content}}',
},
},
};
const text = 'Test conversation text';
@@ -309,17 +493,21 @@ describe('AgentClient - titleConvo', () => {
it('should prioritize all config over endpoint config for title settings', async () => {
// Set both endpoint and 'all' config
mockReq.app.locals[EModelEndpoint.openAI].titleModel = 'gpt-3.5-turbo';
mockReq.app.locals[EModelEndpoint.openAI].titlePrompt = 'Endpoint title prompt';
mockReq.app.locals[EModelEndpoint.openAI].titleMethod = 'structured';
// Remove titlePromptTemplate from endpoint config to test fallback
delete mockReq.app.locals[EModelEndpoint.openAI].titlePromptTemplate;
mockReq.app.locals.all = {
titleModel: 'gpt-4o-mini',
titlePrompt: 'All config title prompt',
titleMethod: 'completion',
titlePromptTemplate: 'All config template',
mockReq.config = {
endpoints: {
[EModelEndpoint.openAI]: {
titleModel: 'gpt-3.5-turbo',
titlePrompt: 'Endpoint title prompt',
titleMethod: 'structured',
// titlePromptTemplate is omitted to test fallback
},
all: {
titleModel: 'gpt-4o-mini',
titlePrompt: 'All config title prompt',
titleMethod: 'completion',
titlePromptTemplate: 'All config template',
},
},
};
const text = 'Test conversation text';
@@ -346,17 +534,18 @@ describe('AgentClient - titleConvo', () => {
const originalApiKey = process.env.ANTHROPIC_API_KEY;
process.env.ANTHROPIC_API_KEY = 'test-anthropic-key';
// Remove endpoint-specific config to test 'all' config
delete mockReq.app.locals[EModelEndpoint.openAI];
// Set comprehensive 'all' config with all new title options
mockReq.app.locals.all = {
titleConvo: true,
titleModel: 'claude-3-haiku-20240307',
titleMethod: 'completion', // Testing the new default method
titlePrompt: 'Generate a concise, descriptive title for this conversation',
titlePromptTemplate: 'Conversation summary: {{content}}',
titleEndpoint: EModelEndpoint.anthropic, // Should switch provider to Anthropic
mockReq.config = {
endpoints: {
all: {
titleConvo: true,
titleModel: 'claude-3-haiku-20240307',
titleMethod: 'completion', // Testing the new default method
titlePrompt: 'Generate a concise, descriptive title for this conversation',
titlePromptTemplate: 'Conversation summary: {{content}}',
titleEndpoint: EModelEndpoint.anthropic, // Should switch provider to Anthropic
},
},
};
const text = 'Test conversation about AI and machine learning';
@@ -402,15 +591,16 @@ describe('AgentClient - titleConvo', () => {
// Clear previous calls
mockRun.generateTitle.mockClear();
// Remove endpoint config
delete mockReq.app.locals[EModelEndpoint.openAI];
// Set 'all' config with specific titleMethod
mockReq.app.locals.all = {
titleModel: 'gpt-4o-mini',
titleMethod: method,
titlePrompt: `Testing ${method} method`,
titlePromptTemplate: `Template for ${method}: {{content}}`,
mockReq.config = {
endpoints: {
all: {
titleModel: 'gpt-4o-mini',
titleMethod: method,
titlePrompt: `Testing ${method} method`,
titlePromptTemplate: `Template for ${method}: {{content}}`,
},
},
};
const text = `Test conversation for ${method} method`;
@@ -455,29 +645,33 @@ describe('AgentClient - titleConvo', () => {
// Set up Azure endpoint with serverless config
mockAgent.endpoint = EModelEndpoint.azureOpenAI;
mockAgent.provider = EModelEndpoint.azureOpenAI;
mockReq.app.locals[EModelEndpoint.azureOpenAI] = {
titleConvo: true,
titleModel: 'grok-3',
titleMethod: 'completion',
titlePrompt: 'Azure serverless title prompt',
streamRate: 35,
modelGroupMap: {
'grok-3': {
group: 'Azure AI Foundry',
deploymentName: 'grok-3',
},
},
groupMap: {
'Azure AI Foundry': {
apiKey: '${AZURE_API_KEY}',
baseURL: 'https://test.services.ai.azure.com/models',
version: '2024-05-01-preview',
serverless: true,
models: {
mockReq.config = {
endpoints: {
[EModelEndpoint.azureOpenAI]: {
titleConvo: true,
titleModel: 'grok-3',
titleMethod: 'completion',
titlePrompt: 'Azure serverless title prompt',
streamRate: 35,
modelGroupMap: {
'grok-3': {
group: 'Azure AI Foundry',
deploymentName: 'grok-3',
},
},
groupMap: {
'Azure AI Foundry': {
apiKey: '${AZURE_API_KEY}',
baseURL: 'https://test.services.ai.azure.com/models',
version: '2024-05-01-preview',
serverless: true,
models: {
'grok-3': {
deploymentName: 'grok-3',
},
},
},
},
},
},
};
@@ -503,28 +697,32 @@ describe('AgentClient - titleConvo', () => {
// Set up Azure endpoint
mockAgent.endpoint = EModelEndpoint.azureOpenAI;
mockAgent.provider = EModelEndpoint.azureOpenAI;
mockReq.app.locals[EModelEndpoint.azureOpenAI] = {
titleConvo: true,
titleModel: 'gpt-4o',
titleMethod: 'structured',
titlePrompt: 'Azure instance title prompt',
streamRate: 35,
modelGroupMap: {
'gpt-4o': {
group: 'eastus',
deploymentName: 'gpt-4o',
},
},
groupMap: {
eastus: {
apiKey: '${EASTUS_API_KEY}',
instanceName: 'region-instance',
version: '2024-02-15-preview',
models: {
mockReq.config = {
endpoints: {
[EModelEndpoint.azureOpenAI]: {
titleConvo: true,
titleModel: 'gpt-4o',
titleMethod: 'structured',
titlePrompt: 'Azure instance title prompt',
streamRate: 35,
modelGroupMap: {
'gpt-4o': {
group: 'eastus',
deploymentName: 'gpt-4o',
},
},
groupMap: {
eastus: {
apiKey: '${EASTUS_API_KEY}',
instanceName: 'region-instance',
version: '2024-02-15-preview',
models: {
'gpt-4o': {
deploymentName: 'gpt-4o',
},
},
},
},
},
},
};
@@ -551,29 +749,33 @@ describe('AgentClient - titleConvo', () => {
mockAgent.endpoint = EModelEndpoint.azureOpenAI;
mockAgent.provider = EModelEndpoint.azureOpenAI;
mockAgent.model_parameters.model = 'gpt-4o-latest';
mockReq.app.locals[EModelEndpoint.azureOpenAI] = {
titleConvo: true,
titleModel: Constants.CURRENT_MODEL,
titleMethod: 'functions',
streamRate: 35,
modelGroupMap: {
'gpt-4o-latest': {
group: 'region-eastus',
deploymentName: 'gpt-4o-mini',
version: '2024-02-15-preview',
},
},
groupMap: {
'region-eastus': {
apiKey: '${EASTUS2_API_KEY}',
instanceName: 'test-instance',
version: '2024-12-01-preview',
models: {
mockReq.config = {
endpoints: {
[EModelEndpoint.azureOpenAI]: {
titleConvo: true,
titleModel: Constants.CURRENT_MODEL,
titleMethod: 'functions',
streamRate: 35,
modelGroupMap: {
'gpt-4o-latest': {
group: 'region-eastus',
deploymentName: 'gpt-4o-mini',
version: '2024-02-15-preview',
},
},
groupMap: {
'region-eastus': {
apiKey: '${EASTUS2_API_KEY}',
instanceName: 'test-instance',
version: '2024-12-01-preview',
models: {
'gpt-4o-latest': {
deploymentName: 'gpt-4o-mini',
version: '2024-02-15-preview',
},
},
},
},
},
},
};
@@ -598,56 +800,60 @@ describe('AgentClient - titleConvo', () => {
// Set up Azure endpoint
mockAgent.endpoint = EModelEndpoint.azureOpenAI;
mockAgent.provider = EModelEndpoint.azureOpenAI;
mockReq.app.locals[EModelEndpoint.azureOpenAI] = {
titleConvo: true,
titleModel: 'o1-mini',
titleMethod: 'completion',
streamRate: 35,
modelGroupMap: {
'gpt-4o': {
group: 'eastus',
deploymentName: 'gpt-4o',
},
'o1-mini': {
group: 'region-eastus',
deploymentName: 'o1-mini',
},
'codex-mini': {
group: 'codex-mini',
deploymentName: 'codex-mini',
},
},
groupMap: {
eastus: {
apiKey: '${EASTUS_API_KEY}',
instanceName: 'region-eastus',
version: '2024-02-15-preview',
models: {
mockReq.config = {
endpoints: {
[EModelEndpoint.azureOpenAI]: {
titleConvo: true,
titleModel: 'o1-mini',
titleMethod: 'completion',
streamRate: 35,
modelGroupMap: {
'gpt-4o': {
group: 'eastus',
deploymentName: 'gpt-4o',
},
},
},
'region-eastus': {
apiKey: '${EASTUS2_API_KEY}',
instanceName: 'region-eastus2',
version: '2024-12-01-preview',
models: {
'o1-mini': {
group: 'region-eastus',
deploymentName: 'o1-mini',
},
},
},
'codex-mini': {
apiKey: '${AZURE_API_KEY}',
baseURL: 'https://example.cognitiveservices.azure.com/openai/',
version: '2025-04-01-preview',
serverless: true,
models: {
'codex-mini': {
group: 'codex-mini',
deploymentName: 'codex-mini',
},
},
groupMap: {
eastus: {
apiKey: '${EASTUS_API_KEY}',
instanceName: 'region-eastus',
version: '2024-02-15-preview',
models: {
'gpt-4o': {
deploymentName: 'gpt-4o',
},
},
},
'region-eastus': {
apiKey: '${EASTUS2_API_KEY}',
instanceName: 'region-eastus2',
version: '2024-12-01-preview',
models: {
'o1-mini': {
deploymentName: 'o1-mini',
},
},
},
'codex-mini': {
apiKey: '${AZURE_API_KEY}',
baseURL: 'https://example.cognitiveservices.azure.com/openai/',
version: '2025-04-01-preview',
serverless: true,
models: {
'codex-mini': {
deploymentName: 'codex-mini',
},
},
},
},
},
},
};
@@ -679,33 +885,34 @@ describe('AgentClient - titleConvo', () => {
mockReq.body.endpoint = EModelEndpoint.azureOpenAI;
mockReq.body.model = 'gpt-4';
// Remove Azure-specific config
delete mockReq.app.locals[EModelEndpoint.azureOpenAI];
// Set 'all' config as fallback with a serverless Azure config
mockReq.app.locals.all = {
titleConvo: true,
titleModel: 'gpt-4',
titleMethod: 'structured',
titlePrompt: 'Fallback title prompt from all config',
titlePromptTemplate: 'Template: {{content}}',
modelGroupMap: {
'gpt-4': {
group: 'default-group',
deploymentName: 'gpt-4',
},
},
groupMap: {
'default-group': {
apiKey: '${AZURE_API_KEY}',
baseURL: 'https://default.openai.azure.com/',
version: '2024-02-15-preview',
serverless: true,
models: {
mockReq.config = {
endpoints: {
all: {
titleConvo: true,
titleModel: 'gpt-4',
titleMethod: 'structured',
titlePrompt: 'Fallback title prompt from all config',
titlePromptTemplate: 'Template: {{content}}',
modelGroupMap: {
'gpt-4': {
group: 'default-group',
deploymentName: 'gpt-4',
},
},
groupMap: {
'default-group': {
apiKey: '${AZURE_API_KEY}',
baseURL: 'https://default.openai.azure.com/',
version: '2024-02-15-preview',
serverless: true,
models: {
'gpt-4': {
deploymentName: 'gpt-4',
},
},
},
},
},
},
};
@@ -728,6 +935,239 @@ describe('AgentClient - titleConvo', () => {
});
});
describe('getOptions method - GPT-5+ model handling', () => {
let mockReq;
let mockRes;
let mockAgent;
let mockOptions;
beforeEach(() => {
jest.clearAllMocks();
mockAgent = {
id: 'agent-123',
endpoint: EModelEndpoint.openAI,
provider: EModelEndpoint.openAI,
model_parameters: {
model: 'gpt-5',
},
};
mockReq = {
app: {
locals: {},
},
user: {
id: 'user-123',
},
};
mockRes = {};
mockOptions = {
req: mockReq,
res: mockRes,
agent: mockAgent,
};
client = new AgentClient(mockOptions);
});
it('should move maxTokens to modelKwargs.max_completion_tokens for GPT-5 models', () => {
const clientOptions = {
model: 'gpt-5',
maxTokens: 2048,
temperature: 0.7,
};
// Simulate the getOptions logic that handles GPT-5+ models
if (/\bgpt-[5-9]\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
clientOptions.modelKwargs = clientOptions.modelKwargs ?? {};
clientOptions.modelKwargs.max_completion_tokens = clientOptions.maxTokens;
delete clientOptions.maxTokens;
}
expect(clientOptions.maxTokens).toBeUndefined();
expect(clientOptions.modelKwargs).toBeDefined();
expect(clientOptions.modelKwargs.max_completion_tokens).toBe(2048);
expect(clientOptions.temperature).toBe(0.7); // Other options should remain
});
it('should move maxTokens to modelKwargs.max_output_tokens for GPT-5 models with useResponsesApi', () => {
const clientOptions = {
model: 'gpt-5',
maxTokens: 2048,
temperature: 0.7,
useResponsesApi: true,
};
if (/\bgpt-[5-9]\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
clientOptions.modelKwargs = clientOptions.modelKwargs ?? {};
const paramName =
clientOptions.useResponsesApi === true ? 'max_output_tokens' : 'max_completion_tokens';
clientOptions.modelKwargs[paramName] = clientOptions.maxTokens;
delete clientOptions.maxTokens;
}
expect(clientOptions.maxTokens).toBeUndefined();
expect(clientOptions.modelKwargs).toBeDefined();
expect(clientOptions.modelKwargs.max_output_tokens).toBe(2048);
expect(clientOptions.temperature).toBe(0.7); // Other options should remain
});
it('should handle GPT-5+ models with existing modelKwargs', () => {
const clientOptions = {
model: 'gpt-6',
maxTokens: 1500,
temperature: 0.8,
modelKwargs: {
customParam: 'value',
},
};
// Simulate the getOptions logic
if (/\bgpt-[5-9]\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
clientOptions.modelKwargs = clientOptions.modelKwargs ?? {};
clientOptions.modelKwargs.max_completion_tokens = clientOptions.maxTokens;
delete clientOptions.maxTokens;
}
expect(clientOptions.maxTokens).toBeUndefined();
expect(clientOptions.modelKwargs).toEqual({
customParam: 'value',
max_completion_tokens: 1500,
});
});
it('should not modify maxTokens for non-GPT-5+ models', () => {
const clientOptions = {
model: 'gpt-4',
maxTokens: 2048,
temperature: 0.7,
};
// Simulate the getOptions logic
if (/\bgpt-[5-9]\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
clientOptions.modelKwargs = clientOptions.modelKwargs ?? {};
clientOptions.modelKwargs.max_completion_tokens = clientOptions.maxTokens;
delete clientOptions.maxTokens;
}
// Should not be modified since it's GPT-4
expect(clientOptions.maxTokens).toBe(2048);
expect(clientOptions.modelKwargs).toBeUndefined();
});
it('should handle various GPT-5+ model formats', () => {
const testCases = [
{ model: 'gpt-5', shouldTransform: true },
{ model: 'gpt-5-turbo', shouldTransform: true },
{ model: 'gpt-6', shouldTransform: true },
{ model: 'gpt-7-preview', shouldTransform: true },
{ model: 'gpt-8', shouldTransform: true },
{ model: 'gpt-9-mini', shouldTransform: true },
{ model: 'gpt-4', shouldTransform: false },
{ model: 'gpt-4o', shouldTransform: false },
{ model: 'gpt-3.5-turbo', shouldTransform: false },
{ model: 'claude-3', shouldTransform: false },
];
testCases.forEach(({ model, shouldTransform }) => {
const clientOptions = {
model,
maxTokens: 1000,
};
// Simulate the getOptions logic
if (/\bgpt-[5-9]\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
clientOptions.modelKwargs = clientOptions.modelKwargs ?? {};
clientOptions.modelKwargs.max_completion_tokens = clientOptions.maxTokens;
delete clientOptions.maxTokens;
}
if (shouldTransform) {
expect(clientOptions.maxTokens).toBeUndefined();
expect(clientOptions.modelKwargs?.max_completion_tokens).toBe(1000);
} else {
expect(clientOptions.maxTokens).toBe(1000);
expect(clientOptions.modelKwargs).toBeUndefined();
}
});
});
it('should not swap max token param for older models when using useResponsesApi', () => {
const testCases = [
{ model: 'gpt-5', shouldTransform: true },
{ model: 'gpt-5-turbo', shouldTransform: true },
{ model: 'gpt-6', shouldTransform: true },
{ model: 'gpt-7-preview', shouldTransform: true },
{ model: 'gpt-8', shouldTransform: true },
{ model: 'gpt-9-mini', shouldTransform: true },
{ model: 'gpt-4', shouldTransform: false },
{ model: 'gpt-4o', shouldTransform: false },
{ model: 'gpt-3.5-turbo', shouldTransform: false },
{ model: 'claude-3', shouldTransform: false },
];
testCases.forEach(({ model, shouldTransform }) => {
const clientOptions = {
model,
maxTokens: 1000,
useResponsesApi: true,
};
if (/\bgpt-[5-9]\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
clientOptions.modelKwargs = clientOptions.modelKwargs ?? {};
const paramName =
clientOptions.useResponsesApi === true ? 'max_output_tokens' : 'max_completion_tokens';
clientOptions.modelKwargs[paramName] = clientOptions.maxTokens;
delete clientOptions.maxTokens;
}
if (shouldTransform) {
expect(clientOptions.maxTokens).toBeUndefined();
expect(clientOptions.modelKwargs?.max_output_tokens).toBe(1000);
} else {
expect(clientOptions.maxTokens).toBe(1000);
expect(clientOptions.modelKwargs).toBeUndefined();
}
});
});
it('should not transform if maxTokens is null or undefined', () => {
const testCases = [
{ model: 'gpt-5', maxTokens: null },
{ model: 'gpt-5', maxTokens: undefined },
{ model: 'gpt-6', maxTokens: 0 }, // Should transform even if 0
];
testCases.forEach(({ model, maxTokens }, index) => {
const clientOptions = {
model,
maxTokens,
temperature: 0.7,
};
// Simulate the getOptions logic
if (/\bgpt-[5-9]\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
clientOptions.modelKwargs = clientOptions.modelKwargs ?? {};
clientOptions.modelKwargs.max_completion_tokens = clientOptions.maxTokens;
delete clientOptions.maxTokens;
}
if (index < 2) {
// null or undefined cases
expect(clientOptions.maxTokens).toBe(maxTokens);
expect(clientOptions.modelKwargs).toBeUndefined();
} else {
// 0 case - should transform
expect(clientOptions.maxTokens).toBeUndefined();
expect(clientOptions.modelKwargs?.max_completion_tokens).toBe(0);
}
});
});
});
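// Editor's note: the tests above repeatedly inline the same "Simulate the getOptions logic"
// block. A minimal standalone sketch of that transformation is shown below; the helper name
// `swapMaxTokensParam` is illustrative and not part of this diff.
function swapMaxTokensParam(clientOptions) {
  if (/\bgpt-[5-9]\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
    clientOptions.modelKwargs = clientOptions.modelKwargs ?? {};
    // GPT-5+ models take max_completion_tokens; the Responses API uses max_output_tokens instead.
    const paramName =
      clientOptions.useResponsesApi === true ? 'max_output_tokens' : 'max_completion_tokens';
    clientOptions.modelKwargs[paramName] = clientOptions.maxTokens;
    delete clientOptions.maxTokens;
  }
  return clientOptions;
}
// Example: swapMaxTokensParam({ model: 'gpt-5', maxTokens: 1000 })
// => { model: 'gpt-5', modelKwargs: { max_completion_tokens: 1000 } }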
describe('runMemory method', () => {
let client;
let mockReq;
@@ -749,13 +1189,6 @@ describe('AgentClient - titleConvo', () => {
};
mockReq = {
app: {
locals: {
memory: {
messageWindowSize: 3,
},
},
},
user: {
id: 'user-123',
personalization: {
@@ -764,6 +1197,13 @@ describe('AgentClient - titleConvo', () => {
},
};
// Mock getAppConfig for memory tests
mockReq.config = {
memory: {
messageWindowSize: 3,
},
};
mockRes = {};
mockOptions = {

View File

@@ -21,7 +21,7 @@ const getLogStores = require('~/cache/getLogStores');
/**
* @typedef {Object} ErrorHandlerDependencies
* @property {Express.Request} req - The Express request object
* @property {ServerRequest} req - The Express request object
* @property {Express.Response} res - The Express response object
* @property {() => ErrorHandlerContext} getContext - Function to get the current context
* @property {string} [originPath] - The origin path for the error handler

View File

@@ -9,6 +9,24 @@ const {
const { disposeClient, clientRegistry, requestDataMap } = require('~/server/cleanup');
const { saveMessage } = require('~/models');
function createCloseHandler(abortController) {
return function (manual) {
if (!manual) {
logger.debug('[AgentController] Request closed');
}
if (!abortController) {
return;
} else if (abortController.signal.aborted) {
return;
} else if (abortController.requestCompleted) {
return;
}
abortController.abort();
logger.debug('[AgentController] Request aborted on close');
};
}
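// Editor's note: a brief usage sketch of the factory above (illustrative, not part of this diff).
// The returned handler is a no-op once the controller is missing, already aborted, or the request
// has completed; `requestCompleted` is assumed to be set elsewhere when streaming finishes.
//   const controller = new AbortController();
//   const onClose = createCloseHandler(controller);
//   res.on('close', onClose);            // aborts if the client disconnects mid-request
//   // ...after the response has fully streamed:
//   controller.requestCompleted = true;  // later 'close' events no longer abort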
const AgentController = async (req, res, next, initializeClient, addTitle) => {
let {
text,
@@ -31,7 +49,6 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
let userMessagePromise;
let getAbortData;
let client = null;
// Initialize as an array
let cleanupHandlers = [];
const newConvo = !conversationId;
@@ -62,9 +79,7 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
// Create a function to handle final cleanup
const performCleanup = () => {
logger.debug('[AgentController] Performing cleanup');
// Make sure cleanupHandlers is an array before iterating
if (Array.isArray(cleanupHandlers)) {
// Execute all cleanup handlers
for (const handler of cleanupHandlers) {
try {
if (typeof handler === 'function') {
@@ -105,8 +120,33 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
};
try {
/** @type {{ client: TAgentClient }} */
const result = await initializeClient({ req, res, endpointOption });
let prelimAbortController = new AbortController();
const prelimCloseHandler = createCloseHandler(prelimAbortController);
res.on('close', prelimCloseHandler);
const removePrelimHandler = (manual) => {
try {
prelimCloseHandler(manual);
res.removeListener('close', prelimCloseHandler);
} catch (e) {
logger.error('[AgentController] Error removing close listener', e);
}
};
cleanupHandlers.push(removePrelimHandler);
/** @type {{ client: TAgentClient; userMCPAuthMap?: Record<string, Record<string, string>> }} */
const result = await initializeClient({
req,
res,
endpointOption,
signal: prelimAbortController.signal,
});
if (prelimAbortController.signal?.aborted) {
prelimAbortController = null;
throw new Error('Request was aborted before initialization could complete');
} else {
prelimAbortController = null;
removePrelimHandler(true);
cleanupHandlers.pop();
}
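// Editor's note (illustrative, not part of this diff): the block above follows a "preliminary
// abort" pattern, letting a client disconnect during initialization cancel the work before the
// request-scoped controller (created further down) exists. In isolation, and with assumed names:
//   async function initWithPrelimAbort(res, initialize) {
//     const prelim = new AbortController();
//     const onClose = () => prelim.abort();
//     res.on('close', onClose);
//     try {
//       const result = await initialize(prelim.signal);
//       if (prelim.signal.aborted) {
//         throw new Error('Request was aborted before initialization could complete');
//       }
//       return result;
//     } finally {
//       res.removeListener('close', onClose);
//     }
//   }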
client = result.client;
// Register client with finalization registry if available
@@ -138,22 +178,7 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
};
const { abortController, onStart } = createAbortController(req, res, getAbortData, getReqData);
// Simple handler to avoid capturing scope
const closeHandler = () => {
logger.debug('[AgentController] Request closed');
if (!abortController) {
return;
} else if (abortController.signal.aborted) {
return;
} else if (abortController.requestCompleted) {
return;
}
abortController.abort();
logger.debug('[AgentController] Request aborted on close');
};
const closeHandler = createCloseHandler(abortController);
res.on('close', closeHandler);
cleanupHandlers.push(() => {
try {
@@ -175,6 +200,7 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
abortController,
overrideParentMessageId,
isEdited: !!editedContent,
userMCPAuthMap: result.userMCPAuthMap,
responseMessageId: editedResponseMessageId,
progressOptions: {
res,
@@ -233,6 +259,26 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
);
}
}
// Edge case: sendMessage completed but abort happened during sendCompletion
// We need to ensure a final event is sent
else if (!res.headersSent && !res.finished) {
logger.debug(
'[AgentController] Handling edge case: `sendMessage` completed but aborted during `sendCompletion`',
);
const finalResponse = { ...response };
finalResponse.error = true;
sendEvent(res, {
final: true,
conversation,
title: conversation.title,
requestMessage: userMessage,
responseMessage: finalResponse,
error: { message: 'Request was aborted during completion' },
});
res.end();
}
// Save user message if needed
if (!client.skipSaveUserMessage) {

Some files were not shown because too many files have changed in this diff.