Compare commits

113 Commits

Author SHA1 Message Date
Dustin Healy
70a82652a5 chore: bring back file search and ocr options 2025-08-14 07:50:21 -07:00
Dustin Healy
f211e25aac fix: stop duplication of file in chat on end of response stream 2025-08-13 22:29:51 -07:00
Dustin Healy
800391b264 chore: remove out of scope formatting changes 2025-08-13 20:59:08 -07:00
Andres Restrepo
6605b6c800 feat: implement Anthropic native PDF support with document preservation
- Add comprehensive debug logging throughout PDF processing pipeline
- Refactor attachment processing to separate image and document handling
- Create distinct addImageURLs(), addDocuments(), and processAttachments() methods
- Fix critical bugs in stream handling and parameter passing
- Add streamToBuffer utility for proper stream-to-buffer conversion
- Remove api/agents submodule from repository

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-12 18:26:40 -05:00
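
A minimal sketch of what a `streamToBuffer` utility like the one referenced above might look like (the name comes from the commit message; the signature is an assumption, not the PR's actual code):

```ts
import { Readable } from 'stream';

// Hypothetical sketch: drain a readable stream and concatenate its chunks
// into a single Buffer, e.g. before handing PDF bytes to a provider API.
async function streamToBuffer(stream: Readable): Promise<Buffer> {
  const chunks: Buffer[] = [];
  for await (const chunk of stream) {
    chunks.push(Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk));
  }
  return Buffer.concat(chunks);
}
```
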
Danny Avila
007570b5c6 🎨 style: Add missing markdown font size variable to CSS (#9011) 2025-08-12 10:19:29 -04:00
Jeffrey Bulanadi
5943d5346c 📑 docs: Fix Typos in JSDoc and Doc Files (#8998)
- Fix grammar in translations README: 'if has not been ran' → 'if it has not been run'
- Fix spacing in JSDoc comments: 'at theend' → 'at the end' (2 instances)
2025-08-12 10:18:55 -04:00
Danny Avila
052e61b735 v0.8.0-rc2 (#9000) 2025-08-11 18:58:21 -04:00
Danny Avila
1ccac58403 🔒 fix: Provider Validation for Social, OpenID, SAML, and LDAP Logins (#8999)
* fix: social login provider crossover

* feat: Enhance OpenID login handling and add tests for provider validation

* refactor: authentication error handling to use ErrorTypes.AUTH_FAILED enum

* refactor: update authentication error handling in LDAP and SAML strategies to use ErrorTypes.AUTH_FAILED enum

* ci: Add validation for login with existing email and different provider in SAML strategy

chore: Add logging for existing users with different providers in LDAP, SAML, and Social Login strategies
2025-08-11 18:51:46 -04:00
Clay Rosenthal
04d74a7e07 🪖 ci: Helm OCI Publishing (#7256)
* Adding helm oci publishing (#3)

Signed-off-by: Clay Rosenthal <clayros@amazon.com>

* Update chart version

Signed-off-by: Clay Rosenthal <clayros@amazon.com>

* Update helm release

Signed-off-by: Clay Rosenthal <clayros@amazon.com>

* Update helm chart package path

Signed-off-by: Clay Rosenthal <clayros@amazon.com>

---------

Signed-off-by: Clay Rosenthal <clayros@amazon.com>
2025-08-11 16:21:05 -04:00
github-actions[bot]
0fdca8ddbd 🌍 i18n: Update translation.json with latest translations (#8996)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-11 16:19:37 -04:00
Danny Avila
c5ca621efd 🧑‍💼 feat: Add Agent Model Validation (#8995)
* fix: Update logger import to use data-schemas module

* feat: agent model validation

* fix: Remove invalid error messages from translation file
2025-08-11 14:26:28 -04:00
Danny Avila
8cefa566da 📸 fix: Avatar Handling for Social Login (#8993)
- Updated `handleExistingUser` function to improve avatar handling logic, including checks for manual flags and null/undefined avatars.
- Introduced a new test suite for `handleExistingUser` covering various scenarios, ensuring robust functionality for avatar updates in both local and non-local storage contexts.
2025-08-11 11:50:46 -04:00
Danny Avila
7e4c8a5d0d 🛡️ fix: OTP Verification For 2FA Disable Operation (#8975) 2025-08-10 15:05:16 -04:00
Danny Avila
edf33bedcb 🛂 feat: Payload limits and Validation for User-created Memories (#8974) 2025-08-10 14:46:16 -04:00
Dustin Healy
21e00168b1 🪙 fix: Max Output Tokens Refactor for Responses API (#8972)

chore: Remove `max_output_tokens` from model kwargs in `titleConvo` if provided
2025-08-10 14:08:35 -04:00
github-actions[bot]
da3730b7d6 🌍 i18n: Update translation.json with latest translations (#8957)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-09 12:04:56 -04:00
Danny Avila
770c766d50 🔧 refactor: Move Plugin-related Helpers to TS API and Add Tests (#8961) 2025-08-09 12:02:44 -04:00
alkshmir
5eb6926464 🪖 chore: Fix Typo in helm/librechat/values.yaml (#8960) 2025-08-09 12:00:37 -04:00
Danny Avila
e478ae1c28 📦 chore: Bump @librechat/agents to v2.4.75 (#8956) 2025-08-09 00:01:21 -04:00
Danny Avila
0c9284c8ae 🧠 refactor: Memory Timeout after Completion and Guarantee Stream Final Event (#8955) 2025-08-09 00:01:10 -04:00
Danny Avila
4eeadddfe6 🔮 fix: Artifacts readOnly to Re-render when Expected (#8954) 2025-08-08 22:44:58 -04:00
Dustin Healy
9ca1847535 🔧 refactor: customUserVar Error Normalization (#8950)
* fix: localization string had unused template var

* fix: add normalizeHttpError to hopefully stop UI hangs when an error is returned in UserController

- Ensures updateUserPluginsController always returns valid HTTP status codes instead of undefined
- Add normalizeHttpError() helper to safely extract status/message from errors
- Default to 400 status code when Error.status is undefined/invalid

* refactor: move normalizeHttpError to packages/api
2025-08-08 15:53:04 -04:00
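
As a rough sketch of the helper described above (the name comes from the commit message; the shape is assumed, not taken from the actual implementation):

```ts
// Hedged sketch: coerce an arbitrary thrown value into a valid HTTP
// status/message pair, defaulting to 400 when status is undefined/invalid.
function normalizeHttpError(error: unknown): { status: number; message: string } {
  const err = error as { status?: unknown; message?: unknown };
  const status =
    typeof err?.status === 'number' && err.status >= 400 && err.status <= 599
      ? err.status
      : 400;
  const message =
    typeof err?.message === 'string' ? err.message : 'An unexpected error occurred';
  return { status, message };
}
```
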
Danny Avila
5d0bc95193 🧪 fix: Editor Styling, Incomplete Artifact Editing, Optimize Artifact Context (#8953)
* refactor: optimize artifacts context for improved performance

* fix: layout classes for artifacts editor

* chore: linting

* fix: enhance artifact mutation handling in CodeEditor to prevent infinite retries

* fix: handle incomplete artifacts in replaceArtifactContent and add regression tests
2025-08-08 15:49:58 -04:00
github-actions[bot]
e7d6100fe4 🌍 i18n: Update translation.json with latest translations (#8934)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-08 12:21:27 -04:00
Danny Avila
01a95229f2 📦 chore: Bump @librechat/agents to v2.4.73 (#8949) 2025-08-08 12:19:36 -04:00
Danny Avila
0939250f07 🛣️ fix: Remove Title Tokens Limit for GPT-5 Models (#8948)
* 🛣️ fix: Remove Title Tokens Limit for GPT-5 Models

* 🛣️ fix: Remove max_completion_tokens from modelKwargs when maxTokens is disabled

* chore: Add test-image* to .gitignore for CI/CD data
2025-08-08 11:15:29 -04:00
Danny Avila
7147bce3c3 feat: Add OpenAI Verbosity Parameter (#8929)
* WIP: Verbosity OpenAI Parameter

* 🔧 chore: remove unused import of extractEnvVariable from parsers.ts

* feat: add comprehensive tests for getOpenAIConfig and enhance verbosity handling

* fix: Handling for maxTokens in GPT-5+ models and add corresponding tests

* feat: Implement GPT-5+ model handling in processMemory function
2025-08-07 20:49:40 -04:00
github-actions[bot]
486fe34a2b 🌍 i18n: Update translation.json with latest translations (#8924)
* 🌍 i18n: Update translation.json with latest translations

* Update translation.json

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-08-07 20:42:44 -04:00
github-actions[bot]
922f43f520 🌍 i18n: Update translation.json with latest translations (#8907)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-07 16:25:24 -04:00
Marco Beretta
e6fa01d514 💬 style: Enhance Tooltip with HTML support and Improve Styling (#8915)
* feat: Enhance Tooltip component with HTML support and styling improvements

* feat: Integrate DOMPurify for HTML sanitization in Tooltip component
2025-08-07 16:24:42 -04:00
Danny Avila
8238fb49e0 📦 chore: bump @librechat/agents to v2.4.72 2025-08-07 16:23:12 -04:00
Danny Avila
430557676d feat: GPT-5 Token Limits, Rates, Icon, Reasoning Support 2025-08-07 16:23:11 -04:00
Danny Avila
8a5047c456 📦 chore: bump @librechat/agents to v2.4.71 2025-08-07 16:23:11 -04:00
Danny Avila
c787515894 🧠 feat: Add minimal Reasoning Effort option 2025-08-07 16:23:11 -04:00
Danny Avila
d95d8032cc feat: GPT-OSS models Token Limits & Rates 2025-08-07 16:23:10 -04:00
Joseph Licata
b9f72f4869 🎚️ refactor: Update Min. Values for OpenAI Parameters (#8922) 2025-08-07 14:38:08 -04:00
Danny Avila
429bb6653a 📦 chore: bump @librechat/agents to v2.4.70 (#8923) 2025-08-07 14:36:10 -04:00
Dustin Healy
47caafa8f8 🔧 fix: MCP Queries and Connections (#8870)
* fix: add refetchQueries on connection success so ToolSelectDialog doesn't require hard refresh

* fix: change hook so we only query connection status when mcpServers are configured

* fix: change refetchQueries to invalidateQueries for tools after server connection update

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-08-07 02:31:05 -04:00
Dustin Healy
8530594f37 🟢 fix: Incorrect customUserVars Set States (#8905) 2025-08-07 02:19:06 -04:00
Sebastien Bruel
0b071c06f6 🥞 refactor: Duplicate Agent Versions as Informational Instead of Errors (#8881)
* Fix error when updating an agent with no changes

* Add tests

* Revert translation file changes
2025-08-07 02:12:05 -04:00
Danny Avila
1092392ed8 📂 fix: File Cleanup for Uploaded "Agent" Files (#8900) 2025-08-06 19:45:57 -04:00
Danny Avila
36c8947029 🔄 refactor: Select OpenRouter LLM Class Dynamically by baseURL (#8898) 2025-08-06 19:26:40 -04:00
Danny Avila
4175a3ea19 v0.8.0-rc1 (#8846) 2025-08-04 15:25:07 -04:00
github-actions[bot]
02dc71f4b7 🌍 i18n: Update translation.json with latest translations (#8845)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-04 15:24:24 -04:00
github-actions[bot]
a6c99a3267 🌍 i18n: Update translation.json with latest translations (#8828)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-04 14:54:07 -04:00
SollalF
fcefc6eedf feat: Add OpenID Audience Parameter (#8837)
* feat: Add OpenID audience parameter support in authorization requests

* Updated .env.example to include OPENID_AUDIENCE variable for configuration.
* Enhanced openidStrategy to set the audience parameter in authorization requests if specified, improving OpenID integration.

* Update .env.example

* Update openidStrategy.js

---------

Co-authored-by: Danny Avila <danacordially@gmail.com>
2025-08-04 14:49:36 -04:00
Marco Beretta
dfdafdbd09 🖌️ feat: add animation styles for popovers and tooltips (#8831)
* feat: Update client version to 0.2.2 and add animation styles for popovers and tooltips

* refactor: Remove focus outline styles from Dropdown component

* feat: Update client version to 0.2.3 and add Select component export

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-08-04 14:44:00 -04:00
Danny Avila
33834cd484 🧹 feat: Automatic File Cleanup for Mistral OCR Uploads (#8827)
* chore: Handle optional token_endpoint in OAuth metadata discovery

* chore: Simplify permission typing logic in checkAccess function

* feat: Implement `deleteMistralFile` function and integrate file cleanup in `uploadMistralOCR`
2025-08-03 17:11:14 -04:00
Danny Avila
7ef2c626e2 🛠️ feat: Add Reset-Meili-Sync Script for MongoDB Flags (#8823) 2025-08-02 18:04:04 -04:00
Danny Avila
bc43423f58 🌍 i18n: Add Tibetan and Ukrainian languages to localization (#8819)
* 🌍 i18n: Add Tibetan and Ukrainian languages to localization

* feat: Update language selector to include Tibetan and Ukrainian options
* feat: Add translation files for Tibetan and Ukrainian languages
* chore: Update English translation.json with new language keys
* docs: Create localization guide for adding new languages

* Update README.md
2025-08-02 12:37:18 -04:00
Danny Avila
863401bcdf 🔧 fix: Assistants API SDK calls to match Updated Arguments (#8818)
* chore: remove comments in agents endpoint error handler

* chore: improve openai sdk typing

* chore: improve typing for azure asst init

* 🔧 fix: Assistants API SDK calls to match Updated Arguments
2025-08-02 12:19:58 -04:00
Danny Avila
33c8b87edd 📦 chore: Bump @modelcontextprotocol/sdk to v1.17.1 (#8809) 2025-08-01 16:10:12 -04:00
github-actions[bot]
077248a8a7 🌍 i18n: Update translation.json with latest translations (#8808)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-01 15:55:42 -04:00
Danny Avila
c6fb4686ef 🌐 ci: Bump Locize i18n Action Version 2025-08-01 15:52:00 -04:00
Dustin Healy
f1c6e4d55e 🧪 ci: Unit Tests for MCP Routes (#8803)
* refactor: remove unreachable if checks from mcp routes

* test: add tests for mcp routes
2025-08-01 14:47:00 -04:00
William Kim
e192c99c7d 🔧 fix: Apply Convo Export filename sanitization at export, not input (#8779)
Co-authored-by: Woosub Kim <woosub.kim@navercorp.com>
2025-07-31 07:28:33 -04:00
wartek69
056172f007 🔒 feat: MCP OAuth Config for Metadata Parameters (#8691)
* fix(mcp): add default metadata for pre-configured oauth

* removed lingering comment

* added configurable options & jest unit tests

* Update handler.test.ts

* Update handler.ts

---------

Co-authored-by: Alex <aleksander.chernyavskiy@seafar.eu>
Co-authored-by: Danny Avila <danacordially@gmail.com>
2025-07-31 07:24:49 -04:00
github-actions[bot]
5eed5009e9 🌍 i18n: Update translation.json with latest translations (#8771)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-30 15:59:27 -04:00
Danny Avila
6fc9abd4ad ✂️ fix: Remove Image Payloads from Memory Processing (#8770) 2025-07-30 15:54:33 -04:00
github-actions[bot]
03a924eaca 🌍 i18n: Update translation.json with latest translations (#8739)
* 🌍 i18n: Update translation.json with latest translations

* chore: Remove unused translation keys for MCP custom variables

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-07-30 15:23:12 -04:00
Danny Avila
25c993d93e 📦 chore: bump @librechat/agents to v2.4.69 (#8769) 2025-07-30 15:18:17 -04:00
Marco Beretta
09659c1040 🔨 style: Improve MCP UI (#8745)
* refactor: Enhance MCP components with improved UI elements and localization updates

* refactor: Clean up MCP components by removing unused imports and improving layout

* refactor: Update server status badge styling for improved UI consistency

* refactor: Move group up a level so 'X' and background highlight occur at same time for cancellation button

* refactor: Remove unused translation keys from the localization file

---------

Co-authored-by: Dustin Healy <dustinhealy1@gmail.com>
2025-07-30 14:56:22 -04:00
Josh Mullin
19a8f5c545 🐦 fix: Prioritize OIDC Username Claims to Prevent First Name Usernames (#8695)
Now prioritizes preferred_username claim, then the nonstandard
username claim, then email.

Removed given_name as a possible username choice to avoid exposing users’ first names as
usernames.

Updated openidStrategy.spec.js to reflect the new claim order.

Fixed mock OpenID server behavior where preferred_username was always
hardcoded, causing test failures.

Adjusted OpenID setup test to align with new username parameter
behavior.
2025-07-30 14:43:42 -04:00
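
A minimal sketch of the resulting claim priority (the claims are standard OIDC fields; the function here is illustrative, not the PR's code):

```ts
// Sketch: preferred_username first, then the nonstandard username claim,
// then email; given_name is intentionally no longer considered.
interface OpenIDUserInfo {
  preferred_username?: string;
  username?: string;
  email?: string;
}

function resolveUsername(userinfo: OpenIDUserInfo): string | undefined {
  return userinfo.preferred_username ?? userinfo.username ?? userinfo.email;
}
```
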
Dustin Healy
1050346915 feat: Add Support for customUserVar Replacement in 'args' Field (#8743) 2025-07-30 14:37:56 -04:00
Marco Beretta
8a1a38f346 🔑 fix: Update Conversation Mutation to use ID from Payload (#8758) 2025-07-30 14:34:30 -04:00
Danny Avila
32081245da 🪵 chore: Remove Unnecessary Comments 2025-07-29 14:59:58 -04:00
Dustin Healy
6fd3b569ac ⚒️ fix: MCP Initialization Flows (#8734)
* fix: add OAuth flow back in to success state

* feat: disable server clicks during initialization to prevent spam

* fix: correct new tab behavior for OAuth between one-click and normal initialization flows

* fix: stop polling on error during oauth (was infinite popping toasts because we didn't clear interval)

* fix: cleanupServerState should be called after successful cancelOauth, not before

* fix: change from completeFlow to failFlow to avoid stale client IDs on OAuth after cancellation

* fix: add logic to differentiate between cancelled and failed flows when checking status for indicators (so the error triangle indicator doesn't show up on cancellation)
2025-07-29 14:54:07 -04:00
Jakub Hrozek
6671fcb714 🛂 refactor: Use discoverAuthorizationServerMetadata for MCP OAuth (#8723)
* Use discoverAuthorizationServerMetadata instead of discoverMetadata

Uses the discoverAuthorizationServerMetadata function from the upstream
TS SDK. This has the advantage of falling back to OIDC discovery
metadata if the OAuth discovery metadata doesn't exist which is the case
with e.g. keycloak.

* chore: import order

---------

Co-authored-by: Danny Avila <danacordially@gmail.com>
2025-07-29 09:09:52 -04:00
Dustin Healy
c4677ab3fb 🔑 refactor: MCP Settings Rendering Logic for OAuth Servers (#8718)
* feat: add OAuth servers to conditional rendering logic for MCPPanel in SideNav

* feat: add startup flag check to conditional rendering logic

* fix: correct improper handling of failure state in reinitialize endpoint

* fix: change MCP config components to better handle servers without customUserVars

- removes the subtle reinitialize button from config components of servers without customUserVars or OAuth
- adds a placeholder message for components where servers have no customUserVars configured

* style: swap CustomUserVarsSection and ServerInitializationSection positions

* style: fix coloring for light mode and align more with existing design patterns

* chore: remove extraneous comments

* chore: reorder imports and `isEnabled` from api package

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-07-29 09:08:46 -04:00
github-actions[bot]
ef9d9b1276 🌍 i18n: Update translation.json with latest translations (#8676)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-28 15:15:06 -04:00
Danny Avila
a4ca4b7d9d 🔀 fix: Rerender Edge Cases After Migration to Shared Package (#8713)
* fix: render issues in PromptForm by decoupling nested dependencies as a result of @librechat/client components

* fix: MemoryViewer flicker by moving EditMemoryButton and DeleteMemoryButton outside of rendering

* fix: CategorySelector to use DropdownPopup for improved mobile compatibility

* chore: imports
2025-07-28 15:14:37 -04:00
Danny Avila
8e6eef04ab 🔧 fix: Update Proxy Config for OpenAI Image Tools (#8712)
- Replaced HttpsProxyAgent with ProxyAgent from undici for improved proxy handling in DALLE3.js and OpenAIImageTools.js.
- Updated fetchOptions to use dispatcher for proxy configuration.
- Added new test suite for DALLE3 to verify proxy configuration behavior based on environment variables.
2025-07-28 15:12:29 -04:00
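
For illustration, the undici pattern described above looks roughly like this (the `PROXY` variable name is an assumption; the point is that undici's fetch takes a `dispatcher` rather than an https-proxy-agent instance):

```ts
import { ProxyAgent } from 'undici';

// Sketch: route undici fetch calls through a proxy via a dispatcher.
const proxy = process.env.PROXY;
const fetchOptions = proxy ? { dispatcher: new ProxyAgent(proxy) } : {};
```
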
Danny Avila
ec3cbca6e3 feat: Enhance Redis Config and Error Handling (#8709)
* feat: Enhance Redis Config and Error Handling

- Added new Redis configuration options: `REDIS_RETRY_MAX_DELAY`, `REDIS_RETRY_MAX_ATTEMPTS`, `REDIS_CONNECT_TIMEOUT`, and `REDIS_ENABLE_OFFLINE_QUEUE` to improve connection resilience.
- Implemented error handling for Redis cache creation and session store initialization in `cacheFactory.js`.
- Enhanced logging for Redis client events and errors in `redisClients.js`.
- Updated `README.md` to document new Redis configuration options.

* chore: Add JSDoc comments to Redis configuration options in cacheConfig.js for improved clarity and documentation

* ci: update cacheFactory tests

* refactor: remove fallback

* fix: Improve error handling in Redis cache creation, re-throw errors when expected
2025-07-28 14:21:39 -04:00
Dustin Healy
4639dc3255 🐜 fix: Forward Ref to MCPSubMenu and ArtifactsSubMenu (#8696)
ToolsDropdown uses a menu library that passes refs to submenu items. Function components can't receive refs by default, so we get "Function components cannot be given refs" warnings in the console. React.forwardRef() allows them to properly handle ref forwarding by wrapping the component and attaching the ref to the outer div element.
2025-07-28 12:26:11 -04:00
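
A minimal sketch of the pattern (the component and props here are simplified placeholders, not the actual components):

```tsx
import React from 'react';

// Sketch: forwardRef lets the menu library attach its ref to the outer div.
const MCPSubMenu = React.forwardRef<
  HTMLDivElement,
  React.HTMLAttributes<HTMLDivElement>
>((props, ref) => (
  <div ref={ref} {...props}>
    {props.children}
  </div>
));
MCPSubMenu.displayName = 'MCPSubMenu';
```
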
Dustin Healy
0ef3fefaec 🏹 feat: Concurrent MCP Initialization Support (#8677)
* feat: Enhance MCP Connection Status Management

- Introduced new functions to retrieve and manage connection status for multiple MCP servers, including OAuth flow checks and server-specific status retrieval.
- Refactored the MCP connection status endpoints to support both all servers and individual server queries.
- Replaced the old server initialization hook with a new `useMCPServerManager` hook for improved state management and handling of multiple OAuth flows.
- Updated the MCPPanel component to utilize the new context provider for better state handling and UI updates.
- Fixed a number of UI bugs when initializing servers

* 🗣️ i18n: Remove unused strings from translation.json

* refactor: move helper functions out of the route module into mcp service file

* ci: add tests for newly added functions in mcp service file

* fix: memoize setMCPValues to avoid render loop
2025-07-28 12:25:34 -04:00
ryanh-ai
37aba18a96 🪟 feat: Context Window for amazon.nova-premier (#8689) 2025-07-28 12:24:08 -04:00
Marco Beretta
2ce6ac74f4 📻 feat: radio component (#8692)
* 📦 feat: Add Radio component

* 📦 feat: Integrate localization for 'No options available' message in Radio component

* 📦 feat: Bump version to 0.2.0 in package.json

* 📦 feat: Update client package version to 0.2.0 in package-lock.json
2025-07-28 12:18:59 -04:00
Danny Avila
9fddb0ff6a 🐋 ci: Include packages/client source files in Dockerfile.multi Client Build 2025-07-27 15:11:42 -04:00
Danny Avila
32f7dbd11f 🐳 ci: Build client package for Dockerfile.multi 2025-07-27 13:29:43 -04:00
Danny Avila
79197454f8 📦 feat: Move Shared Components to @librechat/client (#8685)
* feat: init @librechat/client

* feat: Add common types and interfaces for accessibility, agents, artifacts, assistants, and tools

* feat: Add jotai as a peer dependency

* fix build client package

* feat: cleanup unused types from common/index.ts

- Remove 104 unused type exports from packages/client/src/common/index.ts
- Keep only 7 actually used exports (93% reduction)
- Add cleanup script with enhanced import pattern detection
- Support both named imports and namespace imports (* as t)
- Create automatic backups and comprehensive documentation
- Maintain type safety with build verification
- No breaking changes to existing code

Kept exports:
- TShowToast, Option, OptionWithIcon, DropdownValueSetter
- MentionOption, NotificationSeverity, MenuItemProps

Scripts: cleanup-common-types-safe.js, README-CLEANUP.md

* fix: cleanup

* fix: package; refactor: tsconfig

* feat: add back `recoil`

* fix: move dependencies to peerDependencies in client package

* feat: add @librechat/client as a dependency in package.json and package-lock.json

* feat: update client package configuration and dependencies

- Added new dependencies for Rollup plugins and updated existing ones in package.json and package-lock.json.
- Introduced a new Rollup configuration file for building the client package.
- Refactored build scripts to include a dedicated build command for the client.
- Updated TypeScript configuration for improved module resolution and type declaration output.
- Integrated a Toast component from the client package into the main App component.

* feat: enhance Rollup configuration for client package

- Updated terser plugin settings to preserve directives like 'use client'.
- Added custom warning handler to ignore "use client" directive warnings during the build process.

* chore: rename package/client build script command

* feat: update client package dependencies and Rollup configuration

- Added rollup-plugin-postcss to package.json and updated package-lock.json.
- Enhanced Rollup configuration to include postcss plugin for CSS handling.
- Updated index.ts to export all components from the components directory for better modularity.

* feat: add client package directory to update configuration

- Included the 'client' package directory in the update.js configuration to ensure it is recognized during updates.

* feat: export Toast component in client package

- Added export for the Toast component in index.ts to enhance modularity and accessibility of components.

* feat: /client transition to @librechat/client

* chore: fixed formatting issues

* fix: update peer dependencies in @librechat/client to prevent bundling them

* fix: correct useSprings implementation in SplitText component

* fix: circular dependencies in DataTable

* fix: add remaining peer dependencies and match actual versions previously used in `client/package.json`

* fix: correct frontend:ci script to include client package build

* chore: enhance unused package detection for @librechat/client and improve dependency extraction

* fix: add missing peer dependency for @radix-ui/react-collapsible

* chore: include "packages/client" in unused i18next keys detection

* test: update AgentFooter tests to use document.querySelector for spinner checks
test: mock window.matchMedia in setupTests.js for consistent test environment

* feat: add react-hook-form dependency and update FormInput component to use its types

* chore: linting

* refactor: remove unused defaultSelectedValues prop from MCPSelect and MultiSelect components

* chore: linting

* feat: update GitHub Actions workflow to publish @librechat/client

* chore: update GitHub Actions workflow to install and build data-provider and client dependencies

* chore: add missing @testing-library/react dependency to client package

* chore: update tsconfig.json to exclude additional test files

* chore: fix build issues, resolve latest LC changes

* chore: move MCP components outside of `~/components/ui`

* feat: implement dynamic theme system with environment variable support and Tailwind CSS integration

* chore: remove unnecessary logging of sttExternal and ttsExternal in Speech component

* chore: squashed cleanup commits

chore: move @tanstack/react-virtual to dependencies and remove recoil from package.json

chore: move dependencies to peerDependencies in package.json

feat: update package.json and rollup.config.js to include jotai and enhance bundling configuration

feat: update package.json and rollup.config.js to include jotai and enhance bundling configuration

refactor: reorganize exports in index.ts for improved clarity

refactor: remove unused types and interfaces from common files

refactor: update peer dependencies and improve component typings

- Removed duplicate peer dependencies from package.json and organized them.
- Updated rollup.config.js to disable TypeScript checking during the build process.
- Modified AnimatedTabs component to use React.ReactNode for label and content types, and added TypeScript workarounds for compatibility.
- Enhanced Label and Separator components to accept an optional className prop and improved prop spreading.
- Updated Slider component to include an optional className prop and refined prop handling for better type safety.

refactor: clean up client workflow and update package dependencies

refactor: update package dependencies and improve PostCSS and Rollup configurations

chore: bump version to 0.1.2 in package.json

chore: bump client version to 0.1.2 in package-lock.json

chore: bump client version to 0.1.3 and update dependencies

chore: bump client version to 0.1.4 and update @react-spring dependencies

chore: update package version to 0.1.5 and adjust peer dependencies

- Bump version in package.json from 0.1.4 to 0.1.5.
- Update peer dependency for @tanstack/react-query to allow version 5.0.0.
- Add @tanstack/react-table and @tanstack/react-virtual as dependencies.
- Update various dependencies to their latest compatible versions.
- Simplify postcss.config.js by removing unnecessary options.
- Clean up rollup.config.js by removing ignored PostCSS warnings.
- Update CheckboxButton component to cast icon as React JSX element.
- Adjust Combobox component's class names for better styling.
- Change DropdownPopup component to use React's namespace import.
- Modify InputOTP component to use 'any' type for OTPInputContext.
- Ensure displayLabel and value in ModelParameters are converted to strings.
- Update MultiSearch component's placeholder to ensure it's a string.
- Cast selectIcon in MultiSelect as React JSX element for consistency.
- Update OGDialogTemplate to cast selectText as React JSX element.
- Initialize animationRef in PixelCard with undefined for clarity.
- Add TypeScript ignore comments in Select and SelectDropDown components for Radix UI type conflicts.
- Ensure title in SelectDropDown is a string and adjust rendering of options.
- Update useLocalize hook to cast options as any for compatibility.

refactor: code structure; chore: translations cleanup

chore: remove unused imports and clean up code in NewChat component

refactor: enhance Menu component to support custom render functions for menu items

style: update itemClassName in ToolsDropdown for improved UI consistency

fix: merge conflicts

chore: update @radix-ui/react-accordion to version 1.2.11

* refactor: remove unnecessary TypeScript type assertions in AnimatedTabs, Label, Separator, and Slider components

* feat: enhance theme system with localStorage persistence and new theme atoms

* chore: bump version of @librechat/client to 0.1.7

* chore: fix ci/cd warnings/errors related to linting and unused localization keys

* chore: update dependencies for class-variance-authority, clsx, and match-sorter

* chore: bump @librechat/client to v0.1.8

* feat: add utility colors for theme customization and remove unused tailwindConfig

* v0.1.9

---------

Co-authored-by: Marco Beretta <81851188+berry-13@users.noreply.github.com>
2025-07-27 12:19:01 -04:00
Danny Avila
97e1cdd224 🧗 refactor: Replace traverse package with Minimal Traversal for Logging (#8687)
* 📦 chore: Remove `keyv` from peerDependencies in package.json and package-lock.json for data-schemas

* refactor: replace traverse import with custom object-traverse utility for better control and error handling during logging

* chore(data-schemas): bump version to 0.0.15 and remove unused dependencies

* refactor: optimize message construction in debugTraverse

* chore: update Node.js version to 20.x in data-schemas workflow
2025-07-27 11:47:37 -04:00
Dustin Healy
d6a65f5a08 🐛 fix: Temporary Chats Still Visible in Sidebar (#8688)
* 🐛 fix: Fix import error causing temporary chats to still display in sidebar

* refactor: Update import path for `getCustomConfig` in Conversation and Message models

* chore: eslint warnings

* ci: add tests for Conversation and Message models

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-07-27 11:42:35 -04:00
Danny Avila
f4facb7d35 🪵 refactor: Dynamic getLogDirectory utility for Loggers (#8686) 2025-07-26 20:11:20 -04:00
Dustin Healy
545a909953 🗂️ refactor: Make MCPSubMenu consistent with MCPSelect (#8650)
- Refactored MCPSelect and MCPSubMenu components to utilize a new custom hook, `useMCPServerManager`, for improved state management and server initialization logic.
- Added functionality to handle simultaneous MCP server initialization requests, including cancellation and user notifications.
- Updated translation files to include new messages for initialization cancellation.
- Improved the configuration dialog handling for MCP servers, streamlining the user experience when managing server settings.
2025-07-25 14:51:42 -04:00
Danny Avila
cd436dc6a8 📦 chore: Update @modelcontextprotocol/sdk to v1.17.0 (#8674)
* 📦 chore: Update `@modelcontextprotocol/sdk` to v1.17.0

* refactor: unused package detection by extracting workspace dependencies in GitHub Actions workflow

* chore: Enhance unused package detection by including peerDependencies extraction in GitHub Actions workflow

* fix: Ensure safe extraction of dependencies and peerDependencies in unused package detection workflow
2025-07-25 14:06:16 -04:00
Danny Avila
e75beb92b3 🗑️ chore: Remove Workflows for Changelogs (#8673) 2025-07-25 13:45:22 -04:00
Danny Avila
5251246313 📱 refactor: Redis Client Error Logging and Ping only when Ready (#8671)
* 📱 refactor: Redis Client Error Logging and Ping only when Ready

* chore: intellisense for warning comment for Keyv Redis client regarding prefix support
2025-07-25 12:33:05 -04:00
Danny Avila
26f23c6aaf 📦 chore: Bump @node-saml/passport-saml to v5.1.0 (#8670) 2025-07-25 11:26:20 -04:00
Danny Avila
1636af1f27 📦 chore: Bump mongodb-memory-server to v10.1.4 (#8669) 2025-07-25 11:23:38 -04:00
Theo N. Truong
b050a0bf1e feat: Add Redis Ping Interval Configuration (#8648)
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-07-25 11:00:02 -04:00
github-actions[bot]
deb928bf80 🌍 i18n: Update translation.json with latest translations (#8664)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-25 10:36:14 -04:00
Theo N. Truong
21005b66cc feat: Add support for forced in-memory cache namespaces configuration (#8586)
* feat: Add support for forced in-memory cache keys configuration

* refactor: Update cache keys to use uppercase constants and moved cache for `librechat.yaml` into its own cache namespace (STATIC_CONFIG) and with a more descriptive key (LIBRECHAT_YAML_CONFIG)
2025-07-25 10:32:55 -04:00
Dustin Healy
3dc9e85fab 🐛 fix: Display OAuth MCP servers according to Chat Menu Setting (#8643)
* fix: chatMenu not being respected in MCPSelect

* fix: chatMenu not being respected in MCPSubMenu
2025-07-25 10:21:10 -04:00
Sebastien Bruel
ec67cf2d3a 🚇 chore: Remove Overridden Transport Error Listener (#8656) 2025-07-25 10:17:33 -04:00
Dustin Healy
1fe977e48f 🐛 fix: MCP Name Normalization breaking User Provided Variables (#8644) 2025-07-24 10:44:58 -04:00
Danny Avila
01470ef9fd 🔄 refactor: Default Completion Title Prompt and Title Model Selection (#8646)
* refactor: prefer `agent.model` (user-facing value) over `agent.model_parameters.model` to ensure Azure mapping

* chore: update @librechat/agents to version 2.4.68 to use new default title prompt for completion title method
2025-07-24 10:38:26 -04:00
Danny Avila
bef5c26bed v0.7.9 (#8638)
* chore: update version to v0.7.9 across all relevant files

* 🔧 chore: bump @librechat/api version to 1.2.9

* 🔧 chore: update @librechat/data-schemas version to 0.0.12

* 🔧 chore: bump librechat-data-provider version to 0.7.902
2025-07-24 01:46:47 -04:00
github-actions[bot]
9e03fef9db 🌍 i18n: Update translation.json with latest translations (#8639)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-24 00:46:42 -04:00
Sebastien Bruel
283c9cff6f ℹ️ fix: Add back Removed Icons for MCP Servers in Tools Dialog (#8636)
* Bug: Fix icons for MCP servers

* Add `OPENAI_API_KEY` to `jestSetup.js` to fix tests
2025-07-24 00:41:06 -04:00
Danny Avila
0aafdc0a86 🔳 fix: Bare Object MCP Tool Schemas as Passthrough (#8637)
* 🔳 fix: Bare Object MCP Tool Schemas as Passthrough

* ci: Add cases for handling complex object schemas in convertJsonSchemaToZod
2025-07-24 00:11:20 -04:00
Danny Avila
365e3bca95 🔁 feat: Allow "http" as Alias for "streamable-http" in MCP Options (#8624)
- Updated StreamableHTTPOptionsSchema to accept "http" alongside "streamable-http".
- Enhanced isStreamableHTTPOptions function to handle both types and validate URLs accordingly.
- Added tests to ensure correct processing of "http" type options and rejection of websocket URLs.
2025-07-23 10:26:40 -04:00
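
Sketched with zod (the schema name comes from the commit message; the field names and exact validation are assumptions):

```ts
import { z } from 'zod';

// Sketch: accept "http" as an alias for "streamable-http" and reject
// websocket URLs for this transport type.
const StreamableHTTPOptionsSchema = z.object({
  type: z.enum(['streamable-http', 'http']),
  url: z
    .string()
    .url()
    .refine((url) => !url.startsWith('ws://') && !url.startsWith('wss://'), {
      message: 'WebSocket URLs are not valid for streamable-http transports',
    }),
});
```
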
Danny Avila
a01536ddb7 🔗 fix: Set Abort Signal for Agent Chain Run if Cleaned Up (#8625) 2025-07-23 10:26:27 -04:00
github-actions[bot]
8a3ff62ee6 🌍 i18n: Update translation.json with latest translations (#8613)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-23 09:06:36 -04:00
Danny Avila
74d8a3824c 🔌 feat: MCP Reinitialization and OAuth in UI (#8598)
* feat: Add connection status endpoint for MCP servers

- Implemented a new endpoint to retrieve the connection status of all MCP servers without disconnecting idle connections.
- Enhanced MCPManager class with a method to get all user-specific connections.

* feat: add silencer arg to loadCustomConfig function to conditionally print config details

- Modified loadCustomConfig to accept a printConfig parameter that prevents the entire custom config from being printed every time it is called

* fix: new status endpoint actually works now, changes to manager.ts to support it

- Updated the connection status endpoint to utilize Maps for app and user connections, rather than incorrectly treating them as objects.
- Introduced a new method + variable in MCPManager to track servers requiring OAuth discovered at startup.
- Stopped OAuth flow from continuing once detected during startup for a new connection

* refactor: Remove hasAuthConfig since we can get that on the frontend without needing to use the endpoint

* feat: Add MCP connection status query and query key for new endpoint

- Introduced a new query hook `useMCPConnectionStatusQuery` to fetch the connection status of MCP servers.
- Added request in data-service
- Defined the API endpoint for retrieving MCP connection status in api-endpoints.ts.
- Defined new types for MCP connection status responses in the types module.
- Added mcpConnectionStatus key

* feat: Enhance MCPSelect component with connection status and server configuration

- Added connection status handling for MCP servers using the new `useMCPConnectionStatusQuery` hook.
- Implemented logic to display appropriate status icons based on connection state and authentication configuration.
- Updated the server selection logic to utilize configured MCP servers from the startup configuration.
- Refactored the rendering of configuration buttons and status indicators for improved user interaction.

* refactor: move MCPConfigDialog to its own MCP subdir in ui and update import

* refactor: silence loadCustomConfig in status endpoint

* feat: Add optional pluginKey parameter to getUserPluginAuthValue

* feat: Add MCP authentication values endpoint and related queries

- Implemented a new endpoint to check authentication value flags for specific MCP servers, returning boolean indicators for each custom user variable.
- Added a corresponding query hook `useMCPAuthValuesQuery` to fetch authentication values from the frontend.
- Defined the API endpoint for retrieving MCP authentication values in api-endpoints.ts.
- Updated data-service to include a method for fetching MCP authentication values.
- Introduced new types for MCP authentication values responses in the types module.
- Added a new query key for MCP authentication values.

* feat: Localize MCPSelect component status labels and aria attributes

- Updated the MCPSelect component to use localized strings for connection status labels and aria attributes, enhancing accessibility and internationalization support.
- Added new translation keys for various connection states in the translation.json file.

* feat: Implement filtered MCP values selection based on connection status in MCPSelect

- Added a new `filteredSetMCPValues` function to ensure only connected servers are selectable in the MCPSelect component.
- Updated the rendering logic to visually indicate the connection status of servers by adjusting opacity.
- Enhanced accessibility by localizing the aria-label for the configuration button.

* feat: Add CustomUserVarsSection component for managing user variables

- Introduced a new `CustomUserVarsSection` component to allow users to configure custom variables for MCP servers.
- Integrated localization for user interface elements and added new translation keys for variable management.
- Added functionality to save and revoke user variables, with visual indicators for set/unset states.

* feat: Enhance MCPSelect and MCPConfigDialog with improved state management and UI updates

- Integrated `useQueryClient` to refetch queries for tools, authentication values, and connection status upon successful plugin updates in MCPSelect.
- Simplified plugin key handling by directly using the formatted plugin key in save and revoke operations.
- Updated MCPConfigDialog to include server status indicators and improved dialog content structure for better user experience.
- Added new translation key for active status in the localization files.

* feat: Enhance MCPConfigDialog with dynamic server status badges and localization updates

- Added a helper function to render status badges based on the connection state of the MCP server, improving user feedback on connection status.
- Updated the localization files to include new translation keys for connection states such as "Connecting" and "Offline".
- Refactored the dialog to utilize the new status rendering function for better code organization and readability.

* feat: Implement OAuth handling and server initialization in MCP reinitialize flow

- Added OAuth handling to the MCP reinitialize endpoint, allowing the server to capture and return OAuth URLs when required.
- Updated the MCPConfigDialog to include a new ServerInitializationSection for managing server initialization and OAuth flow.
- Enhanced the user experience by providing feedback on server status and OAuth requirements through localized messages.
- Introduced new translation keys for OAuth-related messages in the localization files.
- Refactored the MCPSelect component to remove unused authentication configuration props.

* feat: Make OAuth actually work / update after OAuth link authorized

- Improved the handling of OAuth flows in the MCP reinitialize process, allowing for immediate return when OAuth is initiated.
- Updated the UserController to extract server names from plugin keys for better logging and connection management.
- Enhanced the MCPSelect component to reflect authentication status based on OAuth requirements.
- Implemented polling for OAuth completion in the ServerInitializationSection to improve user feedback during the connection process.
- Refactored MCPManager to support new OAuth flow initiation logic and connection handling.

* refactor: Simplify MCPPanel component and enhance server status display

- Removed unused imports and state management related to user plugins and server reinitialization.
- Integrated connection status handling directly into the MCPPanel for improved user feedback.
- Updated the rendering logic to display server connection states with visual indicators.
- Refactored the editing view to utilize new components for server initialization and custom user variables management.

* chore: remove comments

* chore: remove unused translation key for MCP panel

* refactor: Rename returnOnOAuthInitiated to returnOnOAuth for clarity

* refactor: attempt initialize on server click

* feat: add cancel OAuth flow functionality and related UI updates

* refactor: move server status icon logic into its own component

* chore: remove old localization strings (makes more sense for icon labels to just use the configure string since that's where it leads)

* fix: fix accessibility issues with MCPSelect

* fix: add missing save/revoke mutation logic to MCPPanel

* styling: add margin to checkmark in MultiSelect

* fix: add back in customUserVars check to hide gear config icon for servers without customUserVars

---------

Co-authored-by: Dustin Healy <dustinhealy1@gmail.com>
Co-authored-by: Dustin Healy <54083382+dustinhealy@users.noreply.github.com>
2025-07-22 22:52:45 -04:00
Danny Avila
62c3f135e7 ✔️ fix: Resource field TypeError & Missing Role Permission Type (#8606)
* fix: resource parameter undefined TypeError in log

* chore: Add missing FILE_SEARCH permission type to IRole interface

* chore: Bump version of @librechat/data-schemas to 0.0.11

* fix: Ensure resource is defined and handle potential null values in OAuth flow
2025-07-22 18:22:58 -04:00
Rinor Maloku
baf3b4ad08 🔐 feat: Add Resource Parameter to OAuth Requests per MCP Spec (#8599) 2025-07-22 17:52:55 -04:00
Danny Avila
e5d08ccdf1 🗂️ feat: Add File Search Toggle Permission for Chat Area Badge (#8605) 2025-07-22 17:51:21 -04:00
github-actions[bot]
5178507b1c 🌍 i18n: Update translation.json with latest translations (#8602)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-22 15:53:30 -04:00
Danny Avila
f797e90d79 🔀 feat: moonshotai/kimi Context and OpenRouter Endpoint Token Config (#8604)
* feat: Enhance agent initialization with endpoint token configuration and round max context tokens

* feat: recognize moonshot/kimi model context window

* chore: remove unused i18n key
2025-07-22 15:52:54 -04:00
Danny Avila
259224d986 🧼 chore: Clean up Settings by Removing Beta tab and reorganizing imports 2025-07-22 12:05:58 -04:00
Danny Avila
13789ab261 ☁️ fix: 'thinking' parameter default to false for Bedrock Conversations (#8600) 2025-07-22 12:01:18 -04:00
Dustin Healy
faaba30af1 feat: Add MCP Reinitialization to MCPPanel (#8418)
* feat: Add MCP Reinitialization to MCPPanel

- Refactored tool caching to include user-specific tools in various service files.
- Refactored MCPManager class for clarity
- Added a new endpoint for reinitializing MCP servers, allowing for dynamic updates of server configurations.
- Enhanced the MCPPanel component to support server reinitialization with user feedback.

* 🔃 refactor: Simplify Plugin Deduplication and Clear Cache Post-MCP Initialization

- Replaced manual deduplication of tools with the dedicated `filterUniquePlugins` function for improved readability.
- Added back cache clearing for tools after MCP initialization to ensure fresh data is used.
- Removed unused exports from `PluginController.js` to clean up the codebase.
2025-07-21 17:49:19 -04:00
Danny Avila
14660d75ae 🆕 feat: Enhanced Title Generation Config Options (#8580)
* 🏗️ refactor: Extract reasoning key logic into separate function

* refactor: Ensure `overrideProvider` is always defined in `getProviderConfig` result, and only used in `initializeAgent` if different from `agent.provider`

* feat: new title configuration options across services

- titlePrompt
- titleEndpoint
- titlePromptTemplate
- new "completion" titleMethod (new default)

* chore: update @librechat/agents and conform openai version to prevent SDK errors

* chore: add form-data package as a dependency and override to v4.0.4 to address CVE-2025-7783

* feat: add support for 'all' endpoint configuration in AppService and corresponding tests

* refactor: replace HttpsProxyAgent with ProxyAgent from undici for improved proxy handling in assistant initialization

* chore: update frontend review workflow to limit package paths to data-provider

* chore: update backend review workflow to include all package paths
2025-07-21 17:37:37 -04:00
803 changed files with 24948 additions and 4711 deletions

@@ -442,6 +442,8 @@ OPENID_REQUIRED_ROLE_PARAMETER_PATH=
 OPENID_USERNAME_CLAIM=
 # Set to determine which user info property returned from OpenID Provider to store as the User's name
 OPENID_NAME_CLAIM=
+# Optional audience parameter for OpenID authorization requests
+OPENID_AUDIENCE=
 OPENID_BUTTON_LABEL=
 OPENID_IMAGE_URL=
@@ -627,6 +629,15 @@ HELP_AND_FAQ_URL=https://librechat.ai
 # Redis connection limits
 # REDIS_MAX_LISTENERS=40
+# Redis ping interval in seconds (0 = disabled, >0 = enabled)
+# When set to a positive integer, Redis clients will ping the server at this interval to keep connections alive
+# When unset or 0, no pinging is performed (recommended for most use cases)
+# REDIS_PING_INTERVAL=300
+# Force specific cache namespaces to use in-memory storage even when Redis is enabled
+# Comma-separated list of CacheKeys (e.g., STATIC_CONFIG,ROLES,MESSAGES)
+# FORCED_IN_MEMORY_CACHE_NAMESPACES=STATIC_CONFIG,ROLES
 #==================================================#
 #                      Others                      #
 #==================================================#

@@ -7,7 +7,7 @@ on:
     - release/*
   paths:
     - 'api/**'
-    - 'packages/api/**'
+    - 'packages/**'
 jobs:
   tests_Backend:
     name: Run Backend unit tests

@@ -1,6 +1,11 @@
 name: Publish `@librechat/client` to NPM
 on:
+  push:
+    branches:
+      - main
+    paths:
+      - 'packages/client/package.json'
   workflow_dispatch:
     inputs:
       reason:
@@ -17,16 +22,37 @@ jobs:
       - name: Use Node.js
         uses: actions/setup-node@v4
         with:
-          node-version: '18.x'
-      - name: Check if client package exists
+          node-version: '20.x'
+      - name: Install client dependencies
+        run: cd packages/client && npm ci
+      - name: Build client
+        run: cd packages/client && npm run build
+      - name: Set up npm authentication
+        run: echo "//registry.npmjs.org/:_authToken=${{ secrets.PUBLISH_NPM_TOKEN }}" > ~/.npmrc
+      - name: Check version change
+        id: check
+        working-directory: packages/client
         run: |
-          if [ -d "packages/client" ]; then
-            echo "Client package directory found"
+          PACKAGE_VERSION=$(node -p "require('./package.json').version")
+          PUBLISHED_VERSION=$(npm view @librechat/client version 2>/dev/null || echo "0.0.0")
+          if [ "$PACKAGE_VERSION" = "$PUBLISHED_VERSION" ]; then
+            echo "No version change, skipping publish"
+            echo "skip=true" >> $GITHUB_OUTPUT
           else
-            echo "Client package directory not found - workflow ready for future use"
-            exit 0
+            echo "Version changed, proceeding with publish"
+            echo "skip=false" >> $GITHUB_OUTPUT
           fi
-      - name: Placeholder for future publishing
-        run: echo "Client package publishing workflow is ready"
+      - name: Pack package
+        if: steps.check.outputs.skip != 'true'
+        working-directory: packages/client
+        run: npm pack
+      - name: Publish
+        if: steps.check.outputs.skip != 'true'
+        working-directory: packages/client
+        run: npm publish *.tgz --access public

@@ -22,7 +22,7 @@ jobs:
       - name: Use Node.js
         uses: actions/setup-node@v4
         with:
-          node-version: '18.x'
+          node-version: '20.x'
       - name: Install dependencies
         run: cd packages/data-schemas && npm ci

@@ -8,7 +8,7 @@ on:
     - release/*
   paths:
     - 'client/**'
-    - 'packages/**'
+    - 'packages/data-provider/**'
 jobs:
   tests_frontend_ubuntu:

@@ -1,95 +0,0 @@
-name: Generate Release Changelog PR
-on:
-  push:
-    tags:
-      - 'v*.*.*'
-  workflow_dispatch:
-jobs:
-  generate-release-changelog-pr:
-    permissions:
-      contents: write # Needed for pushing commits and creating branches.
-      pull-requests: write
-    runs-on: ubuntu-latest
-    steps:
-      # 1. Checkout the repository (with full history).
-      - name: Checkout Repository
-        uses: actions/checkout@v4
-        with:
-          fetch-depth: 0
-      # 2. Generate the release changelog using our custom configuration.
-      - name: Generate Release Changelog
-        id: generate_release
-        uses: mikepenz/release-changelog-builder-action@v5.1.0
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        with:
-          configuration: ".github/configuration-release.json"
-          owner: ${{ github.repository_owner }}
-          repo: ${{ github.event.repository.name }}
-          outputFile: CHANGELOG-release.md
-      # 3. Update the main CHANGELOG.md:
-      #    - If it doesn't exist, create it with a basic header.
-      #    - Remove the "Unreleased" section (if present).
-      #    - Prepend the new release changelog above previous releases.
-      #    - Remove all temporary files before committing.
-      - name: Update CHANGELOG.md
-        run: |
-          # Determine the release tag, e.g. "v1.2.3"
-          TAG=${GITHUB_REF##*/}
-          echo "Using release tag: $TAG"
-          # Ensure CHANGELOG.md exists; if not, create a basic header.
-          if [ ! -f CHANGELOG.md ]; then
-            echo "# Changelog" > CHANGELOG.md
-            echo "" >> CHANGELOG.md
-            echo "All notable changes to this project will be documented in this file." >> CHANGELOG.md
-            echo "" >> CHANGELOG.md
-          fi
-          echo "Updating CHANGELOG.md…"
-          # Remove the "Unreleased" section (from "## [Unreleased]" until the first occurrence of '---') if it exists.
-          if grep -q "^## \[Unreleased\]" CHANGELOG.md; then
-            awk '/^## \[Unreleased\]/{flag=1} flag && /^---/{flag=0; next} !flag' CHANGELOG.md > CHANGELOG.cleaned
-          else
-            cp CHANGELOG.md CHANGELOG.cleaned
-          fi
-          # Split the cleaned file into:
-          #   - header.md: content before the first release header ("## [v...").
-          #   - tail.md: content from the first release header onward.
-          awk '/^## \[v/{exit} {print}' CHANGELOG.cleaned > header.md
-          awk 'f{print} /^## \[v/{f=1; print}' CHANGELOG.cleaned > tail.md
-          # Combine header, the new release changelog, and the tail.
-          echo "Combining updated changelog parts..."
-          cat header.md CHANGELOG-release.md > CHANGELOG.md.new
-          echo "" >> CHANGELOG.md.new
-          cat tail.md >> CHANGELOG.md.new
-          mv CHANGELOG.md.new CHANGELOG.md
-          # Remove temporary files.
-          rm -f CHANGELOG.cleaned header.md tail.md CHANGELOG-release.md
-          echo "Final CHANGELOG.md content:"
-          cat CHANGELOG.md
-      # 4. Create (or update) the Pull Request with the updated CHANGELOG.md.
-      - name: Create Pull Request
-        uses: peter-evans/create-pull-request@v7
-        with:
-          token: ${{ secrets.GITHUB_TOKEN }}
-          sign-commits: true
-          commit-message: "chore: update CHANGELOG for release ${{ github.ref_name }}"
-          base: main
-          branch: "changelog/${{ github.ref_name }}"
-          reviewers: danny-avila
-          title: "📜 docs: Changelog for release ${{ github.ref_name }}"
-          body: |
-            **Description**:
-            - This PR updates the CHANGELOG.md by removing the "Unreleased" section and adding new release notes for release ${{ github.ref_name }} above previous releases.

@@ -1,107 +0,0 @@
-name: Generate Unreleased Changelog PR
-on:
-  schedule:
-    - cron: "0 0 * * 1" # Runs every Monday at 00:00 UTC
-  workflow_dispatch:
-jobs:
-  generate-unreleased-changelog-pr:
-    permissions:
-      contents: write # Needed for pushing commits and creating branches.
-      pull-requests: write
-    runs-on: ubuntu-latest
-    steps:
-      # 1. Checkout the repository on main.
-      - name: Checkout Repository on Main
-        uses: actions/checkout@v4
-        with:
-          ref: main
-          fetch-depth: 0
-      # 4. Get the latest version tag.
-      - name: Get Latest Tag
-        id: get_latest_tag
-        run: |
-          LATEST_TAG=$(git describe --tags $(git rev-list --tags --max-count=1) || echo "none")
-          echo "Latest tag: $LATEST_TAG"
-          echo "tag=$LATEST_TAG" >> $GITHUB_OUTPUT
-      # 5. Generate the Unreleased changelog.
-      - name: Generate Unreleased Changelog
-        id: generate_unreleased
-        uses: mikepenz/release-changelog-builder-action@v5.1.0
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        with:
-          configuration: ".github/configuration-unreleased.json"
-          owner: ${{ github.repository_owner }}
-          repo: ${{ github.event.repository.name }}
-          outputFile: CHANGELOG-unreleased.md
-          fromTag: ${{ steps.get_latest_tag.outputs.tag }}
-          toTag: main
-      # 7. Update CHANGELOG.md with the new Unreleased section.
-      - name: Update CHANGELOG.md
-        id: update_changelog
-        run: |
-          # Create CHANGELOG.md if it doesn't exist.
-          if [ ! -f CHANGELOG.md ]; then
-            echo "# Changelog" > CHANGELOG.md
-            echo "" >> CHANGELOG.md
-            echo "All notable changes to this project will be documented in this file." >> CHANGELOG.md
-            echo "" >> CHANGELOG.md
-          fi
-          echo "Updating CHANGELOG.md…"
-          # Extract content before the "## [Unreleased]" (or first version header if missing).
-          if grep -q "^## \[Unreleased\]" CHANGELOG.md; then
-            awk '/^## \[Unreleased\]/{exit} {print}' CHANGELOG.md > CHANGELOG_TMP.md
-          else
-            awk '/^## \[v/{exit} {print}' CHANGELOG.md > CHANGELOG_TMP.md
-          fi
-          # Append the generated Unreleased changelog.
-          echo "" >> CHANGELOG_TMP.md
-          cat CHANGELOG-unreleased.md >> CHANGELOG_TMP.md
-          echo "" >> CHANGELOG_TMP.md
-          # Append the remainder of the original changelog (starting from the first version header).
-          awk 'f{print} /^## \[v/{f=1; print}' CHANGELOG.md >> CHANGELOG_TMP.md
-          # Replace the old file with the updated file.
-          mv CHANGELOG_TMP.md CHANGELOG.md
-          # Remove the temporary generated file.
-          rm -f CHANGELOG-unreleased.md
-          echo "Final CHANGELOG.md:"
-          cat CHANGELOG.md
-      # 8. Check if CHANGELOG.md has any updates.
-      - name: Check for CHANGELOG.md changes
-        id: changelog_changes
-        run: |
-          if git diff --quiet CHANGELOG.md; then
-            echo "has_changes=false" >> $GITHUB_OUTPUT
-          else
-            echo "has_changes=true" >> $GITHUB_OUTPUT
-          fi
-      # 9. Create (or update) the Pull Request only if there are changes.
-      - name: Create Pull Request
-        if: steps.changelog_changes.outputs.has_changes == 'true'
-        uses: peter-evans/create-pull-request@v7
-        with:
-          token: ${{ secrets.GITHUB_TOKEN }}
-          base: main
-          branch: "changelog/unreleased-update"
-          sign-commits: true
-          commit-message: "action: update Unreleased changelog"
-          title: "📜 docs: Unreleased Changelog"
-          body: |
-            **Description**:
-            - This PR updates the Unreleased section in CHANGELOG.md.
-            - It compares the current main branch with the latest version tag (determined as ${{ steps.get_latest_tag.outputs.tag }}),
-              regenerates the Unreleased changelog, removes any old Unreleased block, and inserts the new content.

@@ -4,12 +4,13 @@ name: Build Helm Charts on Tag
on:
push:
tags:
- "*"
- "chart-*"
jobs:
release:
permissions:
contents: write
packages: write
runs-on: ubuntu-latest
steps:
- name: Checkout
@@ -26,15 +27,49 @@ jobs:
uses: azure/setup-helm@v4
env:
GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
- name: Build Subchart Deps
run: |
cd helm/librechat-rag-api
helm dependency build
cd helm/librechat
helm dependency build
cd ../librechat-rag-api
helm dependency build
- name: Run chart-releaser
uses: helm/chart-releaser-action@v1.6.0
- name: Get Chart Version
id: chart-version
run: |
CHART_VERSION=$(echo "${{ github.ref_name }}" | cut -d'-' -f2)
echo "CHART_VERSION=${CHART_VERSION}" >> "$GITHUB_OUTPUT"
# Log in to GitHub Container Registry
- name: Log in to GitHub Container Registry
uses: docker/login-action@v3
with:
- charts_dir: helm
- skip_existing: true
- env:
- CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# Run Helm OCI Charts Releaser
# This is for the librechat chart
- name: Release Helm OCI Charts for librechat
uses: appany/helm-oci-chart-releaser@v0.4.2
with:
name: librechat
repository: ${{ github.actor }}/librechat-chart
tag: ${{ steps.chart-version.outputs.CHART_VERSION }}
path: helm/librechat
registry: ghcr.io
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}
# this is for the librechat-rag-api chart
- name: Release Helm OCI Charts for librechat-rag-api
uses: appany/helm-oci-chart-releaser@v0.4.2
with:
name: librechat-rag-api
repository: ${{ github.actor }}/librechat-chart
tag: ${{ steps.chart-version.outputs.CHART_VERSION }}
path: helm/librechat-rag-api
registry: ghcr.io
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -6,6 +6,7 @@ on:
- "client/src/**"
- "api/**"
- "packages/data-provider/src/**"
- "packages/client/**"
jobs:
detect-unused-i18n-keys:
@@ -23,7 +24,7 @@ jobs:
# Define paths
I18N_FILE="client/src/locales/en/translation.json"
SOURCE_DIRS=("client/src" "api" "packages/data-provider/src")
SOURCE_DIRS=("client/src" "api" "packages/data-provider/src" "packages/client")
# Check if translation file exists
if [[ ! -f "$I18N_FILE" ]]; then

View File

@@ -48,7 +48,7 @@ jobs:
# 2. Download translation files from locize.
- name: Download Translations from locize
- uses: locize/download@v1
+ uses: locize/download@v2
with:
project-id: ${{ secrets.LOCIZE_PROJECT_ID }}
path: "client/src/locales"

View File

@@ -7,6 +7,7 @@ on:
- 'package-lock.json'
- 'client/**'
- 'api/**'
- 'packages/client/**'
jobs:
detect-unused-packages:
@@ -28,7 +29,7 @@ jobs:
- name: Validate JSON files
run: |
- for FILE in package.json client/package.json api/package.json; do
+ for FILE in package.json client/package.json api/package.json packages/client/package.json; do
if [[ -f "$FILE" ]]; then
jq empty "$FILE" || (echo "::error title=Invalid JSON::$FILE is invalid" && exit 1)
fi
@@ -63,12 +64,31 @@ jobs:
local folder=$1
local output_file=$2
if [[ -d "$folder" ]]; then
grep -rEho "require\\(['\"]([a-zA-Z0-9@/._-]+)['\"]\\)" "$folder" --include=\*.{js,ts,mjs,cjs} | \
# Extract require() statements
grep -rEho "require\\(['\"]([a-zA-Z0-9@/._-]+)['\"]\\)" "$folder" --include=\*.{js,ts,tsx,jsx,mjs,cjs} | \
sed -E "s/require\\(['\"]([a-zA-Z0-9@/._-]+)['\"]\\)/\1/" > "$output_file"
grep -rEho "import .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]" "$folder" --include=\*.{js,ts,mjs,cjs} | \
# Extract ES6 imports - various patterns
# import x from 'module'
grep -rEho "import .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]" "$folder" --include=\*.{js,ts,tsx,jsx,mjs,cjs} | \
sed -E "s/import .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]/\1/" >> "$output_file"
# import 'module' (side-effect imports)
grep -rEho "import ['\"]([a-zA-Z0-9@/._-]+)['\"]" "$folder" --include=\*.{js,ts,tsx,jsx,mjs,cjs} | \
sed -E "s/import ['\"]([a-zA-Z0-9@/._-]+)['\"]/\1/" >> "$output_file"
# export { x } from 'module' or export * from 'module'
grep -rEho "export .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]" "$folder" --include=\*.{js,ts,tsx,jsx,mjs,cjs} | \
sed -E "s/export .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]/\1/" >> "$output_file"
# import type { x } from 'module' (TypeScript)
grep -rEho "import type .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]" "$folder" --include=\*.{ts,tsx} | \
sed -E "s/import type .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]/\1/" >> "$output_file"
# Remove subpath imports but keep the base package
# e.g., '@tanstack/react-query/devtools' becomes '@tanstack/react-query'
sed -i -E 's|^(@?[a-zA-Z0-9-]+(/[a-zA-Z0-9-]+)?)/.*|\1|' "$output_file"
sort -u "$output_file" -o "$output_file"
else
touch "$output_file"
@@ -78,13 +98,80 @@ jobs:
extract_deps_from_code "." root_used_code.txt
extract_deps_from_code "client" client_used_code.txt
extract_deps_from_code "api" api_used_code.txt
# Extract dependencies used by @librechat/client package
extract_deps_from_code "packages/client" packages_client_used_code.txt
- name: Get @librechat/client dependencies
id: get-librechat-client-deps
run: |
if [[ -f "packages/client/package.json" ]]; then
# Get all dependencies from @librechat/client (dependencies, devDependencies, and peerDependencies)
DEPS=$(jq -r '.dependencies // {} | keys[]' packages/client/package.json 2>/dev/null || echo "")
DEV_DEPS=$(jq -r '.devDependencies // {} | keys[]' packages/client/package.json 2>/dev/null || echo "")
PEER_DEPS=$(jq -r '.peerDependencies // {} | keys[]' packages/client/package.json 2>/dev/null || echo "")
# Combine all dependencies
echo "$DEPS" > librechat_client_deps.txt
echo "$DEV_DEPS" >> librechat_client_deps.txt
echo "$PEER_DEPS" >> librechat_client_deps.txt
# Also include dependencies that are imported in packages/client
cat packages_client_used_code.txt >> librechat_client_deps.txt
# Remove empty lines and sort
grep -v '^$' librechat_client_deps.txt | sort -u > temp_deps.txt
mv temp_deps.txt librechat_client_deps.txt
else
touch librechat_client_deps.txt
fi
- name: Extract Workspace Dependencies
id: extract-workspace-deps
run: |
# Function to get dependencies from a workspace package that are used by another package
get_workspace_package_deps() {
local package_json=$1
local output_file=$2
# Get all workspace dependencies (starting with @librechat/)
if [[ -f "$package_json" ]]; then
local workspace_deps=$(jq -r '.dependencies // {} | to_entries[] | select(.key | startswith("@librechat/")) | .key' "$package_json" 2>/dev/null || echo "")
# For each workspace dependency, get its dependencies
for dep in $workspace_deps; do
# Convert @librechat/api to packages/api
local workspace_path=$(echo "$dep" | sed 's/@librechat\//packages\//')
local workspace_package_json="${workspace_path}/package.json"
if [[ -f "$workspace_package_json" ]]; then
# Extract all dependencies from the workspace package
jq -r '.dependencies // {} | keys[]' "$workspace_package_json" 2>/dev/null >> "$output_file"
# Also extract peerDependencies
jq -r '.peerDependencies // {} | keys[]' "$workspace_package_json" 2>/dev/null >> "$output_file"
fi
done
fi
if [[ -f "$output_file" ]]; then
sort -u "$output_file" -o "$output_file"
else
touch "$output_file"
fi
}
# Get workspace dependencies for each package
get_workspace_package_deps "package.json" root_workspace_deps.txt
get_workspace_package_deps "client/package.json" client_workspace_deps.txt
get_workspace_package_deps "api/package.json" api_workspace_deps.txt
- name: Run depcheck for root package.json
id: check-root
run: |
if [[ -f "package.json" ]]; then
UNUSED=$(depcheck --json | jq -r '.dependencies | join("\n")' || echo "")
- UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat root_used_deps.txt root_used_code.txt | sort) || echo "")
+ # Exclude dependencies used in scripts, code, and workspace packages
+ UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat root_used_deps.txt root_used_code.txt root_workspace_deps.txt | sort) || echo "")
echo "ROOT_UNUSED<<EOF" >> $GITHUB_ENV
echo "$UNUSED" >> $GITHUB_ENV
echo "EOF" >> $GITHUB_ENV
@@ -97,7 +184,8 @@ jobs:
chmod -R 755 client
cd client
UNUSED=$(depcheck --json | jq -r '.dependencies | join("\n")' || echo "")
- UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat ../client_used_deps.txt ../client_used_code.txt | sort) || echo "")
+ # Exclude dependencies used in scripts, code, and workspace packages
+ UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat ../client_used_deps.txt ../client_used_code.txt ../client_workspace_deps.txt | sort) || echo "")
# Filter out false positives
UNUSED=$(echo "$UNUSED" | grep -v "^micromark-extension-llm-math$" || echo "")
echo "CLIENT_UNUSED<<EOF" >> $GITHUB_ENV
@@ -113,7 +201,8 @@ jobs:
chmod -R 755 api
cd api
UNUSED=$(depcheck --json | jq -r '.dependencies | join("\n")' || echo "")
- UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat ../api_used_deps.txt ../api_used_code.txt | sort) || echo "")
+ # Exclude dependencies used in scripts, code, and workspace packages
+ UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat ../api_used_deps.txt ../api_used_code.txt ../api_workspace_deps.txt | sort) || echo "")
echo "API_UNUSED<<EOF" >> $GITHUB_ENV
echo "$UNUSED" >> $GITHUB_ENV
echo "EOF" >> $GITHUB_ENV

.gitignore
View File

@@ -13,6 +13,9 @@ pids
*.seed
.git
# CI/CD data
test-image*
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov

View File

@@ -7,8 +7,49 @@ All notable changes to this project will be documented in this file.
## [Unreleased]
- - no changes
### ✨ New Features
- ✨ feat: implement search parameter updates by **@mawburn** in [#7151](https://github.com/danny-avila/LibreChat/pull/7151)
- 🎏 feat: Add MCP support for Streamable HTTP Transport by **@benverhees** in [#7353](https://github.com/danny-avila/LibreChat/pull/7353)
- 🔒 feat: Add Content Security Policy using Helmet middleware by **@rubentalstra** in [#7377](https://github.com/danny-avila/LibreChat/pull/7377)
- ✨ feat: Add Normalization for MCP Server Names by **@danny-avila** in [#7421](https://github.com/danny-avila/LibreChat/pull/7421)
- 📊 feat: Improve Helm Chart by **@hofq** in [#3638](https://github.com/danny-avila/LibreChat/pull/3638)
- 🦾 feat: Claude-4 Support by **@danny-avila** in [#7509](https://github.com/danny-avila/LibreChat/pull/7509)
- 🪨 feat: Bedrock Support for Claude-4 Reasoning by **@danny-avila** in [#7517](https://github.com/danny-avila/LibreChat/pull/7517)
### 🌍 Internationalization
- 🌍 i18n: Add `Danish` and `Czech` and `Catalan` localization support by **@rubentalstra** in [#7373](https://github.com/danny-avila/LibreChat/pull/7373)
- 🌍 i18n: Update translation.json with latest translations by **@github-actions[bot]** in [#7375](https://github.com/danny-avila/LibreChat/pull/7375)
- 🌍 i18n: Update translation.json with latest translations by **@github-actions[bot]** in [#7468](https://github.com/danny-avila/LibreChat/pull/7468)
### 🔧 Fixes
- 💬 fix: update aria-label for accessibility in ConvoLink component by **@berry-13** in [#7320](https://github.com/danny-avila/LibreChat/pull/7320)
- 🔑 fix: use `apiKey` instead of `openAIApiKey` in OpenAI-like Config by **@danny-avila** in [#7337](https://github.com/danny-avila/LibreChat/pull/7337)
- 🔄 fix: update navigation logic in `useFocusChatEffect` to ensure correct search parameters are used by **@mawburn** in [#7340](https://github.com/danny-avila/LibreChat/pull/7340)
- 🔄 fix: Improve MCP Connection Cleanup by **@danny-avila** in [#7400](https://github.com/danny-avila/LibreChat/pull/7400)
- 🛡️ fix: Preset and Validation Logic for URL Query Params by **@danny-avila** in [#7407](https://github.com/danny-avila/LibreChat/pull/7407)
- 🌘 fix: artifact of preview text is illegible in dark mode by **@nhtruong** in [#7405](https://github.com/danny-avila/LibreChat/pull/7405)
- 🛡️ fix: Temporarily Remove CSP until Configurable by **@danny-avila** in [#7419](https://github.com/danny-avila/LibreChat/pull/7419)
- 💽 fix: Exclude index page `/` from static cache settings by **@sbruel** in [#7382](https://github.com/danny-avila/LibreChat/pull/7382)
### ⚙️ Other Changes
- 📜 docs: CHANGELOG for release v0.7.8 by **@github-actions[bot]** in [#7290](https://github.com/danny-avila/LibreChat/pull/7290)
- 📦 chore: Update API Package Dependencies by **@danny-avila** in [#7359](https://github.com/danny-avila/LibreChat/pull/7359)
- 📜 docs: Unreleased Changelog by **@github-actions[bot]** in [#7321](https://github.com/danny-avila/LibreChat/pull/7321)
- 📜 docs: Unreleased Changelog by **@github-actions[bot]** in [#7434](https://github.com/danny-avila/LibreChat/pull/7434)
- 🛡️ chore: `multer` v2.0.0 for CVE-2025-47935 and CVE-2025-47944 by **@danny-avila** in [#7454](https://github.com/danny-avila/LibreChat/pull/7454)
- 📂 refactor: Improve `FileAttachment` & File Form Deletion by **@danny-avila** in [#7471](https://github.com/danny-avila/LibreChat/pull/7471)
- 📊 chore: Remove Old Helm Chart by **@hofq** in [#7512](https://github.com/danny-avila/LibreChat/pull/7512)
- 🪖 chore: bump helm app version to v0.7.8 by **@austin-barrington** in [#7524](https://github.com/danny-avila/LibreChat/pull/7524)
---
## [v0.7.8] -
Changes from v0.7.8-rc1 to v0.7.8.
@@ -50,7 +91,6 @@ Changes from v0.7.8-rc1 to v0.7.8.
---
- ## [v0.7.8-rc1] -
## [v0.7.8-rc1] -
Changes from v0.7.7 to v0.7.8-rc1.

View File

@@ -1,4 +1,4 @@
- # v0.7.9-rc1
+ # v0.8.0-rc2
# Base node image
FROM node:20-alpine AS node

View File

@@ -1,5 +1,5 @@
# Dockerfile.multi
- # v0.7.9-rc1
+ # v0.8.0-rc2
# Base for all builds
FROM node:20-alpine AS base-min
@@ -16,6 +16,7 @@ COPY package*.json ./
COPY packages/data-provider/package*.json ./packages/data-provider/
COPY packages/api/package*.json ./packages/api/
COPY packages/data-schemas/package*.json ./packages/data-schemas/
COPY packages/client/package*.json ./packages/client/
COPY client/package*.json ./client/
COPY api/package*.json ./api/
@@ -45,11 +46,19 @@ COPY --from=data-provider-build /app/packages/data-provider/dist /app/packages/d
COPY --from=data-schemas-build /app/packages/data-schemas/dist /app/packages/data-schemas/dist
RUN npm run build
# Build `client` package
FROM base AS client-package-build
WORKDIR /app/packages/client
COPY packages/client ./
RUN npm run build
# Client build
FROM base AS client-build
WORKDIR /app/client
COPY client ./
COPY --from=data-provider-build /app/packages/data-provider/dist /app/packages/data-provider/dist
COPY --from=client-package-build /app/packages/client/dist /app/packages/client/dist
COPY --from=client-package-build /app/packages/client/src /app/packages/client/src
ENV NODE_OPTIONS="--max-old-space-size=2048"
RUN npm run build

View File

@@ -27,6 +27,7 @@ const {
const { getModelMaxTokens, getModelMaxOutputTokens, matchModelName } = require('~/utils');
const { spendTokens, spendStructuredTokens } = require('~/models/spendTokens');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const { encodeAndFormatDocuments } = require('~/server/services/Files/documents');
const { sleep } = require('~/server/utils');
const BaseClient = require('./BaseClient');
const { logger } = require('~/config');
@@ -312,6 +313,33 @@ class AnthropicClient extends BaseClient {
return files;
}
async addDocuments(message, attachments) {
// Only process documents
const documentResult = await encodeAndFormatDocuments(
this.options.req,
attachments,
EModelEndpoint.anthropic,
);
message.documents =
documentResult.documents && documentResult.documents.length
? documentResult.documents
: undefined;
return documentResult.files;
}
async processAttachments(message, attachments) {
// Process both images and documents
const [imageFiles, documentFiles] = await Promise.all([
this.addImageURLs(message, attachments),
this.addDocuments(message, attachments),
]);
// Combine files from both processors
return [...imageFiles, ...documentFiles];
}
/**
* @param {object} params
* @param {number} params.promptTokens
@@ -382,7 +410,7 @@ class AnthropicClient extends BaseClient {
};
}
- const files = await this.addImageURLs(latestMessage, attachments);
+ const files = await this.processAttachments(latestMessage, attachments);
this.options.attachments = files;
}
@@ -941,7 +969,7 @@ class AnthropicClient extends BaseClient {
const content = `<conversation_context>
${convo}
</conversation_context>
Please generate a title for this conversation.`;
const titleMessage = { role: 'user', content };
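
Note on the new document flow: addDocuments stores whatever encodeAndFormatDocuments returns on message.documents, and the exact part shape is not shown in this diff. A sketch of what an Anthropic-style base64 PDF part plausibly looks like (field names assumed from Anthropic's document-content format, not confirmed by this code):

const fs = require('fs');

// Hypothetical document content part; the real one is produced by
// encodeAndFormatDocuments(req, attachments, EModelEndpoint.anthropic).
const documentPart = {
  type: 'document',
  source: {
    type: 'base64',
    media_type: 'application/pdf',
    data: fs.readFileSync('sample.pdf').toString('base64'),
  },
};

console.log(documentPart.type); // 'document'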

View File

@@ -1233,7 +1233,7 @@ class BaseClient {
{},
);
- await this.addImageURLs(message, files, this.visionMode);
+ await this.processAttachments(message, files, this.visionMode);
this.message_file_map[message.messageId] = files;
return message;

View File

@@ -268,7 +268,7 @@ class GoogleClient extends BaseClient {
const formattedMessages = [];
const attachments = await this.options.attachments;
const latestMessage = { ...messages[messages.length - 1] };
- const files = await this.addImageURLs(latestMessage, attachments, VisionModes.generative);
+ const files = await this.processAttachments(latestMessage, attachments, VisionModes.generative);
this.options.attachments = files;
messages[messages.length - 1] = latestMessage;
@@ -312,6 +312,20 @@ class GoogleClient extends BaseClient {
return files;
}
// eslint-disable-next-line no-unused-vars
async addDocuments(message, attachments) {
// GoogleClient doesn't support document processing yet
// Return empty results for consistency
return [];
}
async processAttachments(message, attachments, mode = '') {
// For GoogleClient, only process images
const imageFiles = await this.addImageURLs(message, attachments, mode);
const documentFiles = await this.addDocuments(message, attachments);
return [...imageFiles, ...documentFiles];
}
/**
* Builds the augmented prompt for attachments
* TODO: Add File API Support
@@ -345,7 +359,7 @@ class GoogleClient extends BaseClient {
const { prompt } = await this.buildMessagesPrompt(messages, parentMessageId);
- const files = await this.addImageURLs(latestMessage, attachments);
+ const files = await this.processAttachments(latestMessage, attachments);
this.options.attachments = files;

View File

@@ -372,6 +372,19 @@ class OpenAIClient extends BaseClient {
return files;
}
async addDocuments(message, attachments) {
// OpenAI doesn't support native document processing yet
// Return empty results for consistency
return [];
}
async processAttachments(message, attachments) {
// For OpenAI, only process images
const imageFiles = await this.addImageURLs(message, attachments);
const documentFiles = await this.addDocuments(message, attachments);
return [...imageFiles, ...documentFiles];
}
async buildMessages(messages, parentMessageId, { promptPrefix = null }, opts) {
let orderedMessages = this.constructor.getMessagesForConversation({
messages,
@@ -400,7 +413,7 @@ class OpenAIClient extends BaseClient {
};
}
- const files = await this.addImageURLs(
+ const files = await this.processAttachments(
orderedMessages[orderedMessages.length - 1],
attachments,
);
@@ -1222,7 +1235,9 @@ ${convo}
}
if (this.isOmni === true && modelOptions.max_tokens != null) {
- modelOptions.max_completion_tokens = modelOptions.max_tokens;
+ const paramName =
+ modelOptions.useResponsesApi === true ? 'max_output_tokens' : 'max_completion_tokens';
+ modelOptions[paramName] = modelOptions.max_tokens;
delete modelOptions.max_tokens;
}
if (this.isOmni === true && modelOptions.temperature != null) {
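
The omni-model branch above now picks the token parameter by target API. A self-contained sketch of that mapping (the helper name is ours; the client does this inline):

// Omni models take max_completion_tokens (Chat Completions) or
// max_output_tokens (Responses API) instead of max_tokens.
function mapOmniTokenParam(modelOptions) {
  if (modelOptions.max_tokens == null) {
    return modelOptions;
  }
  const paramName =
    modelOptions.useResponsesApi === true ? 'max_output_tokens' : 'max_completion_tokens';
  modelOptions[paramName] = modelOptions.max_tokens;
  delete modelOptions.max_tokens;
  return modelOptions;
}

console.log(mapOmniTokenParam({ max_tokens: 1024 }));
// => { max_completion_tokens: 1024 }
console.log(mapOmniTokenParam({ max_tokens: 1024, useResponsesApi: true }));
// => { useResponsesApi: true, max_output_tokens: 1024 }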

View File

@@ -3,24 +3,61 @@ const { EModelEndpoint, ContentTypes } = require('librechat-data-provider');
const { HumanMessage, AIMessage, SystemMessage } = require('@langchain/core/messages');
/**
- * Formats a message to OpenAI Vision API payload format.
+ * Formats a message with document attachments for specific endpoints.
*
* @param {Object} params - The parameters for formatting.
* @param {Object} params.message - The message object to format.
* @param {string} [params.message.role] - The role of the message sender (must be 'user').
* @param {string} [params.message.content] - The text content of the message.
* @param {Array<Object>} [params.documents] - The document attachments for the message.
* @param {EModelEndpoint} [params.endpoint] - Identifier for specific endpoint handling
* @returns {(Object)} - The formatted message.
*/
const formatDocumentMessage = ({ message, documents, endpoint }) => {
const contentParts = [];
// Add documents first (for Anthropic PDFs)
if (documents && documents.length > 0) {
contentParts.push(...documents);
}
// Add text content
contentParts.push({ type: ContentTypes.TEXT, text: message.content });
if (endpoint === EModelEndpoint.anthropic) {
message.content = contentParts;
return message;
}
// For other endpoints, might need different handling
message.content = contentParts;
return message;
};
/**
* Formats a message with vision capabilities (image_urls) for specific endpoints.
*
* @param {Object} params - The parameters for formatting.
* @param {Object} params.message - The message object to format.
* @param {Array<string>} [params.image_urls] - The image_urls to attach to the message.
* @param {EModelEndpoint} [params.endpoint] - Identifier for specific endpoint handling
* @returns {(Object)} - The formatted message.
*/
const formatVisionMessage = ({ message, image_urls, endpoint }) => {
const contentParts = [];
// Add images
if (image_urls && image_urls.length > 0) {
contentParts.push(...image_urls);
}
// Add text content
contentParts.push({ type: ContentTypes.TEXT, text: message.content });
if (endpoint === EModelEndpoint.anthropic) {
- message.content = [...image_urls, { type: ContentTypes.TEXT, text: message.content }];
+ message.content = contentParts;
return message;
}
message.content = [{ type: ContentTypes.TEXT, text: message.content }, ...image_urls];
return message;
};
@@ -58,7 +95,18 @@ const formatMessage = ({ message, userName, assistantName, endpoint, langChain =
content,
};
- const { image_urls } = message;
+ const { image_urls, documents } = message;
// Handle documents
if (Array.isArray(documents) && documents.length > 0 && role === 'user') {
return formatDocumentMessage({
message: formattedMessage,
documents: message.documents,
endpoint,
});
}
// Handle images
if (Array.isArray(image_urls) && image_urls.length > 0 && role === 'user') {
return formatVisionMessage({
message: formattedMessage,
@@ -146,7 +194,21 @@ const formatAgentMessages = (payload) => {
message.content = [{ type: ContentTypes.TEXT, [ContentTypes.TEXT]: message.content }];
}
if (message.role !== 'assistant') {
- messages.push(formatMessage({ message, langChain: true }));
// Check if message has documents and preserve array structure
const hasDocuments =
Array.isArray(message.content) &&
message.content.some((part) => part && part.type === 'document');
if (hasDocuments && message.role === 'user') {
// For user messages with documents, create HumanMessage directly with array content
messages.push(new HumanMessage({ content: message.content }));
} else if (hasDocuments && message.role === 'system') {
// For system messages with documents, create SystemMessage directly with array content
messages.push(new SystemMessage({ content: message.content }));
} else {
// Use regular formatting for messages without documents
messages.push(formatMessage({ message, langChain: true }));
}
continue;
}
@@ -239,6 +301,8 @@ const formatAgentMessages = (payload) => {
module.exports = {
formatMessage,
formatDocumentMessage,
formatVisionMessage,
formatFromLangChain,
formatAgentMessages,
formatLangChainMessages,
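
A usage sketch of the new document path, assuming the module path and an Anthropic-style base64 document part (both assumptions; the real parts come from encodeAndFormatDocuments upstream):

const { formatMessage } = require('./formatMessages'); // path assumed
const { EModelEndpoint } = require('librechat-data-provider');

const message = {
  role: 'user',
  content: 'Summarize the attached PDF.',
  documents: [
    { type: 'document', source: { type: 'base64', media_type: 'application/pdf', data: '<base64>' } },
  ],
};

// Documents are pushed into the content parts first, then the text part,
// so Anthropic receives [document, { type: 'text', ... }].
const formatted = formatMessage({ message, endpoint: EModelEndpoint.anthropic });
console.log(Array.isArray(formatted.content)); // true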

View File

@@ -3,8 +3,8 @@ const path = require('path');
const OpenAI = require('openai');
const fetch = require('node-fetch');
const { v4: uuidv4 } = require('uuid');
+ const { ProxyAgent } = require('undici');
const { Tool } = require('@langchain/core/tools');
- const { HttpsProxyAgent } = require('https-proxy-agent');
const { FileContext, ContentTypes } = require('librechat-data-provider');
const { getImageBasename } = require('~/server/services/Files/images');
const extractBaseURL = require('~/utils/extractBaseURL');
@@ -46,7 +46,10 @@ class DALLE3 extends Tool {
}
if (process.env.PROXY) {
- config.httpAgent = new HttpsProxyAgent(process.env.PROXY);
+ const proxyAgent = new ProxyAgent(process.env.PROXY);
config.fetchOptions = {
dispatcher: proxyAgent,
};
}
/** @type {OpenAI} */
@@ -163,7 +166,8 @@ Error Message: ${error.message}`);
if (this.isAgent) {
let fetchOptions = {};
if (process.env.PROXY) {
- fetchOptions.agent = new HttpsProxyAgent(process.env.PROXY);
+ const proxyAgent = new ProxyAgent(process.env.PROXY);
fetchOptions.dispatcher = proxyAgent;
}
const imageResponse = await fetch(theImageUrl, fetchOptions);
const arrayBuffer = await imageResponse.arrayBuffer();
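
Both changes swap https-proxy-agent for undici's ProxyAgent. A minimal sketch of the pattern, assuming an OpenAI SDK version whose built-in fetch honors fetchOptions:

const OpenAI = require('openai');
const { ProxyAgent } = require('undici');

const config = { apiKey: process.env.OPENAI_API_KEY };
if (process.env.PROXY) {
  // undici's fetch routes requests through whatever dispatcher is supplied;
  // ProxyAgent forwards them to the configured HTTP(S) proxy.
  config.fetchOptions = { dispatcher: new ProxyAgent(process.env.PROXY) };
}
const openai = new OpenAI(config);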

View File

@@ -3,10 +3,10 @@ const axios = require('axios');
const { v4 } = require('uuid');
const OpenAI = require('openai');
const FormData = require('form-data');
+ const { ProxyAgent } = require('undici');
const { tool } = require('@langchain/core/tools');
const { logAxiosError } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
- const { HttpsProxyAgent } = require('https-proxy-agent');
const { ContentTypes, EImageOutputType } = require('librechat-data-provider');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { extractBaseURL } = require('~/utils');
@@ -189,7 +189,10 @@ function createOpenAIImageTools(fields = {}) {
}
const clientConfig = { ...closureConfig };
if (process.env.PROXY) {
- clientConfig.httpAgent = new HttpsProxyAgent(process.env.PROXY);
+ const proxyAgent = new ProxyAgent(process.env.PROXY);
clientConfig.fetchOptions = {
dispatcher: proxyAgent,
};
}
/** @type {OpenAI} */
@@ -335,7 +338,10 @@ Error Message: ${error.message}`);
const clientConfig = { ...closureConfig };
if (process.env.PROXY) {
- clientConfig.httpAgent = new HttpsProxyAgent(process.env.PROXY);
+ const proxyAgent = new ProxyAgent(process.env.PROXY);
clientConfig.fetchOptions = {
dispatcher: proxyAgent,
};
}
const formData = new FormData();
@@ -447,6 +453,19 @@ Error Message: ${error.message}`);
baseURL,
};
if (process.env.PROXY) {
try {
const url = new URL(process.env.PROXY);
axiosConfig.proxy = {
host: url.hostname.replace(/^\[|\]$/g, ''),
port: url.port ? parseInt(url.port, 10) : undefined,
protocol: url.protocol.replace(':', ''),
};
} catch (error) {
logger.error('Error parsing proxy URL:', error);
}
}
if (process.env.IMAGE_GEN_OAI_AZURE_API_VERSION && process.env.IMAGE_GEN_OAI_BASEURL) {
axiosConfig.params = {
'api-version': process.env.IMAGE_GEN_OAI_AZURE_API_VERSION,

View File

@@ -0,0 +1,94 @@
const DALLE3 = require('../DALLE3');
const { ProxyAgent } = require('undici');
const processFileURL = jest.fn();
jest.mock('~/server/services/Files/images', () => ({
getImageBasename: jest.fn().mockImplementation((url) => {
const parts = url.split('/');
const lastPart = parts.pop();
const imageExtensionRegex = /\.(jpg|jpeg|png|gif|bmp|tiff|svg)$/i;
if (imageExtensionRegex.test(lastPart)) {
return lastPart;
}
return '';
}),
}));
jest.mock('fs', () => {
return {
existsSync: jest.fn(),
mkdirSync: jest.fn(),
promises: {
writeFile: jest.fn(),
readFile: jest.fn(),
unlink: jest.fn(),
},
};
});
jest.mock('path', () => {
return {
resolve: jest.fn(),
join: jest.fn(),
relative: jest.fn(),
extname: jest.fn().mockImplementation((filename) => {
return filename.slice(filename.lastIndexOf('.'));
}),
};
});
describe('DALLE3 Proxy Configuration', () => {
let originalEnv;
beforeAll(() => {
originalEnv = { ...process.env };
});
beforeEach(() => {
jest.resetModules();
process.env = { ...originalEnv };
});
afterEach(() => {
process.env = originalEnv;
});
it('should configure ProxyAgent in fetchOptions.dispatcher when PROXY env is set', () => {
// Set proxy environment variable
process.env.PROXY = 'http://proxy.example.com:8080';
process.env.DALLE_API_KEY = 'test-api-key';
// Create instance
const dalleWithProxy = new DALLE3({ processFileURL });
// Check that the openai client exists
expect(dalleWithProxy.openai).toBeDefined();
// Check that _options exists and has fetchOptions with a dispatcher
expect(dalleWithProxy.openai._options).toBeDefined();
expect(dalleWithProxy.openai._options.fetchOptions).toBeDefined();
expect(dalleWithProxy.openai._options.fetchOptions.dispatcher).toBeDefined();
expect(dalleWithProxy.openai._options.fetchOptions.dispatcher).toBeInstanceOf(ProxyAgent);
});
it('should not configure ProxyAgent when PROXY env is not set', () => {
// Ensure PROXY is not set
delete process.env.PROXY;
process.env.DALLE_API_KEY = 'test-api-key';
// Create instance
const dalleWithoutProxy = new DALLE3({ processFileURL });
// Check that the openai client exists
expect(dalleWithoutProxy.openai).toBeDefined();
// Check that _options exists but fetchOptions either doesn't exist or doesn't have a dispatcher
expect(dalleWithoutProxy.openai._options).toBeDefined();
// fetchOptions should either not exist or not have a dispatcher
if (dalleWithoutProxy.openai._options.fetchOptions) {
expect(dalleWithoutProxy.openai._options.fetchOptions.dispatcher).toBeUndefined();
}
});
});

View File

@@ -230,7 +230,7 @@ const loadTools = async ({
/** @type {Record<string, string>} */
const toolContextMap = {};
- const appTools = (await getCachedTools({ includeGlobal: true })) ?? {};
+ const cachedTools = (await getCachedTools({ userId: user, includeGlobal: true })) ?? {};
for (const tool of tools) {
if (tool === Tools.execute_code) {
@@ -298,7 +298,7 @@ Current Date & Time: ${replaceSpecialVars({ text: '{{iso_datetime}}' })}
});
};
continue;
- } else if (tool && appTools[tool] && mcpToolPattern.test(tool)) {
+ } else if (tool && cachedTools && mcpToolPattern.test(tool)) {
requestedTools[tool] = async () =>
createMCPTool({
req: options.req,

View File

@@ -1,5 +1,6 @@
const fs = require('fs');
const { math, isEnabled } = require('@librechat/api');
const { CacheKeys } = require('librechat-data-provider');
// To ensure that different deployments do not interfere with each other's cache, we use a prefix for the Redis keys.
// This prefix is usually the deployment ID, which is often passed to the container or pod as an env var.
@@ -15,7 +16,26 @@ if (USE_REDIS && !process.env.REDIS_URI) {
throw new Error('USE_REDIS is enabled but REDIS_URI is not set.');
}
// Comma-separated list of cache namespaces that should be forced to use in-memory storage
// even when Redis is enabled. This allows selective performance optimization for specific caches.
const FORCED_IN_MEMORY_CACHE_NAMESPACES = process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES
? process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES.split(',').map((key) => key.trim())
: [];
// Validate against CacheKeys enum
if (FORCED_IN_MEMORY_CACHE_NAMESPACES.length > 0) {
const validKeys = Object.values(CacheKeys);
const invalidKeys = FORCED_IN_MEMORY_CACHE_NAMESPACES.filter((key) => !validKeys.includes(key));
if (invalidKeys.length > 0) {
throw new Error(
`Invalid cache keys in FORCED_IN_MEMORY_CACHE_NAMESPACES: ${invalidKeys.join(', ')}. Valid keys: ${validKeys.join(', ')}`,
);
}
}
const cacheConfig = {
FORCED_IN_MEMORY_CACHE_NAMESPACES,
USE_REDIS,
REDIS_URI: process.env.REDIS_URI,
REDIS_USERNAME: process.env.REDIS_USERNAME,
@@ -23,6 +43,15 @@ const cacheConfig = {
REDIS_CA: process.env.REDIS_CA ? fs.readFileSync(process.env.REDIS_CA, 'utf8') : null,
REDIS_KEY_PREFIX: process.env[REDIS_KEY_PREFIX_VAR] || REDIS_KEY_PREFIX || '',
REDIS_MAX_LISTENERS: math(process.env.REDIS_MAX_LISTENERS, 40),
REDIS_PING_INTERVAL: math(process.env.REDIS_PING_INTERVAL, 0),
/** Max delay between reconnection attempts in ms */
REDIS_RETRY_MAX_DELAY: math(process.env.REDIS_RETRY_MAX_DELAY, 3000),
/** Max number of reconnection attempts (0 = infinite) */
REDIS_RETRY_MAX_ATTEMPTS: math(process.env.REDIS_RETRY_MAX_ATTEMPTS, 10),
/** Connection timeout in ms */
REDIS_CONNECT_TIMEOUT: math(process.env.REDIS_CONNECT_TIMEOUT, 10000),
/** Queue commands when disconnected */
REDIS_ENABLE_OFFLINE_QUEUE: isEnabled(process.env.REDIS_ENABLE_OFFLINE_QUEUE ?? 'true'),
CI: isEnabled(process.env.CI),
DEBUG_MEMORY_CACHE: isEnabled(process.env.DEBUG_MEMORY_CACHE),
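
A usage sketch for the new override, using cache keys the tests below treat as valid (ROLES, STATIC_CONFIG); the env var must be set before the module is first required, since it is read at load time:

process.env.USE_REDIS = 'true';
process.env.REDIS_URI = 'redis://localhost:6379';
process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES = 'ROLES, STATIC_CONFIG';

const { cacheConfig } = require('./cacheConfig'); // path assumed
console.log(cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES); // ['ROLES', 'STATIC_CONFIG']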

View File

@@ -14,6 +14,8 @@ describe('cacheConfig', () => {
delete process.env.REDIS_KEY_PREFIX_VAR;
delete process.env.REDIS_KEY_PREFIX;
delete process.env.USE_REDIS;
delete process.env.REDIS_PING_INTERVAL;
delete process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES;
// Clear require cache
jest.resetModules();
@@ -105,4 +107,51 @@ describe('cacheConfig', () => {
expect(cacheConfig.REDIS_CA).toBeNull();
});
});
describe('REDIS_PING_INTERVAL configuration', () => {
test('should default to 0 when REDIS_PING_INTERVAL is not set', () => {
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_PING_INTERVAL).toBe(0);
});
test('should use provided REDIS_PING_INTERVAL value', () => {
process.env.REDIS_PING_INTERVAL = '300';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_PING_INTERVAL).toBe(300);
});
});
describe('FORCED_IN_MEMORY_CACHE_NAMESPACES validation', () => {
test('should parse comma-separated cache keys correctly', () => {
process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES = ' ROLES, STATIC_CONFIG ,MESSAGES ';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES).toEqual([
'ROLES',
'STATIC_CONFIG',
'MESSAGES',
]);
});
test('should throw error for invalid cache keys', () => {
process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES = 'INVALID_KEY,ROLES';
expect(() => {
require('./cacheConfig');
}).toThrow('Invalid cache keys in FORCED_IN_MEMORY_CACHE_NAMESPACES: INVALID_KEY');
});
test('should handle empty string gracefully', () => {
process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES = '';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES).toEqual([]);
});
test('should handle undefined env var gracefully', () => {
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES).toEqual([]);
});
});
});

View File

@@ -1,12 +1,13 @@
const KeyvRedis = require('@keyv/redis').default;
const { Keyv } = require('keyv');
- const { cacheConfig } = require('./cacheConfig');
- const { keyvRedisClient, ioredisClient, GLOBAL_PREFIX_SEPARATOR } = require('./redisClients');
- const { RedisStore } = require('rate-limit-redis');
const { Time } = require('librechat-data-provider');
+ const { logger } = require('@librechat/data-schemas');
const { RedisStore: ConnectRedis } = require('connect-redis');
const MemoryStore = require('memorystore')(require('express-session'));
+ const { keyvRedisClient, ioredisClient, GLOBAL_PREFIX_SEPARATOR } = require('./redisClients');
+ const { cacheConfig } = require('./cacheConfig');
const { violationFile } = require('./keyvFiles');
+ const { RedisStore } = require('rate-limit-redis');
/**
* Creates a cache instance using Redis or a fallback store. Suitable for general caching needs.
@@ -16,12 +17,25 @@ const { RedisStore } = require('rate-limit-redis');
* @returns {Keyv} Cache instance.
*/
const standardCache = (namespace, ttl = undefined, fallbackStore = undefined) => {
- if (cacheConfig.USE_REDIS) {
- const keyvRedis = new KeyvRedis(keyvRedisClient);
- const cache = new Keyv(keyvRedis, { namespace, ttl });
- keyvRedis.namespace = cacheConfig.REDIS_KEY_PREFIX;
- keyvRedis.keyPrefixSeparator = GLOBAL_PREFIX_SEPARATOR;
- return cache;
if (
cacheConfig.USE_REDIS &&
!cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES?.includes(namespace)
) {
try {
const keyvRedis = new KeyvRedis(keyvRedisClient);
const cache = new Keyv(keyvRedis, { namespace, ttl });
keyvRedis.namespace = cacheConfig.REDIS_KEY_PREFIX;
keyvRedis.keyPrefixSeparator = GLOBAL_PREFIX_SEPARATOR;
cache.on('error', (err) => {
logger.error(`Cache error in namespace ${namespace}:`, err);
});
return cache;
} catch (err) {
logger.error(`Failed to create Redis cache for namespace ${namespace}:`, err);
throw err;
}
}
if (fallbackStore) return new Keyv({ store: fallbackStore, namespace, ttl });
return new Keyv({ namespace, ttl });
@@ -47,7 +61,13 @@ const violationCache = (namespace, ttl = undefined) => {
const sessionCache = (namespace, ttl = undefined) => {
namespace = namespace.endsWith(':') ? namespace : `${namespace}:`;
if (!cacheConfig.USE_REDIS) return new MemoryStore({ ttl, checkPeriod: Time.ONE_DAY });
- return new ConnectRedis({ client: ioredisClient, ttl, prefix: namespace });
const store = new ConnectRedis({ client: ioredisClient, ttl, prefix: namespace });
if (ioredisClient) {
ioredisClient.on('error', (err) => {
logger.error(`Session store Redis error for namespace ${namespace}:`, err);
});
}
return store;
};
/**
@@ -59,8 +79,30 @@ const limiterCache = (prefix) => {
if (!prefix) throw new Error('prefix is required');
if (!cacheConfig.USE_REDIS) return undefined;
prefix = prefix.endsWith(':') ? prefix : `${prefix}:`;
- return new RedisStore({ sendCommand, prefix });
try {
if (!ioredisClient) {
logger.warn(`Redis client not available for rate limiter with prefix ${prefix}`);
return undefined;
}
return new RedisStore({ sendCommand, prefix });
} catch (err) {
logger.error(`Failed to create Redis rate limiter for prefix ${prefix}:`, err);
return undefined;
}
};
const sendCommand = (...args) => {
if (!ioredisClient) {
logger.warn('Redis client not available for command execution');
return Promise.reject(new Error('Redis client not available'));
}
return ioredisClient.call(...args).catch((err) => {
logger.error('Redis command execution failed:', err);
throw err;
});
};
- const sendCommand = (...args) => ioredisClient?.call(...args);
module.exports = { standardCache, sessionCache, violationCache, limiterCache };
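
With the override in place, standardCache now skips Redis for the listed namespaces. A sketch of the consumer side (namespace and TTL values mirror this factory's callers):

const { CacheKeys, Time } = require('librechat-data-provider');
const { standardCache } = require('./cacheFactory'); // path assumed

// In-memory Keyv if CacheKeys.ROLES is forced in-memory; Redis-backed otherwise.
const rolesCache = standardCache(CacheKeys.ROLES, Time.TEN_MINUTES);

async function demo() {
  await rolesCache.set('user123', ['ADMIN']);
  console.log(await rolesCache.get('user123')); // ['ADMIN']
}
demo().catch(console.error);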

View File

@@ -6,13 +6,17 @@ const mockKeyvRedis = {
keyPrefixSeparator: '',
};
- const mockKeyv = jest.fn().mockReturnValue({ mock: 'keyv' });
const mockKeyv = jest.fn().mockReturnValue({
mock: 'keyv',
on: jest.fn(),
});
const mockConnectRedis = jest.fn().mockReturnValue({ mock: 'connectRedis' });
const mockMemoryStore = jest.fn().mockReturnValue({ mock: 'memoryStore' });
const mockRedisStore = jest.fn().mockReturnValue({ mock: 'redisStore' });
const mockIoredisClient = {
call: jest.fn(),
on: jest.fn(),
};
const mockKeyvRedisClient = {};
@@ -31,6 +35,7 @@ jest.mock('./cacheConfig', () => ({
cacheConfig: {
USE_REDIS: false,
REDIS_KEY_PREFIX: 'test',
FORCED_IN_MEMORY_CACHE_NAMESPACES: [],
},
}));
@@ -52,6 +57,14 @@ jest.mock('rate-limit-redis', () => ({
RedisStore: mockRedisStore,
}));
jest.mock('@librechat/data-schemas', () => ({
logger: {
error: jest.fn(),
warn: jest.fn(),
info: jest.fn(),
},
}));
// Import after mocking
const { standardCache, sessionCache, violationCache, limiterCache } = require('./cacheFactory');
const { cacheConfig } = require('./cacheConfig');
@@ -63,6 +76,7 @@ describe('cacheFactory', () => {
// Reset cache config mock
cacheConfig.USE_REDIS = false;
cacheConfig.REDIS_KEY_PREFIX = 'test';
cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES = [];
});
describe('redisCache', () => {
@@ -116,6 +130,52 @@ describe('cacheFactory', () => {
expect(mockKeyv).toHaveBeenCalledWith({ namespace: undefined, ttl: undefined });
});
it('should use fallback when namespace is in FORCED_IN_MEMORY_CACHE_NAMESPACES', () => {
cacheConfig.USE_REDIS = true;
cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES = ['forced-memory'];
const namespace = 'forced-memory';
const ttl = 3600;
standardCache(namespace, ttl);
expect(require('@keyv/redis').default).not.toHaveBeenCalled();
expect(mockKeyv).toHaveBeenCalledWith({ namespace, ttl });
});
it('should use Redis when namespace is not in FORCED_IN_MEMORY_CACHE_NAMESPACES', () => {
cacheConfig.USE_REDIS = true;
cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES = ['other-namespace'];
const namespace = 'test-namespace';
const ttl = 3600;
standardCache(namespace, ttl);
expect(require('@keyv/redis').default).toHaveBeenCalledWith(mockKeyvRedisClient);
expect(mockKeyv).toHaveBeenCalledWith(mockKeyvRedis, { namespace, ttl });
});
it('should throw error when Redis cache creation fails', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'test-namespace';
const ttl = 3600;
const testError = new Error('Redis connection failed');
const KeyvRedis = require('@keyv/redis').default;
KeyvRedis.mockImplementationOnce(() => {
throw testError;
});
expect(() => standardCache(namespace, ttl)).toThrow('Redis connection failed');
const { logger } = require('@librechat/data-schemas');
expect(logger.error).toHaveBeenCalledWith(
`Failed to create Redis cache for namespace ${namespace}:`,
testError,
);
expect(mockKeyv).not.toHaveBeenCalled();
});
});
describe('violationCache', () => {
@@ -207,6 +267,86 @@ describe('cacheFactory', () => {
checkPeriod: Time.ONE_DAY,
});
});
it('should throw error when ConnectRedis constructor fails', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
const ttl = 86400;
// Mock ConnectRedis to throw an error during construction
const redisError = new Error('Redis connection failed');
mockConnectRedis.mockImplementationOnce(() => {
throw redisError;
});
// The error should propagate up, not be caught
expect(() => sessionCache(namespace, ttl)).toThrow('Redis connection failed');
// Verify that MemoryStore was NOT used as fallback
expect(mockMemoryStore).not.toHaveBeenCalled();
});
it('should register error handler but let errors propagate to Express', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
// Create a mock session store with middleware methods
const mockSessionStore = {
get: jest.fn(),
set: jest.fn(),
destroy: jest.fn(),
};
mockConnectRedis.mockReturnValue(mockSessionStore);
const store = sessionCache(namespace);
// Verify error handler was registered
expect(mockIoredisClient.on).toHaveBeenCalledWith('error', expect.any(Function));
// Get the error handler
const errorHandler = mockIoredisClient.on.mock.calls.find((call) => call[0] === 'error')[1];
// Simulate an error from Redis during a session operation
const redisError = new Error('Socket closed unexpectedly');
// The error handler should log but not swallow the error
const { logger } = require('@librechat/data-schemas');
errorHandler(redisError);
expect(logger.error).toHaveBeenCalledWith(
`Session store Redis error for namespace ${namespace}::`,
redisError,
);
// Now simulate what happens when session middleware tries to use the store
const callback = jest.fn();
mockSessionStore.get.mockImplementation((sid, cb) => {
cb(new Error('Redis connection lost'));
});
// Call the store's get method (as Express session would)
store.get('test-session-id', callback);
// The error should be passed to the callback, not swallowed
expect(callback).toHaveBeenCalledWith(new Error('Redis connection lost'));
});
it('should handle null ioredisClient gracefully', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
// Temporarily set ioredisClient to null (simulating connection not established)
const originalClient = require('./redisClients').ioredisClient;
require('./redisClients').ioredisClient = null;
// ConnectRedis might accept null client but would fail on first use
// The important thing is it doesn't throw uncaught exceptions during construction
const store = sessionCache(namespace);
expect(store).toBeDefined();
// Restore original client
require('./redisClients').ioredisClient = originalClient;
});
});
describe('limiterCache', () => {
@@ -248,8 +388,10 @@ describe('cacheFactory', () => {
});
});
- it('should pass sendCommand function that calls ioredisClient.call', () => {
+ it('should pass sendCommand function that calls ioredisClient.call', async () => {
cacheConfig.USE_REDIS = true;
mockIoredisClient.call.mockResolvedValue('test-value');
limiterCache('rate-limit');
const sendCommandCall = mockRedisStore.mock.calls[0][0];
@@ -257,9 +399,29 @@ describe('cacheFactory', () => {
// Test that sendCommand properly delegates to ioredisClient.call
const args = ['GET', 'test-key'];
- sendCommand(...args);
+ const result = await sendCommand(...args);
expect(mockIoredisClient.call).toHaveBeenCalledWith(...args);
expect(result).toBe('test-value');
});
it('should handle sendCommand errors properly', async () => {
cacheConfig.USE_REDIS = true;
// Mock the call method to reject with an error
const testError = new Error('Redis error');
mockIoredisClient.call.mockRejectedValue(testError);
limiterCache('rate-limit');
const sendCommandCall = mockRedisStore.mock.calls[0][0];
const sendCommand = sendCommandCall.sendCommand;
// Test that sendCommand properly handles errors
const args = ['GET', 'test-key'];
await expect(sendCommand(...args)).rejects.toThrow('Redis error');
expect(mockIoredisClient.call).toHaveBeenCalledWith(...args);
});
it('should handle undefined prefix', () => {

View File

@@ -33,6 +33,7 @@ const namespaces = {
[CacheKeys.ROLES]: standardCache(CacheKeys.ROLES),
[CacheKeys.MCP_TOOLS]: standardCache(CacheKeys.MCP_TOOLS),
[CacheKeys.CONFIG_STORE]: standardCache(CacheKeys.CONFIG_STORE),
[CacheKeys.STATIC_CONFIG]: standardCache(CacheKeys.STATIC_CONFIG),
[CacheKeys.PENDING_REQ]: standardCache(CacheKeys.PENDING_REQ),
[CacheKeys.ENCODED_DOMAINS]: new Keyv({ store: keyvMongo, namespace: CacheKeys.ENCODED_DOMAINS }),
[CacheKeys.ABORT_KEYS]: standardCache(CacheKeys.ABORT_KEYS, Time.TEN_MINUTES),

View File

@@ -1,6 +1,7 @@
const IoRedis = require('ioredis');
- const { cacheConfig } = require('./cacheConfig');
+ const { logger } = require('@librechat/data-schemas');
const { createClient, createCluster } = require('@keyv/redis');
+ const { cacheConfig } = require('./cacheConfig');
const GLOBAL_PREFIX_SEPARATOR = '::';
@@ -12,31 +13,136 @@ const ca = cacheConfig.REDIS_CA;
/** @type {import('ioredis').Redis | import('ioredis').Cluster | null} */
let ioredisClient = null;
if (cacheConfig.USE_REDIS) {
/** @type {import('ioredis').RedisOptions | import('ioredis').ClusterOptions} */
const redisOptions = {
username: username,
password: password,
tls: ca ? { ca } : undefined,
keyPrefix: `${cacheConfig.REDIS_KEY_PREFIX}${GLOBAL_PREFIX_SEPARATOR}`,
maxListeners: cacheConfig.REDIS_MAX_LISTENERS,
retryStrategy: (times) => {
if (
cacheConfig.REDIS_RETRY_MAX_ATTEMPTS > 0 &&
times > cacheConfig.REDIS_RETRY_MAX_ATTEMPTS
) {
logger.error(
`ioredis giving up after ${cacheConfig.REDIS_RETRY_MAX_ATTEMPTS} reconnection attempts`,
);
return null;
}
const delay = Math.min(times * 50, cacheConfig.REDIS_RETRY_MAX_DELAY);
logger.info(`ioredis reconnecting... attempt ${times}, delay ${delay}ms`);
return delay;
},
reconnectOnError: (err) => {
const targetError = 'READONLY';
if (err.message.includes(targetError)) {
logger.warn('ioredis reconnecting due to READONLY error');
return true;
}
return false;
},
enableOfflineQueue: cacheConfig.REDIS_ENABLE_OFFLINE_QUEUE,
connectTimeout: cacheConfig.REDIS_CONNECT_TIMEOUT,
maxRetriesPerRequest: 3,
};
ioredisClient =
urls.length === 1
? new IoRedis(cacheConfig.REDIS_URI, redisOptions)
- : new IoRedis.Cluster(cacheConfig.REDIS_URI, { redisOptions });
+ : new IoRedis.Cluster(cacheConfig.REDIS_URI, {
redisOptions,
clusterRetryStrategy: (times) => {
if (
cacheConfig.REDIS_RETRY_MAX_ATTEMPTS > 0 &&
times > cacheConfig.REDIS_RETRY_MAX_ATTEMPTS
) {
logger.error(
`ioredis cluster giving up after ${cacheConfig.REDIS_RETRY_MAX_ATTEMPTS} reconnection attempts`,
);
return null;
}
const delay = Math.min(times * 100, cacheConfig.REDIS_RETRY_MAX_DELAY);
logger.info(`ioredis cluster reconnecting... attempt ${times}, delay ${delay}ms`);
return delay;
},
enableOfflineQueue: cacheConfig.REDIS_ENABLE_OFFLINE_QUEUE,
});
- // Pinging the Redis server every 5 minutes to keep the connection alive
- const pingInterval = setInterval(() => ioredisClient.ping(), 5 * 60 * 1000);
- ioredisClient.on('close', () => clearInterval(pingInterval));
- ioredisClient.on('end', () => clearInterval(pingInterval));
ioredisClient.on('error', (err) => {
logger.error('ioredis client error:', err);
});
ioredisClient.on('connect', () => {
logger.info('ioredis client connected');
});
ioredisClient.on('ready', () => {
logger.info('ioredis client ready');
});
ioredisClient.on('reconnecting', (delay) => {
logger.info(`ioredis client reconnecting in ${delay}ms`);
});
ioredisClient.on('close', () => {
logger.warn('ioredis client connection closed');
});
/** Ping Interval to keep the Redis server connection alive (if enabled) */
let pingInterval = null;
const clearPingInterval = () => {
if (pingInterval) {
clearInterval(pingInterval);
pingInterval = null;
}
};
if (cacheConfig.REDIS_PING_INTERVAL > 0) {
pingInterval = setInterval(() => {
if (ioredisClient && ioredisClient.status === 'ready') {
ioredisClient.ping().catch((err) => {
logger.error('ioredis ping failed:', err);
});
}
}, cacheConfig.REDIS_PING_INTERVAL * 1000);
ioredisClient.on('close', clearPingInterval);
ioredisClient.on('end', clearPingInterval);
}
}
/** @type {import('@keyv/redis').RedisClient | import('@keyv/redis').RedisCluster | null} */
let keyvRedisClient = null;
if (cacheConfig.USE_REDIS) {
- // ** WARNING ** Keyv Redis client does not support Prefix like ioredis above.
- // The prefix feature will be handled by the Keyv-Redis store in cacheFactory.js
- const redisOptions = { username, password, socket: { tls: ca != null, ca } };
/**
* ** WARNING ** Keyv Redis client does not support Prefix like ioredis above.
* The prefix feature will be handled by the Keyv-Redis store in cacheFactory.js
* @type {import('@keyv/redis').RedisClientOptions | import('@keyv/redis').RedisClusterOptions}
*/
const redisOptions = {
username,
password,
socket: {
tls: ca != null,
ca,
connectTimeout: cacheConfig.REDIS_CONNECT_TIMEOUT,
reconnectStrategy: (retries) => {
if (
cacheConfig.REDIS_RETRY_MAX_ATTEMPTS > 0 &&
retries > cacheConfig.REDIS_RETRY_MAX_ATTEMPTS
) {
logger.error(
`@keyv/redis client giving up after ${cacheConfig.REDIS_RETRY_MAX_ATTEMPTS} reconnection attempts`,
);
return new Error('Max reconnection attempts reached');
}
const delay = Math.min(retries * 100, cacheConfig.REDIS_RETRY_MAX_DELAY);
logger.info(`@keyv/redis reconnecting... attempt ${retries}, delay ${delay}ms`);
return delay;
},
},
disableOfflineQueue: !cacheConfig.REDIS_ENABLE_OFFLINE_QUEUE,
};
keyvRedisClient =
urls.length === 1
@@ -48,10 +154,51 @@ if (cacheConfig.USE_REDIS) {
keyvRedisClient.setMaxListeners(cacheConfig.REDIS_MAX_LISTENERS);
- // Pinging the Redis server every 5 minutes to keep the connection alive
- const keyvPingInterval = setInterval(() => keyvRedisClient.ping(), 5 * 60 * 1000);
- keyvRedisClient.on('disconnect', () => clearInterval(keyvPingInterval));
- keyvRedisClient.on('end', () => clearInterval(keyvPingInterval));
keyvRedisClient.on('error', (err) => {
logger.error('@keyv/redis client error:', err);
});
keyvRedisClient.on('connect', () => {
logger.info('@keyv/redis client connected');
});
keyvRedisClient.on('ready', () => {
logger.info('@keyv/redis client ready');
});
keyvRedisClient.on('reconnecting', () => {
logger.info('@keyv/redis client reconnecting...');
});
keyvRedisClient.on('disconnect', () => {
logger.warn('@keyv/redis client disconnected');
});
keyvRedisClient.connect().catch((err) => {
logger.error('@keyv/redis initial connection failed:', err);
throw err;
});
/** Ping Interval to keep the Redis server connection alive (if enabled) */
let pingInterval = null;
const clearPingInterval = () => {
if (pingInterval) {
clearInterval(pingInterval);
pingInterval = null;
}
};
if (cacheConfig.REDIS_PING_INTERVAL > 0) {
pingInterval = setInterval(() => {
if (keyvRedisClient && keyvRedisClient.isReady) {
keyvRedisClient.ping().catch((err) => {
logger.error('@keyv/redis ping failed:', err);
});
}
}, cacheConfig.REDIS_PING_INTERVAL * 1000);
keyvRedisClient.on('disconnect', clearPingInterval);
keyvRedisClient.on('end', clearPingInterval);
}
}
module.exports = { ioredisClient, keyvRedisClient, GLOBAL_PREFIX_SEPARATOR };
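
The single-client retryStrategy above backs off linearly at 50 ms per attempt (the cluster variant uses 100 ms), capped at REDIS_RETRY_MAX_DELAY. A worked example with the defaults (3000 ms cap, 10 attempts):

const REDIS_RETRY_MAX_DELAY = 3000;
const REDIS_RETRY_MAX_ATTEMPTS = 10;

for (const times of [1, 5, 10, 11]) {
  if (REDIS_RETRY_MAX_ATTEMPTS > 0 && times > REDIS_RETRY_MAX_ATTEMPTS) {
    console.log(`attempt ${times}: give up`); // retryStrategy returns null here
    continue;
  }
  console.log(`attempt ${times}: retry in ${Math.min(times * 50, REDIS_RETRY_MAX_DELAY)} ms`);
}
// attempt 1: 50 ms, attempt 5: 250 ms, attempt 10: 500 ms, attempt 11: give up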

View File

@@ -61,7 +61,7 @@ const getAgent = async (searchParameter) => await Agent.findOne(searchParameter)
const loadEphemeralAgent = async ({ req, agent_id, endpoint, model_parameters: _m }) => {
const { model, ...model_parameters } = _m;
/** @type {Record<string, FunctionTool>} */
- const availableTools = await getCachedTools({ includeGlobal: true });
+ const availableTools = await getCachedTools({ userId: req.user.id, includeGlobal: true });
/** @type {TEphemeralAgent | null} */
const ephemeralAgent = req.body.ephemeralAgent;
const mcpServers = new Set(ephemeralAgent?.mcp);
@@ -316,17 +316,10 @@ const updateAgent = async (searchParameter, updateData, options = {}) => {
if (shouldCreateVersion) {
const duplicateVersion = isDuplicateVersion(updateData, versionData, versions, actionsHash);
if (duplicateVersion && !forceVersion) {
- const error = new Error(
- 'Duplicate version: This would create a version identical to an existing one',
- );
- error.statusCode = 409;
- error.details = {
- duplicateVersion,
- versionIndex: versions.findIndex(
- (v) => JSON.stringify(duplicateVersion) === JSON.stringify(v),
- ),
- };
- throw error;
// No changes detected, return the current agent without creating a new version
const agentObj = currentAgent.toObject();
agentObj.version = versions.length;
return agentObj;
}
}
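
Behavioral note: a duplicate update now resolves with the current agent instead of throwing a 409, as the rewritten tests below verify. A sketch (imports assumed from this module, as in the spec file; requires a connected mongoose instance):

const mongoose = require('mongoose');
const { createAgent, updateAgent, getAgent } = require('./Agent'); // path assumed

async function demo() {
  const author = new mongoose.Types.ObjectId();
  await createAgent({ id: 'agent_demo', provider: 'test', model: 'test-model', author, name: 'A' });
  await updateAgent({ id: 'agent_demo' }, { name: 'B' }); // creates a second version
  const again = await updateAgent({ id: 'agent_demo' }, { name: 'B' }); // duplicate: resolves, no throw
  const agent = await getAgent({ id: 'agent_demo' });
  console.log(again.version, agent.versions.length); // 2 2 — no new version created
}
demo().catch(console.error);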

View File

@@ -879,45 +879,31 @@ describe('models/Agent', () => {
expect(emptyParamsAgent.model_parameters).toEqual({});
});
- test('should detect duplicate versions and reject updates', async () => {
- const originalConsoleError = console.error;
- console.error = jest.fn();
+ test('should not create new version for duplicate updates', async () => {
const authorId = new mongoose.Types.ObjectId();
const testCases = generateVersionTestCases();
try {
const authorId = new mongoose.Types.ObjectId();
const testCases = generateVersionTestCases();
for (const testCase of testCases) {
const testAgentId = `agent_${uuidv4()}`;
for (const testCase of testCases) {
const testAgentId = `agent_${uuidv4()}`;
await createAgent({
id: testAgentId,
provider: 'test',
model: 'test-model',
author: authorId,
...testCase.initial,
});
await createAgent({
id: testAgentId,
provider: 'test',
model: 'test-model',
author: authorId,
...testCase.initial,
});
const updatedAgent = await updateAgent({ id: testAgentId }, testCase.update);
expect(updatedAgent.versions).toHaveLength(2); // No new version created
await updateAgent({ id: testAgentId }, testCase.update);
// Update with duplicate data should succeed but not create a new version
const duplicateUpdate = await updateAgent({ id: testAgentId }, testCase.duplicate);
let error;
try {
await updateAgent({ id: testAgentId }, testCase.duplicate);
} catch (e) {
error = e;
}
expect(duplicateUpdate.versions).toHaveLength(2); // No new version created
expect(error).toBeDefined();
expect(error.message).toContain('Duplicate version');
expect(error.statusCode).toBe(409);
expect(error.details).toBeDefined();
expect(error.details.duplicateVersion).toBeDefined();
const agent = await getAgent({ id: testAgentId });
expect(agent.versions).toHaveLength(2);
}
} finally {
console.error = originalConsoleError;
const agent = await getAgent({ id: testAgentId });
expect(agent.versions).toHaveLength(2);
}
});
@@ -1093,20 +1079,13 @@ describe('models/Agent', () => {
expect(secondUpdate.versions).toHaveLength(3);
// Update without forceVersion and no changes should not create a version
let error;
try {
await updateAgent(
{ id: agentId },
{ tools: ['listEvents_action_test.com', 'createEvent_action_test.com'] },
{ updatingUserId: authorId.toString(), forceVersion: false },
);
} catch (e) {
error = e;
}
const duplicateUpdate = await updateAgent(
{ id: agentId },
{ tools: ['listEvents_action_test.com', 'createEvent_action_test.com'] },
{ updatingUserId: authorId.toString(), forceVersion: false },
);
expect(error).toBeDefined();
expect(error.message).toContain('Duplicate version');
expect(error.statusCode).toBe(409);
expect(duplicateUpdate.versions).toHaveLength(3); // No new version created
});
test('should handle isDuplicateVersion with arrays containing null/undefined values', async () => {
@@ -2400,11 +2379,18 @@ describe('models/Agent', () => {
agent_ids: ['agent1', 'agent2'],
});
await updateAgent({ id: agentId }, { agent_ids: ['agent1', 'agent2', 'agent3'] });
const updatedAgent = await updateAgent(
{ id: agentId },
{ agent_ids: ['agent1', 'agent2', 'agent3'] },
);
expect(updatedAgent.versions).toHaveLength(2);
await expect(
updateAgent({ id: agentId }, { agent_ids: ['agent1', 'agent2', 'agent3'] }),
).rejects.toThrow('Duplicate version');
// Update with same agent_ids should succeed but not create a new version
const duplicateUpdate = await updateAgent(
{ id: agentId },
{ agent_ids: ['agent1', 'agent2', 'agent3'] },
);
expect(duplicateUpdate.versions).toHaveLength(2); // No new version created
});
test('should handle agent_ids field alongside other fields', async () => {
@@ -2543,9 +2529,10 @@ describe('models/Agent', () => {
expect(updated.versions).toHaveLength(2);
expect(updated.agent_ids).toEqual([]);
await expect(updateAgent({ id: agentId }, { agent_ids: [] })).rejects.toThrow(
'Duplicate version',
);
// Update with same empty agent_ids should succeed but not create a new version
const duplicateUpdate = await updateAgent({ id: agentId }, { agent_ids: [] });
expect(duplicateUpdate.versions).toHaveLength(2); // No new version created
expect(duplicateUpdate.agent_ids).toEqual([]);
});
test('should handle agent without agent_ids field', async () => {

View File

@@ -1,6 +1,6 @@
const { logger } = require('@librechat/data-schemas');
const { createTempChatExpirationDate } = require('@librechat/api');
- const getCustomConfig = require('~/server/services/Config/getCustomConfig');
+ const { getCustomConfig } = require('~/server/services/Config/getCustomConfig');
const { getMessages, deleteMessages } = require('./Message');
const { Conversation } = require('~/db/models');

View File

@@ -0,0 +1,572 @@
const mongoose = require('mongoose');
const { v4: uuidv4 } = require('uuid');
const { EModelEndpoint } = require('librechat-data-provider');
const { MongoMemoryServer } = require('mongodb-memory-server');
const {
deleteNullOrEmptyConversations,
searchConversation,
getConvosByCursor,
getConvosQueried,
getConvoFiles,
getConvoTitle,
deleteConvos,
saveConvo,
getConvo,
} = require('./Conversation');
jest.mock('~/server/services/Config/getCustomConfig');
jest.mock('./Message');
const { getCustomConfig } = require('~/server/services/Config/getCustomConfig');
const { getMessages, deleteMessages } = require('./Message');
const { Conversation } = require('~/db/models');
describe('Conversation Operations', () => {
let mongoServer;
let mockReq;
let mockConversationData;
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
await mongoose.connect(mongoUri);
});
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
// Clear database
await Conversation.deleteMany({});
// Reset mocks
jest.clearAllMocks();
// Default mock implementations
getMessages.mockResolvedValue([]);
deleteMessages.mockResolvedValue({ deletedCount: 0 });
mockReq = {
user: { id: 'user123' },
body: {},
};
mockConversationData = {
conversationId: uuidv4(),
title: 'Test Conversation',
endpoint: EModelEndpoint.openAI,
};
});
describe('saveConvo', () => {
it('should save a conversation for an authenticated user', async () => {
const result = await saveConvo(mockReq, mockConversationData);
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.user).toBe('user123');
expect(result.title).toBe('Test Conversation');
expect(result.endpoint).toBe(EModelEndpoint.openAI);
// Verify the conversation was actually saved to the database
const savedConvo = await Conversation.findOne({
conversationId: mockConversationData.conversationId,
user: 'user123',
});
expect(savedConvo).toBeTruthy();
expect(savedConvo.title).toBe('Test Conversation');
});
it('should query messages when saving a conversation', async () => {
// Mock messages as ObjectIds
const mongoose = require('mongoose');
const mockMessages = [new mongoose.Types.ObjectId(), new mongoose.Types.ObjectId()];
getMessages.mockResolvedValue(mockMessages);
await saveConvo(mockReq, mockConversationData);
// Verify that getMessages was called with correct parameters
expect(getMessages).toHaveBeenCalledWith(
{ conversationId: mockConversationData.conversationId },
'_id',
);
});
it('should handle newConversationId when provided', async () => {
const newConversationId = uuidv4();
const result = await saveConvo(mockReq, {
...mockConversationData,
newConversationId,
});
expect(result.conversationId).toBe(newConversationId);
});
it('should handle unsetFields metadata', async () => {
const metadata = {
unsetFields: { someField: 1 },
};
await saveConvo(mockReq, mockConversationData, metadata);
const savedConvo = await Conversation.findOne({
conversationId: mockConversationData.conversationId,
});
expect(savedConvo.someField).toBeUndefined();
});
});
describe('isTemporary conversation handling', () => {
it('should save a conversation with expiredAt when isTemporary is true', async () => {
// Mock custom config with 24 hour retention
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 24,
},
});
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveConvo(mockReq, mockConversationData);
const afterSave = new Date();
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.expiredAt).toBeDefined();
expect(result.expiredAt).toBeInstanceOf(Date);
// Verify expiredAt is approximately 24 hours in the future
const expectedExpirationTime = new Date(beforeSave.getTime() + 24 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
new Date(afterSave.getTime() + 24 * 60 * 60 * 1000 + 1000).getTime(),
);
});
it('should save a conversation without expiredAt when isTemporary is false', async () => {
mockReq.body = { isTemporary: false };
const result = await saveConvo(mockReq, mockConversationData);
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.expiredAt).toBeNull();
});
it('should save a conversation without expiredAt when isTemporary is not provided', async () => {
// No isTemporary in body
mockReq.body = {};
const result = await saveConvo(mockReq, mockConversationData);
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.expiredAt).toBeNull();
});
it('should use custom retention period from config', async () => {
// Mock custom config with 48 hour retention
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 48,
},
});
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveConvo(mockReq, mockConversationData);
expect(result.expiredAt).toBeDefined();
// Verify expiredAt is approximately 48 hours in the future
const expectedExpirationTime = new Date(beforeSave.getTime() + 48 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should handle minimum retention period (1 hour)', async () => {
// Mock custom config with less than minimum retention
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 0.5, // Half hour - should be clamped to 1 hour
},
});
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveConvo(mockReq, mockConversationData);
expect(result.expiredAt).toBeDefined();
// Verify expiredAt is approximately 1 hour in the future (minimum)
const expectedExpirationTime = new Date(beforeSave.getTime() + 1 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should handle maximum retention period (8760 hours)', async () => {
// Mock custom config with more than maximum retention
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 10000, // Should be clamped to 8760 hours
},
});
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveConvo(mockReq, mockConversationData);
expect(result.expiredAt).toBeDefined();
// Verify expiredAt is approximately 8760 hours (1 year) in the future
const expectedExpirationTime = new Date(beforeSave.getTime() + 8760 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should handle getCustomConfig errors gracefully', async () => {
// Mock getCustomConfig to throw an error
getCustomConfig.mockRejectedValue(new Error('Config service unavailable'));
mockReq.body = { isTemporary: true };
const result = await saveConvo(mockReq, mockConversationData);
// Should still save the conversation but with expiredAt as null
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.expiredAt).toBeNull();
});
it('should use default retention when config is not provided', async () => {
// Mock getCustomConfig to return empty config
getCustomConfig.mockResolvedValue({});
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveConvo(mockReq, mockConversationData);
expect(result.expiredAt).toBeDefined();
// Default retention is 30 days (720 hours)
const expectedExpirationTime = new Date(beforeSave.getTime() + 30 * 24 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should update expiredAt when saving existing temporary conversation', async () => {
// First save a temporary conversation
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 24,
},
});
mockReq.body = { isTemporary: true };
const firstSave = await saveConvo(mockReq, mockConversationData);
const originalExpiredAt = firstSave.expiredAt;
// Wait a bit to ensure time difference
await new Promise((resolve) => setTimeout(resolve, 100));
// Save again with same conversationId but different title
const updatedData = { ...mockConversationData, title: 'Updated Title' };
const secondSave = await saveConvo(mockReq, updatedData);
// Should update title and create new expiredAt
expect(secondSave.title).toBe('Updated Title');
expect(secondSave.expiredAt).toBeDefined();
expect(new Date(secondSave.expiredAt).getTime()).toBeGreaterThan(
new Date(originalExpiredAt).getTime(),
);
});
it('should not set expiredAt when updating non-temporary conversation', async () => {
// First save a non-temporary conversation
mockReq.body = { isTemporary: false };
const firstSave = await saveConvo(mockReq, mockConversationData);
expect(firstSave.expiredAt).toBeNull();
// Update without isTemporary flag
mockReq.body = {};
const updatedData = { ...mockConversationData, title: 'Updated Title' };
const secondSave = await saveConvo(mockReq, updatedData);
expect(secondSave.title).toBe('Updated Title');
expect(secondSave.expiredAt).toBeNull();
});
it('should filter out expired conversations in getConvosByCursor', async () => {
// Create some test conversations
const nonExpiredConvo = await Conversation.create({
conversationId: uuidv4(),
user: 'user123',
title: 'Non-expired',
endpoint: EModelEndpoint.openAI,
expiredAt: null,
updatedAt: new Date(),
});
await Conversation.create({
conversationId: uuidv4(),
user: 'user123',
title: 'Future expired',
endpoint: EModelEndpoint.openAI,
expiredAt: new Date(Date.now() + 24 * 60 * 60 * 1000), // 24 hours from now
updatedAt: new Date(),
});
// Mock Meili search
Conversation.meiliSearch = jest.fn().mockResolvedValue({ hits: [] });
const result = await getConvosByCursor('user123');
// Should only return conversations with null or non-existent expiredAt
expect(result.conversations).toHaveLength(1);
expect(result.conversations[0].conversationId).toBe(nonExpiredConvo.conversationId);
});
it('should filter out expired conversations in getConvosQueried', async () => {
// Create test conversations
const nonExpiredConvo = await Conversation.create({
conversationId: uuidv4(),
user: 'user123',
title: 'Non-expired',
endpoint: EModelEndpoint.openAI,
expiredAt: null,
});
const expiredConvo = await Conversation.create({
conversationId: uuidv4(),
user: 'user123',
title: 'Expired',
endpoint: EModelEndpoint.openAI,
expiredAt: new Date(Date.now() + 24 * 60 * 60 * 1000),
});
const convoIds = [
{ conversationId: nonExpiredConvo.conversationId },
{ conversationId: expiredConvo.conversationId },
];
const result = await getConvosQueried('user123', convoIds);
// Should only return the non-expired conversation
expect(result.conversations).toHaveLength(1);
expect(result.conversations[0].conversationId).toBe(nonExpiredConvo.conversationId);
expect(result.convoMap[nonExpiredConvo.conversationId]).toBeDefined();
expect(result.convoMap[expiredConvo.conversationId]).toBeUndefined();
});
});
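The filter these two tests exercise presumably excludes anything with a set expiredAt, regardless of whether the date has passed — an assumed query shape, inferred from the "null or non-existent expiredAt" expectation:
const activeConvoFilter = (user) => ({
  user,
  $or: [{ expiredAt: null }, { expiredAt: { $exists: false } }],
});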
describe('searchConversation', () => {
it('should find a conversation by conversationId', async () => {
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
title: 'Test',
endpoint: EModelEndpoint.openAI,
});
const result = await searchConversation(mockConversationData.conversationId);
expect(result).toBeTruthy();
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.user).toBe('user123');
expect(result.title).toBeUndefined(); // Only returns conversationId and user
});
it('should return null if conversation not found', async () => {
const result = await searchConversation('non-existent-id');
expect(result).toBeNull();
});
});
describe('getConvo', () => {
it('should retrieve a conversation for a user', async () => {
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
title: 'Test Conversation',
endpoint: EModelEndpoint.openAI,
});
const result = await getConvo('user123', mockConversationData.conversationId);
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.user).toBe('user123');
expect(result.title).toBe('Test Conversation');
});
it('should return null if conversation not found', async () => {
const result = await getConvo('user123', 'non-existent-id');
expect(result).toBeNull();
});
});
describe('getConvoTitle', () => {
it('should return the conversation title', async () => {
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
title: 'Test Title',
endpoint: EModelEndpoint.openAI,
});
const result = await getConvoTitle('user123', mockConversationData.conversationId);
expect(result).toBe('Test Title');
});
it('should return null if conversation has no title', async () => {
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
title: null,
endpoint: EModelEndpoint.openAI,
});
const result = await getConvoTitle('user123', mockConversationData.conversationId);
expect(result).toBeNull();
});
it('should return "New Chat" if conversation not found', async () => {
const result = await getConvoTitle('user123', 'non-existent-id');
expect(result).toBe('New Chat');
});
});
describe('getConvoFiles', () => {
it('should return conversation files', async () => {
const files = ['file1', 'file2'];
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
endpoint: EModelEndpoint.openAI,
files,
});
const result = await getConvoFiles(mockConversationData.conversationId);
expect(result).toEqual(files);
});
it('should return empty array if no files', async () => {
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
endpoint: EModelEndpoint.openAI,
});
const result = await getConvoFiles(mockConversationData.conversationId);
expect(result).toEqual([]);
});
it('should return empty array if conversation not found', async () => {
const result = await getConvoFiles('non-existent-id');
expect(result).toEqual([]);
});
});
describe('deleteConvos', () => {
it('should delete conversations and associated messages', async () => {
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
title: 'To Delete',
endpoint: EModelEndpoint.openAI,
});
deleteMessages.mockResolvedValue({ deletedCount: 5 });
const result = await deleteConvos('user123', {
conversationId: mockConversationData.conversationId,
});
expect(result.deletedCount).toBe(1);
expect(result.messages.deletedCount).toBe(5);
expect(deleteMessages).toHaveBeenCalledWith({
conversationId: { $in: [mockConversationData.conversationId] },
});
// Verify conversation was deleted
const deletedConvo = await Conversation.findOne({
conversationId: mockConversationData.conversationId,
});
expect(deletedConvo).toBeNull();
});
it('should throw error if no conversations found', async () => {
await expect(deleteConvos('user123', { conversationId: 'non-existent' })).rejects.toThrow(
'Conversation not found or already deleted.',
);
});
});
describe('deleteNullOrEmptyConversations', () => {
it('should delete conversations with null, empty, or missing conversationIds', async () => {
// Since conversationId is required by the schema, we can't create documents with null/missing IDs
// This test should verify the function works when such documents exist (e.g., from data corruption)
// For this test, let's create a valid conversation and verify the function doesn't delete it
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user4',
endpoint: EModelEndpoint.openAI,
});
deleteMessages.mockResolvedValue({ deletedCount: 0 });
const result = await deleteNullOrEmptyConversations();
expect(result.conversations.deletedCount).toBe(0); // No invalid conversations to delete
expect(result.messages.deletedCount).toBe(0);
// Verify valid conversation remains
const remainingConvos = await Conversation.find({});
expect(remainingConvos).toHaveLength(1);
expect(remainingConvos[0].conversationId).toBe(mockConversationData.conversationId);
});
});
describe('Error Handling', () => {
it('should handle database errors in saveConvo', async () => {
// Force a database error by disconnecting
await mongoose.disconnect();
const result = await saveConvo(mockReq, mockConversationData);
expect(result).toEqual({ message: 'Error saving conversation' });
// Reconnect for other tests
await mongoose.connect(mongoServer.getUri());
});
});
});
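A minimal sketch of the expiration helper these tests exercise; the clamp bounds (1-hour minimum, 8760-hour maximum, 720-hour default) are inferred from the assertions above, not copied from the @librechat/api implementation:
const HOUR_MS = 60 * 60 * 1000;
function createTempChatExpirationDateSketch(config = {}) {
  // Fall back to 720 hours (30 days) when no retention is configured
  const retentionHours = config?.interface?.temporaryChatRetention ?? 720;
  // Clamp to [1 hour, 8760 hours] as the min/max tests expect
  const clamped = Math.min(Math.max(retentionHours, 1), 8760);
  return new Date(Date.now() + clamped * HOUR_MS);
}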

View File

@@ -1,7 +1,7 @@
const { z } = require('zod');
const { logger } = require('@librechat/data-schemas');
const { createTempChatExpirationDate } = require('@librechat/api');
const getCustomConfig = require('~/server/services/Config/getCustomConfig');
const { getCustomConfig } = require('~/server/services/Config/getCustomConfig');
const { Message } = require('~/db/models');
const idSchema = z.string().uuid();

View File

@@ -1,17 +1,21 @@
const mongoose = require('mongoose');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { v4: uuidv4 } = require('uuid');
const { messageSchema } = require('@librechat/data-schemas');
const { MongoMemoryServer } = require('mongodb-memory-server');
const {
saveMessage,
getMessages,
updateMessage,
deleteMessages,
bulkSaveMessages,
updateMessageText,
deleteMessagesSince,
} = require('./Message');
jest.mock('~/server/services/Config/getCustomConfig');
const { getCustomConfig } = require('~/server/services/Config/getCustomConfig');
/**
* @type {import('mongoose').Model<import('@librechat/data-schemas').IMessage>}
*/
@@ -117,21 +121,21 @@ describe('Message Operations', () => {
const conversationId = uuidv4();
// Create multiple messages in the same conversation
const message1 = await saveMessage(mockReq, {
await saveMessage(mockReq, {
messageId: 'msg1',
conversationId,
text: 'First message',
user: 'user123',
});
const message2 = await saveMessage(mockReq, {
await saveMessage(mockReq, {
messageId: 'msg2',
conversationId,
text: 'Second message',
user: 'user123',
});
const message3 = await saveMessage(mockReq, {
await saveMessage(mockReq, {
messageId: 'msg3',
conversationId,
text: 'Third message',
@@ -314,4 +318,265 @@ describe('Message Operations', () => {
expect(messages[0].text).toBe('Victim message');
});
});
describe('isTemporary message handling', () => {
beforeEach(() => {
// Reset mocks before each test
jest.clearAllMocks();
});
it('should save a message with expiredAt when isTemporary is true', async () => {
// Mock custom config with 24 hour retention
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 24,
},
});
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveMessage(mockReq, mockMessageData);
const afterSave = new Date();
expect(result.messageId).toBe('msg123');
expect(result.expiredAt).toBeDefined();
expect(result.expiredAt).toBeInstanceOf(Date);
// Verify expiredAt is approximately 24 hours in the future
const expectedExpirationTime = new Date(beforeSave.getTime() + 24 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
new Date(afterSave.getTime() + 24 * 60 * 60 * 1000 + 1000).getTime(),
);
});
it('should save a message without expiredAt when isTemporary is false', async () => {
mockReq.body = { isTemporary: false };
const result = await saveMessage(mockReq, mockMessageData);
expect(result.messageId).toBe('msg123');
expect(result.expiredAt).toBeNull();
});
it('should save a message without expiredAt when isTemporary is not provided', async () => {
// No isTemporary in body
mockReq.body = {};
const result = await saveMessage(mockReq, mockMessageData);
expect(result.messageId).toBe('msg123');
expect(result.expiredAt).toBeNull();
});
it('should use custom retention period from config', async () => {
// Mock custom config with 48 hour retention
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 48,
},
});
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveMessage(mockReq, mockMessageData);
expect(result.expiredAt).toBeDefined();
// Verify expiredAt is approximately 48 hours in the future
const expectedExpirationTime = new Date(beforeSave.getTime() + 48 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should handle minimum retention period (1 hour)', async () => {
// Mock custom config with less than minimum retention
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 0.5, // Half hour - should be clamped to 1 hour
},
});
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveMessage(mockReq, mockMessageData);
expect(result.expiredAt).toBeDefined();
// Verify expiredAt is approximately 1 hour in the future (minimum)
const expectedExpirationTime = new Date(beforeSave.getTime() + 1 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should handle maximum retention period (8760 hours)', async () => {
// Mock custom config with more than maximum retention
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 10000, // Should be clamped to 8760 hours
},
});
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveMessage(mockReq, mockMessageData);
expect(result.expiredAt).toBeDefined();
// Verify expiredAt is approximately 8760 hours (1 year) in the future
const expectedExpirationTime = new Date(beforeSave.getTime() + 8760 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should handle getCustomConfig errors gracefully', async () => {
// Mock getCustomConfig to throw an error
getCustomConfig.mockRejectedValue(new Error('Config service unavailable'));
mockReq.body = { isTemporary: true };
const result = await saveMessage(mockReq, mockMessageData);
// Should still save the message but with expiredAt as null
expect(result.messageId).toBe('msg123');
expect(result.expiredAt).toBeNull();
});
it('should use default retention when config is not provided', async () => {
// Mock getCustomConfig to return empty config
getCustomConfig.mockResolvedValue({});
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveMessage(mockReq, mockMessageData);
expect(result.expiredAt).toBeDefined();
// Default retention is 30 days (720 hours)
const expectedExpirationTime = new Date(beforeSave.getTime() + 30 * 24 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should not update expiredAt on message update', async () => {
// First save a temporary message
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 24,
},
});
mockReq.body = { isTemporary: true };
const savedMessage = await saveMessage(mockReq, mockMessageData);
const originalExpiredAt = savedMessage.expiredAt;
// Now update the message without isTemporary flag
mockReq.body = {};
const updatedMessage = await updateMessage(mockReq, {
messageId: 'msg123',
text: 'Updated text',
});
// expiredAt should not be in the returned updated message object
expect(updatedMessage.expiredAt).toBeUndefined();
// Verify in database that expiredAt wasn't changed
const dbMessage = await Message.findOne({ messageId: 'msg123', user: 'user123' });
expect(dbMessage.expiredAt).toEqual(originalExpiredAt);
});
it('should refresh expiredAt when saving existing temporary message', async () => {
// First save a temporary message
getCustomConfig.mockResolvedValue({
interface: {
temporaryChatRetention: 24,
},
});
mockReq.body = { isTemporary: true };
const firstSave = await saveMessage(mockReq, mockMessageData);
const originalExpiredAt = firstSave.expiredAt;
// Wait a bit to ensure time difference
await new Promise((resolve) => setTimeout(resolve, 100));
// Save again with same messageId but different text
const updatedData = { ...mockMessageData, text: 'Updated text' };
const secondSave = await saveMessage(mockReq, updatedData);
// Should update text but create new expiredAt
expect(secondSave.text).toBe('Updated text');
expect(secondSave.expiredAt).toBeDefined();
expect(new Date(secondSave.expiredAt).getTime()).toBeGreaterThan(
new Date(originalExpiredAt).getTime(),
);
});
it('should handle bulk operations with temporary messages', async () => {
// This test verifies bulkSaveMessages doesn't interfere with expiredAt
const messages = [
{
messageId: 'bulk1',
conversationId: uuidv4(),
text: 'Bulk message 1',
user: 'user123',
expiredAt: new Date(Date.now() + 24 * 60 * 60 * 1000),
},
{
messageId: 'bulk2',
conversationId: uuidv4(),
text: 'Bulk message 2',
user: 'user123',
expiredAt: null,
},
];
await bulkSaveMessages(messages);
const savedMessages = await Message.find({
messageId: { $in: ['bulk1', 'bulk2'] },
}).lean();
expect(savedMessages).toHaveLength(2);
const bulk1 = savedMessages.find((m) => m.messageId === 'bulk1');
const bulk2 = savedMessages.find((m) => m.messageId === 'bulk2');
expect(bulk1.expiredAt).toBeDefined();
expect(bulk2.expiredAt).toBeNull();
});
});
});
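Note that these tests assert only the stored expiredAt value; actual cleanup of expired rows would come from a MongoDB TTL index on the field, presumably declared in the schema along these lines (assumed, not shown in this diff):
// With expireAfterSeconds: 0, MongoDB deletes each document once the
// wall clock passes its expiredAt timestamp
messageSchema.index({ expiredAt: 1 }, { expireAfterSeconds: 0 });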

View File

@@ -1,4 +1,4 @@
const { matchModelName } = require('../utils');
const { matchModelName } = require('../utils/tokens');
const defaultRate = 6;
/**
@@ -87,6 +87,9 @@ const tokenValues = Object.assign(
'gpt-4.1': { prompt: 2, completion: 8 },
'gpt-4.5': { prompt: 75, completion: 150 },
'gpt-4o-mini': { prompt: 0.15, completion: 0.6 },
'gpt-5': { prompt: 1.25, completion: 10 },
'gpt-5-mini': { prompt: 0.25, completion: 2 },
'gpt-5-nano': { prompt: 0.05, completion: 0.4 },
'gpt-4o': { prompt: 2.5, completion: 10 },
'gpt-4o-2024-05-13': { prompt: 5, completion: 15 },
'gpt-4-1106': { prompt: 10, completion: 30 },
@@ -147,6 +150,9 @@ const tokenValues = Object.assign(
codestral: { prompt: 0.3, completion: 0.9 },
'ministral-8b': { prompt: 0.1, completion: 0.1 },
'ministral-3b': { prompt: 0.04, completion: 0.04 },
// GPT-OSS models
'gpt-oss-20b': { prompt: 0.05, completion: 0.2 },
'gpt-oss-120b': { prompt: 0.15, completion: 0.6 },
},
bedrockValues,
);
@@ -214,6 +220,12 @@ const getValueKey = (model, endpoint) => {
return 'gpt-4.1';
} else if (modelName.includes('gpt-4o-2024-05-13')) {
return 'gpt-4o-2024-05-13';
} else if (modelName.includes('gpt-5-nano')) {
return 'gpt-5-nano';
} else if (modelName.includes('gpt-5-mini')) {
return 'gpt-5-mini';
} else if (modelName.includes('gpt-5')) {
return 'gpt-5';
} else if (modelName.includes('gpt-4o-mini')) {
return 'gpt-4o-mini';
} else if (modelName.includes('gpt-4o')) {
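Order matters in this chain: 'gpt-5-nano' and 'gpt-5-mini' are checked before the bare 'gpt-5' branch, otherwise substring matching would send every nano/mini model to the gpt-5 rate. A reduced illustration:
const keys = ['gpt-5-nano', 'gpt-5-mini', 'gpt-5'];
const resolve = (model) => keys.find((k) => model.includes(k));
resolve('openai/gpt-5-nano-2025-01-30'); // → 'gpt-5-nano'
resolve('gpt-5-turbo'); // → 'gpt-5'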

View File

@@ -25,8 +25,14 @@ describe('getValueKey', () => {
expect(getValueKey('gpt-4-some-other-info')).toBe('8k');
});
it('should return undefined for model names that do not match any known patterns', () => {
expect(getValueKey('gpt-5-some-other-info')).toBeUndefined();
it('should return "gpt-5" for model name containing "gpt-5"', () => {
expect(getValueKey('gpt-5-some-other-info')).toBe('gpt-5');
expect(getValueKey('gpt-5-2025-01-30')).toBe('gpt-5');
expect(getValueKey('gpt-5-2025-01-30-0130')).toBe('gpt-5');
expect(getValueKey('openai/gpt-5')).toBe('gpt-5');
expect(getValueKey('openai/gpt-5-2025-01-30')).toBe('gpt-5');
expect(getValueKey('gpt-5-turbo')).toBe('gpt-5');
expect(getValueKey('gpt-5-0130')).toBe('gpt-5');
});
it('should return "gpt-3.5-turbo-1106" for model name containing "gpt-3.5-turbo-1106"', () => {
@@ -84,6 +90,29 @@ describe('getValueKey', () => {
expect(getValueKey('gpt-4.1-nano-0125')).toBe('gpt-4.1-nano');
});
it('should return "gpt-5" for model type of "gpt-5"', () => {
expect(getValueKey('gpt-5-2025-01-30')).toBe('gpt-5');
expect(getValueKey('gpt-5-2025-01-30-0130')).toBe('gpt-5');
expect(getValueKey('openai/gpt-5')).toBe('gpt-5');
expect(getValueKey('openai/gpt-5-2025-01-30')).toBe('gpt-5');
expect(getValueKey('gpt-5-turbo')).toBe('gpt-5');
expect(getValueKey('gpt-5-0130')).toBe('gpt-5');
});
it('should return "gpt-5-mini" for model type of "gpt-5-mini"', () => {
expect(getValueKey('gpt-5-mini-2025-01-30')).toBe('gpt-5-mini');
expect(getValueKey('openai/gpt-5-mini')).toBe('gpt-5-mini');
expect(getValueKey('gpt-5-mini-0130')).toBe('gpt-5-mini');
expect(getValueKey('gpt-5-mini-2025-01-30-0130')).toBe('gpt-5-mini');
});
it('should return "gpt-5-nano" for model type of "gpt-5-nano"', () => {
expect(getValueKey('gpt-5-nano-2025-01-30')).toBe('gpt-5-nano');
expect(getValueKey('openai/gpt-5-nano')).toBe('gpt-5-nano');
expect(getValueKey('gpt-5-nano-0130')).toBe('gpt-5-nano');
expect(getValueKey('gpt-5-nano-2025-01-30-0130')).toBe('gpt-5-nano');
});
it('should return "gpt-4o" for model type of "gpt-4o"', () => {
expect(getValueKey('gpt-4o-2024-08-06')).toBe('gpt-4o');
expect(getValueKey('gpt-4o-2024-08-06-0718')).toBe('gpt-4o');
@@ -207,6 +236,48 @@ describe('getMultiplier', () => {
);
});
it('should return the correct multiplier for gpt-5', () => {
const valueKey = getValueKey('gpt-5-2025-01-30');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-5'].prompt);
expect(getMultiplier({ valueKey, tokenType: 'completion' })).toBe(
tokenValues['gpt-5'].completion,
);
expect(getMultiplier({ model: 'gpt-5-preview', tokenType: 'prompt' })).toBe(
tokenValues['gpt-5'].prompt,
);
expect(getMultiplier({ model: 'openai/gpt-5', tokenType: 'completion' })).toBe(
tokenValues['gpt-5'].completion,
);
});
it('should return the correct multiplier for gpt-5-mini', () => {
const valueKey = getValueKey('gpt-5-mini-2025-01-30');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-5-mini'].prompt);
expect(getMultiplier({ valueKey, tokenType: 'completion' })).toBe(
tokenValues['gpt-5-mini'].completion,
);
expect(getMultiplier({ model: 'gpt-5-mini-preview', tokenType: 'prompt' })).toBe(
tokenValues['gpt-5-mini'].prompt,
);
expect(getMultiplier({ model: 'openai/gpt-5-mini', tokenType: 'completion' })).toBe(
tokenValues['gpt-5-mini'].completion,
);
});
it('should return the correct multiplier for gpt-5-nano', () => {
const valueKey = getValueKey('gpt-5-nano-2025-01-30');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-5-nano'].prompt);
expect(getMultiplier({ valueKey, tokenType: 'completion' })).toBe(
tokenValues['gpt-5-nano'].completion,
);
expect(getMultiplier({ model: 'gpt-5-nano-preview', tokenType: 'prompt' })).toBe(
tokenValues['gpt-5-nano'].prompt,
);
expect(getMultiplier({ model: 'openai/gpt-5-nano', tokenType: 'completion' })).toBe(
tokenValues['gpt-5-nano'].completion,
);
});
it('should return the correct multiplier for gpt-4o', () => {
const valueKey = getValueKey('gpt-4o-2024-08-06');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-4o'].prompt);
@@ -307,10 +378,22 @@ describe('getMultiplier', () => {
});
it('should return defaultRate if derived valueKey does not match any known patterns', () => {
expect(getMultiplier({ tokenType: 'prompt', model: 'gpt-5-some-other-info' })).toBe(
expect(getMultiplier({ tokenType: 'prompt', model: 'gpt-10-some-other-info' })).toBe(
defaultRate,
);
});
it('should return correct multipliers for GPT-OSS models', () => {
const models = ['gpt-oss-20b', 'gpt-oss-120b'];
models.forEach((key) => {
const expectedPrompt = tokenValues[key].prompt;
const expectedCompletion = tokenValues[key].completion;
expect(getMultiplier({ valueKey: key, tokenType: 'prompt' })).toBe(expectedPrompt);
expect(getMultiplier({ valueKey: key, tokenType: 'completion' })).toBe(expectedCompletion);
expect(getMultiplier({ model: key, tokenType: 'prompt' })).toBe(expectedPrompt);
expect(getMultiplier({ model: key, tokenType: 'completion' })).toBe(expectedCompletion);
});
});
});
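The multipliers above read as USD per million tokens (gpt-5 at 1.25 prompt / 10 completion matches OpenAI's published per-1M pricing), so a hypothetical spend calculation on top of getMultiplier would look like:
// Cost in dollars for a token count at a given per-1M multiplier (assumed unit)
const cost = (multiplier, tokens) => (multiplier * tokens) / 1e6;
cost(1.25, 12000); // 0.015 → 12k prompt tokens on gpt-5 ≈ $0.015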
describe('AWS Bedrock Model Tests', () => {

View File

@@ -1,6 +1,6 @@
{
"name": "@librechat/backend",
"version": "v0.7.9-rc1",
"version": "v0.8.0-rc2",
"description": "",
"scripts": {
"start": "echo 'please run this from the root directory'",
@@ -49,10 +49,11 @@
"@langchain/google-vertexai": "^0.2.13",
"@langchain/openai": "^0.5.18",
"@langchain/textsplitters": "^0.1.0",
"@librechat/agents": "^2.4.63",
"@librechat/agents": "^2.4.75",
"@librechat/api": "*",
"@librechat/data-schemas": "*",
"@node-saml/passport-saml": "^5.0.0",
"@modelcontextprotocol/sdk": "^1.17.1",
"@node-saml/passport-saml": "^5.1.0",
"@waylaidwanderer/fetch-event-source": "^3.0.1",
"axios": "^1.8.2",
"bcryptjs": "^2.4.3",
@@ -71,6 +72,7 @@
"express-static-gzip": "^2.2.0",
"file-type": "^18.7.0",
"firebase": "^11.0.2",
"form-data": "^4.0.4",
"googleapis": "^126.0.1",
"handlebars": "^4.7.7",
"https-proxy-agent": "^7.0.6",
@@ -93,7 +95,7 @@
"node-fetch": "^2.7.0",
"nodemailer": "^6.9.15",
"ollama": "^0.5.0",
"openai": "^4.96.2",
"openai": "^5.10.1",
"openai-chat-tokens": "^0.2.8",
"openid-client": "^6.5.0",
"passport": "^0.6.0",
@@ -118,7 +120,7 @@
},
"devDependencies": {
"jest": "^29.7.0",
"mongodb-memory-server": "^10.1.3",
"mongodb-memory-server": "^10.1.4",
"nodemon": "^3.0.3",
"supertest": "^7.1.0"
}

View File

@@ -1,46 +0,0 @@
const { logger } = require('~/config');
//handle duplicates
const handleDuplicateKeyError = (err, res) => {
logger.error('Duplicate key error:', err.keyValue);
const field = `${JSON.stringify(Object.keys(err.keyValue))}`;
const code = 409;
res
.status(code)
.send({ messages: `A document with that ${field} already exists.`, fields: field });
};
//handle validation errors
const handleValidationError = (err, res) => {
logger.error('Validation error:', err.errors);
let errors = Object.values(err.errors).map((el) => el.message);
let fields = `${JSON.stringify(Object.values(err.errors).map((el) => el.path))}`;
let code = 400;
if (errors.length > 1) {
errors = errors.join(' ');
res.status(code).send({ messages: `${JSON.stringify(errors)}`, fields: fields });
} else {
res.status(code).send({ messages: `${JSON.stringify(errors)}`, fields: fields });
}
};
module.exports = (err, _req, res, _next) => {
try {
if (err.name === 'ValidationError') {
return handleValidationError(err, res);
}
if (err.code && err.code == 11000) {
return handleDuplicateKeyError(err, res);
}
// Special handling for errors like SyntaxError
if (err.statusCode && err.body) {
return res.status(err.statusCode).send(err.body);
}
logger.error('ErrorController => error', err);
return res.status(500).send('An unknown error occurred.');
} catch (err) {
logger.error('ErrorController => processing error', err);
return res.status(500).send('Processing error in ErrorController.');
}
};
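The deleted controller above hard-coded its status handling; the PluginAuth changes later in this diff route service errors through normalizeHttpError from @librechat/api instead. A minimal sketch of what such a normalizer might do (assumed shape, not the actual implementation):
function normalizeHttpError(err, fallbackStatus = 500) {
  // Fall back to 500 when the error carries no usable integer status
  const status = Number.isInteger(err?.status) ? err.status : fallbackStatus;
  const message =
    typeof err?.message === 'string' && err.message ? err.message : 'Something went wrong.';
  return { status, message };
}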

View File

@@ -5,6 +5,7 @@ const { logger } = require('~/config');
/**
* @param {ServerRequest} req
* @returns {Promise<TModelsConfig>} The models config.
*/
const getModelsConfig = async (req) => {
const cache = getLogStores(CacheKeys.CONFIG_STORE);

View File

@@ -1,54 +1,16 @@
const { logger } = require('@librechat/data-schemas');
const { CacheKeys, AuthType, Constants } = require('librechat-data-provider');
const { CacheKeys, Constants } = require('librechat-data-provider');
const {
getToolkitKey,
checkPluginAuth,
filterUniquePlugins,
convertMCPToolsToPlugins,
} = require('@librechat/api');
const { getCustomConfig, getCachedTools } = require('~/server/services/Config');
const { getToolkitKey } = require('~/server/services/ToolService');
const { availableTools, toolkits } = require('~/app/clients/tools');
const { getMCPManager, getFlowStateManager } = require('~/config');
const { availableTools } = require('~/app/clients/tools');
const { getLogStores } = require('~/cache');
/**
* Filters out duplicate plugins from the list of plugins.
*
* @param {TPlugin[]} plugins The list of plugins to filter.
* @returns {TPlugin[]} The list of plugins with duplicates removed.
*/
const filterUniquePlugins = (plugins) => {
const seen = new Set();
return plugins.filter((plugin) => {
const duplicate = seen.has(plugin.pluginKey);
seen.add(plugin.pluginKey);
return !duplicate;
});
};
/**
* Determines if a plugin is authenticated by checking if all required authentication fields have non-empty values.
* Supports alternate authentication fields, allowing validation against multiple possible environment variables.
*
* @param {TPlugin} plugin The plugin object containing the authentication configuration.
* @returns {boolean} True if the plugin is authenticated for all required fields, false otherwise.
*/
const checkPluginAuth = (plugin) => {
if (!plugin.authConfig || plugin.authConfig.length === 0) {
return false;
}
return plugin.authConfig.every((authFieldObj) => {
const authFieldOptions = authFieldObj.authField.split('||');
let isFieldAuthenticated = false;
for (const fieldOption of authFieldOptions) {
const envValue = process.env[fieldOption];
if (envValue && envValue.trim() !== '' && envValue !== AuthType.USER_PROVIDED) {
isFieldAuthenticated = true;
break;
}
}
return isFieldAuthenticated;
});
};
const getAvailablePluginsController = async (req, res) => {
try {
const cache = getLogStores(CacheKeys.CONFIG_STORE);
@@ -138,15 +100,21 @@ function createGetServerTools() {
*/
const getAvailableTools = async (req, res) => {
try {
const userId = req.user?.id;
const customConfig = await getCustomConfig();
const cache = getLogStores(CacheKeys.CONFIG_STORE);
const cachedToolsArray = await cache.get(CacheKeys.TOOLS);
if (cachedToolsArray) {
res.status(200).json(cachedToolsArray);
const cachedUserTools = await getCachedTools({ userId });
const userPlugins = convertMCPToolsToPlugins({ functionTools: cachedUserTools, customConfig });
if (cachedToolsArray != null && userPlugins != null) {
const dedupedTools = filterUniquePlugins([...userPlugins, ...cachedToolsArray]);
res.status(200).json(dedupedTools);
return;
}
// If not in cache, build from manifest
let pluginManifest = availableTools;
const customConfig = await getCustomConfig();
if (customConfig?.mcpServers != null) {
const mcpManager = getMCPManager();
const flowsCache = getLogStores(CacheKeys.FLOWS);
@@ -179,7 +147,9 @@ const getAvailableTools = async (req, res) => {
const isToolDefined = toolDefinitions[plugin.pluginKey] !== undefined;
const isToolkit =
plugin.toolkit === true &&
Object.keys(toolDefinitions).some((key) => getToolkitKey(key) === plugin.pluginKey);
Object.keys(toolDefinitions).some(
(key) => getToolkitKey({ toolkits, toolName: key }) === plugin.pluginKey,
);
if (!isToolDefined && !isToolkit) {
continue;
@@ -217,10 +187,12 @@ const getAvailableTools = async (req, res) => {
toolsOutput.push(toolToAdd);
}
const finalTools = filterUniquePlugins(toolsOutput);
await cache.set(CacheKeys.TOOLS, finalTools);
res.status(200).json(finalTools);
const dedupedTools = filterUniquePlugins([...userPlugins, ...finalTools]);
res.status(200).json(dedupedTools);
} catch (error) {
logger.error('[getAvailableTools]', error);
res.status(500).json({ message: error.message });

View File

@@ -0,0 +1,520 @@
const { Constants } = require('librechat-data-provider');
const { getCustomConfig, getCachedTools } = require('~/server/services/Config');
const { getLogStores } = require('~/cache');
// Mock the dependencies
jest.mock('@librechat/data-schemas', () => ({
logger: {
debug: jest.fn(),
error: jest.fn(),
},
}));
jest.mock('~/server/services/Config', () => ({
getCustomConfig: jest.fn(),
getCachedTools: jest.fn(),
}));
jest.mock('~/server/services/ToolService', () => ({
getToolkitKey: jest.fn(),
}));
jest.mock('~/config', () => ({
getMCPManager: jest.fn(() => ({
loadManifestTools: jest.fn().mockResolvedValue([]),
})),
getFlowStateManager: jest.fn(),
}));
jest.mock('~/app/clients/tools', () => ({
availableTools: [],
toolkits: [],
}));
jest.mock('~/cache', () => ({
getLogStores: jest.fn(),
}));
jest.mock('@librechat/api', () => ({
getToolkitKey: jest.fn(),
checkPluginAuth: jest.fn(),
filterUniquePlugins: jest.fn(),
convertMCPToolsToPlugins: jest.fn(),
}));
// Import the actual module with the function we want to test
const { getAvailableTools, getAvailablePluginsController } = require('./PluginController');
const {
filterUniquePlugins,
checkPluginAuth,
convertMCPToolsToPlugins,
getToolkitKey,
} = require('@librechat/api');
describe('PluginController', () => {
let mockReq, mockRes, mockCache;
beforeEach(() => {
jest.clearAllMocks();
mockReq = { user: { id: 'test-user-id' } };
mockRes = { status: jest.fn().mockReturnThis(), json: jest.fn() };
mockCache = { get: jest.fn(), set: jest.fn() };
getLogStores.mockReturnValue(mockCache);
});
describe('getAvailablePluginsController', () => {
beforeEach(() => {
mockReq.app = { locals: { filteredTools: [], includedTools: [] } };
});
it('should use filterUniquePlugins to remove duplicate plugins', async () => {
const mockPlugins = [
{ name: 'Plugin1', pluginKey: 'key1', description: 'First' },
{ name: 'Plugin2', pluginKey: 'key2', description: 'Second' },
];
mockCache.get.mockResolvedValue(null);
filterUniquePlugins.mockReturnValue(mockPlugins);
checkPluginAuth.mockReturnValue(true);
await getAvailablePluginsController(mockReq, mockRes);
expect(filterUniquePlugins).toHaveBeenCalled();
expect(mockRes.status).toHaveBeenCalledWith(200);
// The response includes authenticated: true for each plugin when checkPluginAuth returns true
expect(mockRes.json).toHaveBeenCalledWith([
{ name: 'Plugin1', pluginKey: 'key1', description: 'First', authenticated: true },
{ name: 'Plugin2', pluginKey: 'key2', description: 'Second', authenticated: true },
]);
});
it('should use checkPluginAuth to verify plugin authentication', async () => {
const mockPlugin = { name: 'Plugin1', pluginKey: 'key1', description: 'First' };
mockCache.get.mockResolvedValue(null);
filterUniquePlugins.mockReturnValue([mockPlugin]);
checkPluginAuth.mockReturnValueOnce(true);
await getAvailablePluginsController(mockReq, mockRes);
expect(checkPluginAuth).toHaveBeenCalledWith(mockPlugin);
const responseData = mockRes.json.mock.calls[0][0];
expect(responseData[0].authenticated).toBe(true);
});
it('should return cached plugins when available', async () => {
const cachedPlugins = [
{ name: 'CachedPlugin', pluginKey: 'cached', description: 'Cached plugin' },
];
mockCache.get.mockResolvedValue(cachedPlugins);
await getAvailablePluginsController(mockReq, mockRes);
expect(filterUniquePlugins).not.toHaveBeenCalled();
expect(checkPluginAuth).not.toHaveBeenCalled();
expect(mockRes.json).toHaveBeenCalledWith(cachedPlugins);
});
it('should filter plugins based on includedTools', async () => {
const mockPlugins = [
{ name: 'Plugin1', pluginKey: 'key1', description: 'First' },
{ name: 'Plugin2', pluginKey: 'key2', description: 'Second' },
];
mockReq.app.locals.includedTools = ['key1'];
mockCache.get.mockResolvedValue(null);
filterUniquePlugins.mockReturnValue(mockPlugins);
checkPluginAuth.mockReturnValue(false);
await getAvailablePluginsController(mockReq, mockRes);
const responseData = mockRes.json.mock.calls[0][0];
expect(responseData).toHaveLength(1);
expect(responseData[0].pluginKey).toBe('key1');
});
});
describe('getAvailableTools', () => {
it('should use convertMCPToolsToPlugins for user-specific MCP tools', async () => {
const mockUserTools = {
[`tool1${Constants.mcp_delimiter}server1`]: {
function: { name: 'tool1', description: 'Tool 1' },
},
};
const mockConvertedPlugins = [
{
name: 'tool1',
pluginKey: `tool1${Constants.mcp_delimiter}server1`,
description: 'Tool 1',
},
];
mockCache.get.mockResolvedValue(null);
getCachedTools.mockResolvedValueOnce(mockUserTools);
convertMCPToolsToPlugins.mockReturnValue(mockConvertedPlugins);
filterUniquePlugins.mockImplementation((plugins) => plugins);
getCustomConfig.mockResolvedValue(null);
await getAvailableTools(mockReq, mockRes);
expect(convertMCPToolsToPlugins).toHaveBeenCalledWith({
functionTools: mockUserTools,
customConfig: null,
});
});
it('should use filterUniquePlugins to deduplicate combined tools', async () => {
const mockUserPlugins = [
{ name: 'UserTool', pluginKey: 'user-tool', description: 'User tool' },
];
const mockManifestPlugins = [
{ name: 'ManifestTool', pluginKey: 'manifest-tool', description: 'Manifest tool' },
];
mockCache.get.mockResolvedValue(mockManifestPlugins);
getCachedTools.mockResolvedValueOnce({});
convertMCPToolsToPlugins.mockReturnValue(mockUserPlugins);
filterUniquePlugins.mockReturnValue([...mockUserPlugins, ...mockManifestPlugins]);
getCustomConfig.mockResolvedValue(null);
await getAvailableTools(mockReq, mockRes);
// Should be called to deduplicate the combined array
expect(filterUniquePlugins).toHaveBeenLastCalledWith([
...mockUserPlugins,
...mockManifestPlugins,
]);
});
it('should use checkPluginAuth to verify authentication status', async () => {
const mockPlugin = { name: 'Tool1', pluginKey: 'tool1', description: 'Tool 1' };
mockCache.get.mockResolvedValue(null);
getCachedTools.mockResolvedValue({});
convertMCPToolsToPlugins.mockReturnValue([]);
filterUniquePlugins.mockReturnValue([mockPlugin]);
checkPluginAuth.mockReturnValue(true);
getCustomConfig.mockResolvedValue(null);
// Mock getCachedTools second call to return tool definitions
getCachedTools.mockResolvedValueOnce({}).mockResolvedValueOnce({ tool1: true });
await getAvailableTools(mockReq, mockRes);
expect(checkPluginAuth).toHaveBeenCalledWith(mockPlugin);
});
it('should use getToolkitKey for toolkit validation', async () => {
const mockToolkit = {
name: 'Toolkit1',
pluginKey: 'toolkit1',
description: 'Toolkit 1',
toolkit: true,
};
mockCache.get.mockResolvedValue(null);
getCachedTools.mockResolvedValue({});
convertMCPToolsToPlugins.mockReturnValue([]);
filterUniquePlugins.mockReturnValue([mockToolkit]);
checkPluginAuth.mockReturnValue(false);
getToolkitKey.mockReturnValue('toolkit1');
getCustomConfig.mockResolvedValue(null);
// Mock getCachedTools second call to return tool definitions
getCachedTools.mockResolvedValueOnce({}).mockResolvedValueOnce({
toolkit1_function: true,
});
await getAvailableTools(mockReq, mockRes);
expect(getToolkitKey).toHaveBeenCalled();
});
});
describe('plugin.icon behavior', () => {
const callGetAvailableToolsWithMCPServer = async (mcpServers) => {
mockCache.get.mockResolvedValue(null);
getCustomConfig.mockResolvedValue({ mcpServers });
const functionTools = {
[`test-tool${Constants.mcp_delimiter}test-server`]: {
function: { name: 'test-tool', description: 'A test tool' },
},
};
const mockConvertedPlugin = {
name: 'test-tool',
pluginKey: `test-tool${Constants.mcp_delimiter}test-server`,
description: 'A test tool',
icon: mcpServers['test-server']?.iconPath,
authenticated: true,
authConfig: [],
};
getCachedTools.mockResolvedValueOnce(functionTools);
convertMCPToolsToPlugins.mockReturnValue([mockConvertedPlugin]);
filterUniquePlugins.mockImplementation((plugins) => plugins);
checkPluginAuth.mockReturnValue(true);
getToolkitKey.mockReturnValue(undefined);
getCachedTools.mockResolvedValueOnce({
[`test-tool${Constants.mcp_delimiter}test-server`]: true,
});
await getAvailableTools(mockReq, mockRes);
const responseData = mockRes.json.mock.calls[0][0];
return responseData.find((tool) => tool.name === 'test-tool');
};
it('should set plugin.icon when iconPath is defined', async () => {
const mcpServers = {
'test-server': {
iconPath: '/path/to/icon.png',
},
};
const testTool = await callGetAvailableToolsWithMCPServer(mcpServers);
expect(testTool.icon).toBe('/path/to/icon.png');
});
it('should set plugin.icon to undefined when iconPath is not defined', async () => {
const mcpServers = {
'test-server': {},
};
const testTool = await callGetAvailableToolsWithMCPServer(mcpServers);
expect(testTool.icon).toBeUndefined();
});
});
describe('helper function integration', () => {
it('should properly handle MCP tools with custom user variables', async () => {
const customConfig = {
mcpServers: {
'test-server': {
customUserVars: {
API_KEY: { title: 'API Key', description: 'Your API key' },
},
},
},
};
// We need to test the actual flow where MCP manager tools are included
const mcpManagerTools = [
{
name: 'tool1',
pluginKey: `tool1${Constants.mcp_delimiter}test-server`,
description: 'Tool 1',
authenticated: true,
},
];
// Mock the MCP manager to return tools
const mockMCPManager = {
loadManifestTools: jest.fn().mockResolvedValue(mcpManagerTools),
};
require('~/config').getMCPManager.mockReturnValue(mockMCPManager);
mockCache.get.mockResolvedValue(null);
getCustomConfig.mockResolvedValue(customConfig);
// First call returns user tools (empty in this case)
getCachedTools.mockResolvedValueOnce({});
// Mock convertMCPToolsToPlugins to return empty array for user tools
convertMCPToolsToPlugins.mockReturnValue([]);
// Mock filterUniquePlugins to pass through
filterUniquePlugins.mockImplementation((plugins) => plugins || []);
// Mock checkPluginAuth
checkPluginAuth.mockReturnValue(true);
// Second call returns tool definitions
getCachedTools.mockResolvedValueOnce({
[`tool1${Constants.mcp_delimiter}test-server`]: true,
});
await getAvailableTools(mockReq, mockRes);
const responseData = mockRes.json.mock.calls[0][0];
// Find the MCP tool in the response
const mcpTool = responseData.find(
(tool) => tool.pluginKey === `tool1${Constants.mcp_delimiter}test-server`,
);
// The actual implementation adds authConfig and sets authenticated to false when customUserVars exist
expect(mcpTool).toBeDefined();
expect(mcpTool.authConfig).toEqual([
{ authField: 'API_KEY', label: 'API Key', description: 'Your API key' },
]);
expect(mcpTool.authenticated).toBe(false);
});
it('should handle error cases gracefully', async () => {
mockCache.get.mockRejectedValue(new Error('Cache error'));
await getAvailableTools(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(500);
expect(mockRes.json).toHaveBeenCalledWith({ message: 'Cache error' });
});
});
describe('edge cases with undefined/null values', () => {
it('should handle undefined cache gracefully', async () => {
getLogStores.mockReturnValue(undefined);
await getAvailableTools(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(500);
});
it('should handle null cachedTools and cachedUserTools', async () => {
mockCache.get.mockResolvedValue(null);
getCachedTools.mockResolvedValue(null);
convertMCPToolsToPlugins.mockReturnValue(undefined);
filterUniquePlugins.mockImplementation((plugins) => plugins || []);
getCustomConfig.mockResolvedValue(null);
await getAvailableTools(mockReq, mockRes);
expect(convertMCPToolsToPlugins).toHaveBeenCalledWith({
functionTools: null,
customConfig: null,
});
});
it('should handle when getCachedTools returns undefined', async () => {
mockCache.get.mockResolvedValue(null);
getCachedTools.mockResolvedValue(undefined);
convertMCPToolsToPlugins.mockReturnValue(undefined);
filterUniquePlugins.mockImplementation((plugins) => plugins || []);
getCustomConfig.mockResolvedValue(null);
checkPluginAuth.mockReturnValue(false);
// Mock getCachedTools to return undefined for both calls
getCachedTools.mockReset();
getCachedTools.mockResolvedValueOnce(undefined).mockResolvedValueOnce(undefined);
await getAvailableTools(mockReq, mockRes);
expect(convertMCPToolsToPlugins).toHaveBeenCalledWith({
functionTools: undefined,
customConfig: null,
});
});
it('should handle cachedToolsArray and userPlugins both being defined', async () => {
const cachedTools = [{ name: 'CachedTool', pluginKey: 'cached-tool', description: 'Cached' }];
const userTools = {
'user-tool': { function: { name: 'user-tool', description: 'User tool' } },
};
const userPlugins = [{ name: 'UserTool', pluginKey: 'user-tool', description: 'User tool' }];
mockCache.get.mockResolvedValue(cachedTools);
getCachedTools.mockResolvedValue(userTools);
convertMCPToolsToPlugins.mockReturnValue(userPlugins);
filterUniquePlugins.mockReturnValue([...userPlugins, ...cachedTools]);
await getAvailableTools(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(200);
expect(mockRes.json).toHaveBeenCalledWith([...userPlugins, ...cachedTools]);
});
it('should handle empty toolDefinitions object', async () => {
mockCache.get.mockResolvedValue(null);
getCachedTools.mockResolvedValueOnce({}).mockResolvedValueOnce({});
convertMCPToolsToPlugins.mockReturnValue([]);
filterUniquePlugins.mockImplementation((plugins) => plugins || []);
getCustomConfig.mockResolvedValue(null);
checkPluginAuth.mockReturnValue(true);
await getAvailableTools(mockReq, mockRes);
// With empty tool definitions, no tools should be in the final output
expect(mockRes.json).toHaveBeenCalledWith([]);
});
it('should handle MCP tools without customUserVars', async () => {
const customConfig = {
mcpServers: {
'test-server': {
// No customUserVars defined
},
},
};
const mockUserTools = {
[`tool1${Constants.mcp_delimiter}test-server`]: {
function: { name: 'tool1', description: 'Tool 1' },
},
};
mockCache.get.mockResolvedValue(null);
getCustomConfig.mockResolvedValue(customConfig);
getCachedTools.mockResolvedValueOnce(mockUserTools);
const mockPlugin = {
name: 'tool1',
pluginKey: `tool1${Constants.mcp_delimiter}test-server`,
description: 'Tool 1',
authenticated: true,
authConfig: [],
};
convertMCPToolsToPlugins.mockReturnValue([mockPlugin]);
filterUniquePlugins.mockImplementation((plugins) => plugins);
checkPluginAuth.mockReturnValue(true);
getCachedTools.mockResolvedValueOnce({
[`tool1${Constants.mcp_delimiter}test-server`]: true,
});
await getAvailableTools(mockReq, mockRes);
const responseData = mockRes.json.mock.calls[0][0];
expect(responseData[0].authenticated).toBe(true);
// The actual implementation doesn't set authConfig on tools without customUserVars
expect(responseData[0].authConfig).toEqual([]);
});
it('should handle req.app.locals with undefined filteredTools and includedTools', async () => {
mockReq.app = { locals: {} };
mockCache.get.mockResolvedValue(null);
filterUniquePlugins.mockReturnValue([]);
checkPluginAuth.mockReturnValue(false);
await getAvailablePluginsController(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(200);
expect(mockRes.json).toHaveBeenCalledWith([]);
});
it('should handle toolkit with undefined toolDefinitions keys', async () => {
const mockToolkit = {
name: 'Toolkit1',
pluginKey: 'toolkit1',
description: 'Toolkit 1',
toolkit: true,
};
mockCache.get.mockResolvedValue(null);
getCachedTools.mockResolvedValue({});
convertMCPToolsToPlugins.mockReturnValue([]);
filterUniquePlugins.mockReturnValue([mockToolkit]);
checkPluginAuth.mockReturnValue(false);
getToolkitKey.mockReturnValue(undefined);
getCustomConfig.mockResolvedValue(null);
// Mock getCachedTools second call to return null
getCachedTools.mockResolvedValueOnce({}).mockResolvedValueOnce(null);
await getAvailableTools(mockReq, mockRes);
// Should handle null toolDefinitions gracefully
expect(mockRes.status).toHaveBeenCalledWith(200);
});
});
});
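For orientation, a hedged sketch of the conversion these mocks stand in for — convertMCPToolsToPlugins presumably maps cached function-tool entries keyed as `<tool>${mcp_delimiter}<server>` into plugin objects:
function convertMCPToolsToPluginsSketch({ functionTools }) {
  if (functionTools == null) {
    return undefined; // matches the null/undefined expectations in the tests above
  }
  return Object.entries(functionTools).map(([pluginKey, tool]) => ({
    name: tool.function.name,
    pluginKey,
    description: tool.function.description,
  }));
}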

View File

@@ -99,10 +99,36 @@ const confirm2FA = async (req, res) => {
/**
* Disable 2FA by clearing the stored secret and backup codes.
 * Requires verification with either a TOTP token or a backup code if 2FA is fully enabled.
*/
const disable2FA = async (req, res) => {
try {
const userId = req.user.id;
const { token, backupCode } = req.body;
const user = await getUserById(userId);
if (!user || !user.totpSecret) {
return res.status(400).json({ message: '2FA is not set up for this user' });
}
if (user.twoFactorEnabled) {
const secret = await getTOTPSecret(user.totpSecret);
let isVerified = false;
if (token) {
isVerified = await verifyTOTP(secret, token);
} else if (backupCode) {
isVerified = await verifyBackupCode({ user, backupCode });
} else {
return res
.status(400)
.json({ message: 'Either token or backup code is required to disable 2FA' });
}
if (!isVerified) {
return res.status(401).json({ message: 'Invalid token or backup code' });
}
}
await updateUser(userId, { totpSecret: null, backupCodes: [], twoFactorEnabled: false });
return res.status(200).json();
} catch (err) {
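For reference, a minimal sketch of how a client might exercise this handler. The mount path and auth header are assumptions (this diff only shows the controller); the `token`/`backupCode` field names and status codes come from the code above.

```js
// Hypothetical client call: disable 2FA with a TOTP token, or send
// { backupCode: '...' } instead. The '/api/auth/2fa/disable' path is assumed.
const res = await fetch('/api/auth/2fa/disable', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${jwt}` },
  body: JSON.stringify({ token: '123456' }),
});
// 400: 2FA not set up, or neither token nor backupCode provided
// 401: token/backup code failed verification
// 200: 2FA disabled (secret and backup codes cleared)
```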

View File

@@ -1,5 +1,5 @@
const { logger } = require('@librechat/data-schemas');
const { webSearchKeys, extractWebSearchEnvVars } = require('@librechat/api');
const { webSearchKeys, extractWebSearchEnvVars, normalizeHttpError } = require('@librechat/api');
const {
getFiles,
updateUser,
@@ -89,8 +89,8 @@ const updateUserPluginsController = async (req, res) => {
if (userPluginsService instanceof Error) {
logger.error('[userPluginsService]', userPluginsService);
const { status, message } = userPluginsService;
res.status(status).send({ message });
const { status, message } = normalizeHttpError(userPluginsService);
return res.status(status).send({ message });
}
}
@@ -137,7 +137,7 @@ const updateUserPluginsController = async (req, res) => {
authService = await updateUserPluginAuth(user.id, keys[i], pluginKey, values[i]);
if (authService instanceof Error) {
logger.error('[authService]', authService);
({ status, message } = authService);
({ status, message } = normalizeHttpError(authService));
}
}
} else if (action === 'uninstall') {
@@ -151,7 +151,7 @@ const updateUserPluginsController = async (req, res) => {
`[authService] Error deleting all auth for MCP tool ${pluginKey}:`,
authService,
);
({ status, message } = authService);
({ status, message } = normalizeHttpError(authService));
}
} else {
// This handles:
@@ -163,7 +163,7 @@ const updateUserPluginsController = async (req, res) => {
authService = await deleteUserPluginAuth(user.id, keys[i]); // Deletes by authField name
if (authService instanceof Error) {
logger.error('[authService] Error deleting specific auth key:', authService);
({ status, message } = authService);
({ status, message } = normalizeHttpError(authService));
}
}
}
@@ -175,14 +175,16 @@ const updateUserPluginsController = async (req, res) => {
try {
const mcpManager = getMCPManager(user.id);
if (mcpManager) {
// Extract server name from pluginKey (format: "mcp_<serverName>")
const serverName = pluginKey.replace(Constants.mcp_prefix, '');
logger.info(
`[updateUserPluginsController] Disconnecting MCP connections for user ${user.id} after plugin auth update for ${pluginKey}.`,
`[updateUserPluginsController] Disconnecting MCP server ${serverName} for user ${user.id} after plugin auth update for ${pluginKey}.`,
);
await mcpManager.disconnectUserConnections(user.id);
await mcpManager.disconnectUserConnection(user.id, serverName);
}
} catch (disconnectError) {
logger.error(
`[updateUserPluginsController] Error disconnecting MCP connections for user ${user.id} after plugin auth update:`,
`[updateUserPluginsController] Error disconnecting MCP connection for user ${user.id} after plugin auth update:`,
disconnectError,
);
// Do not fail the request for this, but log it.
@@ -191,7 +193,8 @@ const updateUserPluginsController = async (req, res) => {
return res.status(status).send();
}
res.status(status).send({ message });
const normalized = normalizeHttpError({ status, message });
return res.status(normalized.status).send({ message: normalized.message });
} catch (err) {
logger.error('[updateUserPluginsController]', err);
return res.status(500).json({ message: 'Something went wrong.' });
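`normalizeHttpError` is imported from `@librechat/api` and its implementation is not part of this diff. Based on how it is used above, guarding against caught `Error` objects that lack a usable numeric `status`, a plausible sketch looks like this:

```js
// Sketch only: the real helper lives in @librechat/api.
// It guarantees a valid HTTP status and message even when the caught
// value is a bare Error without `status`/`message` set.
function normalizeHttpError(err, fallbackStatus = 500) {
  const status =
    Number.isInteger(err?.status) && err.status >= 400 && err.status <= 599
      ? err.status
      : fallbackStatus;
  const message =
    typeof err?.message === 'string' && err.message.length > 0
      ? err.message
      : 'Something went wrong.';
  return { status, message };
}
```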

View File

@@ -15,6 +15,7 @@ const {
Callback,
Providers,
GraphEvents,
TitleMethod,
formatMessage,
formatAgentMessages,
getTokenCountForMessage,
@@ -70,7 +71,11 @@ const payloadParser = ({ req, agent, endpoint }) => {
if (isAgentsEndpoint(endpoint)) {
return { model: undefined };
} else if (endpoint === EModelEndpoint.bedrock) {
return bedrockInputSchema.parse(agent.model_parameters);
const parsedValues = bedrockInputSchema.parse(agent.model_parameters);
if (parsedValues.thinking == null) {
parsedValues.thinking = false;
}
return parsedValues;
}
return req.body.endpointOption.model_parameters;
};
@@ -221,6 +226,42 @@ class AgentClient extends BaseClient {
return files;
}
async addDocuments(message, attachments) {
const documentResult =
await require('~/server/services/Files/documents').encodeAndFormatDocuments(
this.options.req,
attachments,
this.options.agent.provider,
);
message.documents =
documentResult.documents && documentResult.documents.length
? documentResult.documents
: undefined;
return documentResult.files;
}
async processAttachments(message, attachments) {
const [imageFiles, documentFiles] = await Promise.all([
this.addImageURLs(message, attachments),
this.addDocuments(message, attachments),
]);
const allFiles = [...imageFiles, ...documentFiles];
const seenFileIds = new Set();
const uniqueFiles = [];
for (const file of allFiles) {
if (file.file_id && !seenFileIds.has(file.file_id)) {
seenFileIds.add(file.file_id);
uniqueFiles.push(file);
} else if (!file.file_id) {
uniqueFiles.push(file);
}
}
return uniqueFiles;
}
async buildMessages(
messages,
parentMessageId,
@@ -254,7 +295,7 @@ class AgentClient extends BaseClient {
};
}
const files = await this.addImageURLs(
const files = await this.processAttachments(
orderedMessages[orderedMessages.length - 1],
attachments,
);
@@ -277,6 +318,23 @@ class AgentClient extends BaseClient {
assistantName: this.options?.modelLabel,
});
if (
message.documents &&
message.documents.length > 0 &&
message.role === 'user' &&
this.options.agent.provider === EModelEndpoint.anthropic
) {
const contentParts = [];
contentParts.push(...message.documents);
if (message.image_urls && message.image_urls.length > 0) {
contentParts.push(...message.image_urls);
}
const textContent =
typeof formattedMessage.content === 'string' ? formattedMessage.content : '';
contentParts.push({ type: 'text', text: textContent });
formattedMessage.content = contentParts;
}
if (message.ocr && i !== orderedMessages.length - 1) {
if (typeof formattedMessage.content === 'string') {
formattedMessage.content = message.ocr + '\n' + formattedMessage.content;
@@ -397,6 +455,34 @@ class AgentClient extends BaseClient {
return result;
}
/**
* Creates a promise that resolves with the memory promise result or undefined after a timeout
* @param {Promise<(TAttachment | null)[] | undefined>} memoryPromise - The memory promise to await
* @param {number} timeoutMs - Timeout in milliseconds (default: 3000)
* @returns {Promise<(TAttachment | null)[] | undefined>}
*/
async awaitMemoryWithTimeout(memoryPromise, timeoutMs = 3000) {
if (!memoryPromise) {
return;
}
try {
const timeoutPromise = new Promise((_, reject) =>
setTimeout(() => reject(new Error('Memory processing timeout')), timeoutMs),
);
const attachments = await Promise.race([memoryPromise, timeoutPromise]);
return attachments;
} catch (error) {
if (error.message === 'Memory processing timeout') {
logger.warn(`[AgentClient] Memory processing timed out after ${timeoutMs}ms`);
} else {
logger.error('[AgentClient] Error processing memory:', error);
}
return;
}
}
/**
* @returns {Promise<string | undefined>}
*/
@@ -507,6 +593,39 @@ class AgentClient extends BaseClient {
return withoutKeys;
}
/**
* Filters out image URLs from message content
* @param {BaseMessage} message - The message to filter
* @returns {BaseMessage} - A new message with image URLs removed
*/
filterImageUrls(message) {
if (!message.content || typeof message.content === 'string') {
return message;
}
if (Array.isArray(message.content)) {
const filteredContent = message.content.filter(
(part) => part.type !== ContentTypes.IMAGE_URL,
);
if (filteredContent.length === 1 && filteredContent[0].type === ContentTypes.TEXT) {
const MessageClass = message.constructor;
return new MessageClass({
content: filteredContent[0].text,
additional_kwargs: message.additional_kwargs,
});
}
const MessageClass = message.constructor;
return new MessageClass({
content: filteredContent,
additional_kwargs: message.additional_kwargs,
});
}
return message;
}
/**
* @param {BaseMessage[]} messages
* @returns {Promise<void | (TAttachment | null)[]>}
@@ -535,7 +654,8 @@ class AgentClient extends BaseClient {
}
}
const bufferString = getBufferString(messagesToProcess);
const filteredMessages = messagesToProcess.map((msg) => this.filterImageUrls(msg));
const bufferString = getBufferString(filteredMessages);
const bufferMessage = new HumanMessage(`# Current Chat:\n\n${bufferString}`);
return await this.processMemory([bufferMessage]);
} catch (error) {
@@ -710,6 +830,51 @@ class AgentClient extends BaseClient {
};
const toolSet = new Set((this.options.agent.tools ?? []).map((tool) => tool && tool.name));
if (
this.options.agent.provider === EModelEndpoint.anthropic &&
payload &&
Array.isArray(payload)
) {
let userMessageWithDocs = null;
if (this.userMessage?.documents) {
userMessageWithDocs = this.userMessage;
} else if (this.currentMessages?.length > 0) {
const lastMessage = this.currentMessages[this.currentMessages.length - 1];
if (lastMessage.documents?.length > 0) {
userMessageWithDocs = lastMessage;
}
} else if (this.messages?.length > 0) {
const lastMessage = this.messages[this.messages.length - 1];
if (lastMessage.documents?.length > 0) {
userMessageWithDocs = lastMessage;
}
}
if (userMessageWithDocs) {
for (const payloadMessage of payload) {
if (
payloadMessage.role === 'user' &&
userMessageWithDocs.text === payloadMessage.content
) {
if (typeof payloadMessage.content === 'string') {
payloadMessage.content = [
...userMessageWithDocs.documents,
{ type: 'text', text: payloadMessage.content },
];
} else if (Array.isArray(payloadMessage.content)) {
payloadMessage.content = [
...userMessageWithDocs.documents,
...payloadMessage.content,
];
}
break;
}
}
}
}
let { messages: initialMessages, indexTokenCountMap } = formatAgentMessages(
payload,
this.indexTokenCountMap,
@@ -730,6 +895,9 @@ class AgentClient extends BaseClient {
if (i > 0) {
this.model = agent.model_parameters.model;
}
if (i > 0 && config.signal == null) {
config.signal = abortController.signal;
}
if (agent.recursion_limit && typeof agent.recursion_limit === 'number') {
config.recursionLimit = agent.recursion_limit;
}
@@ -960,11 +1128,9 @@ class AgentClient extends BaseClient {
});
try {
if (memoryPromise) {
const attachments = await memoryPromise;
if (attachments && attachments.length > 0) {
this.artifactPromises.push(...attachments);
}
const attachments = await this.awaitMemoryWithTimeout(memoryPromise);
if (attachments && attachments.length > 0) {
this.artifactPromises.push(...attachments);
}
await this.recordCollectedUsage({ context: 'message' });
} catch (err) {
@@ -974,11 +1140,9 @@ class AgentClient extends BaseClient {
);
}
} catch (err) {
if (memoryPromise) {
const attachments = await memoryPromise;
if (attachments && attachments.length > 0) {
this.artifactPromises.push(...attachments);
}
const attachments = await this.awaitMemoryWithTimeout(memoryPromise);
if (attachments && attachments.length > 0) {
this.artifactPromises.push(...attachments);
}
logger.error(
'[api/server/controllers/agents/client.js #sendCompletion] Operation aborted',
@@ -1009,25 +1173,40 @@ class AgentClient extends BaseClient {
}
const { handleLLMEnd, collected: collectedMetadata } = createMetadataAggregator();
const { req, res, agent } = this.options;
const endpoint = agent.endpoint;
let endpoint = agent.endpoint;
/** @type {import('@librechat/agents').ClientOptions} */
let clientOptions = {
maxTokens: 75,
model: agent.model_parameters.model,
model: agent.model || agent.model_parameters.model,
};
const { getOptions, overrideProvider, customEndpointConfig } =
await getProviderConfig(endpoint);
let titleProviderConfig = await getProviderConfig(endpoint);
/** @type {TEndpoint | undefined} */
const endpointConfig = req.app.locals[endpoint] ?? customEndpointConfig;
const endpointConfig =
req.app.locals.all ?? req.app.locals[endpoint] ?? titleProviderConfig.customEndpointConfig;
if (!endpointConfig) {
logger.warn(
'[api/server/controllers/agents/client.js #titleConvo] Error getting endpoint config',
);
}
if (endpointConfig?.titleEndpoint && endpointConfig.titleEndpoint !== endpoint) {
try {
titleProviderConfig = await getProviderConfig(endpointConfig.titleEndpoint);
endpoint = endpointConfig.titleEndpoint;
} catch (error) {
logger.warn(
`[api/server/controllers/agents/client.js #titleConvo] Error getting title endpoint config for ${endpointConfig.titleEndpoint}, falling back to default`,
error,
);
// Fall back to original provider config
endpoint = agent.endpoint;
titleProviderConfig = await getProviderConfig(endpoint);
}
}
if (
endpointConfig &&
endpointConfig.titleModel &&
@@ -1036,7 +1215,7 @@ class AgentClient extends BaseClient {
clientOptions.model = endpointConfig.titleModel;
}
const options = await getOptions({
const options = await titleProviderConfig.getOptions({
req,
res,
optionsOnly: true,
@@ -1045,7 +1224,7 @@ class AgentClient extends BaseClient {
endpointOption: { model_parameters: clientOptions },
});
let provider = options.provider ?? overrideProvider ?? agent.provider;
let provider = options.provider ?? titleProviderConfig.overrideProvider ?? agent.provider;
if (
endpoint === EModelEndpoint.azureOpenAI &&
options.llmConfig?.azureOpenAIApiInstanceName == null
@@ -1065,11 +1244,16 @@ class AgentClient extends BaseClient {
clientOptions.configuration = options.configOptions;
}
// Remove maxTokens for reasoning models (o-series, gpt-5+); otherwise ensure a default is set
if (!/\b(o\d)\b/i.test(clientOptions.model) && !clientOptions.maxTokens) {
clientOptions.maxTokens = 75;
} else if (/\b(o\d)\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
const shouldRemoveMaxTokens = /\b(o\d|gpt-[5-9])\b/i.test(clientOptions.model);
if (shouldRemoveMaxTokens && clientOptions.maxTokens != null) {
delete clientOptions.maxTokens;
} else if (!shouldRemoveMaxTokens && !clientOptions.maxTokens) {
clientOptions.maxTokens = 75;
}
if (shouldRemoveMaxTokens && clientOptions?.modelKwargs?.max_completion_tokens != null) {
delete clientOptions.modelKwargs.max_completion_tokens;
} else if (shouldRemoveMaxTokens && clientOptions?.modelKwargs?.max_output_tokens != null) {
delete clientOptions.modelKwargs.max_output_tokens;
}
clientOptions = Object.assign(
@@ -1078,16 +1262,23 @@ class AgentClient extends BaseClient {
),
);
if (provider === Providers.GOOGLE) {
if (
provider === Providers.GOOGLE &&
(endpointConfig?.titleMethod === TitleMethod.FUNCTIONS ||
endpointConfig?.titleMethod === TitleMethod.STRUCTURED)
) {
clientOptions.json = true;
}
try {
const titleResult = await this.run.generateTitle({
provider,
clientOptions,
inputText: text,
contentParts: this.contentParts,
clientOptions,
titleMethod: endpointConfig?.titleMethod,
titlePrompt: endpointConfig?.titlePrompt,
titlePromptTemplate: endpointConfig?.titlePromptTemplate,
chainOptions: {
signal: abortController.signal,
callbacks: [
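To make the document-preservation path in this file concrete: for Anthropic, a user turn with an attached PDF ends up as a content array whose document parts come first, then any image parts, then the text. The sketch below is an assumption; the exact part shape is produced by `encodeAndFormatDocuments`, which this diff does not show, and the block format assumes Anthropic's base64 document blocks.

```js
// Assumed shape of formattedMessage.content after the Anthropic branch runs.
const formattedMessage = {
  role: 'user',
  content: [
    {
      type: 'document',
      source: { type: 'base64', media_type: 'application/pdf', data: '<base64 PDF bytes>' },
    },
    // ...any image_url parts follow here...
    { type: 'text', text: 'Summarize the attached report.' },
  ],
};
```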

File diff suppressed because it is too large

View File

@@ -105,8 +105,6 @@ const createErrorHandler = ({ req, res, getContext, originPath = '/assistants/ch
return res.end();
}
await cache.delete(cacheKey);
// const cancelledRun = await openai.beta.threads.runs.cancel(thread_id, run_id);
// logger.debug(`[${originPath}] Cancelled run:`, cancelledRun);
} catch (error) {
logger.error(`[${originPath}] Error cancelling run`, error);
}
@@ -115,7 +113,6 @@ const createErrorHandler = ({ req, res, getContext, originPath = '/assistants/ch
let run;
try {
// run = await openai.beta.threads.runs.retrieve(thread_id, run_id);
await recordUsage({
...run.usage,
model: run.model,
@@ -128,18 +125,9 @@ const createErrorHandler = ({ req, res, getContext, originPath = '/assistants/ch
let finalEvent;
try {
// const errorContentPart = {
// text: {
// value:
// error?.message ?? 'There was an error processing your request. Please try again later.',
// },
// type: ContentTypes.ERROR,
// };
finalEvent = {
final: true,
conversation: await getConvo(req.user.id, conversationId),
// runMessages,
};
} catch (error) {
logger.error(`[${originPath}] Error finalizing error process`, error);

View File

@@ -233,6 +233,26 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
);
}
}
// Edge case: sendMessage completed but abort happened during sendCompletion
// We need to ensure a final event is sent
else if (!res.headersSent && !res.finished) {
logger.debug(
'[AgentController] Handling edge case: `sendMessage` completed but aborted during `sendCompletion`',
);
const finalResponse = { ...response };
finalResponse.error = true;
sendEvent(res, {
final: true,
conversation,
title: conversation.title,
requestMessage: userMessage,
responseMessage: finalResponse,
error: { message: 'Request was aborted during completion' },
});
res.end();
}
// Save user message if needed
if (!client.skipSaveUserMessage) {

View File

@@ -194,6 +194,9 @@ const updateAgentHandler = async (req, res) => {
});
}
// Add version count to the response
updatedAgent.version = updatedAgent.versions ? updatedAgent.versions.length : 0;
if (updatedAgent.author) {
updatedAgent.author = updatedAgent.author.toString();
}

View File

@@ -498,6 +498,28 @@ describe('Agent Controllers - Mass Assignment Protection', () => {
expect(mockRes.json).toHaveBeenCalledWith({ error: 'Agent not found' });
});
test('should include version field in update response', async () => {
mockReq.user.id = existingAgentAuthorId.toString();
mockReq.params.id = existingAgentId;
mockReq.body = {
name: 'Updated with Version Check',
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.json).toHaveBeenCalled();
const updatedAgent = mockRes.json.mock.calls[0][0];
// Verify version field is included and is a number
expect(updatedAgent).toHaveProperty('version');
expect(typeof updatedAgent.version).toBe('number');
expect(updatedAgent.version).toBeGreaterThanOrEqual(1);
// Verify in database
const agentInDb = await Agent.findOne({ id: existingAgentId });
expect(updatedAgent.version).toBe(agentInDb.versions.length);
});
test('should handle validation errors properly', async () => {
mockReq.user.id = existingAgentAuthorId.toString();
mockReq.params.id = existingAgentId;

View File

@@ -152,7 +152,7 @@ const chatV1 = async (req, res) => {
return res.end();
}
await cache.delete(cacheKey);
const cancelledRun = await openai.beta.threads.runs.cancel(thread_id, run_id);
const cancelledRun = await openai.beta.threads.runs.cancel(run_id, { thread_id });
logger.debug('[/assistants/chat/] Cancelled run:', cancelledRun);
} catch (error) {
logger.error('[/assistants/chat/] Error cancelling run', error);
@@ -162,7 +162,7 @@ const chatV1 = async (req, res) => {
let run;
try {
run = await openai.beta.threads.runs.retrieve(thread_id, run_id);
run = await openai.beta.threads.runs.retrieve(run_id, { thread_id });
await recordUsage({
...run.usage,
model: run.model,
@@ -623,7 +623,7 @@ const chatV1 = async (req, res) => {
if (!response.run.usage) {
await sleep(3000);
completedRun = await openai.beta.threads.runs.retrieve(thread_id, response.run.id);
completedRun = await openai.beta.threads.runs.retrieve(response.run.id, { thread_id });
if (completedRun.usage) {
await recordUsage({
...completedRun.usage,
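The run call-site changes in this file (and the analogous ones in the files below) all follow the same pattern: the newer OpenAI Node SDK takes the run or message ID as the first argument and the thread ID inside a params object. Assuming that SDK convention, and with `openai`, `thread_id`, `run_id`, and `tool_outputs` in scope, the migration looks like:

```js
// Old style: thread ID first, run ID second.
await openai.beta.threads.runs.cancel(thread_id, run_id);
await openai.beta.threads.runs.retrieve(thread_id, run_id);

// New style: run ID first, thread ID in the params object.
await openai.beta.threads.runs.cancel(run_id, { thread_id });
await openai.beta.threads.runs.retrieve(run_id, { thread_id });

// submitToolOutputs follows suit, with tool_outputs moving into the same object.
await openai.beta.threads.runs.submitToolOutputs(run_id, { thread_id, tool_outputs });
```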

View File

@@ -467,7 +467,7 @@ const chatV2 = async (req, res) => {
if (!response.run.usage) {
await sleep(3000);
completedRun = await openai.beta.threads.runs.retrieve(thread_id, response.run.id);
completedRun = await openai.beta.threads.runs.retrieve(response.run.id, { thread_id });
if (completedRun.usage) {
await recordUsage({
...completedRun.usage,

View File

@@ -108,7 +108,7 @@ const createErrorHandler = ({ req, res, getContext, originPath = '/assistants/ch
return res.end();
}
await cache.delete(cacheKey);
const cancelledRun = await openai.beta.threads.runs.cancel(thread_id, run_id);
const cancelledRun = await openai.beta.threads.runs.cancel(run_id, { thread_id });
logger.debug(`[${originPath}] Cancelled run:`, cancelledRun);
} catch (error) {
logger.error(`[${originPath}] Error cancelling run`, error);
@@ -118,7 +118,7 @@ const createErrorHandler = ({ req, res, getContext, originPath = '/assistants/ch
let run;
try {
run = await openai.beta.threads.runs.retrieve(thread_id, run_id);
run = await openai.beta.threads.runs.retrieve(run_id, { thread_id });
await recordUsage({
...run.usage,
model: run.model,

View File

@@ -173,6 +173,16 @@ const listAssistantsForAzure = async ({ req, res, version, azureConfig = {}, que
};
};
/**
* Initializes the OpenAI client.
* @param {object} params - The parameters object.
* @param {ServerRequest} params.req - The request object.
* @param {ServerResponse} params.res - The response object.
* @param {TEndpointOption} params.endpointOption - The endpoint options.
* @param {boolean} params.initAppClient - Whether to initialize the app client.
* @param {string} params.overrideEndpoint - The endpoint to override.
* @returns {Promise<{ openai: OpenAIClient, openAIApiKey: string, client: import('~/app/clients/OpenAIClient') }>} - The initialized OpenAI client.
*/
async function getOpenAIClient({ req, res, endpointOption, initAppClient, overrideEndpoint }) {
let endpoint = overrideEndpoint ?? req.body.endpoint ?? req.query.endpoint;
const version = await getCurrentVersion(req, endpoint);

View File

@@ -197,7 +197,7 @@ const deleteAssistant = async (req, res) => {
await validateAuthor({ req, openai });
const assistant_id = req.params.id;
const deletionStatus = await openai.beta.assistants.del(assistant_id);
const deletionStatus = await openai.beta.assistants.delete(assistant_id);
if (deletionStatus?.deleted) {
await deleteAssistantActions({ req, assistant_id });
}
@@ -365,7 +365,7 @@ const uploadAssistantAvatar = async (req, res) => {
try {
await fs.unlink(req.file.path);
logger.debug('[/:agent_id/avatar] Temp. image upload file deleted');
} catch (error) {
} catch {
logger.debug('[/:agent_id/avatar] Temp. image upload file already deleted');
}
}

View File

@@ -8,15 +8,13 @@ const express = require('express');
const passport = require('passport');
const compression = require('compression');
const cookieParser = require('cookie-parser');
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const mongoSanitize = require('express-mongo-sanitize');
const { isEnabled, ErrorController } = require('@librechat/api');
const { connectDb, indexSync } = require('~/db');
const validateImageRequest = require('./middleware/validateImageRequest');
const { jwtLogin, ldapLogin, passportLogin } = require('~/strategies');
const errorController = require('./controllers/ErrorController');
const initializeMCP = require('./services/initializeMCP');
const initializeMCPs = require('./services/initializeMCPs');
const configureSocialLogins = require('./socialLogins');
const AppService = require('./services/AppService');
const staticCache = require('./utils/staticCache');
@@ -120,8 +118,7 @@ const startServer = async () => {
app.use('/api/tags', routes.tags);
app.use('/api/mcp', routes.mcp);
// Add the error controller one more time after all routes
app.use(errorController);
app.use(ErrorController);
app.use((req, res) => {
res.set({
@@ -146,7 +143,7 @@ const startServer = async () => {
logger.info(`Server listening at http://${host == '0.0.0.0' ? 'localhost' : host}:${port}`);
}
initializeMCP(app);
initializeMCPs(app);
});
};

View File

@@ -92,7 +92,7 @@ async function healthCheckPoll(app, retries = 0) {
if (response.status === 200) {
return; // App is healthy
}
} catch (error) {
} catch {
// Ignore connection errors during polling
}

View File

@@ -47,7 +47,7 @@ async function abortRun(req, res) {
try {
await cache.set(cacheKey, 'cancelled', three_minutes);
const cancelledRun = await openai.beta.threads.runs.cancel(thread_id, run_id);
const cancelledRun = await openai.beta.threads.runs.cancel(run_id, { thread_id });
logger.debug('[abortRun] Cancelled run:', cancelledRun);
} catch (error) {
logger.error('[abortRun] Error cancelling run', error);
@@ -60,7 +60,7 @@ async function abortRun(req, res) {
}
try {
const run = await openai.beta.threads.runs.retrieve(thread_id, run_id);
const run = await openai.beta.threads.runs.retrieve(run_id, { thread_id });
await recordUsage({
...run.usage,
model: run.model,

View File

@@ -33,7 +33,7 @@ const validateModel = async (req, res, next) => {
return next();
}
const { ILLEGAL_MODEL_REQ_SCORE: score = 5 } = process.env ?? {};
const { ILLEGAL_MODEL_REQ_SCORE: score = 1 } = process.env ?? {};
const type = ViolationTypes.ILLEGAL_MODEL_REQUEST;
const errorMessage = {

File diff suppressed because it is too large

View File

@@ -1,10 +1,11 @@
const express = require('express');
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { CacheKeys, defaultSocialLogins, Constants } = require('librechat-data-provider');
const { getCustomConfig } = require('~/server/services/Config/getCustomConfig');
const { getLdapConfig } = require('~/server/services/Config/ldap');
const { getProjectByName } = require('~/models/Project');
const { isEnabled } = require('~/server/utils');
const { getMCPManager } = require('~/config');
const { getLogStores } = require('~/cache');
const router = express.Router();
@@ -102,10 +103,16 @@ router.get('/', async function (req, res) {
payload.mcpServers = {};
const config = await getCustomConfig();
if (config?.mcpServers != null) {
const mcpManager = getMCPManager();
const oauthServers = mcpManager.getOAuthServers();
for (const serverName in config.mcpServers) {
const serverConfig = config.mcpServers[serverName];
payload.mcpServers[serverName] = {
customUserVars: serverConfig?.customUserVars || {},
chatMenu: serverConfig?.chatMenu,
isOAuth: oauthServers.has(serverName),
startup: serverConfig?.startup,
};
}
}
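With the change above, each MCP server entry in the startup config payload carries only UI-relevant metadata. A sketch of the resulting payload fragment (field names from the code above, values illustrative):

```js
// Illustrative /api/config payload fragment after this change.
payload.mcpServers = {
  'test-server': {
    customUserVars: { API_KEY: { title: 'API Key', description: 'Your key' } },
    chatMenu: true,
    isOAuth: false, // derived from mcpManager.getOAuthServers()
    startup: true,
  },
};
```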

View File

@@ -111,7 +111,7 @@ router.delete('/', async (req, res) => {
/** @type {{ openai: OpenAI }} */
const { openai } = await assistantClients[endpoint].initializeClient({ req, res });
try {
const response = await openai.beta.threads.del(thread_id);
const response = await openai.beta.threads.delete(thread_id);
logger.debug('Deleted OpenAI thread:', response);
} catch (error) {
logger.error('Error deleting OpenAI thread:', error);

View File

@@ -413,13 +413,15 @@ router.post('/', async (req, res) => {
logger.error('[/files] Error deleting file:', error);
}
res.status(500).json({ message });
}
if (cleanup) {
try {
await fs.unlink(req.file.path);
} catch (error) {
logger.error('[/files] Error deleting file after file processing:', error);
} finally {
if (cleanup) {
try {
await fs.unlink(req.file.path);
} catch (error) {
logger.error('[/files] Error deleting file after file processing:', error);
}
} else {
logger.debug('[/files] File processing completed without cleanup');
}
}
});

View File

@@ -1,9 +1,13 @@
const { Router } = require('express');
const { MCPOAuthHandler } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { CacheKeys } = require('librechat-data-provider');
const { MCPOAuthHandler } = require('@librechat/api');
const { CacheKeys, Constants } = require('librechat-data-provider');
const { findToken, updateToken, createToken, deleteTokens } = require('~/models');
const { setCachedTools, getCachedTools, loadCustomConfig } = require('~/server/services/Config');
const { getMCPSetupData, getServerConnectionStatus } = require('~/server/services/MCP');
const { getUserPluginAuthValue } = require('~/server/services/PluginService');
const { getMCPManager, getFlowStateManager } = require('~/config');
const { requireJwtAuth } = require('~/server/middleware');
const { getFlowStateManager } = require('~/config');
const { getLogStores } = require('~/cache');
const router = Router();
@@ -89,7 +93,6 @@ router.get('/:serverName/oauth/callback', async (req, res) => {
return res.redirect('/oauth/error?error=missing_state');
}
// Extract flow ID from state
const flowId = state;
logger.debug('[MCP OAuth] Using flow ID from state', { flowId });
@@ -112,14 +115,68 @@ router.get('/:serverName/oauth/callback', async (req, res) => {
hasCodeVerifier: !!flowState.codeVerifier,
});
// Complete the OAuth flow
logger.debug('[MCP OAuth] Completing OAuth flow');
const tokens = await MCPOAuthHandler.completeOAuthFlow(flowId, code, flowManager);
logger.info('[MCP OAuth] OAuth flow completed, tokens received in callback route');
// For system-level OAuth, we need to store the tokens and retry the connection
if (flowState.userId === 'system') {
logger.debug(`[MCP OAuth] System-level OAuth completed for ${serverName}`);
try {
const mcpManager = getMCPManager(flowState.userId);
logger.debug(`[MCP OAuth] Attempting to reconnect ${serverName} with new OAuth tokens`);
if (flowState.userId !== 'system') {
const user = { id: flowState.userId };
const userConnection = await mcpManager.getUserConnection({
user,
serverName,
flowManager,
tokenMethods: {
findToken,
updateToken,
createToken,
deleteTokens,
},
});
logger.info(
`[MCP OAuth] Successfully reconnected ${serverName} for user ${flowState.userId}`,
);
const userTools = (await getCachedTools({ userId: flowState.userId })) || {};
const mcpDelimiter = Constants.mcp_delimiter;
for (const key of Object.keys(userTools)) {
if (key.endsWith(`${mcpDelimiter}${serverName}`)) {
delete userTools[key];
}
}
const tools = await userConnection.fetchTools();
for (const tool of tools) {
const name = `${tool.name}${Constants.mcp_delimiter}${serverName}`;
userTools[name] = {
type: 'function',
['function']: {
name,
description: tool.description,
parameters: tool.inputSchema,
},
};
}
await setCachedTools(userTools, { userId: flowState.userId });
logger.debug(
`[MCP OAuth] Cached ${tools.length} tools for ${serverName} user ${flowState.userId}`,
);
} else {
logger.debug(`[MCP OAuth] System-level OAuth completed for ${serverName}`);
}
} catch (error) {
logger.warn(
`[MCP OAuth] Failed to reconnect ${serverName} after OAuth, but tokens are saved:`,
error,
);
}
/** ID of the flow that the tool/connection is waiting for */
@@ -151,7 +208,6 @@ router.get('/oauth/tokens/:flowId', requireJwtAuth, async (req, res) => {
return res.status(401).json({ error: 'User not authenticated' });
}
// Allow system flows or user-owned flows
if (!flowId.startsWith(`${user.id}:`) && !flowId.startsWith('system:')) {
return res.status(403).json({ error: 'Access denied' });
}
@@ -202,4 +258,338 @@ router.get('/oauth/status/:flowId', async (req, res) => {
}
});
/**
* Cancel OAuth flow
* This endpoint cancels a pending OAuth flow
*/
router.post('/oauth/cancel/:serverName', requireJwtAuth, async (req, res) => {
try {
const { serverName } = req.params;
const user = req.user;
if (!user?.id) {
return res.status(401).json({ error: 'User not authenticated' });
}
logger.info(`[MCP OAuth Cancel] Cancelling OAuth flow for ${serverName} by user ${user.id}`);
const flowsCache = getLogStores(CacheKeys.FLOWS);
const flowManager = getFlowStateManager(flowsCache);
const flowId = MCPOAuthHandler.generateFlowId(user.id, serverName);
const flowState = await flowManager.getFlowState(flowId, 'mcp_oauth');
if (!flowState) {
logger.debug(`[MCP OAuth Cancel] No active flow found for ${serverName}`);
return res.json({
success: true,
message: 'No active OAuth flow to cancel',
});
}
await flowManager.failFlow(flowId, 'mcp_oauth', 'User cancelled OAuth flow');
logger.info(`[MCP OAuth Cancel] Successfully cancelled OAuth flow for ${serverName}`);
res.json({
success: true,
message: `OAuth flow for ${serverName} cancelled successfully`,
});
} catch (error) {
logger.error('[MCP OAuth Cancel] Failed to cancel OAuth flow', error);
res.status(500).json({ error: 'Failed to cancel OAuth flow' });
}
});
/**
* Reinitialize MCP server
* This endpoint allows reinitializing a specific MCP server
*/
router.post('/:serverName/reinitialize', requireJwtAuth, async (req, res) => {
try {
const { serverName } = req.params;
const user = req.user;
if (!user?.id) {
return res.status(401).json({ error: 'User not authenticated' });
}
logger.info(`[MCP Reinitialize] Reinitializing server: ${serverName}`);
const printConfig = false;
const config = await loadCustomConfig(printConfig);
if (!config || !config.mcpServers || !config.mcpServers[serverName]) {
return res.status(404).json({
error: `MCP server '${serverName}' not found in configuration`,
});
}
const flowsCache = getLogStores(CacheKeys.FLOWS);
const flowManager = getFlowStateManager(flowsCache);
const mcpManager = getMCPManager();
await mcpManager.disconnectServer(serverName);
logger.info(`[MCP Reinitialize] Disconnected existing server: ${serverName}`);
const serverConfig = config.mcpServers[serverName];
mcpManager.mcpConfigs[serverName] = serverConfig;
let customUserVars = {};
if (serverConfig.customUserVars && typeof serverConfig.customUserVars === 'object') {
for (const varName of Object.keys(serverConfig.customUserVars)) {
try {
const value = await getUserPluginAuthValue(user.id, varName, false);
customUserVars[varName] = value;
} catch (err) {
logger.error(`[MCP Reinitialize] Error fetching ${varName} for user ${user.id}:`, err);
}
}
}
let userConnection = null;
let oauthRequired = false;
let oauthUrl = null;
try {
userConnection = await mcpManager.getUserConnection({
user,
serverName,
flowManager,
customUserVars,
tokenMethods: {
findToken,
updateToken,
createToken,
deleteTokens,
},
returnOnOAuth: true,
oauthStart: async (authURL) => {
logger.info(`[MCP Reinitialize] OAuth URL received: ${authURL}`);
oauthUrl = authURL;
oauthRequired = true;
},
});
logger.info(`[MCP Reinitialize] Successfully established connection for ${serverName}`);
} catch (err) {
logger.info(`[MCP Reinitialize] getUserConnection threw error: ${err.message}`);
logger.info(
`[MCP Reinitialize] OAuth state - oauthRequired: ${oauthRequired}, oauthUrl: ${oauthUrl ? 'present' : 'null'}`,
);
const isOAuthError =
err.message?.includes('OAuth') ||
err.message?.includes('authentication') ||
err.message?.includes('401');
const isOAuthFlowInitiated = err.message === 'OAuth flow initiated - return early';
if (isOAuthError || oauthRequired || isOAuthFlowInitiated) {
logger.info(
`[MCP Reinitialize] OAuth required for ${serverName} (isOAuthError: ${isOAuthError}, oauthRequired: ${oauthRequired}, isOAuthFlowInitiated: ${isOAuthFlowInitiated})`,
);
oauthRequired = true;
} else {
logger.error(
`[MCP Reinitialize] Error initializing MCP server ${serverName} for user:`,
err,
);
return res.status(500).json({ error: 'Failed to reinitialize MCP server for user' });
}
}
if (userConnection && !oauthRequired) {
const userTools = (await getCachedTools({ userId: user.id })) || {};
const mcpDelimiter = Constants.mcp_delimiter;
for (const key of Object.keys(userTools)) {
if (key.endsWith(`${mcpDelimiter}${serverName}`)) {
delete userTools[key];
}
}
const tools = await userConnection.fetchTools();
for (const tool of tools) {
const name = `${tool.name}${Constants.mcp_delimiter}${serverName}`;
userTools[name] = {
type: 'function',
['function']: {
name,
description: tool.description,
parameters: tool.inputSchema,
},
};
}
await setCachedTools(userTools, { userId: user.id });
}
logger.debug(
`[MCP Reinitialize] Sending response for ${serverName} - oauthRequired: ${oauthRequired}, oauthUrl: ${oauthUrl ? 'present' : 'null'}`,
);
const getResponseMessage = () => {
if (oauthRequired) {
return `MCP server '${serverName}' ready for OAuth authentication`;
}
if (userConnection) {
return `MCP server '${serverName}' reinitialized successfully`;
}
return `Failed to reinitialize MCP server '${serverName}'`;
};
res.json({
success: (userConnection && !oauthRequired) || (oauthRequired && oauthUrl),
message: getResponseMessage(),
serverName,
oauthRequired,
oauthUrl,
});
} catch (error) {
logger.error('[MCP Reinitialize] Unexpected error', error);
res.status(500).json({ error: 'Internal server error' });
}
});
/**
* Get connection status for all MCP servers
* This endpoint returns all app-level and user-scoped connection statuses from MCPManager without disconnecting idle connections
*/
router.get('/connection/status', requireJwtAuth, async (req, res) => {
try {
const user = req.user;
if (!user?.id) {
return res.status(401).json({ error: 'User not authenticated' });
}
const { mcpConfig, appConnections, userConnections, oauthServers } = await getMCPSetupData(
user.id,
);
const connectionStatus = {};
for (const [serverName] of Object.entries(mcpConfig)) {
connectionStatus[serverName] = await getServerConnectionStatus(
user.id,
serverName,
appConnections,
userConnections,
oauthServers,
);
}
res.json({
success: true,
connectionStatus,
});
} catch (error) {
if (error.message === 'MCP config not found') {
return res.status(404).json({ error: error.message });
}
logger.error('[MCP Connection Status] Failed to get connection status', error);
res.status(500).json({ error: 'Failed to get connection status' });
}
});
/**
* Get connection status for a single MCP server
* This endpoint returns the connection status for a specific server for a given user
*/
router.get('/connection/status/:serverName', requireJwtAuth, async (req, res) => {
try {
const user = req.user;
const { serverName } = req.params;
if (!user?.id) {
return res.status(401).json({ error: 'User not authenticated' });
}
const { mcpConfig, appConnections, userConnections, oauthServers } = await getMCPSetupData(
user.id,
);
if (!mcpConfig[serverName]) {
return res
.status(404)
.json({ error: `MCP server '${serverName}' not found in configuration` });
}
const serverStatus = await getServerConnectionStatus(
user.id,
serverName,
appConnections,
userConnections,
oauthServers,
);
res.json({
success: true,
serverName,
connectionStatus: serverStatus.connectionState,
requiresOAuth: serverStatus.requiresOAuth,
});
} catch (error) {
if (error.message === 'MCP config not found') {
return res.status(404).json({ error: error.message });
}
logger.error(
`[MCP Per-Server Status] Failed to get connection status for ${req.params.serverName}`,
error,
);
res.status(500).json({ error: 'Failed to get connection status' });
}
});
/**
* Check which authentication values exist for a specific MCP server
* This endpoint returns only boolean flags indicating if values are set, not the actual values
*/
router.get('/:serverName/auth-values', requireJwtAuth, async (req, res) => {
try {
const { serverName } = req.params;
const user = req.user;
if (!user?.id) {
return res.status(401).json({ error: 'User not authenticated' });
}
const printConfig = false;
const config = await loadCustomConfig(printConfig);
if (!config || !config.mcpServers || !config.mcpServers[serverName]) {
return res.status(404).json({
error: `MCP server '${serverName}' not found in configuration`,
});
}
const serverConfig = config.mcpServers[serverName];
const pluginKey = `${Constants.mcp_prefix}${serverName}`;
const authValueFlags = {};
if (serverConfig.customUserVars && typeof serverConfig.customUserVars === 'object') {
for (const varName of Object.keys(serverConfig.customUserVars)) {
try {
const value = await getUserPluginAuthValue(user.id, varName, false, pluginKey);
authValueFlags[varName] = !!(value && value.length > 0);
} catch (err) {
logger.error(
`[MCP Auth Value Flags] Error checking ${varName} for user ${user.id}:`,
err,
);
authValueFlags[varName] = false;
}
}
}
res.json({
success: true,
serverName,
authValueFlags,
});
} catch (error) {
logger.error(
`[MCP Auth Value Flags] Failed to check auth value flags for ${req.params.serverName}`,
error,
);
res.status(500).json({ error: 'Failed to check auth value flags' });
}
});
module.exports = router;
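A minimal sketch of how a browser client might drive the reinitialize endpoint above, including the OAuth branch. The `/api/mcp` prefix is taken from the server setup earlier in this diff; the response fields match the route's `res.json` payload.

```js
// Hypothetical client usage of POST /api/mcp/:serverName/reinitialize.
async function reinitializeServer(serverName, jwt) {
  const res = await fetch(`/api/mcp/${encodeURIComponent(serverName)}/reinitialize`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${jwt}` },
  });
  const data = await res.json();
  if (data.oauthRequired && data.oauthUrl) {
    // Server needs (re)authentication: send the user through the OAuth flow.
    window.open(data.oauthUrl, '_blank');
  } else if (data.success) {
    // Connection re-established and tools re-cached for this user.
  }
  return data;
}
```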

View File

@@ -13,6 +13,8 @@ const { getRoleByName } = require('~/models/Role');
const router = express.Router();
const memoryPayloadLimit = express.json({ limit: '100kb' });
const checkMemoryRead = generateCheckAccess({
permissionType: PermissionTypes.MEMORIES,
permissions: [Permissions.USE, Permissions.READ],
@@ -60,6 +62,7 @@ router.get('/', checkMemoryRead, async (req, res) => {
const memoryConfig = req.app.locals?.memory;
const tokenLimit = memoryConfig?.tokenLimit;
const charLimit = memoryConfig?.charLimit || 10000;
let usagePercentage = null;
if (tokenLimit && tokenLimit > 0) {
@@ -70,6 +73,7 @@ router.get('/', checkMemoryRead, async (req, res) => {
memories: sortedMemories,
totalTokens,
tokenLimit: tokenLimit || null,
charLimit,
usagePercentage,
});
} catch (error) {
@@ -83,7 +87,7 @@ router.get('/', checkMemoryRead, async (req, res) => {
* Body: { key: string, value: string }
* Returns 201 and { created: true, memory: <createdDoc> } when successful.
*/
router.post('/', checkMemoryCreate, async (req, res) => {
router.post('/', memoryPayloadLimit, checkMemoryCreate, async (req, res) => {
const { key, value } = req.body;
if (typeof key !== 'string' || key.trim() === '') {
@@ -94,13 +98,25 @@ router.post('/', checkMemoryCreate, async (req, res) => {
return res.status(400).json({ error: 'Value is required and must be a non-empty string.' });
}
const memoryConfig = req.app.locals?.memory;
const charLimit = memoryConfig?.charLimit || 10000;
if (key.length > 1000) {
return res.status(400).json({
error: `Key exceeds maximum length of 1000 characters. Current length: ${key.length} characters.`,
});
}
if (value.length > charLimit) {
return res.status(400).json({
error: `Value exceeds maximum length of ${charLimit} characters. Current length: ${value.length} characters.`,
});
}
try {
const tokenCount = Tokenizer.getTokenCount(value, 'o200k_base');
const memories = await getAllUserMemories(req.user.id);
// Check token limit
const memoryConfig = req.app.locals?.memory;
const tokenLimit = memoryConfig?.tokenLimit;
if (tokenLimit) {
@@ -175,7 +191,7 @@ router.patch('/preferences', checkMemoryOptOut, async (req, res) => {
* Body: { key?: string, value: string }
* Returns 200 and { updated: true, memory: <updatedDoc> } when successful.
*/
router.patch('/:key', checkMemoryUpdate, async (req, res) => {
router.patch('/:key', memoryPayloadLimit, checkMemoryUpdate, async (req, res) => {
const { key: urlKey } = req.params;
const { key: bodyKey, value } = req.body || {};
@@ -183,9 +199,23 @@ router.patch('/:key', checkMemoryUpdate, async (req, res) => {
return res.status(400).json({ error: 'Value is required and must be a non-empty string.' });
}
// Use the key from the body if provided, otherwise use the key from the URL
const newKey = bodyKey || urlKey;
const memoryConfig = req.app.locals?.memory;
const charLimit = memoryConfig?.charLimit || 10000;
if (newKey.length > 1000) {
return res.status(400).json({
error: `Key exceeds maximum length of 1000 characters. Current length: ${newKey.length} characters.`,
});
}
if (value.length > charLimit) {
return res.status(400).json({
error: `Value exceeds maximum length of ${charLimit} characters. Current length: ${value.length} characters.`,
});
}
try {
const tokenCount = Tokenizer.getTokenCount(value, 'o200k_base');
@@ -196,7 +226,6 @@ router.patch('/:key', checkMemoryUpdate, async (req, res) => {
return res.status(404).json({ error: 'Memory not found.' });
}
// If the key is changing, we need to handle it specially
if (newKey !== urlKey) {
const keyExists = memories.find((m) => m.key === newKey);
if (keyExists) {
@@ -219,7 +248,6 @@ router.patch('/:key', checkMemoryUpdate, async (req, res) => {
return res.status(500).json({ error: 'Failed to delete old memory.' });
}
} else {
// Key is not changing, just update the value
const result = await setMemory({
userId: req.user.id,
key: newKey,
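Taken together, the memory write path now enforces three layers of limits: the 100kb `express.json` body cap, the 1000-character key cap, and the configurable `charLimit` (default 10000) on values. A sketch of the rejection behavior, with the `/api/memories` route prefix assumed (only the router is shown in this diff):

```js
// Hypothetical request exceeding the value limit; expects HTTP 400.
const res = await fetch('/api/memories', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${jwt}` },
  body: JSON.stringify({ key: 'notes', value: 'x'.repeat(20000) }),
});
// => 400 { error: 'Value exceeds maximum length of 10000 characters. ...' }
```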

View File

@@ -1,7 +1,10 @@
// file deepcode ignore NoRateLimitingForLogin: Rate limiting is handled by the `loginLimiter` middleware
const express = require('express');
const passport = require('passport');
const { isEnabled } = require('@librechat/api');
const { randomState } = require('openid-client');
const { logger } = require('@librechat/data-schemas');
const { ErrorTypes } = require('librechat-data-provider');
const {
checkBan,
logHeaders,
@@ -10,8 +13,6 @@ const {
checkDomainAllowed,
} = require('~/server/middleware');
const { setAuthTokens, setOpenIDAuthTokens } = require('~/server/services/AuthService');
const { isEnabled } = require('~/server/utils');
const { logger } = require('~/config');
const router = express.Router();
@@ -46,13 +47,13 @@ const oauthHandler = async (req, res) => {
};
router.get('/error', (req, res) => {
// A single error message is pushed by passport when authentication fails.
/** A single error message is pushed by passport when authentication fails. */
const errorMessage = req.session?.messages?.pop() || 'Unknown error';
logger.error('Error in OAuth authentication:', {
message: req.session?.messages?.pop() || 'Unknown error',
message: errorMessage,
});
// Redirect to login page with auth_failed parameter to prevent infinite redirect loops
res.redirect(`${domains.client}/login?redirect=false`);
res.redirect(`${domains.client}/login?redirect=false&error=${ErrorTypes.AUTH_FAILED}`);
});
/**

View File

@@ -1,9 +1,8 @@
const { agentsConfigSetup, loadWebSearchConfig } = require('@librechat/api');
const { loadMemoryConfig, agentsConfigSetup, loadWebSearchConfig } = require('@librechat/api');
const {
FileSources,
loadOCRConfig,
EModelEndpoint,
loadMemoryConfig,
getConfigDefaults,
} = require('librechat-data-provider');
const {
@@ -157,6 +156,10 @@ const AppService = async (app) => {
}
});
if (endpoints?.all) {
endpointLocals.all = endpoints.all;
}
app.locals = {
...defaultLocals,
fileConfig: config?.fileConfig,

View File

@@ -543,6 +543,206 @@ describe('AppService', () => {
expect(process.env.IMPORT_USER_MAX).toEqual('initialUserMax');
expect(process.env.IMPORT_USER_WINDOW).toEqual('initialUserWindow');
});
it('should correctly configure endpoint with titlePrompt, titleMethod, and titlePromptTemplate', async () => {
require('./Config/loadCustomConfig').mockImplementationOnce(() =>
Promise.resolve({
endpoints: {
[EModelEndpoint.openAI]: {
titleConvo: true,
titleModel: 'gpt-3.5-turbo',
titleMethod: 'structured',
titlePrompt: 'Custom title prompt for conversation',
titlePromptTemplate: 'Summarize this conversation: {{conversation}}',
},
[EModelEndpoint.assistants]: {
titleMethod: 'functions',
titlePrompt: 'Generate a title for this assistant conversation',
titlePromptTemplate: 'Assistant conversation template: {{messages}}',
},
[EModelEndpoint.azureOpenAI]: {
groups: azureGroups,
titleConvo: true,
titleMethod: 'completion',
titleModel: 'gpt-4',
titlePrompt: 'Azure title prompt',
titlePromptTemplate: 'Azure conversation: {{context}}',
},
},
}),
);
await AppService(app);
// Check OpenAI endpoint configuration
expect(app.locals).toHaveProperty(EModelEndpoint.openAI);
expect(app.locals[EModelEndpoint.openAI]).toEqual(
expect.objectContaining({
titleConvo: true,
titleModel: 'gpt-3.5-turbo',
titleMethod: 'structured',
titlePrompt: 'Custom title prompt for conversation',
titlePromptTemplate: 'Summarize this conversation: {{conversation}}',
}),
);
// Check Assistants endpoint configuration
expect(app.locals).toHaveProperty(EModelEndpoint.assistants);
expect(app.locals[EModelEndpoint.assistants]).toMatchObject({
titleMethod: 'functions',
titlePrompt: 'Generate a title for this assistant conversation',
titlePromptTemplate: 'Assistant conversation template: {{messages}}',
});
// Check Azure OpenAI endpoint configuration
expect(app.locals).toHaveProperty(EModelEndpoint.azureOpenAI);
expect(app.locals[EModelEndpoint.azureOpenAI]).toEqual(
expect.objectContaining({
titleConvo: true,
titleMethod: 'completion',
titleModel: 'gpt-4',
titlePrompt: 'Azure title prompt',
titlePromptTemplate: 'Azure conversation: {{context}}',
}),
);
});
it('should configure Agent endpoint with title generation settings', async () => {
require('./Config/loadCustomConfig').mockImplementationOnce(() =>
Promise.resolve({
endpoints: {
[EModelEndpoint.agents]: {
disableBuilder: false,
titleConvo: true,
titleModel: 'gpt-4',
titleMethod: 'structured',
titlePrompt: 'Generate a descriptive title for this agent conversation',
titlePromptTemplate: 'Agent conversation summary: {{content}}',
recursionLimit: 15,
capabilities: [AgentCapabilities.tools, AgentCapabilities.actions],
},
},
}),
);
await AppService(app);
expect(app.locals).toHaveProperty(EModelEndpoint.agents);
expect(app.locals[EModelEndpoint.agents]).toMatchObject({
disableBuilder: false,
titleConvo: true,
titleModel: 'gpt-4',
titleMethod: 'structured',
titlePrompt: 'Generate a descriptive title for this agent conversation',
titlePromptTemplate: 'Agent conversation summary: {{content}}',
recursionLimit: 15,
capabilities: expect.arrayContaining([AgentCapabilities.tools, AgentCapabilities.actions]),
});
});
it('should handle missing title configuration options with defaults', async () => {
require('./Config/loadCustomConfig').mockImplementationOnce(() =>
Promise.resolve({
endpoints: {
[EModelEndpoint.openAI]: {
titleConvo: true,
// titlePrompt and titlePromptTemplate are not provided
},
},
}),
);
await AppService(app);
expect(app.locals).toHaveProperty(EModelEndpoint.openAI);
expect(app.locals[EModelEndpoint.openAI]).toMatchObject({
titleConvo: true,
});
// Check that the optional fields are undefined when not provided
expect(app.locals[EModelEndpoint.openAI].titlePrompt).toBeUndefined();
expect(app.locals[EModelEndpoint.openAI].titlePromptTemplate).toBeUndefined();
expect(app.locals[EModelEndpoint.openAI].titleMethod).toBeUndefined();
});
it('should correctly configure titleEndpoint when specified', async () => {
require('./Config/loadCustomConfig').mockImplementationOnce(() =>
Promise.resolve({
endpoints: {
[EModelEndpoint.openAI]: {
titleConvo: true,
titleModel: 'gpt-3.5-turbo',
titleEndpoint: EModelEndpoint.anthropic,
titlePrompt: 'Generate a concise title',
},
[EModelEndpoint.agents]: {
titleEndpoint: 'custom-provider',
titleMethod: 'structured',
},
},
}),
);
await AppService(app);
// Check OpenAI endpoint has titleEndpoint
expect(app.locals).toHaveProperty(EModelEndpoint.openAI);
expect(app.locals[EModelEndpoint.openAI]).toMatchObject({
titleConvo: true,
titleModel: 'gpt-3.5-turbo',
titleEndpoint: EModelEndpoint.anthropic,
titlePrompt: 'Generate a concise title',
});
// Check Agents endpoint has titleEndpoint
expect(app.locals).toHaveProperty(EModelEndpoint.agents);
expect(app.locals[EModelEndpoint.agents]).toMatchObject({
titleEndpoint: 'custom-provider',
titleMethod: 'structured',
});
});
it('should correctly configure all endpoint when specified', async () => {
require('./Config/loadCustomConfig').mockImplementationOnce(() =>
Promise.resolve({
endpoints: {
all: {
titleConvo: true,
titleModel: 'gpt-4o-mini',
titleMethod: 'structured',
titlePrompt: 'Default title prompt for all endpoints',
titlePromptTemplate: 'Default template: {{conversation}}',
titleEndpoint: EModelEndpoint.anthropic,
streamRate: 50,
},
[EModelEndpoint.openAI]: {
titleConvo: true,
titleModel: 'gpt-3.5-turbo',
},
},
}),
);
await AppService(app);
// Check that 'all' endpoint config is loaded
expect(app.locals).toHaveProperty('all');
expect(app.locals.all).toMatchObject({
titleConvo: true,
titleModel: 'gpt-4o-mini',
titleMethod: 'structured',
titlePrompt: 'Default title prompt for all endpoints',
titlePromptTemplate: 'Default template: {{conversation}}',
titleEndpoint: EModelEndpoint.anthropic,
streamRate: 50,
});
// Check that OpenAI endpoint has its own config
expect(app.locals).toHaveProperty(EModelEndpoint.openAI);
expect(app.locals[EModelEndpoint.openAI]).toMatchObject({
titleConvo: true,
titleModel: 'gpt-3.5-turbo',
});
});
});
describe('AppService updating app.locals and issuing warnings', () => {

View File

@@ -60,7 +60,14 @@ const replaceArtifactContent = (originalText, artifact, original, updated) => {
// Find boundaries between ARTIFACT_START and ARTIFACT_END
const contentStart = artifactContent.indexOf('\n', artifactContent.indexOf(ARTIFACT_START)) + 1;
const contentEnd = artifactContent.lastIndexOf(ARTIFACT_END);
let contentEnd = artifactContent.lastIndexOf(ARTIFACT_END);
// Special case: if contentEnd is 0, it means the only ::: found is at the start of :::artifact
// This indicates an incomplete artifact (no closing :::)
// We need to check that it's exactly at position 0 (the beginning of artifactContent)
if (contentEnd === 0 && artifactContent.indexOf(ARTIFACT_START) === 0) {
contentEnd = artifactContent.length;
}
if (contentStart === -1 || contentEnd === -1) {
return null;
@@ -72,12 +79,20 @@ const replaceArtifactContent = (originalText, artifact, original, updated) => {
// Determine where to look for the original content
let searchStart, searchEnd;
if (codeBlockStart !== -1 && codeBlockEnd !== -1) {
// If code blocks exist, search between them
if (codeBlockStart !== -1) {
// Code block starts
searchStart = codeBlockStart + 4; // after ```\n
searchEnd = codeBlockEnd;
if (codeBlockEnd !== -1 && codeBlockEnd > codeBlockStart) {
// Code block has proper ending
searchEnd = codeBlockEnd;
} else {
// No closing backticks found or they're before the opening (shouldn't happen)
// This might be an incomplete artifact - search to contentEnd
searchEnd = contentEnd;
}
} else {
// Otherwise search in the whole artifact content
// No code blocks at all
searchStart = contentStart;
searchEnd = contentEnd;
}

View File

@@ -89,9 +89,9 @@ describe('replaceArtifactContent', () => {
};
test('should replace content within artifact boundaries', () => {
const original = 'console.log(\'hello\')';
const original = "console.log('hello')";
const artifact = createTestArtifact(original);
const updated = 'console.log(\'updated\')';
const updated = "console.log('updated')";
const result = replaceArtifactContent(artifact.text, artifact, original, updated);
expect(result).toContain(updated);
@@ -317,4 +317,182 @@ console.log(greeting);`;
expect(result).not.toContain('\n\n```');
expect(result).not.toContain('```\n\n');
});
describe('incomplete artifacts', () => {
test('should handle incomplete artifacts (missing closing ::: and ```)', () => {
const original = `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Pomodoro</title>
<meta name="description" content="A single-file Pomodoro timer with logs, charts, sounds, and dark mode." />
<style>
:root{`;
const prefix = `Awesome idea! I'll deliver a complete single-file HTML app called "Pomodoro" with:
- Custom session/break durations
You can save this as pomodoro.html and open it directly in your browser.
`;
// This simulates the real incomplete artifact case - no closing ``` or :::
const incompleteArtifact = `${ARTIFACT_START}{identifier="pomodoro-single-file-app" type="text/html" title="Pomodoro — Single File App"}
\`\`\`
${original}`;
const fullText = prefix + incompleteArtifact;
const message = { text: fullText };
const artifacts = findAllArtifacts(message);
expect(artifacts).toHaveLength(1);
expect(artifacts[0].end).toBe(fullText.length);
const updated = original.replace('Pomodoro</title>', 'Pomodoro</title>UPDATED');
const result = replaceArtifactContent(fullText, artifacts[0], original, updated);
expect(result).not.toBeNull();
expect(result).toContain('UPDATED');
expect(result).toContain(prefix);
// Should not have added closing markers
expect(result).not.toMatch(/:::\s*$/);
});
test('should handle incomplete artifacts with only opening code block', () => {
const original = 'function hello() { console.log("world"); }';
const incompleteArtifact = `${ARTIFACT_START}{id="test"}\n\`\`\`\n${original}`;
const message = { text: incompleteArtifact };
const artifacts = findAllArtifacts(message);
expect(artifacts).toHaveLength(1);
const updated = 'function hello() { console.log("UPDATED"); }';
const result = replaceArtifactContent(incompleteArtifact, artifacts[0], original, updated);
expect(result).not.toBeNull();
expect(result).toContain('UPDATED');
});
test('should handle incomplete artifacts without code blocks', () => {
const original = 'Some plain text content';
const incompleteArtifact = `${ARTIFACT_START}{id="test"}\n${original}`;
const message = { text: incompleteArtifact };
const artifacts = findAllArtifacts(message);
expect(artifacts).toHaveLength(1);
const updated = 'Some UPDATED text content';
const result = replaceArtifactContent(incompleteArtifact, artifacts[0], original, updated);
expect(result).not.toBeNull();
expect(result).toContain('UPDATED');
});
});
describe('regression tests for edge cases', () => {
test('should still handle complete artifacts correctly', () => {
// Ensure we didn't break normal artifact handling
const original = 'console.log("test");';
const artifact = createArtifactText({ content: original });
const message = { text: artifact };
const artifacts = findAllArtifacts(message);
expect(artifacts).toHaveLength(1);
const updated = 'console.log("updated");';
const result = replaceArtifactContent(artifact, artifacts[0], original, updated);
expect(result).not.toBeNull();
expect(result).toContain(updated);
expect(result).toContain(ARTIFACT_END);
expect(result).toMatch(/```\nconsole\.log\("updated"\);\n```/);
});
test('should handle multiple complete artifacts', () => {
// Ensure multiple artifacts still work
const content1 = 'First artifact';
const content2 = 'Second artifact';
const text = `${createArtifactText({ content: content1 })}\n\n${createArtifactText({ content: content2 })}`;
const message = { text };
const artifacts = findAllArtifacts(message);
expect(artifacts).toHaveLength(2);
// Update first artifact
const result1 = replaceArtifactContent(text, artifacts[0], content1, 'First UPDATED');
expect(result1).not.toBeNull();
expect(result1).toContain('First UPDATED');
expect(result1).toContain(content2);
// Update second artifact
const result2 = replaceArtifactContent(text, artifacts[1], content2, 'Second UPDATED');
expect(result2).not.toBeNull();
expect(result2).toContain(content1);
expect(result2).toContain('Second UPDATED');
});
test('should not mistake ::: at position 0 for artifact end in complete artifacts', () => {
// This tests the specific fix - ensuring contentEnd=0 doesn't break complete artifacts
const original = 'test content';
// Create an artifact that will have ::: at position 0 when substring'd
const artifact = `${ARTIFACT_START}\n\`\`\`\n${original}\n\`\`\`\n${ARTIFACT_END}`;
const message = { text: artifact };
const artifacts = findAllArtifacts(message);
expect(artifacts).toHaveLength(1);
const updated = 'updated content';
const result = replaceArtifactContent(artifact, artifacts[0], original, updated);
expect(result).not.toBeNull();
expect(result).toContain(updated);
expect(result).toContain(ARTIFACT_END);
});
test('should handle empty artifacts', () => {
// Edge case: empty artifact
const artifact = `${ARTIFACT_START}\n${ARTIFACT_END}`;
const message = { text: artifact };
const artifacts = findAllArtifacts(message);
expect(artifacts).toHaveLength(1);
// Trying to replace non-existent content should return null
const result = replaceArtifactContent(artifact, artifacts[0], 'something', 'updated');
expect(result).toBeNull();
});
test('should preserve whitespace and formatting in complete artifacts', () => {
const original = ` function test() {
return {
value: 42
};
}`;
const artifact = createArtifactText({ content: original });
const message = { text: artifact };
const artifacts = findAllArtifacts(message);
const updated = ` function test() {
return {
value: 100
};
}`;
const result = replaceArtifactContent(artifact, artifacts[0], original, updated);
expect(result).not.toBeNull();
expect(result).toContain('value: 100');
// Should preserve exact formatting
expect(result).toMatch(
/```\n {2}function test\(\) \{\n {4}return \{\n {6}value: 100\n {4}\};\n {2}\}\n```/,
);
});
});
});
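Note: the complete-artifact tests above rely on a `createArtifactText` helper that this hunk does not show. A minimal sketch of the shape the assertions imply (the attribute string and default identifier are assumptions), using the same `ARTIFACT_START`/`ARTIFACT_END` markers:

// Hypothetical helper: wraps content in the artifact markers and a fenced code
// block, matching the `:::`/``` structure the assertions check for.
function createArtifactText({ content, attributes = '{identifier="example"}' } = {}) {
  return `${ARTIFACT_START}${attributes}\n\`\`\`\n${content}\n\`\`\`\n${ARTIFACT_END}`;
}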

View File

@@ -281,7 +281,7 @@ function createInProgressHandler(openai, thread_id, messages) {
openai.seenCompletedMessages.add(message_id);
const message = await openai.beta.threads.messages.retrieve(thread_id, message_id);
const message = await openai.beta.threads.messages.retrieve(message_id, { thread_id });
if (!message?.content?.length) {
return;
}
@@ -435,9 +435,11 @@ async function runAssistant({
};
});
const outputs = await processRequiredActions(openai, actions);
const toolRun = await openai.beta.threads.runs.submitToolOutputs(run.thread_id, run.id, outputs);
const tool_outputs = await processRequiredActions(openai, actions);
const toolRun = await openai.beta.threads.runs.submitToolOutputs(run.id, {
thread_id: run.thread_id,
tool_outputs,
});
// Recursive call with accumulated steps and messages
return await runAssistant({
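Both hunks above follow the newer OpenAI Node SDK convention for beta thread resources: the message or run id becomes the positional argument and `thread_id` moves into the params object. A before/after sketch, assuming an initialized `openai` client:

// Old: openai.beta.threads.runs.submitToolOutputs(run.thread_id, run.id, { tool_outputs });
// New: run id first, thread id inside the params object.
const toolRun = await openai.beta.threads.runs.submitToolOutputs(run.id, {
  thread_id: run.thread_id,
  tool_outputs,
});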

View File

@@ -3,7 +3,6 @@ const { isEnabled, getUserMCPAuthMap } = require('@librechat/api');
const { CacheKeys, EModelEndpoint } = require('librechat-data-provider');
const { normalizeEndpointName } = require('~/server/utils');
const loadCustomConfig = require('./loadCustomConfig');
const { getCachedTools } = require('./getCachedTools');
const getLogStores = require('~/cache/getLogStores');
/**
@@ -12,8 +11,8 @@ const getLogStores = require('~/cache/getLogStores');
* @returns {Promise<TCustomConfig | null>}
* */
async function getCustomConfig() {
const cache = getLogStores(CacheKeys.CONFIG_STORE);
return (await cache.get(CacheKeys.CUSTOM_CONFIG)) || (await loadCustomConfig());
const cache = getLogStores(CacheKeys.STATIC_CONFIG);
return (await cache.get(CacheKeys.LIBRECHAT_YAML_CONFIG)) || (await loadCustomConfig());
}
/**
@@ -66,13 +65,9 @@ async function getMCPAuthMap({ userId, tools, findPluginAuthsByKeys }) {
if (!tools || tools.length === 0) {
return;
}
const appTools = await getCachedTools({
userId,
});
return await getUserMCPAuthMap({
tools,
userId,
appTools,
findPluginAuthsByKeys,
});
} catch (err) {

View File

@@ -25,7 +25,7 @@ let i = 0;
* @function loadCustomConfig
* @returns {Promise<TCustomConfig | null>} A promise that resolves to null or the custom config object.
* */
async function loadCustomConfig() {
async function loadCustomConfig(printConfig = true) {
// Use CONFIG_PATH if set, otherwise fallback to defaultConfigPath
const configPath = process.env.CONFIG_PATH || defaultConfigPath;
@@ -108,9 +108,11 @@ https://www.librechat.ai/docs/configuration/stt_tts`);
return null;
} else {
logger.info('Custom config file loaded:');
logger.info(JSON.stringify(customConfig, null, 2));
logger.debug('Custom config:', customConfig);
if (printConfig) {
logger.info('Custom config file loaded:');
logger.info(JSON.stringify(customConfig, null, 2));
logger.debug('Custom config:', customConfig);
}
}
(customConfig.endpoints?.custom ?? [])
@@ -118,8 +120,8 @@ https://www.librechat.ai/docs/configuration/stt_tts`);
.forEach((endpoint) => parseCustomParams(endpoint.name, endpoint.customParams));
if (customConfig.cache) {
const cache = getLogStores(CacheKeys.CONFIG_STORE);
await cache.set(CacheKeys.CUSTOM_CONFIG, customConfig);
const cache = getLogStores(CacheKeys.STATIC_CONFIG);
await cache.set(CacheKeys.LIBRECHAT_YAML_CONFIG, customConfig);
}
if (result.data.modelSpecs) {
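The new optional `printConfig` parameter lets callers re-read the YAML config without repeating the verbose startup logging; the MCP status helpers later in this diff use it that way. A usage sketch:

// Startup path: logs the loaded config as before (printConfig defaults to true).
const config = await loadCustomConfig();
// Re-reads elsewhere can stay quiet:
const quietConfig = await loadCustomConfig(false);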

View File

@@ -1,12 +1,11 @@
const { logger } = require('@librechat/data-schemas');
const { EModelEndpoint } = require('librechat-data-provider');
const { useAzurePlugins } = require('~/server/services/Config/EndpointService').config;
const {
getAnthropicModels,
getBedrockModels,
getOpenAIModels,
getGoogleModels,
getBedrockModels,
getAnthropicModels,
} = require('~/server/services/ModelService');
const { logger } = require('~/config');
/**
* Loads the default models for the application.
@@ -16,58 +15,42 @@ const { logger } = require('~/config');
*/
async function loadDefaultModels(req) {
try {
const [
openAI,
anthropic,
azureOpenAI,
gptPlugins,
assistants,
azureAssistants,
google,
bedrock,
] = await Promise.all([
getOpenAIModels({ user: req.user.id }).catch((error) => {
logger.error('Error fetching OpenAI models:', error);
return [];
}),
getAnthropicModels({ user: req.user.id }).catch((error) => {
logger.error('Error fetching Anthropic models:', error);
return [];
}),
getOpenAIModels({ user: req.user.id, azure: true }).catch((error) => {
logger.error('Error fetching Azure OpenAI models:', error);
return [];
}),
getOpenAIModels({ user: req.user.id, azure: useAzurePlugins, plugins: true }).catch(
(error) => {
logger.error('Error fetching Plugin models:', error);
const [openAI, anthropic, azureOpenAI, assistants, azureAssistants, google, bedrock] =
await Promise.all([
getOpenAIModels({ user: req.user.id }).catch((error) => {
logger.error('Error fetching OpenAI models:', error);
return [];
},
),
getOpenAIModels({ assistants: true }).catch((error) => {
logger.error('Error fetching OpenAI Assistants API models:', error);
return [];
}),
getOpenAIModels({ azureAssistants: true }).catch((error) => {
logger.error('Error fetching Azure OpenAI Assistants API models:', error);
return [];
}),
Promise.resolve(getGoogleModels()).catch((error) => {
logger.error('Error getting Google models:', error);
return [];
}),
Promise.resolve(getBedrockModels()).catch((error) => {
logger.error('Error getting Bedrock models:', error);
return [];
}),
]);
}),
getAnthropicModels({ user: req.user.id }).catch((error) => {
logger.error('Error fetching Anthropic models:', error);
return [];
}),
getOpenAIModels({ user: req.user.id, azure: true }).catch((error) => {
logger.error('Error fetching Azure OpenAI models:', error);
return [];
}),
getOpenAIModels({ assistants: true }).catch((error) => {
logger.error('Error fetching OpenAI Assistants API models:', error);
return [];
}),
getOpenAIModels({ azureAssistants: true }).catch((error) => {
logger.error('Error fetching Azure OpenAI Assistants API models:', error);
return [];
}),
Promise.resolve(getGoogleModels()).catch((error) => {
logger.error('Error getting Google models:', error);
return [];
}),
Promise.resolve(getBedrockModels()).catch((error) => {
logger.error('Error getting Bedrock models:', error);
return [];
}),
]);
return {
[EModelEndpoint.openAI]: openAI,
[EModelEndpoint.agents]: openAI,
[EModelEndpoint.google]: google,
[EModelEndpoint.anthropic]: anthropic,
[EModelEndpoint.gptPlugins]: gptPlugins,
[EModelEndpoint.azureOpenAI]: azureOpenAI,
[EModelEndpoint.assistants]: assistants,
[EModelEndpoint.azureAssistants]: azureAssistants,

View File

@@ -104,7 +104,7 @@ const initializeAgent = async ({
agent.endpoint = provider;
const { getOptions, overrideProvider } = await getProviderConfig(provider);
if (overrideProvider) {
if (overrideProvider !== agent.provider) {
agent.provider = overrideProvider;
}
@@ -131,7 +131,7 @@ const initializeAgent = async ({
);
const agentMaxContextTokens = optionalChainWithEmptyCheck(
maxContextTokens,
getModelMaxTokens(tokensModel, providerEndpointMap[provider]),
getModelMaxTokens(tokensModel, providerEndpointMap[provider], options.endpointTokenConfig),
4096,
);
@@ -191,7 +191,7 @@ const initializeAgent = async ({
resendFiles,
toolContextMap,
useLegacyContent: !!options.useLegacyContent,
maxContextTokens: (agentMaxContextTokens - maxTokens) * 0.9,
maxContextTokens: Math.round((agentMaxContextTokens - maxTokens) * 0.9),
};
};
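The `Math.round` wrapper matters because the 0.9 safety factor routinely yields a fractional token budget, which downstream consumers presumably treat as an integer. For example:

Math.round((8192 - 1024) * 0.9); // 6451.2 -> 6451, instead of passing the float through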

View File

@@ -1,6 +1,6 @@
const { logger } = require('@librechat/data-schemas');
const { isAgentsEndpoint, removeNullishValues, Constants } = require('librechat-data-provider');
const { loadAgent } = require('~/models/Agent');
const { logger } = require('~/config');
const buildOptions = (req, endpoint, parsedBody, endpointType) => {
const { spec, iconURL, agent_id, instructions, ...model_parameters } = parsedBody;

View File

@@ -1,4 +1,5 @@
const { logger } = require('@librechat/data-schemas');
const { validateAgentModel } = require('@librechat/api');
const { createContentAggregator } = require('@librechat/agents');
const {
Constants,
@@ -11,10 +12,12 @@ const {
getDefaultHandlers,
} = require('~/server/controllers/agents/callbacks');
const { initializeAgent } = require('~/server/services/Endpoints/agents/agent');
const { getModelsConfig } = require('~/server/controllers/ModelController');
const { getCustomEndpointConfig } = require('~/server/services/Config');
const { loadAgentTools } = require('~/server/services/ToolService');
const AgentClient = require('~/server/controllers/agents/client');
const { getAgent } = require('~/models/Agent');
const { logViolation } = require('~/cache');
function createToolLoader() {
/**
@@ -72,6 +75,19 @@ const initializeClient = async ({ req, res, endpointOption }) => {
throw new Error('Agent not found');
}
const modelsConfig = await getModelsConfig(req);
const validationResult = await validateAgentModel({
req,
res,
modelsConfig,
logViolation,
agent: primaryAgent,
});
if (!validationResult.isValid) {
throw new Error(validationResult.error?.message);
}
const agentConfigs = new Map();
/** @type {Set<string>} */
const allowedProviders = new Set(req?.app?.locals?.[EModelEndpoint.agents]?.allowedProviders);
@@ -101,6 +117,19 @@ const initializeClient = async ({ req, res, endpointOption }) => {
if (!agent) {
throw new Error(`Agent ${agentId} not found`);
}
const validationResult = await validateAgentModel({
req,
res,
agent,
modelsConfig,
logViolation,
});
if (!validationResult.isValid) {
throw new Error(validationResult.error?.message);
}
const config = await initializeAgent({
req,
res,

View File

@@ -1,12 +1,12 @@
const OpenAI = require('openai');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { ProxyAgent } = require('undici');
const { ErrorTypes, EModelEndpoint } = require('librechat-data-provider');
const {
getUserKeyValues,
getUserKeyExpiry,
checkUserKeyExpiry,
} = require('~/server/services/UserService');
const OpenAIClient = require('~/app/clients/OpenAIClient');
const OAIClient = require('~/app/clients/OpenAIClient');
const { isUserProvided } = require('~/server/utils');
const initializeClient = async ({ req, res, endpointOption, version, initAppClient = false }) => {
@@ -59,7 +59,10 @@ const initializeClient = async ({ req, res, endpointOption, version, initAppClie
}
if (PROXY) {
opts.httpAgent = new HttpsProxyAgent(PROXY);
const proxyAgent = new ProxyAgent(PROXY);
opts.fetchOptions = {
dispatcher: proxyAgent,
};
}
if (OPENAI_ORGANIZATION) {
@@ -76,7 +79,7 @@ const initializeClient = async ({ req, res, endpointOption, version, initAppClie
openai.res = res;
if (endpointOption && initAppClient) {
const client = new OpenAIClient(apiKey, clientOptions);
const client = new OAIClient(apiKey, clientOptions);
return {
client,
openai,

View File

@@ -1,5 +1,5 @@
// const OpenAI = require('openai');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { ProxyAgent } = require('undici');
const { ErrorTypes } = require('librechat-data-provider');
const { getUserKey, getUserKeyExpiry, getUserKeyValues } = require('~/server/services/UserService');
const initializeClient = require('./initalize');
@@ -107,6 +107,7 @@ describe('initializeClient', () => {
const res = {};
const { openai } = await initializeClient({ req, res });
expect(openai.httpAgent).toBeInstanceOf(HttpsProxyAgent);
expect(openai.fetchOptions).toBeDefined();
expect(openai.fetchOptions.dispatcher).toBeInstanceOf(ProxyAgent);
});
});
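These hunks (and the matching Azure assistants pair below) swap the deprecated `httpAgent` option for undici's dispatcher, which the OpenAI SDK consumes through `fetchOptions`. A minimal sketch, assuming `PROXY` is set in the environment as in the initializers:

const OpenAI = require('openai');
const { ProxyAgent } = require('undici');

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  // Routes the SDK's fetch-based requests through the proxy.
  fetchOptions: { dispatcher: new ProxyAgent(process.env.PROXY) },
});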

View File

@@ -1,13 +1,13 @@
const OpenAI = require('openai');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { ProxyAgent } = require('undici');
const { constructAzureURL, isUserProvided, resolveHeaders } = require('@librechat/api');
const { ErrorTypes, EModelEndpoint, mapModelToAzureConfig } = require('librechat-data-provider');
const {
checkUserKeyExpiry,
getUserKeyValues,
getUserKeyExpiry,
checkUserKeyExpiry,
} = require('~/server/services/UserService');
const OpenAIClient = require('~/app/clients/OpenAIClient');
const OAIClient = require('~/app/clients/OpenAIClient');
class Files {
constructor(client) {
@@ -158,7 +158,10 @@ const initializeClient = async ({ req, res, version, endpointOption, initAppClie
}
if (PROXY) {
opts.httpAgent = new HttpsProxyAgent(PROXY);
const proxyAgent = new ProxyAgent(PROXY);
opts.fetchOptions = {
dispatcher: proxyAgent,
};
}
if (OPENAI_ORGANIZATION) {
@@ -181,7 +184,7 @@ const initializeClient = async ({ req, res, version, endpointOption, initAppClie
}
if (endpointOption && initAppClient) {
const client = new OpenAIClient(apiKey, clientOptions);
const client = new OAIClient(apiKey, clientOptions);
return {
client,
openai,

View File

@@ -1,5 +1,5 @@
// const OpenAI = require('openai');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { ProxyAgent } = require('undici');
const { ErrorTypes } = require('librechat-data-provider');
const { getUserKey, getUserKeyExpiry, getUserKeyValues } = require('~/server/services/UserService');
const initializeClient = require('./initialize');
@@ -107,6 +107,7 @@ describe('initializeClient', () => {
const res = {};
const { openai } = await initializeClient({ req, res });
expect(openai.httpAgent).toBeInstanceOf(HttpsProxyAgent);
expect(openai.fetchOptions).toBeDefined();
expect(openai.fetchOptions.dispatcher).toBeInstanceOf(ProxyAgent);
});
});

View File

@@ -141,8 +141,9 @@ const initializeClient = async ({ req, res, endpointOption, optionsOnly, overrid
const options = getOpenAIConfig(apiKey, clientOptions, endpoint);
if (options != null) {
options.useLegacyContent = true;
options.endpointTokenConfig = endpointTokenConfig;
}
if (!customOptions.streamRate) {
if (!clientOptions.streamRate) {
return options;
}
options.llmConfig.callbacks = [

View File

@@ -34,13 +34,13 @@ const providerConfigMap = {
* @param {string} provider - The provider string
* @returns {Promise<{
* getOptions: Function,
* overrideProvider?: string,
* overrideProvider: string,
* customEndpointConfig?: TEndpoint
* }>}
*/
async function getProviderConfig(provider) {
let getOptions = providerConfigMap[provider];
let overrideProvider;
let overrideProvider = provider;
/** @type {TEndpoint | undefined} */
let customEndpointConfig;
@@ -56,7 +56,7 @@ async function getProviderConfig(provider) {
overrideProvider = Providers.OPENAI;
}
if (isKnownCustomProvider(overrideProvider || provider) && !customEndpointConfig) {
if (isKnownCustomProvider(overrideProvider) && !customEndpointConfig) {
customEndpointConfig = await getCustomEndpointConfig(provider);
if (!customEndpointConfig) {
throw new Error(`Provider ${provider} not supported`);
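Defaulting `overrideProvider` to the incoming provider makes the return shape total, so callers no longer need `overrideProvider || provider` guards. The agent initializer hunk earlier in this diff now reads:

const { getOptions, overrideProvider } = await getProviderConfig(provider);
// overrideProvider is always defined: the original provider unless remapped (e.g. to OPENAI).
if (overrideProvider !== agent.provider) {
  agent.provider = overrideProvider;
}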

View File

@@ -0,0 +1,166 @@
const { EModelEndpoint } = require('librechat-data-provider');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { validateAnthropicPdf } = require('../validation/pdfValidator');
/**
* Converts a readable stream to a buffer.
*
* @param {NodeJS.ReadableStream} stream - The readable stream to convert.
* @returns {Promise<Buffer>} - Promise resolving to the buffer.
*/
async function streamToBuffer(stream) {
return new Promise((resolve, reject) => {
const chunks = [];
stream.on('data', (chunk) => {
chunks.push(chunk);
});
stream.on('end', () => {
try {
const buffer = Buffer.concat(chunks);
chunks.length = 0; // Clear the array
resolve(buffer);
} catch (err) {
reject(err);
}
});
stream.on('error', (error) => {
chunks.length = 0;
reject(error);
});
}).finally(() => {
// Clean up the stream if required
if (stream.destroy && typeof stream.destroy === 'function') {
stream.destroy();
}
});
}
/**
* Processes and encodes document files for various endpoints
*
* @param {Express.Request} req - Express request object
* @param {MongoFile[]} files - Array of file objects to process
* @param {string} endpoint - The endpoint identifier (e.g., EModelEndpoint.anthropic)
* @returns {Promise<{documents: MessageContentDocument[], files: MongoFile[]}>}
*/
async function encodeAndFormatDocuments(req, files, endpoint) {
const promises = [];
/** @type {Record<FileSources, Pick<ReturnType<typeof getStrategyFunctions>, 'prepareDocumentPayload' | 'getDownloadStream'>>} */
const encodingMethods = {};
/** @type {{ documents: MessageContentDocument[]; files: MongoFile[] }} */
const result = {
documents: [],
files: [],
};
if (!files || !files.length) {
return result;
}
// Filter for document files only
const documentFiles = files.filter(
(file) => file.type === 'application/pdf' || file.type?.startsWith('application/'), // Future: support for other document types
);
if (!documentFiles.length) {
return result;
}
for (let file of documentFiles) {
/** @type {FileSources} */
const source = file.source ?? 'local';
// Only process PDFs for Anthropic for now
if (file.type !== 'application/pdf' || endpoint !== EModelEndpoint.anthropic) {
continue;
}
if (!encodingMethods[source]) {
encodingMethods[source] = getStrategyFunctions(source);
}
// Prepare file metadata
const fileMetadata = {
file_id: file.file_id || file._id,
temp_file_id: file.temp_file_id,
filepath: file.filepath,
source: file.source,
filename: file.filename,
type: file.type,
};
promises.push([file, fileMetadata]);
}
const results = await Promise.allSettled(
promises.map(async ([file, fileMetadata]) => {
if (!file || !fileMetadata) {
return { file: null, content: null, metadata: fileMetadata };
}
try {
const source = file.source ?? 'local';
const { getDownloadStream } = encodingMethods[source];
const stream = await getDownloadStream(req, file.filepath);
const buffer = await streamToBuffer(stream);
const documentContent = buffer.toString('base64');
return {
file,
content: documentContent,
metadata: fileMetadata,
};
} catch (error) {
console.error(`Error processing document ${file.filename}:`, error);
return { file, content: null, metadata: fileMetadata };
}
}),
);
for (const settledResult of results) {
if (settledResult.status === 'rejected') {
console.error('Document processing failed:', settledResult.reason);
continue;
}
const { file, content, metadata } = settledResult.value;
if (!content || !file) {
if (metadata) {
result.files.push(metadata);
}
continue;
}
if (file.type === 'application/pdf' && endpoint === EModelEndpoint.anthropic) {
const pdfBuffer = Buffer.from(content, 'base64');
const validation = await validateAnthropicPdf(pdfBuffer, pdfBuffer.length);
if (!validation.isValid) {
throw new Error(`PDF validation failed: ${validation.error}`);
}
const documentPart = {
type: 'document',
source: {
type: 'base64',
media_type: 'application/pdf',
data: content,
},
};
result.documents.push(documentPart);
result.files.push(metadata);
}
}
return result;
}
module.exports = {
encodeAndFormatDocuments,
};
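A usage sketch for the new encoder, assuming it runs inside an async request handler with MongoFile attachments; the require path is hypothetical, based on the sibling index module below:

const { EModelEndpoint } = require('librechat-data-provider');
const { encodeAndFormatDocuments } = require('~/server/services/Files/Documents'); // hypothetical path

const { documents, files } = await encodeAndFormatDocuments(req, attachments, EModelEndpoint.anthropic);
// `documents` holds Anthropic-ready base64 document blocks; `files` echoes metadata
// for each attachment considered, including ones that failed to encode.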

View File

@@ -0,0 +1,5 @@
const { encodeAndFormatDocuments } = require('./encode');
module.exports = {
encodeAndFormatDocuments,
};

View File

@@ -391,7 +391,17 @@ const processFileUpload = async ({ req, res, metadata }) => {
const isAssistantUpload = isAssistantsEndpoint(metadata.endpoint);
const assistantSource =
metadata.endpoint === EModelEndpoint.azureAssistants ? FileSources.azure : FileSources.openai;
const source = isAssistantUpload ? assistantSource : FileSources.vectordb;
// Use local storage for Anthropic native PDF support, vectordb for others
const isAnthropicUpload = metadata.endpoint === EModelEndpoint.anthropic;
let source;
if (isAssistantUpload) {
source = assistantSource;
} else if (isAnthropicUpload) {
source = FileSources.local;
} else {
source = FileSources.vectordb;
}
const { handleFileUpload } = getStrategyFunctions(source);
const { file_id, temp_file_id } = metadata;

View File

@@ -0,0 +1,77 @@
const { logger } = require('~/config');
const { anthropicPdfSizeLimit } = require('librechat-data-provider');
/**
* Validates if a PDF meets Anthropic's requirements
* @param {Buffer} pdfBuffer - The PDF file as a buffer
* @param {number} fileSize - The file size in bytes
* @returns {Promise<{isValid: boolean, error?: string}>}
*/
async function validateAnthropicPdf(pdfBuffer, fileSize) {
try {
// Check file size (32MB limit)
if (fileSize > anthropicPdfSizeLimit) {
return {
isValid: false,
error: `PDF file size (${Math.round(fileSize / (1024 * 1024))}MB) exceeds Anthropic's 32MB limit`,
};
}
// Basic PDF header validation
if (!pdfBuffer || pdfBuffer.length < 5) {
return {
isValid: false,
error: 'Invalid PDF file: too small or corrupted',
};
}
// Check PDF magic bytes
const pdfHeader = pdfBuffer.subarray(0, 5).toString();
if (!pdfHeader.startsWith('%PDF-')) {
return {
isValid: false,
error: 'Invalid PDF file: missing PDF header',
};
}
// Check for password protection/encryption
const pdfContent = pdfBuffer.toString('binary');
if (
pdfContent.includes('/Encrypt ') ||
pdfContent.includes('/U (') ||
pdfContent.includes('/O (')
) {
return {
isValid: false,
error: 'PDF is password-protected or encrypted. Anthropic requires unencrypted PDFs.',
};
}
// Estimate page count (this is a rough estimation)
const pageMatches = pdfContent.match(/\/Type[\s]*\/Page[^s]/g);
const estimatedPages = pageMatches ? pageMatches.length : 1;
if (estimatedPages > 100) {
return {
isValid: false,
error: `PDF has approximately ${estimatedPages} pages, exceeding Anthropic's 100-page limit`,
};
}
logger.debug(
`PDF validation passed: ${Math.round(fileSize / 1024)}KB, ~${estimatedPages} pages`,
);
return { isValid: true };
} catch (error) {
logger.error('PDF validation error:', error);
return {
isValid: false,
error: 'Failed to validate PDF file',
};
}
}
module.exports = {
validateAnthropicPdf,
};
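A standalone usage sketch, assuming the PDF has already been read into a buffer inside an async context (the path is hypothetical):

const fs = require('fs');

const pdfBuffer = fs.readFileSync('/tmp/report.pdf');
const { isValid, error } = await validateAnthropicPdf(pdfBuffer, pdfBuffer.length);
if (!isValid) {
  throw new Error(`PDF rejected: ${error}`);
}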

View File

@@ -12,7 +12,7 @@ const {
} = require('@librechat/api');
const { findToken, createToken, updateToken } = require('~/models');
const { getMCPManager, getFlowStateManager } = require('~/config');
const { getCachedTools } = require('./Config');
const { getCachedTools, loadCustomConfig } = require('./Config');
const { getLogStores } = require('~/cache');
/**
@@ -104,7 +104,7 @@ function createAbortHandler({ userId, serverName, toolName, flowManager }) {
* @returns { Promise<typeof tool | { _call: (toolInput: Object | string) => unknown}> } An object with `_call` method to execute the tool input.
*/
async function createMCPTool({ req, res, toolKey, provider: _provider }) {
const availableTools = await getCachedTools({ includeGlobal: true });
const availableTools = await getCachedTools({ userId: req.user?.id, includeGlobal: true });
const toolDefinition = availableTools?.[toolKey]?.function;
if (!toolDefinition) {
logger.error(`Tool ${toolKey} not found in available tools`);
@@ -235,9 +235,139 @@ async function createMCPTool({ req, res, toolKey, provider: _provider }) {
responseFormat: AgentConstants.CONTENT_AND_ARTIFACT,
});
toolInstance.mcp = true;
toolInstance.mcpRawServerName = serverName;
return toolInstance;
}
/**
* Get MCP setup data including config, connections, and OAuth servers
* @param {string} userId - The user ID
 * @returns {Promise<Object>} Object containing mcpConfig, appConnections, userConnections, and oauthServers
*/
async function getMCPSetupData(userId) {
const printConfig = false;
const config = await loadCustomConfig(printConfig);
const mcpConfig = config?.mcpServers;
if (!mcpConfig) {
throw new Error('MCP config not found');
}
const mcpManager = getMCPManager(userId);
const appConnections = mcpManager.getAllConnections() || new Map();
const userConnections = mcpManager.getUserConnections(userId) || new Map();
const oauthServers = mcpManager.getOAuthServers() || new Set();
return {
mcpConfig,
appConnections,
userConnections,
oauthServers,
};
}
/**
* Check OAuth flow status for a user and server
* @param {string} userId - The user ID
* @param {string} serverName - The server name
 * @returns {Promise<{hasActiveFlow: boolean, hasFailedFlow: boolean}>} Flags for active and failed OAuth flows
*/
async function checkOAuthFlowStatus(userId, serverName) {
const flowsCache = getLogStores(CacheKeys.FLOWS);
const flowManager = getFlowStateManager(flowsCache);
const flowId = MCPOAuthHandler.generateFlowId(userId, serverName);
try {
const flowState = await flowManager.getFlowState(flowId, 'mcp_oauth');
if (!flowState) {
return { hasActiveFlow: false, hasFailedFlow: false };
}
const flowAge = Date.now() - flowState.createdAt;
const flowTTL = flowState.ttl || 180000; // Default 3 minutes
if (flowState.status === 'FAILED' || flowAge > flowTTL) {
const wasCancelled = flowState.error && flowState.error.includes('cancelled');
if (wasCancelled) {
logger.debug(`[MCP Connection Status] Found cancelled OAuth flow for ${serverName}`, {
flowId,
status: flowState.status,
error: flowState.error,
});
return { hasActiveFlow: false, hasFailedFlow: false };
} else {
logger.debug(`[MCP Connection Status] Found failed OAuth flow for ${serverName}`, {
flowId,
status: flowState.status,
flowAge,
flowTTL,
timedOut: flowAge > flowTTL,
error: flowState.error,
});
return { hasActiveFlow: false, hasFailedFlow: true };
}
}
if (flowState.status === 'PENDING') {
logger.debug(`[MCP Connection Status] Found active OAuth flow for ${serverName}`, {
flowId,
flowAge,
flowTTL,
});
return { hasActiveFlow: true, hasFailedFlow: false };
}
return { hasActiveFlow: false, hasFailedFlow: false };
} catch (error) {
logger.error(`[MCP Connection Status] Error checking OAuth flows for ${serverName}:`, error);
return { hasActiveFlow: false, hasFailedFlow: false };
}
}
/**
* Get connection status for a specific MCP server
* @param {string} userId - The user ID
* @param {string} serverName - The server name
* @param {Map} appConnections - App-level connections
* @param {Map} userConnections - User-level connections
* @param {Set} oauthServers - Set of OAuth servers
 * @returns {Promise<{requiresOAuth: boolean, connectionState: string}>} OAuth requirement and resolved connection state
*/
async function getServerConnectionStatus(
userId,
serverName,
appConnections,
userConnections,
oauthServers,
) {
const getConnectionState = () =>
appConnections.get(serverName)?.connectionState ??
userConnections.get(serverName)?.connectionState ??
'disconnected';
const baseConnectionState = getConnectionState();
let finalConnectionState = baseConnectionState;
if (baseConnectionState === 'disconnected' && oauthServers.has(serverName)) {
const { hasActiveFlow, hasFailedFlow } = await checkOAuthFlowStatus(userId, serverName);
if (hasFailedFlow) {
finalConnectionState = 'error';
} else if (hasActiveFlow) {
finalConnectionState = 'connecting';
}
}
return {
requiresOAuth: oauthServers.has(serverName),
connectionState: finalConnectionState,
};
}
module.exports = {
createMCPTool,
getMCPSetupData,
checkOAuthFlowStatus,
getServerConnectionStatus,
};
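Composed, the three helpers yield a per-server status report; this is the flow the test suite below exercises:

const { mcpConfig, appConnections, userConnections, oauthServers } = await getMCPSetupData(userId);

const statuses = {};
for (const serverName of Object.keys(mcpConfig)) {
  // Resolves to { requiresOAuth, connectionState }, consulting OAuth flow
  // state only when the server is disconnected.
  statuses[serverName] = await getServerConnectionStatus(
    userId,
    serverName,
    appConnections,
    userConnections,
    oauthServers,
  );
}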

View File

@@ -0,0 +1,510 @@
const { logger } = require('@librechat/data-schemas');
const { MCPOAuthHandler } = require('@librechat/api');
const { CacheKeys } = require('librechat-data-provider');
const { getMCPSetupData, checkOAuthFlowStatus, getServerConnectionStatus } = require('./MCP');
// Mock all dependencies
jest.mock('@librechat/data-schemas', () => ({
logger: {
debug: jest.fn(),
error: jest.fn(),
},
}));
jest.mock('@librechat/api', () => ({
MCPOAuthHandler: {
generateFlowId: jest.fn(),
},
}));
jest.mock('librechat-data-provider', () => ({
CacheKeys: {
FLOWS: 'flows',
},
}));
jest.mock('./Config', () => ({
loadCustomConfig: jest.fn(),
}));
jest.mock('~/config', () => ({
getMCPManager: jest.fn(),
getFlowStateManager: jest.fn(),
}));
jest.mock('~/cache', () => ({
getLogStores: jest.fn(),
}));
jest.mock('~/models', () => ({
findToken: jest.fn(),
createToken: jest.fn(),
updateToken: jest.fn(),
}));
describe('tests for the new helper functions used by the MCP connection status endpoints', () => {
let mockLoadCustomConfig;
let mockGetMCPManager;
let mockGetFlowStateManager;
let mockGetLogStores;
beforeEach(() => {
jest.clearAllMocks();
mockLoadCustomConfig = require('./Config').loadCustomConfig;
mockGetMCPManager = require('~/config').getMCPManager;
mockGetFlowStateManager = require('~/config').getFlowStateManager;
mockGetLogStores = require('~/cache').getLogStores;
});
describe('getMCPSetupData', () => {
const mockUserId = 'user-123';
const mockConfig = {
mcpServers: {
server1: { type: 'stdio' },
server2: { type: 'http' },
},
};
beforeEach(() => {
mockGetMCPManager.mockReturnValue({
getAllConnections: jest.fn(() => new Map()),
getUserConnections: jest.fn(() => new Map()),
getOAuthServers: jest.fn(() => new Set()),
});
});
it('should successfully return MCP setup data', async () => {
mockLoadCustomConfig.mockResolvedValue(mockConfig);
const mockAppConnections = new Map([['server1', { status: 'connected' }]]);
const mockUserConnections = new Map([['server2', { status: 'disconnected' }]]);
const mockOAuthServers = new Set(['server2']);
const mockMCPManager = {
getAllConnections: jest.fn(() => mockAppConnections),
getUserConnections: jest.fn(() => mockUserConnections),
getOAuthServers: jest.fn(() => mockOAuthServers),
};
mockGetMCPManager.mockReturnValue(mockMCPManager);
const result = await getMCPSetupData(mockUserId);
expect(mockLoadCustomConfig).toHaveBeenCalledWith(false);
expect(mockGetMCPManager).toHaveBeenCalledWith(mockUserId);
expect(mockMCPManager.getAllConnections).toHaveBeenCalled();
expect(mockMCPManager.getUserConnections).toHaveBeenCalledWith(mockUserId);
expect(mockMCPManager.getOAuthServers).toHaveBeenCalled();
expect(result).toEqual({
mcpConfig: mockConfig.mcpServers,
appConnections: mockAppConnections,
userConnections: mockUserConnections,
oauthServers: mockOAuthServers,
});
});
it('should throw error when MCP config not found', async () => {
mockLoadCustomConfig.mockResolvedValue({});
await expect(getMCPSetupData(mockUserId)).rejects.toThrow('MCP config not found');
});
it('should handle null values from MCP manager gracefully', async () => {
mockLoadCustomConfig.mockResolvedValue(mockConfig);
const mockMCPManager = {
getAllConnections: jest.fn(() => null),
getUserConnections: jest.fn(() => null),
getOAuthServers: jest.fn(() => null),
};
mockGetMCPManager.mockReturnValue(mockMCPManager);
const result = await getMCPSetupData(mockUserId);
expect(result).toEqual({
mcpConfig: mockConfig.mcpServers,
appConnections: new Map(),
userConnections: new Map(),
oauthServers: new Set(),
});
});
});
describe('checkOAuthFlowStatus', () => {
const mockUserId = 'user-123';
const mockServerName = 'test-server';
const mockFlowId = 'flow-123';
beforeEach(() => {
const mockFlowsCache = {};
const mockFlowManager = {
getFlowState: jest.fn(),
};
mockGetLogStores.mockReturnValue(mockFlowsCache);
mockGetFlowStateManager.mockReturnValue(mockFlowManager);
MCPOAuthHandler.generateFlowId.mockReturnValue(mockFlowId);
});
it('should return false flags when no flow state exists', async () => {
const mockFlowManager = { getFlowState: jest.fn(() => null) };
mockGetFlowStateManager.mockReturnValue(mockFlowManager);
const result = await checkOAuthFlowStatus(mockUserId, mockServerName);
expect(mockGetLogStores).toHaveBeenCalledWith(CacheKeys.FLOWS);
expect(MCPOAuthHandler.generateFlowId).toHaveBeenCalledWith(mockUserId, mockServerName);
expect(mockFlowManager.getFlowState).toHaveBeenCalledWith(mockFlowId, 'mcp_oauth');
expect(result).toEqual({ hasActiveFlow: false, hasFailedFlow: false });
});
it('should detect failed flow when status is FAILED', async () => {
const mockFlowState = {
status: 'FAILED',
createdAt: Date.now() - 60000, // 1 minute ago
ttl: 180000,
};
const mockFlowManager = { getFlowState: jest.fn(() => mockFlowState) };
mockGetFlowStateManager.mockReturnValue(mockFlowManager);
const result = await checkOAuthFlowStatus(mockUserId, mockServerName);
expect(result).toEqual({ hasActiveFlow: false, hasFailedFlow: true });
expect(logger.debug).toHaveBeenCalledWith(
expect.stringContaining('Found failed OAuth flow'),
expect.objectContaining({
flowId: mockFlowId,
status: 'FAILED',
}),
);
});
it('should detect failed flow when flow has timed out', async () => {
const mockFlowState = {
status: 'PENDING',
createdAt: Date.now() - 200000, // 200 seconds ago (> 180s TTL)
ttl: 180000,
};
const mockFlowManager = { getFlowState: jest.fn(() => mockFlowState) };
mockGetFlowStateManager.mockReturnValue(mockFlowManager);
const result = await checkOAuthFlowStatus(mockUserId, mockServerName);
expect(result).toEqual({ hasActiveFlow: false, hasFailedFlow: true });
expect(logger.debug).toHaveBeenCalledWith(
expect.stringContaining('Found failed OAuth flow'),
expect.objectContaining({
timedOut: true,
}),
);
});
it('should detect failed flow when TTL not specified and flow exceeds default TTL', async () => {
const mockFlowState = {
status: 'PENDING',
createdAt: Date.now() - 200000, // 200 seconds ago (> 180s default TTL)
// ttl not specified, should use 180000 default
};
const mockFlowManager = { getFlowState: jest.fn(() => mockFlowState) };
mockGetFlowStateManager.mockReturnValue(mockFlowManager);
const result = await checkOAuthFlowStatus(mockUserId, mockServerName);
expect(result).toEqual({ hasActiveFlow: false, hasFailedFlow: true });
});
it('should detect active flow when status is PENDING and within TTL', async () => {
const mockFlowState = {
status: 'PENDING',
createdAt: Date.now() - 60000, // 1 minute ago (< 180s TTL)
ttl: 180000,
};
const mockFlowManager = { getFlowState: jest.fn(() => mockFlowState) };
mockGetFlowStateManager.mockReturnValue(mockFlowManager);
const result = await checkOAuthFlowStatus(mockUserId, mockServerName);
expect(result).toEqual({ hasActiveFlow: true, hasFailedFlow: false });
expect(logger.debug).toHaveBeenCalledWith(
expect.stringContaining('Found active OAuth flow'),
expect.objectContaining({
flowId: mockFlowId,
}),
);
});
it('should return false flags for other statuses', async () => {
const mockFlowState = {
status: 'COMPLETED',
createdAt: Date.now() - 60000,
ttl: 180000,
};
const mockFlowManager = { getFlowState: jest.fn(() => mockFlowState) };
mockGetFlowStateManager.mockReturnValue(mockFlowManager);
const result = await checkOAuthFlowStatus(mockUserId, mockServerName);
expect(result).toEqual({ hasActiveFlow: false, hasFailedFlow: false });
});
it('should handle errors gracefully', async () => {
const mockError = new Error('Flow state error');
const mockFlowManager = {
getFlowState: jest.fn(() => {
throw mockError;
}),
};
mockGetFlowStateManager.mockReturnValue(mockFlowManager);
const result = await checkOAuthFlowStatus(mockUserId, mockServerName);
expect(result).toEqual({ hasActiveFlow: false, hasFailedFlow: false });
expect(logger.error).toHaveBeenCalledWith(
expect.stringContaining('Error checking OAuth flows'),
mockError,
);
});
});
describe('getServerConnectionStatus', () => {
const mockUserId = 'user-123';
const mockServerName = 'test-server';
it('should return app connection state when available', async () => {
const appConnections = new Map([[mockServerName, { connectionState: 'connected' }]]);
const userConnections = new Map();
const oauthServers = new Set();
const result = await getServerConnectionStatus(
mockUserId,
mockServerName,
appConnections,
userConnections,
oauthServers,
);
expect(result).toEqual({
requiresOAuth: false,
connectionState: 'connected',
});
});
it('should fallback to user connection state when app connection not available', async () => {
const appConnections = new Map();
const userConnections = new Map([[mockServerName, { connectionState: 'connecting' }]]);
const oauthServers = new Set();
const result = await getServerConnectionStatus(
mockUserId,
mockServerName,
appConnections,
userConnections,
oauthServers,
);
expect(result).toEqual({
requiresOAuth: false,
connectionState: 'connecting',
});
});
it('should default to disconnected when no connections exist', async () => {
const appConnections = new Map();
const userConnections = new Map();
const oauthServers = new Set();
const result = await getServerConnectionStatus(
mockUserId,
mockServerName,
appConnections,
userConnections,
oauthServers,
);
expect(result).toEqual({
requiresOAuth: false,
connectionState: 'disconnected',
});
});
it('should prioritize app connection over user connection', async () => {
const appConnections = new Map([[mockServerName, { connectionState: 'connected' }]]);
const userConnections = new Map([[mockServerName, { connectionState: 'disconnected' }]]);
const oauthServers = new Set();
const result = await getServerConnectionStatus(
mockUserId,
mockServerName,
appConnections,
userConnections,
oauthServers,
);
expect(result).toEqual({
requiresOAuth: false,
connectionState: 'connected',
});
});
it('should indicate OAuth requirement when server is in OAuth servers set', async () => {
const appConnections = new Map();
const userConnections = new Map();
const oauthServers = new Set([mockServerName]);
const result = await getServerConnectionStatus(
mockUserId,
mockServerName,
appConnections,
userConnections,
oauthServers,
);
expect(result.requiresOAuth).toBe(true);
});
it('should handle OAuth flow status when disconnected and requires OAuth with failed flow', async () => {
const appConnections = new Map();
const userConnections = new Map();
const oauthServers = new Set([mockServerName]);
// Mock flow state to return failed flow
const mockFlowManager = {
getFlowState: jest.fn(() => ({
status: 'FAILED',
createdAt: Date.now() - 60000,
ttl: 180000,
})),
};
mockGetFlowStateManager.mockReturnValue(mockFlowManager);
mockGetLogStores.mockReturnValue({});
MCPOAuthHandler.generateFlowId.mockReturnValue('test-flow-id');
const result = await getServerConnectionStatus(
mockUserId,
mockServerName,
appConnections,
userConnections,
oauthServers,
);
expect(result).toEqual({
requiresOAuth: true,
connectionState: 'error',
});
});
it('should handle OAuth flow status when disconnected and requires OAuth with active flow', async () => {
const appConnections = new Map();
const userConnections = new Map();
const oauthServers = new Set([mockServerName]);
// Mock flow state to return active flow
const mockFlowManager = {
getFlowState: jest.fn(() => ({
status: 'PENDING',
createdAt: Date.now() - 60000, // 1 minute ago
ttl: 180000, // 3 minutes TTL
})),
};
mockGetFlowStateManager.mockReturnValue(mockFlowManager);
mockGetLogStores.mockReturnValue({});
MCPOAuthHandler.generateFlowId.mockReturnValue('test-flow-id');
const result = await getServerConnectionStatus(
mockUserId,
mockServerName,
appConnections,
userConnections,
oauthServers,
);
expect(result).toEqual({
requiresOAuth: true,
connectionState: 'connecting',
});
});
it('should handle OAuth flow status when disconnected and requires OAuth with no flow', async () => {
const appConnections = new Map();
const userConnections = new Map();
const oauthServers = new Set([mockServerName]);
// Mock flow state to return no flow
const mockFlowManager = {
getFlowState: jest.fn(() => null),
};
mockGetFlowStateManager.mockReturnValue(mockFlowManager);
mockGetLogStores.mockReturnValue({});
MCPOAuthHandler.generateFlowId.mockReturnValue('test-flow-id');
const result = await getServerConnectionStatus(
mockUserId,
mockServerName,
appConnections,
userConnections,
oauthServers,
);
expect(result).toEqual({
requiresOAuth: true,
connectionState: 'disconnected',
});
});
it('should not check OAuth flow status when server is connected', async () => {
const mockFlowManager = {
getFlowState: jest.fn(),
};
mockGetFlowStateManager.mockReturnValue(mockFlowManager);
mockGetLogStores.mockReturnValue({});
const appConnections = new Map([[mockServerName, { connectionState: 'connected' }]]);
const userConnections = new Map();
const oauthServers = new Set([mockServerName]);
const result = await getServerConnectionStatus(
mockUserId,
mockServerName,
appConnections,
userConnections,
oauthServers,
);
expect(result).toEqual({
requiresOAuth: true,
connectionState: 'connected',
});
// Should not call flow manager since server is connected
expect(mockFlowManager.getFlowState).not.toHaveBeenCalled();
});
it('should not check OAuth flow status when server does not require OAuth', async () => {
const mockFlowManager = {
getFlowState: jest.fn(),
};
mockGetFlowStateManager.mockReturnValue(mockFlowManager);
mockGetLogStores.mockReturnValue({});
const appConnections = new Map();
const userConnections = new Map();
const oauthServers = new Set(); // Server not in OAuth servers
const result = await getServerConnectionStatus(
mockUserId,
mockServerName,
appConnections,
userConnections,
oauthServers,
);
expect(result).toEqual({
requiresOAuth: false,
connectionState: 'disconnected',
});
// Should not call flow manager since server doesn't require OAuth
expect(mockFlowManager.getFlowState).not.toHaveBeenCalled();
});
});
});

View File

@@ -8,6 +8,7 @@ const { findOnePluginAuth, updatePluginAuth, deletePluginAuth } = require('~/mod
* @param {string} userId - The unique identifier of the user for whom the plugin authentication value is to be retrieved.
* @param {string} authField - The specific authentication field (e.g., 'API_KEY', 'URL') whose value is to be retrieved and decrypted.
* @param {boolean} throwError - Whether to throw an error if the authentication value does not exist. Defaults to `true`.
* @param {string} [pluginKey] - Optional plugin key to make the lookup more specific to a particular plugin.
* @returns {Promise<string|null>} A promise that resolves to the decrypted authentication value if found, or `null` if no such authentication value exists for the given user and field.
*
* The function throws an error if it encounters any issue during the retrieval or decryption process, or if the authentication value does not exist.
@@ -20,14 +21,28 @@ const { findOnePluginAuth, updatePluginAuth, deletePluginAuth } = require('~/mod
* console.error(err);
* });
*
* @example
* // To get the decrypted value of the 'API_KEY' field for a specific plugin:
* getUserPluginAuthValue('12345', 'API_KEY', true, 'mcp-server-name').then(value => {
* console.log(value);
* }).catch(err => {
* console.error(err);
* });
*
* @throws {Error} Throws an error if there's an issue during the retrieval or decryption process, or if the authentication value does not exist.
* @async
*/
const getUserPluginAuthValue = async (userId, authField, throwError = true) => {
const getUserPluginAuthValue = async (userId, authField, throwError = true, pluginKey) => {
try {
const pluginAuth = await findOnePluginAuth({ userId, authField });
const searchParams = { userId, authField };
if (pluginKey) {
searchParams.pluginKey = pluginKey;
}
const pluginAuth = await findOnePluginAuth(searchParams);
if (!pluginAuth) {
throw new Error(`No plugin auth ${authField} found for user ${userId}`);
const pluginInfo = pluginKey ? ` for plugin ${pluginKey}` : '';
throw new Error(`No plugin auth ${authField} found for user ${userId}${pluginInfo}`);
}
const decryptedValue = await decrypt(pluginAuth.value);

View File

@@ -91,11 +91,10 @@ class RunManager {
* @param {boolean} [params.final] - The end of the run polling loop, due to `requires_action`, `cancelling`, `cancelled`, `failed`, `completed`, or `expired` statuses.
*/
async fetchRunSteps({ openai, thread_id, run_id, runStatus, final = false }) {
// const { data: steps, first_id, last_id, has_more } = await openai.beta.threads.runs.steps.list(thread_id, run_id);
// const { data: steps, first_id, last_id, has_more } = await openai.beta.threads.runs.steps.list(run_id, { thread_id });
const { data: _steps } = await openai.beta.threads.runs.steps.list(
thread_id,
run_id,
{},
{ thread_id },
{
timeout: 3000,
maxRetries: 5,

View File

@@ -573,9 +573,9 @@ class StreamRunManager {
let toolRun;
try {
toolRun = this.openai.beta.threads.runs.submitToolOutputsStream(
run.thread_id,
run.id,
{
thread_id: run.thread_id,
tool_outputs,
stream: true,
},

View File

@@ -33,7 +33,7 @@ async function withTimeout(promise, timeoutMs, timeoutMessage) {
* @param {string} [params.body.model] - Optional. The ID of the model to be used for this run.
* @param {string} [params.body.instructions] - Optional. Override the default system message of the assistant.
* @param {string} [params.body.additional_instructions] - Optional. Appends additional instructions
* at theend of the instructions for the run. This is useful for modifying
* at the end of the instructions for the run. This is useful for modifying
* the behavior on a per-run basis without overriding other instructions.
* @param {Object[]} [params.body.tools] - Optional. Override the tools the assistant can use for this run.
* @param {string[]} [params.body.file_ids] - Optional.
@@ -179,7 +179,7 @@ async function waitForRun({
* @return {Promise<RunStep[]>} A promise that resolves to an array of RunStep objects.
*/
async function _retrieveRunSteps({ openai, thread_id, run_id }) {
const runSteps = await openai.beta.threads.runs.steps.list(thread_id, run_id);
const runSteps = await openai.beta.threads.runs.steps.list(run_id, { thread_id });
return runSteps;
}

View File

@@ -192,7 +192,8 @@ async function addThreadMetadata({ openai, thread_id, messageId, messages }) {
const promises = [];
for (const message of messages) {
promises.push(
openai.beta.threads.messages.update(thread_id, message.id, {
openai.beta.threads.messages.update(message.id, {
thread_id,
metadata: {
messageId,
},
@@ -263,7 +264,8 @@ async function syncMessages({
}
modifyPromises.push(
openai.beta.threads.messages.update(thread_id, apiMessage.id, {
openai.beta.threads.messages.update(apiMessage.id, {
thread_id,
metadata: {
messageId: dbMessage.messageId,
},
@@ -413,7 +415,7 @@ async function checkMessageGaps({
}) {
const promises = [];
promises.push(openai.beta.threads.messages.list(thread_id, defaultOrderQuery));
promises.push(openai.beta.threads.runs.steps.list(thread_id, run_id));
promises.push(openai.beta.threads.runs.steps.list(run_id, { thread_id }));
/** @type {[{ data: ThreadMessage[] }, { data: RunStep[] }]} */
const [response, stepsResponse] = await Promise.all(promises);

View File

@@ -1,6 +1,7 @@
const fs = require('fs');
const path = require('path');
const { sleep } = require('@librechat/agents');
const { getToolkitKey } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { zodToJsonSchema } = require('zod-to-json-schema');
const { Calculator } = require('@langchain/community/tools/calculator');
@@ -11,7 +12,6 @@ const {
ErrorTypes,
ContentTypes,
imageGenTools,
EToolResources,
EModelEndpoint,
actionDelimiter,
ImageVisionTool,
@@ -40,30 +40,6 @@ const { recordUsage } = require('~/server/services/Threads');
const { loadTools } = require('~/app/clients/tools/util');
const { redactMessage } = require('~/config/parsers');
/**
* @param {string} toolName
* @returns {string | undefined} toolKey
*/
function getToolkitKey(toolName) {
/** @type {string|undefined} */
let toolkitKey;
for (const toolkit of toolkits) {
if (toolName.startsWith(EToolResources.image_edit)) {
const splitMatches = toolkit.pluginKey.split('_');
const suffix = splitMatches[splitMatches.length - 1];
if (toolName.endsWith(suffix)) {
toolkitKey = toolkit.pluginKey;
break;
}
}
if (toolName.startsWith(toolkit.pluginKey)) {
toolkitKey = toolkit.pluginKey;
break;
}
}
return toolkitKey;
}
/**
* Loads and formats tools from the specified tool directory.
*
@@ -145,7 +121,7 @@ function loadAndFormatTools({ directory, adminFilter = [], adminIncluded = [] })
for (const toolInstance of basicToolInstances) {
const formattedTool = formatToOpenAIAssistantTool(toolInstance);
let toolName = formattedTool[Tools.function].name;
toolName = getToolkitKey(toolName) ?? toolName;
toolName = getToolkitKey({ toolkits, toolName }) ?? toolName;
if (filter.has(toolName) && included.size === 0) {
continue;
}
@@ -226,7 +202,7 @@ async function processRequiredActions(client, requiredActions) {
`[required actions] user: ${client.req.user.id} | thread_id: ${requiredActions[0].thread_id} | run_id: ${requiredActions[0].run_id}`,
requiredActions,
);
const toolDefinitions = await getCachedTools({ includeGlobal: true });
const toolDefinitions = await getCachedTools({ userId: client.req.user.id, includeGlobal: true });
const seenToolkits = new Set();
const tools = requiredActions
.map((action) => {

View File

@@ -9,20 +9,35 @@ const { getLogStores } = require('~/cache');
* Initialize MCP servers
* @param {import('express').Application} app - Express app instance
*/
async function initializeMCP(app) {
async function initializeMCPs(app) {
const mcpServers = app.locals.mcpConfig;
if (!mcpServers) {
return;
}
// Filter out servers with startup: false
const filteredServers = {};
for (const [name, config] of Object.entries(mcpServers)) {
if (config.startup === false) {
logger.info(`Skipping MCP server '${name}' due to startup: false`);
continue;
}
filteredServers[name] = config;
}
if (Object.keys(filteredServers).length === 0) {
logger.info('[MCP] No MCP servers to initialize (all skipped or none configured)');
return;
}
logger.info('Initializing MCP servers...');
const mcpManager = getMCPManager();
const flowsCache = getLogStores(CacheKeys.FLOWS);
const flowManager = flowsCache ? getFlowStateManager(flowsCache) : null;
try {
await mcpManager.initializeMCP({
mcpServers,
await mcpManager.initializeMCPs({
mcpServers: filteredServers,
flowManager,
tokenMethods: {
findToken,
@@ -47,10 +62,11 @@ async function initializeMCP(app) {
const cache = getLogStores(CacheKeys.CONFIG_STORE);
await cache.delete(CacheKeys.TOOLS);
logger.debug('Cleared tools array cache after MCP initialization');
logger.info('MCP servers initialized successfully');
} catch (error) {
logger.error('Failed to initialize MCP servers:', error);
}
}
module.exports = initializeMCP;
module.exports = initializeMCPs;
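With the `startup: false` filter, a server can remain configured without being spun up at boot, presumably left for on-demand connection. A sketch of the filtering on a hypothetical config:

const mcpServers = {
  everything: { type: 'stdio', startup: false }, // skipped at boot, logged as such
  search: { type: 'http' }, // initialized at boot
};
// After filtering, only `search` is passed to mcpManager.initializeMCPs().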

View File

@@ -52,6 +52,11 @@ function assistantsConfigSetup(config, assistantsEndpoint, prevConfig = {}) {
privateAssistants: parsedConfig.privateAssistants,
timeoutMs: parsedConfig.timeoutMs,
streamRate: parsedConfig.streamRate,
titlePrompt: parsedConfig.titlePrompt,
titleMethod: parsedConfig.titleMethod,
titleModel: parsedConfig.titleModel,
titleEndpoint: parsedConfig.titleEndpoint,
titlePromptTemplate: parsedConfig.titlePromptTemplate,
};
}
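The assistants endpoint config now forwards the shared title-generation options. A hypothetical excerpt of the parsed endpoint config these lines pass through (the field values are assumptions):

const assistantsEndpoint = {
  titleMethod: 'completion',
  titleModel: 'gpt-4o-mini',
  titlePrompt: 'Summarize this conversation in five words or fewer.',
};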

View File

@@ -2,11 +2,11 @@ const {
SystemRoles,
Permissions,
PermissionTypes,
isMemoryEnabled,
removeNullishValues,
} = require('librechat-data-provider');
const { logger } = require('@librechat/data-schemas');
const { isMemoryEnabled } = require('@librechat/api');
const { updateAccessPermissions } = require('~/models/Role');
const { logger } = require('~/config');
/**
* Loads the default interface object.
@@ -50,6 +50,7 @@ async function loadDefaultInterface(config, configDefaults, roleName = SystemRol
temporaryChat: interfaceConfig?.temporaryChat ?? defaults.temporaryChat,
runCode: interfaceConfig?.runCode ?? defaults.runCode,
webSearch: interfaceConfig?.webSearch ?? defaults.webSearch,
fileSearch: interfaceConfig?.fileSearch ?? defaults.fileSearch,
customWelcome: interfaceConfig?.customWelcome ?? defaults.customWelcome,
});
@@ -65,6 +66,7 @@ async function loadDefaultInterface(config, configDefaults, roleName = SystemRol
[PermissionTypes.TEMPORARY_CHAT]: { [Permissions.USE]: loadedInterface.temporaryChat },
[PermissionTypes.RUN_CODE]: { [Permissions.USE]: loadedInterface.runCode },
[PermissionTypes.WEB_SEARCH]: { [Permissions.USE]: loadedInterface.webSearch },
[PermissionTypes.FILE_SEARCH]: { [Permissions.USE]: loadedInterface.fileSearch },
});
await updateAccessPermissions(SystemRoles.ADMIN, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: loadedInterface.prompts },
@@ -78,6 +80,7 @@ async function loadDefaultInterface(config, configDefaults, roleName = SystemRol
[PermissionTypes.TEMPORARY_CHAT]: { [Permissions.USE]: loadedInterface.temporaryChat },
[PermissionTypes.RUN_CODE]: { [Permissions.USE]: loadedInterface.runCode },
[PermissionTypes.WEB_SEARCH]: { [Permissions.USE]: loadedInterface.webSearch },
[PermissionTypes.FILE_SEARCH]: { [Permissions.USE]: loadedInterface.fileSearch },
});
let i = 0;
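The new `fileSearch` flag follows the same pattern as `webSearch`: a single interface toggle mapped onto a USE permission for both roles. A hypothetical parsed-config excerpt:

// loadDefaultInterface maps this onto PermissionTypes.FILE_SEARCH with
// Permissions.USE = false for both the USER and ADMIN role updates above.
const config = { interface: { fileSearch: false } };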

View File

@@ -18,6 +18,7 @@ describe('loadDefaultInterface', () => {
temporaryChat: true,
runCode: true,
webSearch: true,
fileSearch: true,
},
};
const configDefaults = { interface: {} };
@@ -27,12 +28,13 @@ describe('loadDefaultInterface', () => {
expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: true },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: true },
[PermissionTypes.MEMORIES]: { [Permissions.USE]: true },
[PermissionTypes.MEMORIES]: { [Permissions.USE]: true, [Permissions.OPT_OUT]: undefined },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: true },
[PermissionTypes.AGENTS]: { [Permissions.USE]: true },
[PermissionTypes.TEMPORARY_CHAT]: { [Permissions.USE]: true },
[PermissionTypes.RUN_CODE]: { [Permissions.USE]: true },
[PermissionTypes.WEB_SEARCH]: { [Permissions.USE]: true },
[PermissionTypes.FILE_SEARCH]: { [Permissions.USE]: true },
});
});
@@ -47,6 +49,7 @@ describe('loadDefaultInterface', () => {
temporaryChat: false,
runCode: false,
webSearch: false,
fileSearch: false,
},
};
const configDefaults = { interface: {} };
@@ -56,12 +59,13 @@ describe('loadDefaultInterface', () => {
expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: false },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: false },
[PermissionTypes.MEMORIES]: { [Permissions.USE]: false },
[PermissionTypes.MEMORIES]: { [Permissions.USE]: false, [Permissions.OPT_OUT]: undefined },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: false },
[PermissionTypes.AGENTS]: { [Permissions.USE]: false },
[PermissionTypes.TEMPORARY_CHAT]: { [Permissions.USE]: false },
[PermissionTypes.RUN_CODE]: { [Permissions.USE]: false },
[PermissionTypes.WEB_SEARCH]: { [Permissions.USE]: false },
[PermissionTypes.FILE_SEARCH]: { [Permissions.USE]: false },
});
});
@@ -74,12 +78,16 @@ describe('loadDefaultInterface', () => {
expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: undefined },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: undefined },
[PermissionTypes.MEMORIES]: { [Permissions.USE]: undefined },
[PermissionTypes.MEMORIES]: {
[Permissions.USE]: undefined,
[Permissions.OPT_OUT]: undefined,
},
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
[PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
[PermissionTypes.TEMPORARY_CHAT]: { [Permissions.USE]: undefined },
[PermissionTypes.RUN_CODE]: { [Permissions.USE]: undefined },
[PermissionTypes.WEB_SEARCH]: { [Permissions.USE]: undefined },
[PermissionTypes.FILE_SEARCH]: { [Permissions.USE]: undefined },
});
});
@@ -94,6 +102,7 @@ describe('loadDefaultInterface', () => {
temporaryChat: undefined,
runCode: undefined,
webSearch: undefined,
fileSearch: undefined,
},
};
const configDefaults = { interface: {} };
@@ -103,12 +112,16 @@ describe('loadDefaultInterface', () => {
expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: undefined },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: undefined },
- [PermissionTypes.MEMORIES]: { [Permissions.USE]: undefined },
+ [PermissionTypes.MEMORIES]: {
+   [Permissions.USE]: undefined,
+   [Permissions.OPT_OUT]: undefined,
+ },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
[PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
[PermissionTypes.TEMPORARY_CHAT]: { [Permissions.USE]: undefined },
[PermissionTypes.RUN_CODE]: { [Permissions.USE]: undefined },
[PermissionTypes.WEB_SEARCH]: { [Permissions.USE]: undefined },
[PermissionTypes.FILE_SEARCH]: { [Permissions.USE]: undefined },
});
});
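Worth noting: in every expectation, MEMORIES now carries [Permissions.OPT_OUT]: undefined. The interface block only exposes a boolean memories flag, so these tests pin down that this code path derives USE alone and never sets an opt-out. A hedged illustration — where OPT_OUT is actually sourced is an assumption:

```js
// Hypothetical: only USE is derived from the boolean interface flag;
// OPT_OUT is presumably resolved elsewhere (e.g. a memory/personalization
// section of the config), so from this path it stays undefined.
const memoriesPermissions = {
  [Permissions.USE]: loadedInterface.memories,
  [Permissions.OPT_OUT]: undefined,
};
```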
@@ -123,6 +136,7 @@ describe('loadDefaultInterface', () => {
temporaryChat: undefined,
runCode: false,
webSearch: true,
fileSearch: false,
},
};
const configDefaults = { interface: {} };
@@ -132,12 +146,13 @@ describe('loadDefaultInterface', () => {
expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: true },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: false },
- [PermissionTypes.MEMORIES]: { [Permissions.USE]: true },
+ [PermissionTypes.MEMORIES]: { [Permissions.USE]: true, [Permissions.OPT_OUT]: undefined },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
[PermissionTypes.AGENTS]: { [Permissions.USE]: true },
[PermissionTypes.TEMPORARY_CHAT]: { [Permissions.USE]: undefined },
[PermissionTypes.RUN_CODE]: { [Permissions.USE]: false },
[PermissionTypes.WEB_SEARCH]: { [Permissions.USE]: true },
[PermissionTypes.FILE_SEARCH]: { [Permissions.USE]: false },
});
});
@@ -153,6 +168,7 @@ describe('loadDefaultInterface', () => {
temporaryChat: true,
runCode: true,
webSearch: true,
fileSearch: true,
},
};
@@ -161,12 +177,13 @@ describe('loadDefaultInterface', () => {
expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: true },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: true },
- [PermissionTypes.MEMORIES]: { [Permissions.USE]: true },
+ [PermissionTypes.MEMORIES]: { [Permissions.USE]: true, [Permissions.OPT_OUT]: undefined },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: true },
[PermissionTypes.AGENTS]: { [Permissions.USE]: true },
[PermissionTypes.TEMPORARY_CHAT]: { [Permissions.USE]: true },
[PermissionTypes.RUN_CODE]: { [Permissions.USE]: true },
[PermissionTypes.WEB_SEARCH]: { [Permissions.USE]: true },
[PermissionTypes.FILE_SEARCH]: { [Permissions.USE]: true },
});
});
@@ -179,12 +196,16 @@ describe('loadDefaultInterface', () => {
expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: undefined },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: undefined },
- [PermissionTypes.MEMORIES]: { [Permissions.USE]: undefined },
+ [PermissionTypes.MEMORIES]: {
+   [Permissions.USE]: undefined,
+   [Permissions.OPT_OUT]: undefined,
+ },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: true },
[PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
[PermissionTypes.TEMPORARY_CHAT]: { [Permissions.USE]: undefined },
[PermissionTypes.RUN_CODE]: { [Permissions.USE]: undefined },
[PermissionTypes.WEB_SEARCH]: { [Permissions.USE]: undefined },
[PermissionTypes.FILE_SEARCH]: { [Permissions.USE]: undefined },
});
});
@@ -197,12 +218,16 @@ describe('loadDefaultInterface', () => {
expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: undefined },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: undefined },
- [PermissionTypes.MEMORIES]: { [Permissions.USE]: undefined },
+ [PermissionTypes.MEMORIES]: {
+   [Permissions.USE]: undefined,
+   [Permissions.OPT_OUT]: undefined,
+ },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: false },
[PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
[PermissionTypes.TEMPORARY_CHAT]: { [Permissions.USE]: undefined },
[PermissionTypes.RUN_CODE]: { [Permissions.USE]: undefined },
[PermissionTypes.WEB_SEARCH]: { [Permissions.USE]: undefined },
[PermissionTypes.FILE_SEARCH]: { [Permissions.USE]: undefined },
});
});
@@ -215,12 +240,16 @@ describe('loadDefaultInterface', () => {
expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: undefined },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: undefined },
- [PermissionTypes.MEMORIES]: { [Permissions.USE]: undefined },
+ [PermissionTypes.MEMORIES]: {
+   [Permissions.USE]: undefined,
+   [Permissions.OPT_OUT]: undefined,
+ },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
[PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
[PermissionTypes.TEMPORARY_CHAT]: { [Permissions.USE]: undefined },
[PermissionTypes.RUN_CODE]: { [Permissions.USE]: undefined },
[PermissionTypes.WEB_SEARCH]: { [Permissions.USE]: undefined },
[PermissionTypes.FILE_SEARCH]: { [Permissions.USE]: undefined },
});
});
@@ -234,6 +263,7 @@ describe('loadDefaultInterface', () => {
agents: false,
temporaryChat: true,
runCode: false,
fileSearch: true,
},
};
const configDefaults = { interface: {} };
@@ -243,12 +273,13 @@ describe('loadDefaultInterface', () => {
expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: true },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: false },
- [PermissionTypes.MEMORIES]: { [Permissions.USE]: true },
+ [PermissionTypes.MEMORIES]: { [Permissions.USE]: true, [Permissions.OPT_OUT]: undefined },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: true },
[PermissionTypes.AGENTS]: { [Permissions.USE]: false },
[PermissionTypes.TEMPORARY_CHAT]: { [Permissions.USE]: true },
[PermissionTypes.RUN_CODE]: { [Permissions.USE]: false },
[PermissionTypes.WEB_SEARCH]: { [Permissions.USE]: undefined },
[PermissionTypes.FILE_SEARCH]: { [Permissions.USE]: true },
});
});
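This fixture omits webSearch entirely, and the expectation is undefined rather than a fallback: with an empty configDefaults.interface, absent flags pass straight through. A one-liner showing the assumed merge behavior the spec relies on:

```js
// Hypothetical merge: with configDefaults.interface = {}, an omitted flag
// ends up undefined instead of defaulting to true or false.
const loadedInterface = { ...configDefaults.interface, ...config.interface };
console.log(loadedInterface.webSearch); // undefined
```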
@@ -264,6 +295,7 @@ describe('loadDefaultInterface', () => {
temporaryChat: undefined,
runCode: undefined,
webSearch: undefined,
fileSearch: true,
},
};
@@ -272,12 +304,13 @@ describe('loadDefaultInterface', () => {
expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: true },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: true },
- [PermissionTypes.MEMORIES]: { [Permissions.USE]: false },
+ [PermissionTypes.MEMORIES]: { [Permissions.USE]: false, [Permissions.OPT_OUT]: undefined },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: false },
[PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
[PermissionTypes.TEMPORARY_CHAT]: { [Permissions.USE]: undefined },
[PermissionTypes.RUN_CODE]: { [Permissions.USE]: undefined },
[PermissionTypes.WEB_SEARCH]: { [Permissions.USE]: undefined },
[PermissionTypes.FILE_SEARCH]: { [Permissions.USE]: true },
});
});
@@ -300,12 +333,90 @@ describe('loadDefaultInterface', () => {
expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: true },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: false },
- [PermissionTypes.MEMORIES]: { [Permissions.USE]: true },
+ [PermissionTypes.MEMORIES]: { [Permissions.USE]: true, [Permissions.OPT_OUT]: undefined },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: true },
[PermissionTypes.AGENTS]: { [Permissions.USE]: false },
[PermissionTypes.TEMPORARY_CHAT]: { [Permissions.USE]: true },
[PermissionTypes.RUN_CODE]: { [Permissions.USE]: false },
[PermissionTypes.WEB_SEARCH]: { [Permissions.USE]: undefined },
[PermissionTypes.FILE_SEARCH]: { [Permissions.USE]: undefined },
});
});
it('should call updateAccessPermissions with the correct parameters when FILE_SEARCH is true', async () => {
const config = {
interface: {
fileSearch: true,
},
};
const configDefaults = { interface: {} };
await loadDefaultInterface(config, configDefaults);
expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: undefined },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: undefined },
[PermissionTypes.MEMORIES]: { [Permissions.USE]: undefined },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
[PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
[PermissionTypes.TEMPORARY_CHAT]: { [Permissions.USE]: undefined },
[PermissionTypes.RUN_CODE]: { [Permissions.USE]: undefined },
[PermissionTypes.WEB_SEARCH]: { [Permissions.USE]: undefined },
[PermissionTypes.FILE_SEARCH]: { [Permissions.USE]: true },
});
});
it('should call updateAccessPermissions with false when FILE_SEARCH is false', async () => {
const config = {
interface: {
fileSearch: false,
},
};
const configDefaults = { interface: {} };
await loadDefaultInterface(config, configDefaults);
expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: undefined },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: undefined },
[PermissionTypes.MEMORIES]: { [Permissions.USE]: undefined },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: undefined },
[PermissionTypes.AGENTS]: { [Permissions.USE]: undefined },
[PermissionTypes.TEMPORARY_CHAT]: { [Permissions.USE]: undefined },
[PermissionTypes.RUN_CODE]: { [Permissions.USE]: undefined },
[PermissionTypes.WEB_SEARCH]: { [Permissions.USE]: undefined },
[PermissionTypes.FILE_SEARCH]: { [Permissions.USE]: false },
});
});
it('should call updateAccessPermissions with all interface options including fileSearch', async () => {
const config = {
interface: {
prompts: true,
bookmarks: false,
memories: true,
multiConvo: true,
agents: false,
temporaryChat: true,
runCode: false,
webSearch: true,
fileSearch: true,
},
};
const configDefaults = { interface: {} };
await loadDefaultInterface(config, configDefaults);
expect(updateAccessPermissions).toHaveBeenCalledWith(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { [Permissions.USE]: true },
[PermissionTypes.BOOKMARKS]: { [Permissions.USE]: false },
[PermissionTypes.MEMORIES]: { [Permissions.USE]: true, [Permissions.OPT_OUT]: undefined },
[PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: true },
[PermissionTypes.AGENTS]: { [Permissions.USE]: false },
[PermissionTypes.TEMPORARY_CHAT]: { [Permissions.USE]: true },
[PermissionTypes.RUN_CODE]: { [Permissions.USE]: false },
[PermissionTypes.WEB_SEARCH]: { [Permissions.USE]: true },
[PermissionTypes.FILE_SEARCH]: { [Permissions.USE]: true },
});
});
});
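Taken together, the three new specs cover fileSearch on, off, and combined with every other toggle. A hedged end-to-end sketch of the call these tests exercise, using the same enums the spec imports:

```js
// Mirrors the fixtures above; run inside an async context.
const config = { interface: { fileSearch: true } };
const configDefaults = { interface: {} };

await loadDefaultInterface(config, configDefaults);
// updateAccessPermissions is then called with
// { ..., [PermissionTypes.FILE_SEARCH]: { [Permissions.USE]: true } }
// for SystemRoles.USER (and likewise for ADMIN).
```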

Some files were not shown because too many files have changed in this diff.