Compare commits

...

107 Commits

Author SHA1 Message Date
Dustin Healy
3ca736bc53 🔧 fix: Fix rampant pings and rate limiting
- Skips idle connection checks in `getMCPManager` to avoid unnecessary pings.
- Introduces a skipOAuthTimeout flag during initial connection to prevent timeouts during server discovery.
- Uses a lightweight connection state check instead of ping to avoid rate limits.
- Prevents refetch spam and rate limit errors when checking connection status.
- Fixes an issue where the server connection was not being disconnected.
2025-07-21 18:26:35 -04:00
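A minimal sketch of the lightweight connection-state check described above, with hypothetical names; the actual implementation may differ:

function isConnectionActive(connection) {
  // Consult locally tracked client state instead of issuing a ping,
  // avoiding the network round-trips that were triggering rate limits.
  return connection != null && connection.connectionState === 'connected';
}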
Dustin Healy
4b4741b1aa 🔧 refactor: Clean up logging statements 2025-07-21 18:26:29 -04:00
Dustin Healy
cfb19175fb 🔧 refactor: Clean up imports in MCPPanel component
- Merged import statements for better organization.
- Removed unused `reinitializeMCPServerMutation`
2025-07-21 18:26:24 -04:00
Dustin Healy
4130a09f61 🔧 refactor: Remove unused server reinitialization logic from MCPPanel
- Removed the `handleReinitializeServer` function.
- Removed the `ServerConfigWithVars` interface.
2025-07-21 18:26:18 -04:00
Dustin Healy
5399ac5231 🔧 refactor: Streamline MCPPanel UI by removing status indicators and refresh spinner 2025-07-21 18:26:13 -04:00
Dustin Healy
b03faebcfb 🔧 chore: Fix errors causing unit tests and ESLint checks to fail 2025-07-21 18:26:06 -04:00
Dustin Healy
89843f71af 🔧 chore: Remove unused translation keys 2025-07-21 18:26:00 -04:00
Dustin Healy
5e2b6e8eb5 🔧 refactor: Integrate MCPPanel with new MCP components
- **Unified UI Components**: Replace custom MCPVariableEditor with CustomUserVarsSection and ServerInitializationSection for consistent design across all MCP interfaces
- **Real-Time Status Indicators**: Add live connection status badges and set/unset authentication pills to match MCPConfigDialog functionality
- **Enhanced OAuth Support**: Integrate ServerInitializationSection for proper OAuth flow handling in side panel
2025-07-21 18:25:55 -04:00
Dustin Healy
b1e346a225 🔌 feat: MCP OAuth Integration in Chat UI
- **Real-Time Connection Status**: New backend APIs and React Query hooks provide live MCP server connection monitoring with automatic UI updates
- **OAuth Flow Components**: Complete MCPConfigDialog, ServerInitializationSection, and CustomUserVarsSection with OAuth URL handling and polling-based completion
- **Enhanced Server Selection**: MCPSelect component with connection-aware filtering, visual status indicators, and better credential management UX

(still needs refinement: there is bloat plus unused vars and functions left over from the ideation phase on how to approach OAuth and connection statuses)
2025-07-21 18:25:47 -04:00
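A rough sketch of one of the polling React Query hooks described above; the endpoint path, query key, and interval are assumptions, not the actual API:

import { useQuery } from '@tanstack/react-query';

function useMCPConnectionStatus(serverName) {
  return useQuery({
    queryKey: ['mcp-connection-status', serverName],
    queryFn: async () => {
      // Hypothetical endpoint; the real route may differ.
      const res = await fetch(`/api/mcp/${serverName}/connection-status`);
      if (!res.ok) {
        throw new Error('Failed to fetch MCP connection status');
      }
      return res.json();
    },
    refetchInterval: 5000, // poll so the UI reflects live connection state
  });
}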
Dustin Healy
faaba30af1 feat: Add MCP Reinitialization to MCPPanel (#8418)
* feat: Add MCP Reinitialization to MCPPanel

- Refactored tool caching to include user-specific tools in various service files.
- Refactored MCPManager class for clarity
- Added a new endpoint for reinitializing MCP servers, allowing for dynamic updates of server configurations.
- Enhanced the MCPPanel component to support server reinitialization with user feedback.

* 🔃 refactor: Simplify Plugin Deduplication and Clear Cache Post-MCP Initialization

- Replaced manual deduplication of tools with the dedicated `filterUniquePlugins` function for improved readability.
- Added back cache clearing for tools after MCP initialization to ensure fresh data is used.
- Removed unused exports from `PluginController.js` to clean up the codebase.
2025-07-21 17:49:19 -04:00
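A hedged sketch of what the reinitialization endpoint from this commit could look like; the route path and MCPManager method names are assumptions:

const express = require('express');
const router = express.Router();
const mcpManager = getMCPManager(); // accessor named in the commits above

// Hypothetical handler: tear down the existing connection, then reconnect
// with the current server configuration.
router.post('/:serverName/reinitialize', async (req, res) => {
  const { serverName } = req.params;
  await mcpManager.disconnectServer(serverName); // assumed method name
  await mcpManager.initializeServer(serverName); // assumed method name
  res.json({ success: true, serverName });
});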
Danny Avila
14660d75ae 🆕 feat: Enhanced Title Generation Config Options (#8580)
* 🏗️ refactor: Extract reasoning key logic into separate function

* refactor: Ensure `overrideProvider` is always defined in `getProviderConfig` result, and only used in `initializeAgent` if different from `agent.provider`

* feat: new title configuration options across services

- titlePrompt
- titleEndpoint
- titlePromptTemplate
- new "completion" titleMethod (new default)

* chore: update @librechat/agents and conform openai version to prevent SDK errors

* chore: add form-data package as a dependency and override to v4.0.4 to address CVE-2025-7783

* feat: add support for 'all' endpoint configuration in AppService and corresponding tests

* refactor: replace HttpsProxyAgent with ProxyAgent from undici for improved proxy handling in assistant initialization

* chore: update frontend review workflow to limit package paths to data-provider

* chore: update backend review workflow to include all package paths
2025-07-21 17:37:37 -04:00
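Based on the option names in this commit, the new settings would presumably live in librechat.yaml; a hypothetical excerpt (placement and values are assumptions):

titleConvo: true
titleMethod: "completion"   # new default titling method
titleEndpoint: "openAI"     # route title generation to a specific endpoint
titlePrompt: "Write a concise title for this conversation."
titlePromptTemplate: "User: {input}\nAI: {output}"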
Danny Avila
aec1777a90 📦 chore: bump @librechat/agents to v2.4.63 (#8558) 2025-07-19 14:37:22 -04:00
Danny Avila
90c43dd451 🔒 fix: Address multer CVE-2025-7338 (#8557) 2025-07-19 14:23:20 -04:00
Danny Avila
4c754c1190 🏄‍♂️ fix: Handle SSE Stream Edge Case (#8556)
* refactor: Move draft-related utilities to a new `drafts.ts` file

* refactor: auto-save draft logic to use new get/set functions

* fix: Ensure `getDraft` properly decodes stored draft values

* fix: Handle edge case where stream is cancelled before any response, which creates a blank page
2025-07-19 13:44:02 -04:00
Danny Avila
f70e0cf849 🔒 fix: Address on-headers CVE-2025-7339 (#8553)
* 📦 chore: bump `compression` from 1.7.4 to 1.8.1

* chore: bump `express-session` to v1.18.2

* chore: update `connect-redis` from v7.1.0 to v8.1.0

* chore: update import for `connect-redis` to use named export due to v8.0.0 breaking change
2025-07-19 13:36:59 -04:00
Dustin Healy
d0c958ba33 🔥 feat: Add Firecrawl Scraper Configurability (#8495)
- Added firecrawlOptions configuration field to librechat.yaml
- Refactored web.ts to live in packages/api rather than data-provider
- Updated imports from web.ts to reflect new location
- Added firecrawlOptions to FirecrawlConfig interface
- Added firecrawlOptions to authResult of loadWebSearchAuth so it gets properly passed to agents to be built into firecrawl payload
- Added tests for firecrawlOptions to web.spec.ts
2025-07-18 22:37:57 -04:00
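A hypothetical librechat.yaml excerpt for the new field; the nested option names follow Firecrawl's scrape parameters and are assumptions here:

webSearch:
  firecrawlApiKey: "${FIRECRAWL_API_KEY}"
  firecrawlOptions:
    formats: ["markdown"]
    timeout: 30000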
Dustin Healy
0761e65086 🔧 fix: Enhance Responses API Auto-Enable Logic for Compatible Endpoints (#8506)
- Updated the logic to auto-enable the Responses API when web search is enabled, specifically for OpenAI, Azure, and Custom endpoints.
- Added import for EModelEndpoint to facilitate endpoint compatibility checks.
2025-07-18 22:27:56 -04:00
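A minimal sketch of the compatibility check, assuming the EModelEndpoint members named in the commit:

const { EModelEndpoint } = require('librechat-data-provider');

const responsesApiCompatible = new Set([
  EModelEndpoint.openAI,
  EModelEndpoint.azureOpenAI,
  EModelEndpoint.custom,
]);

// Auto-enable the Responses API only when web search is on and the endpoint supports it.
function shouldAutoEnableResponsesApi(endpoint, webSearchEnabled) {
  return Boolean(webSearchEnabled) && responsesApiCompatible.has(endpoint);
}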
Danny Avila
0bf708915b ♻️ refactor: formatContentStrings to support AI and System messages (#8528)
* ♻️ refactor: `formatContentStrings` to support AI and System messages

* 📦 chore: bump @librechat/api version to 1.2.7
2025-07-17 19:19:37 -04:00
Danny Avila
cf59f1ab45 📦 chore: bump librechat-data-provider to v0.7.900 2025-07-17 18:42:34 -04:00
Danny Avila
445e9eae85 🧩 fix: Human Message Content Handling for Legacy Content (#8525)
* wip: first pass content strings

* 📦 chore: update @langchain/core to v0.3.62 for data-provider dev dep.

* 📦 chore: bump @langchain/core to v0.3.62 for api dep.

* 📦 chore: move @langchain/core to peerDependencies in package.json and package-lock.json

* fix: update formatContentStrings to create HumanMessage directly from formatted content

* chore: import order
2025-07-17 18:34:24 -04:00
Danny Avila
cd9c578907 📦 chore: bump @librechat/agents to v2.4.62 (#8524) 2025-07-17 17:54:25 -04:00
github-actions[bot]
ac94c73f23 🌍 i18n: Update translation.json with latest translations (#8505)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-17 11:31:45 -04:00
Danny Avila
dfef7c31d2 ♻️ refactor: SidePanel Context to Optimize on ChatView Rerender (#8509) 2025-07-17 11:31:19 -04:00
Danny Avila
0b1b0af741 ☑️ refactor: Allow Mid-convo Agent Selection from Agent Panel (#8510) 2025-07-17 11:30:50 -04:00
Ben Verhees
0a169a1ff6 👥 fix: Collaborative Check Flag for Shared Agent Files (#8516) 2025-07-17 10:42:57 -04:00
Danny Avila
4b12ea327a 📦 chore: bump @librechat/agents to v2.4.61 (#8504) 2025-07-16 18:32:31 -04:00
Danny Avila
35d8ef50f4 🪙 fix: Use Fallback Token Transaction if No Collected Usage (#8503) 2025-07-16 17:58:15 -04:00
Danny Avila
1dabe96404 🕒 refactor: Use Legacy Content for Custom Endpoints and Azure Serverless for Improved Compatibility (#8502)
* 🕒 refactor: Use Legacy Content for Custom Endpoints to Improve Compatibility

- Also applies to Azure serverless endpoints from AI Foundry

* chore: move useLegacyContent condition before early return

* fix: Ensure useLegacyContent is set only when options are available
2025-07-16 17:17:15 -04:00
Dustin Healy
7f8c327509 🌊 feat: Add Disable Streaming Toggle (#8177)
* 🌊 feat: Add Disable Streaming Option in Configuration

- Introduced a new setting to disable streaming responses in openAI, Azure, and custom endpoint parameter panels.
- Updated translation files to include labels and descriptions for the disable streaming feature.
- Modified relevant schemas and parameter settings to support the new disable streaming functionality.

* 🔧 fix: disableStreaming state not persisting when returning to a conversation

- Added disableStreaming field to the IPreset interface and conversationPreset.
- Moved toggles and sliders around for nicer left-right UI split in parameters panel.
- Removed old reference to 'grounding' in conversationPreset (now web_search) and added web_search to IPreset.
2025-07-16 10:09:40 -04:00
Danny Avila
52bbac3a37 feat: Add GitHub Actions workflow for publishing @librechat/client to NPM 2025-07-16 09:19:59 -04:00
Danny Avila
62b4f3b795 🛂 fix: Only Perform allowedProviders Validation for Agents (#8487) 2025-07-15 18:43:47 -04:00
Theo N. Truong
01b012a8fa 🏦 refactor: Centralize Caching & Redis Key Prefixing (#8457)
* 🔧 Overhauled caching feature:
- Refactored caching logic.
- Fixed redis prefix, namespace, tls, ttl, and cluster.
- Added REDIS_KEY_PREFIX_VAR

* # refactor: Rename redisCache to standardCache

* # Add Redis pinging mechanism to maintain connection.

* # docs: Add warning about Keyv Redis client prefix support
2025-07-15 18:24:31 -04:00
Danny Avila
418b5e9070 ♻️ fix: Resolve MCP Connection if Ping is Unsupported (#8483) 2025-07-15 18:20:11 -04:00
Danny Avila
a9f01bb86f 📝 refactor: Memory Instructions for Improved Performance (#8463) 2025-07-14 18:37:46 -04:00
Danny Avila
aeeb860fe0 📦 chore: bump @librechat/agents to v2.4.60 (#8458) 2025-07-14 18:29:48 -04:00
github-actions[bot]
e11e716807 🌍 i18n: Update translation.json with latest translations (#8422)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-14 17:22:02 -04:00
Danny Avila
e370a87ebe ♻️ fix: Correct Message ID Assignment Logic (#8439)
* fix: Add `isRegenerate` flag to chat payload to avoid saving temporary response IDs

* fix: Remove unused `isResubmission` flag

* ci: Add tests for responseMessageId regeneration logic in BaseClient
2025-07-14 00:57:20 -04:00
Danny Avila
170cc340d8 refactor: Imports to Prevent Circular Type Refs (#8423) 2025-07-12 11:37:07 -04:00
Danny Avila
f1b29ffb45 🔒 feat: View/Delete Shared Agent Files (#8419)
* 🔧 fix: Add localized message for delete operation not allowed

* refactor: improve file deletion operations ux

* feat: agent-based file access control and enhance file retrieval logic

* feat: implement agent-specific file retrieval

* feat: enhance agent file retrieval logic for authors and shared access

* ci: include userId and agentId in mockGetFiles call for OCR file retrieval
2025-07-12 01:52:46 -04:00
Danny Avila
6aa4bb5a4a 👟 fix: Edge Case of Azure Provider Assignment for Title Run (#8420) 2025-07-12 01:52:17 -04:00
Sebastien Bruel
9f44187351 🗂️ fix: Disable express-static-gzip for Uploaded Images (#8307)
* Fix scanning of the uploaded images folder on startup

* Re-write tests to pass linting

* Disable image output gzip scan by default

* Add `ENABLE_IMAGE_OUTPUT_GZIP_SCAN` to `.env.example`
2025-07-11 16:51:53 -04:00
Samuel Path
d2e1ca4c4a 🖼️ fix: Permission Checks for Agent Avatar Uploads (#8412)
Implements permission validation before allowing agent avatar uploads. Only admins, the agent's author, or users of collaborative agents can modify avatars. Also improves error handling by checking for agent existence upfront and simplifies avatar update logic.

Co-authored-by: Sai Nihas <sai.nihas@shopify.com>
2025-07-11 15:37:11 -04:00
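The rule described above could be sketched as follows (helper and field names are hypothetical):

// Only admins, the agent's author, or users of collaborative agents may modify avatars.
function canModifyAgentAvatar(user, agent) {
  if (!agent) {
    return false; // check for agent existence upfront
  }
  if (user.role === 'ADMIN') {
    return true;
  }
  if (String(agent.author) === String(user.id)) {
    return true;
  }
  return agent.isCollaborative === true;
}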
Samuel Path
8e869f2274 🧠 feat: Enforce Token Limit for Memory Usage (#8401) 2025-07-11 14:46:19 -04:00
Danny Avila
2e1874e596 🔧 fix: handleError import path to use '@librechat/api' (#8415)
* 🔧 fix: Update handleError import path to use '@librechat/api' in middleware files

* chore: import order

* chore: import order

---------

Co-authored-by: Atef Bellaaj <slalom.bellaaj@external.daimlertruck.com>
2025-07-11 13:29:51 -04:00
Danny Avila
929b433662 🔧 fix: Plugin Method Undefined in Agent Tool Closure (#8413) 2025-07-11 13:16:59 -04:00
Danny Avila
1e4f1f780c 🔑 feat: Grok 4 Pricing and Token Limits (#8395)
* 🔑 feat: Grok 4 Pricing and Token Limits

* 🔑 feat: Update Grok 3 Pricing for Mini and Fast Models
2025-07-11 03:24:13 -04:00
Danny Avila
4733f10e41 📦 chore: Bump @librechat/agents to v2.4.59 (#8392)
* chore: remove @librechat/agents temporarily

* chore: bump @librechat/agents to v2.4.59
2025-07-11 03:18:36 -04:00
Danny Avila
110984b48f 📦 chore: Bump @librechat/agents to v2.4.58 (#8386) 2025-07-10 20:41:38 -04:00
Danny Avila
19320f2296 🔑 feat: Base64 Google Service Keys and Reliable Private Key Formats (#8385) 2025-07-10 20:33:01 -04:00
Danny Avila
8523074e87 🔧 fix: Invalidate Tool Caching after MCP Initialization (#8384)
- Added Constants import in PluginController for better organization.
- Renamed cachedTools to cachedToolsArray for clarity in PluginController.
- Ensured getCachedTools returns an empty object if no tools are found.
- Cleared tools array cache after MCP initialization in initializeMCP for consistency.
2025-07-10 20:32:38 -04:00
Danny Avila
e4531d682d 🔃 refactor: Consolidate JSON Schema Conversion to Schema 2025-07-10 18:52:24 -04:00
Danny Avila
4bbdc4c402 🧩 fix: additionalProperties Handling and Ref Resolution in Zod Schemas (#8381)
* fix: object falsely flagged as empty when it has an `additionalProperties` field

* 🔧 fix: Implement $ref resolution in JSON Schema handling

* 🔧 fix: Resolve JSON Schema references before conversion to Zod

* chore: move zod logic packages/api
2025-07-10 18:02:34 -04:00
Danny Avila
8ca4cf3d2f 🔧 fix: Update Drag & Drop Logic with new File Option handling (#8354) 2025-07-10 08:38:55 -04:00
Danny Avila
13a9bcdd48 🔧 fix: Omit 'additionalModelRequestFields' from Bedrock Titling (#8353) 2025-07-10 08:38:30 -04:00
Danny Avila
4b32ec42c6 📝 fix: Resolve Markdown Rendering Issues (#8352)
* 🔧 fix: Handle optional arguments in `useParseArgs` and improve tool call condition

* chore: Remove math plugins from `MarkdownLite`

* feat: Add Error Boundary to Markdown Component for Enhanced Error Handling

- Introduced `MarkdownErrorBoundary` to catch and display errors during Markdown rendering.
- Updated the `Markdown` component to utilize the new error boundary, improving user experience by handling rendering issues gracefully.

* Revert "chore: Remove math plugins from `MarkdownLite`"

This reverts commit d393099d52.

* feat: Introduce MarkdownErrorBoundary for improved error handling in Markdown components

* refactor: include most markdown elements in error boundary fallback, aside from problematic plugins
2025-07-10 08:38:14 -04:00
Danny Avila
4918899c8d 🖨️ fix: Use Azure Serverless API Version for Responses API (#8316) 2025-07-08 21:07:52 -04:00
Danny Avila
7e37211458 🗝️ refactor: loadServiceKey to Support Stringified JSON and Env Var Renaming (#8317)
* feat: Enhance loadServiceKey to support stringified JSON input

* chore: Update GOOGLE_SERVICE_KEY_FILE_PATH to GOOGLE_SERVICE_KEY_FILE for consistency
2025-07-08 21:07:33 -04:00
Theo N. Truong
e57fc83d40 🔧 fix: Import Path for Custom Configuration Loading (#8319) 2025-07-08 21:07:04 -04:00
Danny Avila
550610dba9 ⚖️ feat: Add Violation Scores (#8304)
- Introduced new violation scores for TTS, STT, Fork, Import, and File Upload actions in the .env.example file.
- Updated logViolation function to accept a score parameter, allowing for dynamic severity levels based on the action type.
- Modified limiters for Fork, Import, Message, STT, TTS, Tool Call, and File Upload to utilize the new violation scores when logging violations.
2025-07-07 17:08:40 -04:00
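A hedged sketch of the dynamic scoring: the limiter reads a per-action score from the environment and passes it to logViolation (the exact call shape is assumed):

// Resolve a numeric score from an env var, falling back to a default.
function getViolationScore(envVar, fallback = 1) {
  const parsed = Number(process.env[envVar]);
  return Number.isFinite(parsed) ? parsed : fallback;
}

// e.g. inside a fork limiter handler (signature assumed):
// await logViolation(req, res, 'fork_limit', details, getViolationScore('FORK_VIOLATION_SCORE'));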
github-actions[bot]
916cd46221 🌍 i18n: Update translation.json with latest translations (#8288)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-07 17:08:15 -04:00
Dustin Healy
12b08183ff 🐛 fix: Memories Key Updates (#8302)
* Updated the PATCH /memories/:key endpoint to allow key changes while ensuring no duplicate keys exist.
* Improved error handling in MemoryCreateDialog and MemoryEditDialog for key validation and duplication scenarios.
* Added a new translation for memory key validation error in translation.json.
2025-07-07 16:38:55 -04:00
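A sketch of the duplicate-key guard on the PATCH endpoint, assuming an Express router in scope; the model helpers (getMemoryByKey, updateMemoryKey) are hypothetical:

router.patch('/memories/:key', async (req, res) => {
  const { key } = req.params;
  const { key: newKey, value } = req.body;

  // Allow key changes only if the new key is not already taken by this user.
  if (newKey && newKey !== key) {
    const existing = await getMemoryByKey({ userId: req.user.id, key: newKey });
    if (existing) {
      return res.status(409).json({ error: 'A memory with this key already exists' });
    }
  }
  const updated = await updateMemoryKey({ userId: req.user.id, key, newKey, value });
  res.json(updated);
});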
Danny Avila
f4d97e1672 📝 docs: Update README 2025-07-07 01:14:07 -04:00
Danny Avila
035fa081c1 🔧 refactor: Prevent Unnecessary Google Service Key Loading (#8287)
* 🔧 refactor: Improve Google Key Handling in `loadAsyncEndpoints`

- Enhanced logic to check if GOOGLE_KEY is provided, including user-provided checks.
- Updated service key loading mechanism to only attempt loading if GOOGLE_KEY is not provided.
- Added error logging for service key loading failures.

* 🔧 refactor: Enhance service key loading logic in `initializeClient`
2025-07-07 01:10:08 -04:00
Danny Avila
aecf8f19a6 🔧 fix: Initialize reasoningKey to 'reasoning_content' (#8286)
* chore: bump @librechat/agents to v2.4.56

* chore: bump @librechat/api version to 1.2.6

* fix: initialize reasoningKey to 'reasoning_content' in createRun function
2025-07-07 01:05:40 -04:00
Dustin Healy
35f548a94d 🔄 refactor: Google grounding field to web_search for Consistency (#8285)
- Updated the Google configuration and related schemas to replace 'grounding' with 'web_search' for consistency.
- Adjusted the logic in the getGoogleConfig function to reflect the new naming convention.
- Ensured all references in parameter settings and conversation schemas are updated accordingly.
2025-07-07 00:41:51 -04:00
Danny Avila
e60c0cf201 🔍 feat: Anthropic Web Search (#8281)
* chore: bump @librechat/agents to ^2.4.54 for anthropic web search support

* WIP: hardcoded web search tool usage

* feat: Implement web search functionality in Anthropic integration

- Updated parameters panel to include web search for anthropic models.
- Updated necessary schemas to accommodate toggle functionality

* chore: Set default web search option to false in anthropicSettings

* refactor: Rename webSearch to web_search for consistency across settings and schemas

* chore: bump @librechat/agents to v2.4.55

---------

Co-authored-by: Dustin Healy <dustinhealy1@gmail.com>
2025-07-06 21:43:09 -04:00
github-actions[bot]
5b392f9cb0 🌍 i18n: Update translation.json with latest translations (#8255)
* 🌍 i18n: Update translation.json with latest translations

* Update translation.json

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-07-05 18:04:57 -04:00
Dustin Healy
e0f468da20 🔍 feat: Add SearXNG for Web Search and Enhance ApiKeyDialog (#8242)
* 🔍 feat: Add SearXNG Web Search support and enhance ApiKeyDialog

- Updated WebSearch component to include authentication data for web search functionality so the badge won't show after credentials are revoked
- Refactored ApiKeyDialog to streamline provider, scraper, and reranker selection with new InputSection component
- Added support for SearXNG as a search provider and updated translation files accordingly
- Improved form handling in useAuthSearchTool to accommodate new API keys and URLs

* 📜 chore: remove unused i18next key

* 📦 chore: address comments (swap API key and URL fields in SearXNG config, change input fields to 'text' from 'password')

* 📦 chore: make URL fields go first in ApiKeyDialog

* chore: bump @librechat/agents to v2.4.52

* ci: update webSearch configuration to include searxng fields in AppService.spec.js

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-07-05 17:58:22 -04:00
Danny Avila
91a2df4759 🔧 refactor: Change Permissions Check from some to every for Stricter Access Validation (#8270)
* 🔧 refactor: Change Permissions Check from `some` to `every` for Stricter Access Validation

* 🧪 ci: Add comprehensive tests for access middleware functions

* fix: custom provider check logic in `getProviderConfig` function
2025-07-05 15:53:08 -04:00
Danny Avila
97a99985fa 🛡️ feat: Rate Limiting for Conversation Forking (#8269)
* chore: Improve error logging for fetching conversations, and use new TS packages for utils

* feat: Implement fork limiters for conversation forking requests

* chore: error message for conversation index deletion to clarify syncing behavior

* feat: Enhance error handling for forking with rate limit message
2025-07-05 15:02:32 -04:00
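A plausible shape for the fork limiter, using express-rate-limit with the Redis-backed store from this release's cache factory; window and max values are invented:

const rateLimit = require('express-rate-limit');
const { limiterCache } = require('~/cache/cacheFactory');

const forkLimiter = rateLimit({
  windowMs: 60 * 1000, // 1-minute window (value invented)
  max: 7,              // max fork requests per window (value invented)
  store: limiterCache('fork'), // undefined when Redis is off -> default in-memory store
  handler: (req, res) =>
    res.status(429).json({ message: 'Too many fork requests; please try again later.' }),
});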
Danny Avila
3554625a06 refactor: Add Robust Timestamp handling for Conversation Imports (#8262) 2025-07-05 12:44:19 -04:00
Danny Avila
a37bf6719c 🧪 refactor: Add Validation for Agent Creation/Updates (#8261)
* refactor: Add validation schemas for agent creation and updates

* fix: Ensure author validation is applied in correct order for agent update handler

* ci: Add comprehensive unit tests for agent creation and update handlers with mass assignment protection

* fix: add missing web_search tool in system tools configuration
2025-07-05 11:34:28 -04:00
Danny Avila
e513f50c08 ⚒️ refactor: Keep useAvailableToolsQuery Enabled for All Endpoints 2025-07-04 15:43:17 -04:00
Danny Avila
f5511e4a4e 🔁 refactor: Capabilities for Tools/File handling for Direct Endpoints (#8253)
* feat: add useAgentCapabilities hook to manage agent capabilities

* refactor: move  agents and endpoints configuration to AgentPanel context provider

* refactor: implement useGetAgentsConfig hook for consolidated agents and endpoints management

* refactor: enhance ToolsDropdown to utilize agent capabilities and streamline dropdown item rendering

* chore: reorder return values in useAgentCapabilities for improved clarity

* refactor: enhance agent capabilities handling in AttachFileMenu and update file handling logic to allow capabilities to be used for non-agents endpoints
2025-07-04 14:51:26 -04:00
Danny Avila
a288ad1d9c 🪄 feat: Artifacts Badge & Optimize Ephemeral Agent State (#8252)
* 🔧 fix: Update type annotations in useEventHandlers for better type safety

* 🔧 refactor: `useToolToggle` for improved localStorage synchronization and allow string/falsy values for setting to storage

* feat: Implement Artifacts badge to BadgeRow with toggle options and UI components

- Added Artifacts component to manage artifacts state and options.
- Introduced ArtifactsSubMenu for additional settings related to artifacts.
- Integrated artifacts functionality into BadgeRow and ToolsDropdown components.
- Updated localStorage handling for artifacts state persistence.
- Enhanced localization for artifacts-related strings in translation files.
- Refactored Agent model to include artifacts in the ephemeral agent response.

* fix: set ephemeral agent state for conversation on finalization

* chore: remove beta settings dialog tab

* refactor: improve Ephemeral Agent statefulness

* fix: update setValue parameter to use 'value' instead of 'isChecked' in CheckboxButton

* refactor: update color classes for Artifact toggle and order of dropdown components

* chore: remove unused i18n localization
2025-07-04 13:25:04 -04:00
Sebastien Bruel
458580ec87 🥅 refactor: Express App default Error Handling with ErrorController (#8249) 2025-07-04 13:24:57 -04:00
github-actions[bot]
4285d5841c 🌍 i18n: Update translation.json with latest translations (#8235)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-04 11:48:54 -04:00
Sebastien Bruel
5ee55cda4f 📦 chore: bump @modelcontextprotocol/sdk to 1.13.3 and cleanup mcp/connection.ts (#8241) 2025-07-04 09:28:57 -04:00
Danny Avila
404d40cbef 📦 chore: override @langchain/openai to v0.5.16 2025-07-03 23:16:42 -04:00
Danny Avila
f4680b016c 📦 chore: bump @librechat/agents to v2.4.51 (#8234) 2025-07-03 22:35:13 -04:00
Ruben Talstra
077224b351 feat: Add support for Armenian, Latvian, and Uyghur languages (#8227) 2025-07-03 11:16:33 -04:00
Danny Avila
9c70d1db96 🔧 fix: Include apiKey in llmConfig for Azure OpenAI Responses API 2025-07-02 13:12:05 -04:00
Danny Avila
543281da6c 🔧 fix: Tool Selection for Google Models 2025-07-02 13:01:51 -04:00
Danny Avila
24800bfbeb v0.7.9-rc1 2025-07-02 10:27:34 -04:00
Danny Avila
07e08143e4 🧠 fix: Prevent Memory Errors with Buffer String (#8196) 2025-07-02 10:25:19 -04:00
Dustin Healy
8ba61a86f4 🔍 feat: Web Search via OpenAI Responses API (#8186)
* 🔍 feat: Introduce Web Search Functionality for OpenAI API

- Added a new web_search parameter to enable web search capabilities in the OpenAI configuration.
- Updated the DynamicSlider component for improved styling.
- Enhanced the useSetIndexOptions hook to auto-enable the Responses API when web search is activated.
- Modified relevant schemas, types, and translation files to support the new web search feature.

* chore: remove comments

* refactor: tool handling in initializeAgent for better clarity, functionality, and reflection of OpenAI features

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-07-02 10:03:14 -04:00
Danny Avila
56ad92fb1c 🤖 feat: Azure OpenAI Responses API (#8195)
* 🤖 feat: Azure OpenAI Responses API

* chore: cleanup order of executions
2025-07-02 09:39:19 -04:00
github-actions[bot]
1ceb52d2b5 🌍 i18n: Update translation.json with latest translations (#8164)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-02 01:17:53 -04:00
Danny Avila
5d267aa8e2 🔀 fix: Assistants API File Attachments 2025-07-01 22:38:10 -04:00
Danny Avila
59d00e99f3 🔍 feat: Fetch Google Service Key and Consolidate Key Loading Logic (#8179) 2025-07-01 22:37:29 -04:00
Dustin Healy
738d04fac4 🔍 feat: Add Google Search Grounding Toggle (#8174)
* feat: Add Google Search Grounding Feature and Update Agent Tool Initialization

- Introduced a new grounding option in the Google configuration to enable real-time web search results.
- Updated the agent initialization to concatenate additional tools from options.
- Enhanced translation files to include descriptions for the new grounding feature.
- Modified relevant schemas and parameter settings to support the grounding functionality.

* 🔑 chore: Update @librechat/agents dependency to version 2.4.50

* fix: Ensure tools array is initialized before concatenation in initializeAgent function

* chore: Update version of librechat-data-provider to 0.7.899 and add GOOGLE_TOOL_CONFLICT error type

* fix: Adjust label class for better text wrapping in DynamicSwitch component

* fix: Handle Google tool conflict error and update error messages in translation

* fix: Restore grounding setting in googleCol2 configuration

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-07-01 18:00:18 -04:00
Dani Regli
8a5dbac0f9 🛂 fix: Reuse OpenID Auth Tokens with Proxy Setup (#8151)
* Fixes https://github.com/danny-avila/LibreChat/issues/8099 by correctly setting up proxy support

- fixes the openid Strategy
- fixes the openid jwt strategy (jwksRsa fetching in a proxy environment)

Signed-off-by: Regli Daniel <daniel.regli1@sanitas.com>

* Fixes https://github.com/danny-avila/LibreChat/issues/8099 by correctly setting up proxy support

- properly formatted

Signed-off-by: Regli Daniel <1daniregli@gmail.com>

---------

Signed-off-by: Regli Daniel <daniel.regli1@sanitas.com>
Signed-off-by: Regli Daniel <1daniregli@gmail.com>
Co-authored-by: schnaker85 <1daniregli@gmail.com>
2025-07-01 16:30:06 -04:00
Danny Avila
434289fe92 🔀 feat: Save & Submit Message Content Parts (#8171)
* 🐛 fix: Enhance provider validation and error handling in getProviderConfig function

* WIP: edit text part

* refactor: Allow updating of both TEXT and THINK content types in message updates

* WIP: first pass, save & submit

* chore: remove legacy generation user message field

* feat: merge edited content

* fix: update placeholder and description for bedrock setting

* fix: remove unsupported warning message for AI resubmission
2025-07-01 15:43:10 -04:00
Samuel Path
a648ad3d13 fix: Agent MCP Tools Checkbox Inactive When Hidden (#8166) 2025-07-01 10:05:00 -04:00
Samuel Path
55d63caaf4 💻 ci: Make Unit Tests Pass on MacOS (#8165) 2025-07-01 09:20:33 -04:00
Danny Avila
313539d1ed 🔑 refactor: Prioritize GOOGLE_KEY When GCP Service Key File Provided (#8150) 2025-06-30 18:51:50 -04:00
Danny Avila
f869d772f7 🪐 feat: Initial OpenAI Responses API Support (#8149)
* chore: update @librechat/agents to v2.4.47

* WIP: temporary auto-toggle responses api for o1/o3-pro

* feat: Enable Responses API for OpenAI models

- Updated the OpenAI client initialization to check for the useResponsesApi parameter in model options.
- Added translations for enabling the Responses API in the UI.
- Introduced useResponsesApi parameter in data provider settings and schemas.
- Updated relevant schemas to include useResponsesApi for conversation and preset configurations.

* refactor: Remove useResponsesApi check from OpenAI client initialization and update translation for Responses API

- Removed the check for useResponsesApi in the OpenAI client initialization.
- Updated the translation for enabling the Responses API to clarify its functionality.

* chore: update @librechat/agents dependency to version 2.4.48

* chore: update @librechat/agents dependency to version 2.4.49

* chore: linting

* chore: linting

* feat: Enhance DynamicSlider and validation for enumMappings

- Added support for enumMappings in DynamicSlider to display values correctly based on enum settings.
- Implemented validation for enumMappings in the generate function to ensure all options have corresponding mappings.
- Added tests for handling empty string options and incomplete enumMappings in the generate.spec.ts file.

* feat: Enhance DynamicSlider localization support

- Added localization handling for mapped values in DynamicSlider when using enumMappings.
- Updated the logic to check if the mapped value is a localization key and return the localized string if applicable.
- Adjusted dependencies in useCallback hooks to include localize for proper functionality.

* feat: Add reasoning summary and effort options to OpenAI configuration and UI

* feat: Add enumMappings for ImageDetail options in parameter settings

* style: Improve styling for DynamicSlider component labels and inputs

* chore: Update reasoning effort description and parameter order for OpenAI params

---------

Co-authored-by: Dustin Healy <dustinhealy1@gmail.com>
2025-06-30 18:34:47 -04:00
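The enumMappings validation could look roughly like this (a sketch; the real generate-function check may differ):

// Every slider option must have a corresponding display mapping.
function validateEnumMappings(setting) {
  const { options, enumMappings } = setting;
  if (enumMappings == null) {
    return true; // mappings are optional
  }
  return (options ?? []).every((option) => option in enumMappings);
}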
Danny Avila
20100e120b 🔑 feat: Set Google Service Key File Path (#8130) 2025-06-29 17:09:37 -04:00
Danny Avila
3f3cfefc52 🗒️ feat: Add Google Vertex AI Mistral OCR Strategy (#8125)
* Implemented new uploadGoogleVertexMistralOCR function for processing OCR using Google Vertex AI.
* Added vertexMistralOCRStrategy to handle file uploads.
* Updated FileSources and OCRStrategy enums to include vertexai_mistral_ocr.
* Introduced helper functions for JWT creation and Google service account configuration loading.
2025-06-28 13:26:03 -04:00
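The JWT helper for the Vertex AI flow presumably follows Google's standard service-account OAuth exchange; a sketch using the jsonwebtoken package (helper name assumed):

const jwt = require('jsonwebtoken');

// Sign a service-account assertion to exchange for an access token at
// https://oauth2.googleapis.com/token (standard Google OAuth2 JWT-bearer flow).
function createServiceAccountJWT(serviceAccount, scope) {
  const now = Math.floor(Date.now() / 1000);
  return jwt.sign(
    {
      iss: serviceAccount.client_email,
      scope, // e.g. 'https://www.googleapis.com/auth/cloud-platform'
      aud: 'https://oauth2.googleapis.com/token',
      iat: now,
      exp: now + 3600, // 1-hour lifetime
    },
    serviceAccount.private_key,
    { algorithm: 'RS256' },
  );
}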
matt burnett
3e1591d404 🤖 fix: Remove versions and __v when Duplicating an Agent (#8115)
Revert "Add tests for agent duplication controller"

This reverts commit 3e7beb1cc336bcfe1c57411e9c151f5e6aa927e4.
2025-06-28 12:35:41 -04:00
Danny Avila
1060ae8040 🐛 fix: Assistants Endpoint Handling in createPayload Function (#8123)
* 📦 chore: bump librechat-data-provider version to 0.7.89

* 🐛 fix: Assistants endpoint handling in createPayload function
2025-06-28 12:33:43 -04:00
Danny Avila
dd67e463e4 📦 chore: bump pbkdf2 to v3.1.3 (#8091) 2025-06-26 19:19:04 -04:00
github-actions[bot]
d60ad61325 🌍 i18n: Update translation.json with latest translations (#8058)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-26 19:12:46 -04:00
Danny Avila
452151e408 🐛 fix: RAG API failing with OPENID_REUSE_TOKENS Enabled (#8090)
* feat: Implement Short-Lived JWT Token Generation for RAG API

* fix: Update import paths

* fix: Correct environment variable names for OpenID on behalf flow

* fix: Remove unnecessary spaces in OpenID on behalf flow userinfo scope

---------

Co-authored-by: Atef Bellaaj <slalom.bellaaj@external.daimlertruck.com>
2025-06-26 19:10:21 -04:00
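A minimal sketch of the short-lived token approach, matching its use in the diffs below where it replaces reading the Authorization header (the actual generateShortLivedToken may use different claims or lifetime):

const jwt = require('jsonwebtoken');

// Issue a short-lived token for internal RAG API calls so they no longer
// depend on the (possibly OpenID-issued) token in the request headers.
function generateShortLivedToken(userId) {
  return jwt.sign({ id: userId }, process.env.JWT_SECRET, { expiresIn: '5m' });
}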
Danny Avila
33b4a97b42 🔒 fix: Agents Config/Permission Checks after Streamline Change (#8089)
* refactor: access control logic to TypeScript

* chore: Change EndpointURLs to a constant object for improved type safety

* 🐛 fix: Enhance agent access control by adding skipAgentCheck functionality

* 🐛 fix: Add endpointFileConfig prop to AttachFileMenu and update file handling logic

* 🐛 fix: Update tool handling logic to support optional groupedTools and improve null checks, add dedicated tool dialog for Assistants

* chore: Export Accordion component from UI index for improved modularity

* feat: Add ActivePanelContext for managing active panel state across components

* chore: Replace string IDs with EModelEndpoint constants for assistants and agents in useSideNavLinks

* fix: Integrate access checks for agent creation and deletion routes in actions.js
2025-06-26 18:53:05 -04:00
Sebastien Bruel
9cdc62b655 📂 fix: Prevent Null Reference Errors in File Process (#8084) 2025-06-26 18:51:35 -04:00
Danny Avila
799f0e5810 🐛 fix: Move MemoryEntry and PluginAuth model retrieval inside methods for Runtime Usage 2025-06-25 20:58:34 -04:00
333 changed files with 21687 additions and 4942 deletions

View File

@@ -349,6 +349,11 @@ REGISTRATION_VIOLATION_SCORE=1
CONCURRENT_VIOLATION_SCORE=1
MESSAGE_VIOLATION_SCORE=1
NON_BROWSER_VIOLATION_SCORE=20
TTS_VIOLATION_SCORE=0
STT_VIOLATION_SCORE=0
FORK_VIOLATION_SCORE=0
IMPORT_VIOLATION_SCORE=0
FILE_UPLOAD_VIOLATION_SCORE=0
LOGIN_MAX=7
LOGIN_WINDOW=5
@@ -453,8 +458,8 @@ OPENID_REUSE_TOKENS=
OPENID_JWKS_URL_CACHE_ENABLED=
OPENID_JWKS_URL_CACHE_TIME= # 600000 ms eq to 10 minutes leave empty to disable caching
#Set to true to trigger token exchange flow to acquire access token for the userinfo endpoint.
OPENID_ON_BEHALF_FLOW_FOR_USERINFRO_REQUIRED=
OPENID_ON_BEHALF_FLOW_USERINFRO_SCOPE = "user.read" # example for Scope Needed for Microsoft Graph API
OPENID_ON_BEHALF_FLOW_FOR_USERINFO_REQUIRED=
OPENID_ON_BEHALF_FLOW_USERINFO_SCOPE="user.read" # example for Scope Needed for Microsoft Graph API
# Set to true to use the OpenID Connect end session endpoint for logout
OPENID_USE_END_SESSION_ENDPOINT=
@@ -575,6 +580,10 @@ ALLOW_SHARED_LINKS_PUBLIC=true
# If you have another service in front of your LibreChat doing compression, disable express based compression here
# DISABLE_COMPRESSION=true
# If you have gzipped versions of uploaded images in the same folder, this will enable gzip scan and serving of these images
# Note: The images folder will be scanned on startup and a map kept in memory. Be careful with a large number of images.
# ENABLE_IMAGE_OUTPUT_GZIP_SCAN=true
#===================================================#
# UI #
#===================================================#
@@ -592,11 +601,31 @@ HELP_AND_FAQ_URL=https://librechat.ai
# REDIS Options #
#===============#
# REDIS_URI=10.10.10.10:6379
# Enable Redis for caching and session storage
# USE_REDIS=true
# USE_REDIS_CLUSTER=true
# REDIS_CA=/path/to/ca.crt
# Single Redis instance
# REDIS_URI=redis://127.0.0.1:6379
# Redis cluster (multiple nodes)
# REDIS_URI=redis://127.0.0.1:7001,redis://127.0.0.1:7002,redis://127.0.0.1:7003
# Redis with TLS/SSL encryption and CA certificate
# REDIS_URI=rediss://127.0.0.1:6380
# REDIS_CA=/path/to/ca-cert.pem
# Redis authentication (if required)
# REDIS_USERNAME=your_redis_username
# REDIS_PASSWORD=your_redis_password
# Redis key prefix configuration
# Use environment variable name for dynamic prefix (recommended for cloud deployments)
# REDIS_KEY_PREFIX_VAR=K_REVISION
# Or use static prefix directly
# REDIS_KEY_PREFIX=librechat
# Redis connection limits
# REDIS_MAX_LISTENERS=40
#==================================================#
# Others #

View File

@@ -7,7 +7,7 @@ on:
- release/*
paths:
- 'api/**'
- 'packages/api/**'
- 'packages/**'
jobs:
tests_Backend:
name: Run Backend unit tests

.github/workflows/client.yml (new file)
View File

@@ -0,0 +1,32 @@
name: Publish `@librechat/client` to NPM
on:
workflow_dispatch:
inputs:
reason:
description: 'Reason for manual trigger'
required: false
default: 'Manual publish requested'
jobs:
build-and-publish:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Use Node.js
uses: actions/setup-node@v4
with:
node-version: '18.x'
- name: Check if client package exists
run: |
if [ -d "packages/client" ]; then
echo "Client package directory found"
else
echo "Client package directory not found - workflow ready for future use"
exit 0
fi
- name: Placeholder for future publishing
run: echo "Client package publishing workflow is ready"

View File

@@ -8,7 +8,7 @@ on:
- release/*
paths:
- 'client/**'
- 'packages/**'
- 'packages/data-provider/**'
jobs:
tests_frontend_ubuntu:

.gitignore
View File

@@ -125,3 +125,12 @@ helm/**/.values.yaml
# SAML Idp cert
*.cert
# AI Assistants
/.claude/
/.cursor/
/.copilot/
/.aider/
/.openai/
/.tabnine/
/.codeium

View File

@@ -1,4 +1,4 @@
# v0.7.8
# v0.7.9-rc1
# Base node image
FROM node:20-alpine AS node

View File

@@ -1,5 +1,5 @@
# Dockerfile.multi
# v0.7.8
# v0.7.9-rc1
# Base for all builds
FROM node:20-alpine AS base-min

View File

@@ -52,7 +52,7 @@
- 🖥️ **UI & Experience** inspired by ChatGPT with enhanced design and features
- 🤖 **AI Model Selection**:
- Anthropic (Claude), AWS Bedrock, OpenAI, Azure OpenAI, Google, Vertex AI, OpenAI Assistants API (incl. Azure)
- Anthropic (Claude), AWS Bedrock, OpenAI, Azure OpenAI, Google, Vertex AI, OpenAI Responses API (incl. Azure)
- [Custom Endpoints](https://www.librechat.ai/docs/quick_start/custom_endpoints): Use any OpenAI-compatible API with LibreChat, no proxy required
- Compatible with [Local & Remote AI Providers](https://www.librechat.ai/docs/configuration/librechat_yaml/ai_endpoints):
- Ollama, groq, Cohere, Mistral AI, Apple MLX, koboldcpp, together.ai,
@@ -66,10 +66,9 @@
- 🔦 **Agents & Tools Integration**:
- **[LibreChat Agents](https://www.librechat.ai/docs/features/agents)**:
- No-Code Custom Assistants: Build specialized, AI-driven helpers without coding
- Flexible & Extensible: Attach tools like DALL-E-3, file search, code execution, and more
- Compatible with Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, and more
- Flexible & Extensible: Use MCP Servers, tools, file search, code execution, and more
- Compatible with Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, Google, Vertex AI, Responses API, and more
- [Model Context Protocol (MCP) Support](https://modelcontextprotocol.io/clients#librechat) for Tools
- Use LibreChat Agents and OpenAI Assistants with Files, Code Interpreter, Tools, and API Actions
- 🔍 **Web Search**:
- Search the internet and retrieve relevant information to enhance your AI context

View File

@@ -13,7 +13,6 @@ const {
const { getMessages, saveMessage, updateMessage, saveConvo, getConvo } = require('~/models');
const { checkBalance } = require('~/models/balanceMethods');
const { truncateToolCallOutputs } = require('./prompts');
const { addSpaceIfNeeded } = require('~/server/utils');
const { getFiles } = require('~/models/File');
const TextStream = require('./TextStream');
const { logger } = require('~/config');
@@ -109,12 +108,15 @@ class BaseClient {
/**
* Abstract method to record token usage. Subclasses must implement this method.
* If a correction to the token usage is needed, the method should return an object with the corrected token counts.
* Should only be used if `recordCollectedUsage` was not used instead.
* @param {string} [model]
* @param {number} promptTokens
* @param {number} completionTokens
* @returns {Promise<void>}
*/
async recordTokenUsage({ promptTokens, completionTokens }) {
async recordTokenUsage({ model, promptTokens, completionTokens }) {
logger.debug('[BaseClient] `recordTokenUsage` not implemented.', {
model,
promptTokens,
completionTokens,
});
@@ -198,6 +200,10 @@ class BaseClient {
this.currentMessages[this.currentMessages.length - 1].messageId = head;
}
if (opts.isRegenerate && responseMessageId.endsWith('_')) {
responseMessageId = crypto.randomUUID();
}
this.responseMessageId = responseMessageId;
return {
@@ -572,7 +578,7 @@ class BaseClient {
});
}
const { generation = '' } = opts;
const { editedContent } = opts;
// It's not necessary to push to currentMessages
// depending on subclass implementation of handling messages
@@ -587,11 +593,21 @@ class BaseClient {
isCreatedByUser: false,
model: this.modelOptions?.model ?? this.model,
sender: this.sender,
text: generation,
};
this.currentMessages.push(userMessage, latestMessage);
} else {
latestMessage.text = generation;
} else if (editedContent != null) {
// Handle editedContent for content parts
if (editedContent && latestMessage.content && Array.isArray(latestMessage.content)) {
const { index, text, type } = editedContent;
if (index >= 0 && index < latestMessage.content.length) {
const contentPart = latestMessage.content[index];
if (type === ContentTypes.THINK && contentPart.type === ContentTypes.THINK) {
contentPart[ContentTypes.THINK] = text;
} else if (type === ContentTypes.TEXT && contentPart.type === ContentTypes.TEXT) {
contentPart[ContentTypes.TEXT] = text;
}
}
}
}
this.continued = true;
} else {
@@ -672,16 +688,32 @@ class BaseClient {
};
if (typeof completion === 'string') {
responseMessage.text = addSpaceIfNeeded(generation) + completion;
responseMessage.text = completion;
} else if (
Array.isArray(completion) &&
(this.clientName === EModelEndpoint.agents ||
isParamEndpoint(this.options.endpoint, this.options.endpointType))
) {
responseMessage.text = '';
responseMessage.content = completion;
if (!opts.editedContent || this.currentMessages.length === 0) {
responseMessage.content = completion;
} else {
const latestMessage = this.currentMessages[this.currentMessages.length - 1];
if (!latestMessage?.content) {
responseMessage.content = completion;
} else {
const existingContent = [...latestMessage.content];
const { type: editedType } = opts.editedContent;
responseMessage.content = this.mergeEditedContent(
existingContent,
completion,
editedType,
);
}
}
} else if (Array.isArray(completion)) {
responseMessage.text = addSpaceIfNeeded(generation) + completion.join('');
responseMessage.text = completion.join('');
}
if (
@@ -712,9 +744,13 @@ class BaseClient {
} else {
responseMessage.tokenCount = this.getTokenCountForResponse(responseMessage);
completionTokens = responseMessage.tokenCount;
await this.recordTokenUsage({
usage,
promptTokens,
completionTokens,
model: responseMessage.model,
});
}
await this.recordTokenUsage({ promptTokens, completionTokens, usage });
}
if (userMessagePromise) {
@@ -1095,6 +1131,50 @@ class BaseClient {
return numTokens;
}
/**
* Merges completion content with existing content when editing TEXT or THINK types
* @param {Array} existingContent - The existing content array
* @param {Array} newCompletion - The new completion content
* @param {string} editedType - The type of content being edited
* @returns {Array} The merged content array
*/
mergeEditedContent(existingContent, newCompletion, editedType) {
if (!newCompletion.length) {
return existingContent.concat(newCompletion);
}
if (editedType !== ContentTypes.TEXT && editedType !== ContentTypes.THINK) {
return existingContent.concat(newCompletion);
}
const lastIndex = existingContent.length - 1;
const lastExisting = existingContent[lastIndex];
const firstNew = newCompletion[0];
if (lastExisting?.type !== firstNew?.type || firstNew?.type !== editedType) {
return existingContent.concat(newCompletion);
}
const mergedContent = [...existingContent];
if (editedType === ContentTypes.TEXT) {
mergedContent[lastIndex] = {
...mergedContent[lastIndex],
[ContentTypes.TEXT]:
(mergedContent[lastIndex][ContentTypes.TEXT] || '') + (firstNew[ContentTypes.TEXT] || ''),
};
} else {
mergedContent[lastIndex] = {
...mergedContent[lastIndex],
[ContentTypes.THINK]:
(mergedContent[lastIndex][ContentTypes.THINK] || '') +
(firstNew[ContentTypes.THINK] || ''),
};
}
// Add remaining completion items
return mergedContent.concat(newCompletion.slice(1));
}
async sendPayload(payload, opts = {}) {
if (opts && typeof opts === 'object') {
this.setOptions(opts);

View File

@@ -1,6 +1,7 @@
const axios = require('axios');
const { isEnabled } = require('~/server/utils');
const { logger } = require('~/config');
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { generateShortLivedToken } = require('~/server/services/AuthService');
const footer = `Use the context as your learned knowledge to better answer the user.
@@ -18,7 +19,7 @@ function createContextHandlers(req, userMessageContent) {
const queryPromises = [];
const processedFiles = [];
const processedIds = new Set();
const jwtToken = req.headers.authorization.split(' ')[1];
const jwtToken = generateShortLivedToken(req.user.id);
const useFullContext = isEnabled(process.env.RAG_USE_FULL_CONTEXT);
const query = async (file) => {

View File

@@ -237,41 +237,9 @@ const formatAgentMessages = (payload) => {
return messages;
};
/**
* Formats an array of messages for LangChain, making sure all content fields are strings
* @param {Array<(HumanMessage|AIMessage|SystemMessage|ToolMessage)>} payload - The array of messages to format.
* @returns {Array<(HumanMessage|AIMessage|SystemMessage|ToolMessage)>} - The array of formatted LangChain messages, including ToolMessages for tool calls.
*/
const formatContentStrings = (payload) => {
const messages = [];
for (const message of payload) {
if (typeof message.content === 'string') {
continue;
}
if (!Array.isArray(message.content)) {
continue;
}
// Reduce text types to a single string, ignore all other types
const content = message.content.reduce((acc, curr) => {
if (curr.type === ContentTypes.TEXT) {
return `${acc}${curr[ContentTypes.TEXT]}\n`;
}
return acc;
}, '');
message.content = content.trim();
}
return messages;
};
module.exports = {
formatMessage,
formatFromLangChain,
formatAgentMessages,
formatContentStrings,
formatLangChainMessages,
};

View File

@@ -422,6 +422,46 @@ describe('BaseClient', () => {
expect(response).toEqual(expectedResult);
});
test('should replace responseMessageId with new UUID when isRegenerate is true and messageId ends with underscore', async () => {
const mockCrypto = require('crypto');
const newUUID = 'new-uuid-1234';
jest.spyOn(mockCrypto, 'randomUUID').mockReturnValue(newUUID);
const opts = {
isRegenerate: true,
responseMessageId: 'existing-message-id_',
};
await TestClient.setMessageOptions(opts);
expect(TestClient.responseMessageId).toBe(newUUID);
expect(TestClient.responseMessageId).not.toBe('existing-message-id_');
mockCrypto.randomUUID.mockRestore();
});
test('should not replace responseMessageId when isRegenerate is false', async () => {
const opts = {
isRegenerate: false,
responseMessageId: 'existing-message-id_',
};
await TestClient.setMessageOptions(opts);
expect(TestClient.responseMessageId).toBe('existing-message-id_');
});
test('should not replace responseMessageId when it does not end with underscore', async () => {
const opts = {
isRegenerate: true,
responseMessageId: 'existing-message-id',
};
await TestClient.setMessageOptions(opts);
expect(TestClient.responseMessageId).toBe('existing-message-id');
});
test('sendMessage should work with provided conversationId and parentMessageId', async () => {
const userMessage = 'Second message in the conversation';
const opts = {

View File

@@ -1,26 +1,35 @@
const { z } = require('zod');
const axios = require('axios');
const { tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { Tools, EToolResources } = require('librechat-data-provider');
const { generateShortLivedToken } = require('~/server/services/AuthService');
const { getFiles } = require('~/models/File');
const { logger } = require('~/config');
/**
*
* @param {Object} options
* @param {ServerRequest} options.req
* @param {Agent['tool_resources']} options.tool_resources
* @param {string} [options.agentId] - The agent ID for file access control
* @returns {Promise<{
* files: Array<{ file_id: string; filename: string }>,
* toolContext: string
* }>}
*/
const primeFiles = async (options) => {
const { tool_resources } = options;
const { tool_resources, req, agentId } = options;
const file_ids = tool_resources?.[EToolResources.file_search]?.file_ids ?? [];
const agentResourceIds = new Set(file_ids);
const resourceFiles = tool_resources?.[EToolResources.file_search]?.files ?? [];
const dbFiles = ((await getFiles({ file_id: { $in: file_ids } })) ?? []).concat(resourceFiles);
const dbFiles = (
(await getFiles(
{ file_id: { $in: file_ids } },
null,
{ text: 0 },
{ userId: req?.user?.id, agentId },
)) ?? []
).concat(resourceFiles);
let toolContext = `- Note: Semantic search is available through the ${Tools.file_search} tool but no files are currently loaded. Request the user to upload documents to search through.`;
@@ -59,7 +68,7 @@ const createFileSearchTool = async ({ req, files, entity_id }) => {
if (files.length === 0) {
return 'No files to search. Instruct the user to add files for the search.';
}
const jwtToken = req.headers.authorization.split(' ')[1];
const jwtToken = generateShortLivedToken(req.user.id);
if (!jwtToken) {
return 'There was an error authenticating the file search request.';
}

View File

@@ -1,14 +1,9 @@
const { mcpToolPattern } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { SerpAPI } = require('@langchain/community/tools/serpapi');
const { Calculator } = require('@langchain/community/tools/calculator');
const { mcpToolPattern, loadWebSearchAuth } = require('@librechat/api');
const { EnvVar, createCodeExecutionTool, createSearchTool } = require('@librechat/agents');
const {
Tools,
EToolResources,
loadWebSearchAuth,
replaceSpecialVars,
} = require('librechat-data-provider');
const { Tools, EToolResources, replaceSpecialVars } = require('librechat-data-provider');
const {
availableTools,
manifestToolMap,
@@ -235,7 +230,7 @@ const loadTools = async ({
/** @type {Record<string, string>} */
const toolContextMap = {};
const appTools = (await getCachedTools({ includeGlobal: true })) ?? {};
const cachedTools = (await getCachedTools({ userId: user, includeGlobal: true })) ?? {};
for (const tool of tools) {
if (tool === Tools.execute_code) {
@@ -245,7 +240,13 @@ const loadTools = async ({
authFields: [EnvVar.CODE_API_KEY],
});
const codeApiKey = authValues[EnvVar.CODE_API_KEY];
const { files, toolContext } = await primeCodeFiles(options, codeApiKey);
const { files, toolContext } = await primeCodeFiles(
{
...options,
agentId: agent?.id,
},
codeApiKey,
);
if (toolContext) {
toolContextMap[tool] = toolContext;
}
@@ -260,7 +261,10 @@ const loadTools = async ({
continue;
} else if (tool === Tools.file_search) {
requestedTools[tool] = async () => {
const { files, toolContext } = await primeSearchFiles(options);
const { files, toolContext } = await primeSearchFiles({
...options,
agentId: agent?.id,
});
if (toolContext) {
toolContextMap[tool] = toolContext;
}
@@ -294,7 +298,7 @@ Current Date & Time: ${replaceSpecialVars({ text: '{{iso_datetime}}' })}
});
};
continue;
} else if (tool && appTools[tool] && mcpToolPattern.test(tool)) {
} else if (tool && cachedTools && mcpToolPattern.test(tool)) {
requestedTools[tool] = async () =>
createMCPTool({
req: options.req,

api/cache/cacheConfig.js (new file)
View File

@@ -0,0 +1,33 @@
const fs = require('fs');
const { math, isEnabled } = require('@librechat/api');
// To ensure that different deployments do not interfere with each other's cache, we use a prefix for the Redis keys.
// This prefix is usually the deployment ID, which is often passed to the container or pod as an env var.
// Set REDIS_KEY_PREFIX_VAR to the env var that contains the deployment ID.
const REDIS_KEY_PREFIX_VAR = process.env.REDIS_KEY_PREFIX_VAR;
const REDIS_KEY_PREFIX = process.env.REDIS_KEY_PREFIX;
if (REDIS_KEY_PREFIX_VAR && REDIS_KEY_PREFIX) {
throw new Error('Only either REDIS_KEY_PREFIX_VAR or REDIS_KEY_PREFIX can be set.');
}
const USE_REDIS = isEnabled(process.env.USE_REDIS);
if (USE_REDIS && !process.env.REDIS_URI) {
throw new Error('USE_REDIS is enabled but REDIS_URI is not set.');
}
const cacheConfig = {
USE_REDIS,
REDIS_URI: process.env.REDIS_URI,
REDIS_USERNAME: process.env.REDIS_USERNAME,
REDIS_PASSWORD: process.env.REDIS_PASSWORD,
REDIS_CA: process.env.REDIS_CA ? fs.readFileSync(process.env.REDIS_CA, 'utf8') : null,
REDIS_KEY_PREFIX: process.env[REDIS_KEY_PREFIX_VAR] || REDIS_KEY_PREFIX || '',
REDIS_MAX_LISTENERS: math(process.env.REDIS_MAX_LISTENERS, 40),
CI: isEnabled(process.env.CI),
DEBUG_MEMORY_CACHE: isEnabled(process.env.DEBUG_MEMORY_CACHE),
BAN_DURATION: math(process.env.BAN_DURATION, 7200000), // 2 hours
};
module.exports = { cacheConfig };

api/cache/cacheConfig.spec.js (new file)
View File

@@ -0,0 +1,108 @@
const fs = require('fs');
describe('cacheConfig', () => {
let originalEnv;
let originalReadFileSync;
beforeEach(() => {
originalEnv = { ...process.env };
originalReadFileSync = fs.readFileSync;
// Clear all related env vars first
delete process.env.REDIS_URI;
delete process.env.REDIS_CA;
delete process.env.REDIS_KEY_PREFIX_VAR;
delete process.env.REDIS_KEY_PREFIX;
delete process.env.USE_REDIS;
// Clear require cache
jest.resetModules();
});
afterEach(() => {
process.env = originalEnv;
fs.readFileSync = originalReadFileSync;
jest.resetModules();
});
describe('REDIS_KEY_PREFIX validation and resolution', () => {
test('should throw error when both REDIS_KEY_PREFIX_VAR and REDIS_KEY_PREFIX are set', () => {
process.env.REDIS_KEY_PREFIX_VAR = 'DEPLOYMENT_ID';
process.env.REDIS_KEY_PREFIX = 'manual-prefix';
expect(() => {
require('./cacheConfig');
}).toThrow('Only either REDIS_KEY_PREFIX_VAR or REDIS_KEY_PREFIX can be set.');
});
test('should resolve REDIS_KEY_PREFIX from variable reference', () => {
process.env.REDIS_KEY_PREFIX_VAR = 'DEPLOYMENT_ID';
process.env.DEPLOYMENT_ID = 'test-deployment-123';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_KEY_PREFIX).toBe('test-deployment-123');
});
test('should use direct REDIS_KEY_PREFIX value', () => {
process.env.REDIS_KEY_PREFIX = 'direct-prefix';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_KEY_PREFIX).toBe('direct-prefix');
});
test('should default to empty string when no prefix is configured', () => {
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_KEY_PREFIX).toBe('');
});
test('should handle empty variable reference', () => {
process.env.REDIS_KEY_PREFIX_VAR = 'EMPTY_VAR';
process.env.EMPTY_VAR = '';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_KEY_PREFIX).toBe('');
});
test('should handle undefined variable reference', () => {
process.env.REDIS_KEY_PREFIX_VAR = 'UNDEFINED_VAR';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_KEY_PREFIX).toBe('');
});
});
describe('USE_REDIS and REDIS_URI validation', () => {
test('should throw error when USE_REDIS is enabled but REDIS_URI is not set', () => {
process.env.USE_REDIS = 'true';
expect(() => {
require('./cacheConfig');
}).toThrow('USE_REDIS is enabled but REDIS_URI is not set.');
});
test('should not throw error when USE_REDIS is enabled and REDIS_URI is set', () => {
process.env.USE_REDIS = 'true';
process.env.REDIS_URI = 'redis://localhost:6379';
expect(() => {
require('./cacheConfig');
}).not.toThrow();
});
test('should handle empty REDIS_URI when USE_REDIS is enabled', () => {
process.env.USE_REDIS = 'true';
process.env.REDIS_URI = '';
expect(() => {
require('./cacheConfig');
}).toThrow('USE_REDIS is enabled but REDIS_URI is not set.');
});
});
describe('REDIS_CA file reading', () => {
test('should be null when REDIS_CA is not set', () => {
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_CA).toBeNull();
});
});
});

api/cache/cacheFactory.js (new file)
View File

@@ -0,0 +1,66 @@
const KeyvRedis = require('@keyv/redis').default;
const { Keyv } = require('keyv');
const { cacheConfig } = require('./cacheConfig');
const { keyvRedisClient, ioredisClient, GLOBAL_PREFIX_SEPARATOR } = require('./redisClients');
const { Time } = require('librechat-data-provider');
const { RedisStore: ConnectRedis } = require('connect-redis');
const MemoryStore = require('memorystore')(require('express-session'));
const { violationFile } = require('./keyvFiles');
const { RedisStore } = require('rate-limit-redis');
/**
* Creates a cache instance using Redis or a fallback store. Suitable for general caching needs.
* @param {string} namespace - The cache namespace.
* @param {number} [ttl] - Time to live for cache entries.
* @param {object} [fallbackStore] - Optional fallback store if Redis is not used.
* @returns {Keyv} Cache instance.
*/
const standardCache = (namespace, ttl = undefined, fallbackStore = undefined) => {
if (cacheConfig.USE_REDIS) {
const keyvRedis = new KeyvRedis(keyvRedisClient);
const cache = new Keyv(keyvRedis, { namespace, ttl });
keyvRedis.namespace = cacheConfig.REDIS_KEY_PREFIX;
keyvRedis.keyPrefixSeparator = GLOBAL_PREFIX_SEPARATOR;
return cache;
}
if (fallbackStore) return new Keyv({ store: fallbackStore, namespace, ttl });
return new Keyv({ namespace, ttl });
};
/**
* Creates a cache instance for storing violation data.
* Uses a file-based fallback store if Redis is not enabled.
* @param {string} namespace - The cache namespace for violations.
* @param {number} [ttl] - Time to live for cache entries.
* @returns {Keyv} Cache instance for violations.
*/
const violationCache = (namespace, ttl = undefined) => {
return standardCache(`violations:${namespace}`, ttl, violationFile);
};
/**
* Creates a session cache instance using Redis or in-memory store.
* @param {string} namespace - The session namespace.
* @param {number} [ttl] - Time to live for session entries.
* @returns {MemoryStore | ConnectRedis} Session store instance.
*/
const sessionCache = (namespace, ttl = undefined) => {
namespace = namespace.endsWith(':') ? namespace : `${namespace}:`;
if (!cacheConfig.USE_REDIS) return new MemoryStore({ ttl, checkPeriod: Time.ONE_DAY });
return new ConnectRedis({ client: ioredisClient, ttl, prefix: namespace });
};
/**
* Creates a rate limiter cache using Redis.
* @param {string} prefix - The key prefix for rate limiting.
* @returns {RedisStore|undefined} RedisStore instance or undefined if Redis is not used.
*/
const limiterCache = (prefix) => {
if (!prefix) throw new Error('prefix is required');
if (!cacheConfig.USE_REDIS) return undefined;
prefix = prefix.endsWith(':') ? prefix : `${prefix}:`;
return new RedisStore({ sendCommand, prefix });
};
const sendCommand = (...args) => ioredisClient?.call(...args);
module.exports = { standardCache, sessionCache, violationCache, limiterCache };

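A usage sketch of the factory above (namespaces and TTLs are illustrative, not taken from the codebase):

const { standardCache, violationCache, limiterCache } = require('./cacheFactory');
const { Time } = require('librechat-data-provider');

// General-purpose cache; falls back to an in-memory Keyv store when Redis is off.
const tokenCache = standardCache('example-tokens', Time.THIRTY_MINUTES);
tokenCache.set('openai', { configured: true }); // Keyv#set returns a Promise

// Violation caches automatically use the file-backed fallback store.
const loginViolations = violationCache('logins', Time.ONE_DAY);

// The limiter store is Redis-only; `undefined` lets the rate limiter
// fall back to its default in-memory store.
const limiterStore = limiterCache('example-limit');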
api/cache/cacheFactory.spec.js (new file)
@@ -0,0 +1,270 @@
const { Time } = require('librechat-data-provider');
// Mock dependencies first
const mockKeyvRedis = {
namespace: '',
keyPrefixSeparator: '',
};
const mockKeyv = jest.fn().mockReturnValue({ mock: 'keyv' });
const mockConnectRedis = jest.fn().mockReturnValue({ mock: 'connectRedis' });
const mockMemoryStore = jest.fn().mockReturnValue({ mock: 'memoryStore' });
const mockRedisStore = jest.fn().mockReturnValue({ mock: 'redisStore' });
const mockIoredisClient = {
call: jest.fn(),
};
const mockKeyvRedisClient = {};
const mockViolationFile = {};
// Mock modules before requiring the main module
jest.mock('@keyv/redis', () => ({
default: jest.fn().mockImplementation(() => mockKeyvRedis),
}));
jest.mock('keyv', () => ({
Keyv: mockKeyv,
}));
jest.mock('./cacheConfig', () => ({
cacheConfig: {
USE_REDIS: false,
REDIS_KEY_PREFIX: 'test',
},
}));
jest.mock('./redisClients', () => ({
keyvRedisClient: mockKeyvRedisClient,
ioredisClient: mockIoredisClient,
GLOBAL_PREFIX_SEPARATOR: '::',
}));
jest.mock('./keyvFiles', () => ({
violationFile: mockViolationFile,
}));
jest.mock('connect-redis', () => ({ RedisStore: mockConnectRedis }));
jest.mock('memorystore', () => jest.fn(() => mockMemoryStore));
jest.mock('rate-limit-redis', () => ({
RedisStore: mockRedisStore,
}));
// Import after mocking
const { standardCache, sessionCache, violationCache, limiterCache } = require('./cacheFactory');
const { cacheConfig } = require('./cacheConfig');
describe('cacheFactory', () => {
beforeEach(() => {
jest.clearAllMocks();
// Reset cache config mock
cacheConfig.USE_REDIS = false;
cacheConfig.REDIS_KEY_PREFIX = 'test';
});
describe('standardCache', () => {
it('should create Redis cache when USE_REDIS is true', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'test-namespace';
const ttl = 3600;
standardCache(namespace, ttl);
expect(require('@keyv/redis').default).toHaveBeenCalledWith(mockKeyvRedisClient);
expect(mockKeyv).toHaveBeenCalledWith(mockKeyvRedis, { namespace, ttl });
expect(mockKeyvRedis.namespace).toBe(cacheConfig.REDIS_KEY_PREFIX);
expect(mockKeyvRedis.keyPrefixSeparator).toBe('::');
});
it('should create Redis cache with undefined ttl when not provided', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'test-namespace';
standardCache(namespace);
expect(mockKeyv).toHaveBeenCalledWith(mockKeyvRedis, { namespace, ttl: undefined });
});
it('should use fallback store when USE_REDIS is false and fallbackStore is provided', () => {
cacheConfig.USE_REDIS = false;
const namespace = 'test-namespace';
const ttl = 3600;
const fallbackStore = { some: 'store' };
standardCache(namespace, ttl, fallbackStore);
expect(mockKeyv).toHaveBeenCalledWith({ store: fallbackStore, namespace, ttl });
});
it('should create default Keyv instance when USE_REDIS is false and no fallbackStore', () => {
cacheConfig.USE_REDIS = false;
const namespace = 'test-namespace';
const ttl = 3600;
standardCache(namespace, ttl);
expect(mockKeyv).toHaveBeenCalledWith({ namespace, ttl });
});
it('should handle namespace and ttl as undefined', () => {
cacheConfig.USE_REDIS = false;
standardCache();
expect(mockKeyv).toHaveBeenCalledWith({ namespace: undefined, ttl: undefined });
});
});
describe('violationCache', () => {
it('should create violation cache with prefixed namespace', () => {
const namespace = 'test-violations';
const ttl = 7200;
// We can't easily mock the internal standardCache call since it's in the same module
// But we can test that the function executes without throwing
expect(() => violationCache(namespace, ttl)).not.toThrow();
});
it('should create violation cache with undefined ttl', () => {
const namespace = 'test-violations';
violationCache(namespace);
// The function should call standardCache with a violations-prefixed namespace
// Since we can't easily mock the internal standardCache call, we test the behavior
expect(() => violationCache(namespace)).not.toThrow();
});
it('should handle undefined namespace', () => {
expect(() => violationCache(undefined)).not.toThrow();
});
});
describe('sessionCache', () => {
it('should return MemoryStore when USE_REDIS is false', () => {
cacheConfig.USE_REDIS = false;
const namespace = 'sessions';
const ttl = 86400;
const result = sessionCache(namespace, ttl);
expect(mockMemoryStore).toHaveBeenCalledWith({ ttl, checkPeriod: Time.ONE_DAY });
expect(result).toBe(mockMemoryStore());
});
it('should return ConnectRedis when USE_REDIS is true', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
const ttl = 86400;
const result = sessionCache(namespace, ttl);
expect(mockConnectRedis).toHaveBeenCalledWith({
client: mockIoredisClient,
ttl,
prefix: `${namespace}:`,
});
expect(result).toBe(mockConnectRedis());
});
it('should add colon to namespace if not present', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
sessionCache(namespace);
expect(mockConnectRedis).toHaveBeenCalledWith({
client: mockIoredisClient,
ttl: undefined,
prefix: 'sessions:',
});
});
it('should not add colon to namespace if already present', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions:';
sessionCache(namespace);
expect(mockConnectRedis).toHaveBeenCalledWith({
client: mockIoredisClient,
ttl: undefined,
prefix: 'sessions:',
});
});
it('should handle undefined ttl', () => {
cacheConfig.USE_REDIS = false;
const namespace = 'sessions';
sessionCache(namespace);
expect(mockMemoryStore).toHaveBeenCalledWith({
ttl: undefined,
checkPeriod: Time.ONE_DAY,
});
});
});
describe('limiterCache', () => {
it('should return undefined when USE_REDIS is false', () => {
cacheConfig.USE_REDIS = false;
const result = limiterCache('prefix');
expect(result).toBeUndefined();
});
it('should return RedisStore when USE_REDIS is true', () => {
cacheConfig.USE_REDIS = true;
const result = limiterCache('rate-limit');
expect(mockRedisStore).toHaveBeenCalledWith({
sendCommand: expect.any(Function),
prefix: `rate-limit:`,
});
expect(result).toBe(mockRedisStore());
});
it('should add colon to prefix if not present', () => {
cacheConfig.USE_REDIS = true;
limiterCache('rate-limit');
expect(mockRedisStore).toHaveBeenCalledWith({
sendCommand: expect.any(Function),
prefix: 'rate-limit:',
});
});
it('should not add colon to prefix if already present', () => {
cacheConfig.USE_REDIS = true;
limiterCache('rate-limit:');
expect(mockRedisStore).toHaveBeenCalledWith({
sendCommand: expect.any(Function),
prefix: 'rate-limit:',
});
});
it('should pass sendCommand function that calls ioredisClient.call', () => {
cacheConfig.USE_REDIS = true;
limiterCache('rate-limit');
const sendCommandCall = mockRedisStore.mock.calls[0][0];
const sendCommand = sendCommandCall.sendCommand;
// Test that sendCommand properly delegates to ioredisClient.call
const args = ['GET', 'test-key'];
sendCommand(...args);
expect(mockIoredisClient.call).toHaveBeenCalledWith(...args);
});
it('should handle undefined prefix', () => {
cacheConfig.USE_REDIS = true;
expect(() => limiterCache()).toThrow('prefix is required');
});
});
});

@@ -1,113 +1,52 @@
const { cacheConfig } = require('./cacheConfig');
const { Keyv } = require('keyv');
const { isEnabled, math } = require('@librechat/api');
const { CacheKeys, ViolationTypes, Time } = require('librechat-data-provider');
const { logFile, violationFile } = require('./keyvFiles');
const keyvRedis = require('./keyvRedis');
const { logFile } = require('./keyvFiles');
const keyvMongo = require('./keyvMongo');
const { BAN_DURATION, USE_REDIS, DEBUG_MEMORY_CACHE, CI } = process.env ?? {};
const duration = math(BAN_DURATION, 7200000);
const isRedisEnabled = isEnabled(USE_REDIS);
const debugMemoryCache = isEnabled(DEBUG_MEMORY_CACHE);
const createViolationInstance = (namespace) => {
const config = isRedisEnabled ? { store: keyvRedis } : { store: violationFile, namespace };
return new Keyv(config);
};
// Serve cache from memory so no need to clear it on startup/exit
const pending_req = isRedisEnabled
? new Keyv({ store: keyvRedis })
: new Keyv({ namespace: CacheKeys.PENDING_REQ });
const config = isRedisEnabled
? new Keyv({ store: keyvRedis })
: new Keyv({ namespace: CacheKeys.CONFIG_STORE });
const roles = isRedisEnabled
? new Keyv({ store: keyvRedis })
: new Keyv({ namespace: CacheKeys.ROLES });
const mcpTools = isRedisEnabled
? new Keyv({ store: keyvRedis })
: new Keyv({ namespace: CacheKeys.MCP_TOOLS });
const audioRuns = isRedisEnabled
? new Keyv({ store: keyvRedis, ttl: Time.TEN_MINUTES })
: new Keyv({ namespace: CacheKeys.AUDIO_RUNS, ttl: Time.TEN_MINUTES });
const messages = isRedisEnabled
? new Keyv({ store: keyvRedis, ttl: Time.ONE_MINUTE })
: new Keyv({ namespace: CacheKeys.MESSAGES, ttl: Time.ONE_MINUTE });
const flows = isRedisEnabled
? new Keyv({ store: keyvRedis, ttl: Time.TWO_MINUTES })
: new Keyv({ namespace: CacheKeys.FLOWS, ttl: Time.ONE_MINUTE * 3 });
const tokenConfig = isRedisEnabled
? new Keyv({ store: keyvRedis, ttl: Time.THIRTY_MINUTES })
: new Keyv({ namespace: CacheKeys.TOKEN_CONFIG, ttl: Time.THIRTY_MINUTES });
const genTitle = isRedisEnabled
? new Keyv({ store: keyvRedis, ttl: Time.TWO_MINUTES })
: new Keyv({ namespace: CacheKeys.GEN_TITLE, ttl: Time.TWO_MINUTES });
const s3ExpiryInterval = isRedisEnabled
? new Keyv({ store: keyvRedis, ttl: Time.THIRTY_MINUTES })
: new Keyv({ namespace: CacheKeys.S3_EXPIRY_INTERVAL, ttl: Time.THIRTY_MINUTES });
const modelQueries = isEnabled(process.env.USE_REDIS)
? new Keyv({ store: keyvRedis })
: new Keyv({ namespace: CacheKeys.MODEL_QUERIES });
const abortKeys = isRedisEnabled
? new Keyv({ store: keyvRedis })
: new Keyv({ namespace: CacheKeys.ABORT_KEYS, ttl: Time.TEN_MINUTES });
const openIdExchangedTokensCache = isRedisEnabled
? new Keyv({ store: keyvRedis, ttl: Time.TEN_MINUTES })
: new Keyv({ namespace: CacheKeys.OPENID_EXCHANGED_TOKENS, ttl: Time.TEN_MINUTES });
const { standardCache, sessionCache, violationCache } = require('./cacheFactory');
const namespaces = {
[CacheKeys.ROLES]: roles,
[CacheKeys.MCP_TOOLS]: mcpTools,
[CacheKeys.CONFIG_STORE]: config,
[CacheKeys.PENDING_REQ]: pending_req,
[ViolationTypes.BAN]: new Keyv({ store: keyvMongo, namespace: CacheKeys.BANS, ttl: duration }),
[CacheKeys.ENCODED_DOMAINS]: new Keyv({
[ViolationTypes.GENERAL]: new Keyv({ store: logFile, namespace: 'violations' }),
[ViolationTypes.LOGINS]: violationCache(ViolationTypes.LOGINS),
[ViolationTypes.CONCURRENT]: violationCache(ViolationTypes.CONCURRENT),
[ViolationTypes.NON_BROWSER]: violationCache(ViolationTypes.NON_BROWSER),
[ViolationTypes.MESSAGE_LIMIT]: violationCache(ViolationTypes.MESSAGE_LIMIT),
[ViolationTypes.REGISTRATIONS]: violationCache(ViolationTypes.REGISTRATIONS),
[ViolationTypes.TOKEN_BALANCE]: violationCache(ViolationTypes.TOKEN_BALANCE),
[ViolationTypes.TTS_LIMIT]: violationCache(ViolationTypes.TTS_LIMIT),
[ViolationTypes.STT_LIMIT]: violationCache(ViolationTypes.STT_LIMIT),
[ViolationTypes.CONVO_ACCESS]: violationCache(ViolationTypes.CONVO_ACCESS),
[ViolationTypes.TOOL_CALL_LIMIT]: violationCache(ViolationTypes.TOOL_CALL_LIMIT),
[ViolationTypes.FILE_UPLOAD_LIMIT]: violationCache(ViolationTypes.FILE_UPLOAD_LIMIT),
[ViolationTypes.VERIFY_EMAIL_LIMIT]: violationCache(ViolationTypes.VERIFY_EMAIL_LIMIT),
[ViolationTypes.RESET_PASSWORD_LIMIT]: violationCache(ViolationTypes.RESET_PASSWORD_LIMIT),
[ViolationTypes.ILLEGAL_MODEL_REQUEST]: violationCache(ViolationTypes.ILLEGAL_MODEL_REQUEST),
[ViolationTypes.BAN]: new Keyv({
store: keyvMongo,
namespace: CacheKeys.ENCODED_DOMAINS,
ttl: 0,
namespace: CacheKeys.BANS,
ttl: cacheConfig.BAN_DURATION,
}),
general: new Keyv({ store: logFile, namespace: 'violations' }),
concurrent: createViolationInstance('concurrent'),
non_browser: createViolationInstance('non_browser'),
message_limit: createViolationInstance('message_limit'),
token_balance: createViolationInstance(ViolationTypes.TOKEN_BALANCE),
registrations: createViolationInstance('registrations'),
[ViolationTypes.TTS_LIMIT]: createViolationInstance(ViolationTypes.TTS_LIMIT),
[ViolationTypes.STT_LIMIT]: createViolationInstance(ViolationTypes.STT_LIMIT),
[ViolationTypes.CONVO_ACCESS]: createViolationInstance(ViolationTypes.CONVO_ACCESS),
[ViolationTypes.TOOL_CALL_LIMIT]: createViolationInstance(ViolationTypes.TOOL_CALL_LIMIT),
[ViolationTypes.FILE_UPLOAD_LIMIT]: createViolationInstance(ViolationTypes.FILE_UPLOAD_LIMIT),
[ViolationTypes.VERIFY_EMAIL_LIMIT]: createViolationInstance(ViolationTypes.VERIFY_EMAIL_LIMIT),
[ViolationTypes.RESET_PASSWORD_LIMIT]: createViolationInstance(
ViolationTypes.RESET_PASSWORD_LIMIT,
[CacheKeys.OPENID_SESSION]: sessionCache(CacheKeys.OPENID_SESSION),
[CacheKeys.SAML_SESSION]: sessionCache(CacheKeys.SAML_SESSION),
[CacheKeys.ROLES]: standardCache(CacheKeys.ROLES),
[CacheKeys.MCP_TOOLS]: standardCache(CacheKeys.MCP_TOOLS),
[CacheKeys.CONFIG_STORE]: standardCache(CacheKeys.CONFIG_STORE),
[CacheKeys.PENDING_REQ]: standardCache(CacheKeys.PENDING_REQ),
[CacheKeys.ENCODED_DOMAINS]: new Keyv({ store: keyvMongo, namespace: CacheKeys.ENCODED_DOMAINS }),
[CacheKeys.ABORT_KEYS]: standardCache(CacheKeys.ABORT_KEYS, Time.TEN_MINUTES),
[CacheKeys.TOKEN_CONFIG]: standardCache(CacheKeys.TOKEN_CONFIG, Time.THIRTY_MINUTES),
[CacheKeys.GEN_TITLE]: standardCache(CacheKeys.GEN_TITLE, Time.TWO_MINUTES),
[CacheKeys.S3_EXPIRY_INTERVAL]: standardCache(CacheKeys.S3_EXPIRY_INTERVAL, Time.THIRTY_MINUTES),
[CacheKeys.MODEL_QUERIES]: standardCache(CacheKeys.MODEL_QUERIES),
[CacheKeys.AUDIO_RUNS]: standardCache(CacheKeys.AUDIO_RUNS, Time.TEN_MINUTES),
[CacheKeys.MESSAGES]: standardCache(CacheKeys.MESSAGES, Time.ONE_MINUTE),
[CacheKeys.FLOWS]: standardCache(CacheKeys.FLOWS, Time.ONE_MINUTE * 3),
[CacheKeys.OPENID_EXCHANGED_TOKENS]: standardCache(
CacheKeys.OPENID_EXCHANGED_TOKENS,
Time.TEN_MINUTES,
),
[ViolationTypes.ILLEGAL_MODEL_REQUEST]: createViolationInstance(
ViolationTypes.ILLEGAL_MODEL_REQUEST,
),
logins: createViolationInstance('logins'),
[CacheKeys.ABORT_KEYS]: abortKeys,
[CacheKeys.TOKEN_CONFIG]: tokenConfig,
[CacheKeys.GEN_TITLE]: genTitle,
[CacheKeys.S3_EXPIRY_INTERVAL]: s3ExpiryInterval,
[CacheKeys.MODEL_QUERIES]: modelQueries,
[CacheKeys.AUDIO_RUNS]: audioRuns,
[CacheKeys.MESSAGES]: messages,
[CacheKeys.FLOWS]: flows,
[CacheKeys.OPENID_EXCHANGED_TOKENS]: openIdExchangedTokensCache,
};
/**
@@ -116,7 +55,10 @@ const namespaces = {
*/
function getTTLStores() {
return Object.values(namespaces).filter(
(store) => store instanceof Keyv && typeof store.opts?.ttl === 'number' && store.opts.ttl > 0,
(store) =>
store instanceof Keyv &&
parseInt(store.opts?.ttl ?? '0') > 0 &&
!store.opts?.store?.constructor?.name?.includes('Redis'), // Only include non-Redis stores
);
}
@@ -152,18 +94,18 @@ async function clearExpiredFromCache(cache) {
if (data?.expires && data.expires <= expiryTime) {
const deleted = await cache.opts.store.delete(key);
if (!deleted) {
debugMemoryCache &&
cacheConfig.DEBUG_MEMORY_CACHE &&
console.warn(`[Cache] Error deleting entry: ${key} from ${cache.opts.namespace}`);
continue;
}
cleared++;
}
} catch (error) {
debugMemoryCache &&
cacheConfig.DEBUG_MEMORY_CACHE &&
console.log(`[Cache] Error processing entry from ${cache.opts.namespace}:`, error);
const deleted = await cache.opts.store.delete(key);
if (!deleted) {
debugMemoryCache &&
cacheConfig.DEBUG_MEMORY_CACHE &&
console.warn(`[Cache] Error deleting entry: ${key} from ${cache.opts.namespace}`);
continue;
}
@@ -172,7 +114,7 @@ async function clearExpiredFromCache(cache) {
}
if (cleared > 0) {
debugMemoryCache &&
cacheConfig.DEBUG_MEMORY_CACHE &&
console.log(
`[Cache] Cleared ${cleared} entries older than ${ttl}ms from ${cache.opts.namespace}`,
);
@@ -213,7 +155,7 @@ async function clearAllExpiredFromCache() {
}
}
if (!isRedisEnabled && !isEnabled(CI)) {
if (!cacheConfig.USE_REDIS && !cacheConfig.CI) {
/** @type {Set<NodeJS.Timeout>} */
const cleanupIntervals = new Set();
@@ -224,7 +166,7 @@ if (!isRedisEnabled && !isEnabled(CI)) {
cleanupIntervals.add(cleanup);
if (debugMemoryCache) {
if (cacheConfig.DEBUG_MEMORY_CACHE) {
const monitor = setInterval(() => {
const ttlStores = getTTLStores();
const memory = process.memoryUsage();
@@ -245,13 +187,13 @@ if (!isRedisEnabled && !isEnabled(CI)) {
}
const dispose = () => {
debugMemoryCache && console.log('[Cache] Cleaning up and shutting down...');
cacheConfig.DEBUG_MEMORY_CACHE && console.log('[Cache] Cleaning up and shutting down...');
cleanupIntervals.forEach((interval) => clearInterval(interval));
cleanupIntervals.clear();
// One final cleanup before exit
clearAllExpiredFromCache().then(() => {
debugMemoryCache && console.log('[Cache] Final cleanup completed');
cacheConfig.DEBUG_MEMORY_CACHE && console.log('[Cache] Final cleanup completed');
process.exit(0);
});
};

@@ -1,92 +0,0 @@
const fs = require('fs');
const Redis = require('ioredis');
const { isEnabled } = require('~/server/utils');
const logger = require('~/config/winston');
const { REDIS_URI, USE_REDIS, USE_REDIS_CLUSTER, REDIS_CA, REDIS_MAX_LISTENERS } = process.env;
/** @type {import('ioredis').Redis | import('ioredis').Cluster} */
let ioredisClient;
const redis_max_listeners = Number(REDIS_MAX_LISTENERS) || 40;
function mapURI(uri) {
const regex =
/^(?:(?<scheme>\w+):\/\/)?(?:(?<user>[^:@]+)(?::(?<password>[^@]+))?@)?(?<host>[\w.-]+)(?::(?<port>\d{1,5}))?$/;
const match = uri.match(regex);
if (match) {
const { scheme, user, password, host, port } = match.groups;
return {
scheme: scheme || 'none',
user: user || null,
password: password || null,
host: host || null,
port: port || null,
};
} else {
const parts = uri.split(':');
if (parts.length === 2) {
return {
scheme: 'none',
user: null,
password: null,
host: parts[0],
port: parts[1],
};
}
return {
scheme: 'none',
user: null,
password: null,
host: uri,
port: null,
};
}
}
if (REDIS_URI && isEnabled(USE_REDIS)) {
let redisOptions = null;
if (REDIS_CA) {
const ca = fs.readFileSync(REDIS_CA);
redisOptions = { tls: { ca } };
}
if (isEnabled(USE_REDIS_CLUSTER)) {
const hosts = REDIS_URI.split(',').map((item) => {
var value = mapURI(item);
return {
host: value.host,
port: value.port,
};
});
ioredisClient = new Redis.Cluster(hosts, { redisOptions });
} else {
ioredisClient = new Redis(REDIS_URI, redisOptions);
}
ioredisClient.on('ready', () => {
logger.info('IoRedis connection ready');
});
ioredisClient.on('reconnecting', () => {
logger.info('IoRedis connection reconnecting');
});
ioredisClient.on('end', () => {
logger.info('IoRedis connection ended');
});
ioredisClient.on('close', () => {
logger.info('IoRedis connection closed');
});
ioredisClient.on('error', (err) => logger.error('IoRedis connection error:', err));
ioredisClient.setMaxListeners(redis_max_listeners);
logger.info(
'[Optional] IoRedis initialized for rate limiters. If you have issues, disable Redis or restart the server.',
);
} else {
logger.info('[Optional] IoRedis not initialized for rate limiters.');
}
module.exports = ioredisClient;

api/cache/keyvRedis.js (deleted file)
@@ -1,109 +0,0 @@
const fs = require('fs');
const ioredis = require('ioredis');
const KeyvRedis = require('@keyv/redis').default;
const { isEnabled } = require('~/server/utils');
const logger = require('~/config/winston');
const { REDIS_URI, USE_REDIS, USE_REDIS_CLUSTER, REDIS_CA, REDIS_KEY_PREFIX, REDIS_MAX_LISTENERS } =
process.env;
let keyvRedis;
const redis_prefix = REDIS_KEY_PREFIX || '';
const redis_max_listeners = Number(REDIS_MAX_LISTENERS) || 40;
function mapURI(uri) {
const regex =
/^(?:(?<scheme>\w+):\/\/)?(?:(?<user>[^:@]+)(?::(?<password>[^@]+))?@)?(?<host>[\w.-]+)(?::(?<port>\d{1,5}))?$/;
const match = uri.match(regex);
if (match) {
const { scheme, user, password, host, port } = match.groups;
return {
scheme: scheme || 'none',
user: user || null,
password: password || null,
host: host || null,
port: port || null,
};
} else {
const parts = uri.split(':');
if (parts.length === 2) {
return {
scheme: 'none',
user: null,
password: null,
host: parts[0],
port: parts[1],
};
}
return {
scheme: 'none',
user: null,
password: null,
host: uri,
port: null,
};
}
}
if (REDIS_URI && isEnabled(USE_REDIS)) {
let redisOptions = null;
/** @type {import('@keyv/redis').KeyvRedisOptions} */
let keyvOpts = {
useRedisSets: false,
keyPrefix: redis_prefix,
};
if (REDIS_CA) {
const ca = fs.readFileSync(REDIS_CA);
redisOptions = { tls: { ca } };
}
if (isEnabled(USE_REDIS_CLUSTER)) {
const hosts = REDIS_URI.split(',').map((item) => {
var value = mapURI(item);
return {
host: value.host,
port: value.port,
};
});
const cluster = new ioredis.Cluster(hosts, { redisOptions });
keyvRedis = new KeyvRedis(cluster, keyvOpts);
} else {
keyvRedis = new KeyvRedis(REDIS_URI, keyvOpts);
}
const pingInterval = setInterval(
() => {
logger.debug('KeyvRedis ping');
keyvRedis.client.ping().catch((err) => logger.error('Redis keep-alive ping failed:', err));
},
5 * 60 * 1000,
);
keyvRedis.on('ready', () => {
logger.info('KeyvRedis connection ready');
});
keyvRedis.on('reconnecting', () => {
logger.info('KeyvRedis connection reconnecting');
});
keyvRedis.on('end', () => {
logger.info('KeyvRedis connection ended');
});
keyvRedis.on('close', () => {
clearInterval(pingInterval);
logger.info('KeyvRedis connection closed');
});
keyvRedis.on('error', (err) => logger.error('KeyvRedis connection error:', err));
keyvRedis.setMaxListeners(redis_max_listeners);
logger.info(
'[Optional] Redis initialized. If you have issues, or seeing older values, disable it or flush cache to refresh values.',
);
} else {
logger.info('[Optional] Redis not initialized.');
}
module.exports = keyvRedis;

@@ -1,4 +1,5 @@
const { isEnabled } = require('~/server/utils');
const { ViolationTypes } = require('librechat-data-provider');
const getLogStores = require('./getLogStores');
const banViolation = require('./banViolation');
@@ -9,14 +10,14 @@ const banViolation = require('./banViolation');
* @param {Object} res - Express response object.
* @param {string} type - The type of violation.
* @param {Object} errorMessage - The error message to log.
* @param {number} [score=1] - The severity of the violation. Defaults to 1
* @param {number | string} [score=1] - The severity of the violation. Defaults to 1
*/
const logViolation = async (req, res, type, errorMessage, score = 1) => {
const userId = req.user?.id ?? req.user?._id;
if (!userId) {
return;
}
const logs = getLogStores('general');
const logs = getLogStores(ViolationTypes.GENERAL);
const violationLogs = getLogStores(type);
const key = isEnabled(process.env.USE_REDIS) ? `${type}:${userId}` : userId;

api/cache/redisClients.js (new file)
@@ -0,0 +1,57 @@
const IoRedis = require('ioredis');
const { cacheConfig } = require('./cacheConfig');
const { createClient, createCluster } = require('@keyv/redis');
const GLOBAL_PREFIX_SEPARATOR = '::';
const urls = cacheConfig.REDIS_URI?.split(',').map((uri) => new URL(uri));
const username = urls?.[0].username || cacheConfig.REDIS_USERNAME;
const password = urls?.[0].password || cacheConfig.REDIS_PASSWORD;
const ca = cacheConfig.REDIS_CA;
/** @type {import('ioredis').Redis | import('ioredis').Cluster | null} */
let ioredisClient = null;
if (cacheConfig.USE_REDIS) {
const redisOptions = {
username: username,
password: password,
tls: ca ? { ca } : undefined,
keyPrefix: `${cacheConfig.REDIS_KEY_PREFIX}${GLOBAL_PREFIX_SEPARATOR}`,
maxListeners: cacheConfig.REDIS_MAX_LISTENERS,
};
ioredisClient =
urls.length === 1
? new IoRedis(cacheConfig.REDIS_URI, redisOptions)
: new IoRedis.Cluster(cacheConfig.REDIS_URI, { redisOptions });
// Pinging the Redis server every 5 minutes to keep the connection alive
const pingInterval = setInterval(() => ioredisClient.ping(), 5 * 60 * 1000);
ioredisClient.on('close', () => clearInterval(pingInterval));
ioredisClient.on('end', () => clearInterval(pingInterval));
}
/** @type {import('@keyv/redis').RedisClient | import('@keyv/redis').RedisCluster | null} */
let keyvRedisClient = null;
if (cacheConfig.USE_REDIS) {
// ** WARNING ** The Keyv Redis client does not support a key prefix like ioredis above.
// Prefixing is instead handled by the Keyv-Redis store in cacheFactory.js.
const redisOptions = { username, password, socket: { tls: ca != null, ca } };
keyvRedisClient =
urls.length === 1
? createClient({ url: cacheConfig.REDIS_URI, ...redisOptions })
: createCluster({
rootNodes: cacheConfig.REDIS_URI.split(',').map((url) => ({ url })),
defaults: redisOptions,
});
keyvRedisClient.setMaxListeners(cacheConfig.REDIS_MAX_LISTENERS);
// Pinging the Redis server every 5 minutes to keep the connection alive
const keyvPingInterval = setInterval(() => keyvRedisClient.ping(), 5 * 60 * 1000);
keyvRedisClient.on('disconnect', () => clearInterval(keyvPingInterval));
keyvRedisClient.on('end', () => clearInterval(keyvPingInterval));
}
module.exports = { ioredisClient, keyvRedisClient, GLOBAL_PREFIX_SEPARATOR };

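For illustration, assuming REDIS_KEY_PREFIX resolves to `myapp`: keys written through the two clients end up shaped roughly as follows (the ioredis client applies the prefix itself, while the node-redis client used by Keyv is left bare and receives its prefix from the store configured in cacheFactory.js).

// ioredis (keyPrefix applied client-side):
//   myapp::<key>
// Keyv over node-redis (prefix + namespace applied by the store):
//   myapp::<namespace>:<key>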
@@ -11,12 +11,13 @@ let flowManager = null;
/**
* @param {string} [userId] - Optional user ID, to avoid disconnecting the current user.
* @param {boolean} [skipIdleCheck] - Skip idle connection checking to avoid unnecessary pings.
* @returns {MCPManager}
*/
function getMCPManager(userId) {
function getMCPManager(userId, skipIdleCheck = false) {
if (!mcpManager) {
mcpManager = MCPManager.getInstance();
} else {
} else if (!skipIdleCheck) {
mcpManager.checkIdleConnections(userId);
}
return mcpManager;

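A sketch of the intended call sites (illustrative): the initial-connection path skips the idle check to avoid pinging servers during discovery, while ordinary request paths keep it.

// During initial server discovery — no idle-connection checks:
const manager = getMCPManager(undefined, true);

// On a normal user request — idle connections for this user are checked:
const userManager = getMCPManager(req.user.id);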
@@ -61,7 +61,7 @@ const getAgent = async (searchParameter) => await Agent.findOne(searchParameter)
const loadEphemeralAgent = async ({ req, agent_id, endpoint, model_parameters: _m }) => {
const { model, ...model_parameters } = _m;
/** @type {Record<string, FunctionTool>} */
const availableTools = await getCachedTools({ includeGlobal: true });
const availableTools = await getCachedTools({ userId: req.user.id, includeGlobal: true });
/** @type {TEphemeralAgent | null} */
const ephemeralAgent = req.body.ephemeralAgent;
const mcpServers = new Set(ephemeralAgent?.mcp);
@@ -90,7 +90,7 @@ const loadEphemeralAgent = async ({ req, agent_id, endpoint, model_parameters: _
}
const instructions = req.body.promptPrefix;
return {
const result = {
id: agent_id,
instructions,
provider: endpoint,
@@ -98,6 +98,11 @@ const loadEphemeralAgent = async ({ req, agent_id, endpoint, model_parameters: _
model,
tools,
};
if (ephemeralAgent?.artifacts != null && ephemeralAgent.artifacts) {
result.artifacts = ephemeralAgent.artifacts;
}
return result;
};
/**

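For illustration, the request shape this passthrough supports (field values hypothetical):

// req.body.ephemeralAgent:
// {
//   mcp: ['github'],
//   artifacts: 'default', // any non-null, truthy value is copied onto the agent
// }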
@@ -1,6 +1,6 @@
const { logger } = require('@librechat/data-schemas');
const { createTempChatExpirationDate } = require('@librechat/api');
const getCustomConfig = require('~/server/services/Config/loadCustomConfig');
const getCustomConfig = require('~/server/services/Config/getCustomConfig');
const { getMessages, deleteMessages } = require('./Message');
const { Conversation } = require('~/db/models');

@@ -1,5 +1,7 @@
const { logger } = require('@librechat/data-schemas');
const { EToolResources, FileContext } = require('librechat-data-provider');
const { EToolResources, FileContext, Constants } = require('librechat-data-provider');
const { getProjectByName } = require('./Project');
const { getAgent } = require('./Agent');
const { File } = require('~/db/models');
/**
@@ -12,17 +14,124 @@ const findFileById = async (file_id, options = {}) => {
return await File.findOne({ file_id, ...options }).lean();
};
/**
* Checks if a user has access to multiple files through a shared agent (batch operation)
* @param {string} userId - The user ID to check access for
* @param {string[]} fileIds - Array of file IDs to check
* @param {string} agentId - The agent ID that might grant access
 * @param {boolean} [checkCollaborative=true] - Whether a globally shared agent must be collaborative to grant access
 * @returns {Promise<Map<string, boolean>>} Map of fileId to access status
*/
const hasAccessToFilesViaAgent = async (userId, fileIds, agentId, checkCollaborative = true) => {
const accessMap = new Map();
// Initialize all files as no access
fileIds.forEach((fileId) => accessMap.set(fileId, false));
try {
const agent = await getAgent({ id: agentId });
if (!agent) {
return accessMap;
}
// Check if user is the author - if so, grant access to all files
if (agent.author.toString() === userId) {
fileIds.forEach((fileId) => accessMap.set(fileId, true));
return accessMap;
}
// Check if agent is shared with the user via projects
if (!agent.projectIds || agent.projectIds.length === 0) {
return accessMap;
}
// Check if agent is in global project
const globalProject = await getProjectByName(Constants.GLOBAL_PROJECT_NAME, '_id');
if (
!globalProject ||
!agent.projectIds.some((pid) => pid.toString() === globalProject._id.toString())
) {
return accessMap;
}
// Agent is globally shared - check if it's collaborative
if (checkCollaborative && !agent.isCollaborative) {
return accessMap;
}
// Check which files are actually attached
const attachedFileIds = new Set();
if (agent.tool_resources) {
for (const [_resourceType, resource] of Object.entries(agent.tool_resources)) {
if (resource?.file_ids && Array.isArray(resource.file_ids)) {
resource.file_ids.forEach((fileId) => attachedFileIds.add(fileId));
}
}
}
// Grant access only to files that are attached to this agent
fileIds.forEach((fileId) => {
if (attachedFileIds.has(fileId)) {
accessMap.set(fileId, true);
}
});
return accessMap;
} catch (error) {
logger.error('[hasAccessToFilesViaAgent] Error checking file access:', error);
return accessMap;
}
};
/**
* Retrieves files matching a given filter, sorted by the most recently updated.
* @param {Object} filter - The filter criteria to apply.
* @param {Object} [_sortOptions] - Optional sort parameters.
* @param {Object|String} [selectFields={ text: 0 }] - Fields to include/exclude in the query results.
* Default excludes the 'text' field.
* @param {Object} [options] - Additional options
* @param {string} [options.userId] - User ID for access control
* @param {string} [options.agentId] - Agent ID that might grant access to files
* @returns {Promise<Array<MongoFile>>} A promise that resolves to an array of file documents.
*/
const getFiles = async (filter, _sortOptions, selectFields = { text: 0 }) => {
const getFiles = async (filter, _sortOptions, selectFields = { text: 0 }, options = {}) => {
const sortOptions = { updatedAt: -1, ..._sortOptions };
return await File.find(filter).select(selectFields).sort(sortOptions).lean();
const files = await File.find(filter).select(selectFields).sort(sortOptions).lean();
// If userId and agentId are provided, filter files based on access
if (options.userId && options.agentId) {
// Collect file IDs that need access check
const filesToCheck = [];
const ownedFiles = [];
for (const file of files) {
if (file.user && file.user.toString() === options.userId) {
ownedFiles.push(file);
} else {
filesToCheck.push(file);
}
}
if (filesToCheck.length === 0) {
return ownedFiles;
}
// Batch check access for all non-owned files
const fileIds = filesToCheck.map((f) => f.file_id);
const accessMap = await hasAccessToFilesViaAgent(
options.userId,
fileIds,
options.agentId,
false,
);
// Filter files based on access
const accessibleFiles = filesToCheck.filter((file) => accessMap.get(file.file_id));
return [...ownedFiles, ...accessibleFiles];
}
return files;
};
/**
@@ -176,4 +285,5 @@ module.exports = {
deleteFiles,
deleteFileByFilter,
batchUpdateFiles,
hasAccessToFilesViaAgent,
};

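A usage sketch of the new access-controlled query path (IDs illustrative): files owned by the user always pass, and non-owned files are kept only if the batch agent check grants access.

// Inside an async route handler:
const files = await getFiles(
  { file_id: { $in: fileIds } },
  null,
  { text: 0 },
  { userId: req.user.id, agentId: 'agent_abc123' },
);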
api/models/File.spec.js (new file)
@@ -0,0 +1,264 @@
const mongoose = require('mongoose');
const { v4: uuidv4 } = require('uuid');
const { fileSchema } = require('@librechat/data-schemas');
const { agentSchema } = require('@librechat/data-schemas');
const { projectSchema } = require('@librechat/data-schemas');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { GLOBAL_PROJECT_NAME } = require('librechat-data-provider').Constants;
const { getFiles, createFile } = require('./File');
const { getProjectByName } = require('./Project');
const { createAgent } = require('./Agent');
let File;
let Agent;
let Project;
describe('File Access Control', () => {
let mongoServer;
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
File = mongoose.models.File || mongoose.model('File', fileSchema);
Agent = mongoose.models.Agent || mongoose.model('Agent', agentSchema);
Project = mongoose.models.Project || mongoose.model('Project', projectSchema);
await mongoose.connect(mongoUri);
});
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
await File.deleteMany({});
await Agent.deleteMany({});
await Project.deleteMany({});
});
describe('hasAccessToFilesViaAgent', () => {
it('should efficiently check access for multiple files at once', async () => {
const userId = new mongoose.Types.ObjectId().toString();
const authorId = new mongoose.Types.ObjectId().toString();
const agentId = uuidv4();
const fileIds = [uuidv4(), uuidv4(), uuidv4(), uuidv4()];
// Create files
for (const fileId of fileIds) {
await createFile({
user: authorId,
file_id: fileId,
filename: `file-${fileId}.txt`,
filepath: `/uploads/${fileId}`,
});
}
// Create agent with only first two files attached
await createAgent({
id: agentId,
name: 'Test Agent',
author: authorId,
model: 'gpt-4',
provider: 'openai',
isCollaborative: true,
tool_resources: {
file_search: {
file_ids: [fileIds[0], fileIds[1]],
},
},
});
// Get or create global project
const globalProject = await getProjectByName(GLOBAL_PROJECT_NAME, '_id');
// Share agent globally
await Agent.updateOne({ id: agentId }, { $push: { projectIds: globalProject._id } });
// Check access for all files
const { hasAccessToFilesViaAgent } = require('./File');
const accessMap = await hasAccessToFilesViaAgent(userId, fileIds, agentId);
// Should have access only to the first two files
expect(accessMap.get(fileIds[0])).toBe(true);
expect(accessMap.get(fileIds[1])).toBe(true);
expect(accessMap.get(fileIds[2])).toBe(false);
expect(accessMap.get(fileIds[3])).toBe(false);
});
it('should grant access to all files when user is the agent author', async () => {
const authorId = new mongoose.Types.ObjectId().toString();
const agentId = uuidv4();
const fileIds = [uuidv4(), uuidv4(), uuidv4()];
// Create agent
await createAgent({
id: agentId,
name: 'Test Agent',
author: authorId,
model: 'gpt-4',
provider: 'openai',
tool_resources: {
file_search: {
file_ids: [fileIds[0]], // Only one file attached
},
},
});
// Check access as the author
const { hasAccessToFilesViaAgent } = require('./File');
const accessMap = await hasAccessToFilesViaAgent(authorId, fileIds, agentId);
// Author should have access to all files
expect(accessMap.get(fileIds[0])).toBe(true);
expect(accessMap.get(fileIds[1])).toBe(true);
expect(accessMap.get(fileIds[2])).toBe(true);
});
it('should handle non-existent agent gracefully', async () => {
const userId = new mongoose.Types.ObjectId().toString();
const fileIds = [uuidv4(), uuidv4()];
const { hasAccessToFilesViaAgent } = require('./File');
const accessMap = await hasAccessToFilesViaAgent(userId, fileIds, 'non-existent-agent');
// Should have no access to any files
expect(accessMap.get(fileIds[0])).toBe(false);
expect(accessMap.get(fileIds[1])).toBe(false);
});
it('should deny access when agent is not collaborative', async () => {
const userId = new mongoose.Types.ObjectId().toString();
const authorId = new mongoose.Types.ObjectId().toString();
const agentId = uuidv4();
const fileIds = [uuidv4(), uuidv4()];
// Create agent with files but isCollaborative: false
await createAgent({
id: agentId,
name: 'Non-Collaborative Agent',
author: authorId,
model: 'gpt-4',
provider: 'openai',
isCollaborative: false,
tool_resources: {
file_search: {
file_ids: fileIds,
},
},
});
// Get or create global project
const globalProject = await getProjectByName(GLOBAL_PROJECT_NAME, '_id');
// Share agent globally
await Agent.updateOne({ id: agentId }, { $push: { projectIds: globalProject._id } });
// Check access for files
const { hasAccessToFilesViaAgent } = require('./File');
const accessMap = await hasAccessToFilesViaAgent(userId, fileIds, agentId);
// Should have no access to any files when isCollaborative is false
expect(accessMap.get(fileIds[0])).toBe(false);
expect(accessMap.get(fileIds[1])).toBe(false);
});
});
describe('getFiles with agent access control', () => {
test('should return files owned by user and files accessible through agent', async () => {
const authorId = new mongoose.Types.ObjectId();
const userId = new mongoose.Types.ObjectId();
const agentId = `agent_${uuidv4()}`;
const ownedFileId = `file_${uuidv4()}`;
const sharedFileId = `file_${uuidv4()}`;
const inaccessibleFileId = `file_${uuidv4()}`;
// Create/get global project using getProjectByName which will upsert
const globalProject = await getProjectByName(GLOBAL_PROJECT_NAME);
// Create agent with shared file
await createAgent({
id: agentId,
name: 'Shared Agent',
provider: 'test',
model: 'test-model',
author: authorId,
projectIds: [globalProject._id],
isCollaborative: true,
tool_resources: {
file_search: {
file_ids: [sharedFileId],
},
},
});
// Create files
await createFile({
file_id: ownedFileId,
user: userId,
filename: 'owned.txt',
filepath: '/uploads/owned.txt',
type: 'text/plain',
bytes: 100,
});
await createFile({
file_id: sharedFileId,
user: authorId,
filename: 'shared.txt',
filepath: '/uploads/shared.txt',
type: 'text/plain',
bytes: 200,
embedded: true,
});
await createFile({
file_id: inaccessibleFileId,
user: authorId,
filename: 'inaccessible.txt',
filepath: '/uploads/inaccessible.txt',
type: 'text/plain',
bytes: 300,
});
// Get files with access control
const files = await getFiles(
{ file_id: { $in: [ownedFileId, sharedFileId, inaccessibleFileId] } },
null,
{ text: 0 },
{ userId: userId.toString(), agentId },
);
expect(files).toHaveLength(2);
expect(files.map((f) => f.file_id)).toContain(ownedFileId);
expect(files.map((f) => f.file_id)).toContain(sharedFileId);
expect(files.map((f) => f.file_id)).not.toContain(inaccessibleFileId);
});
test('should return all files when no userId/agentId provided', async () => {
const userId = new mongoose.Types.ObjectId();
const fileId1 = `file_${uuidv4()}`;
const fileId2 = `file_${uuidv4()}`;
await createFile({
file_id: fileId1,
user: userId,
filename: 'file1.txt',
filepath: '/uploads/file1.txt',
type: 'text/plain',
bytes: 100,
});
await createFile({
file_id: fileId2,
user: new mongoose.Types.ObjectId(),
filename: 'file2.txt',
filepath: '/uploads/file2.txt',
type: 'text/plain',
bytes: 200,
});
const files = await getFiles({ file_id: { $in: [fileId1, fileId2] } });
expect(files).toHaveLength(2);
});
});
});

@@ -1,7 +1,7 @@
const { z } = require('zod');
const { logger } = require('@librechat/data-schemas');
const { createTempChatExpirationDate } = require('@librechat/api');
const getCustomConfig = require('~/server/services/Config/loadCustomConfig');
const getCustomConfig = require('~/server/services/Config/getCustomConfig');
const { Message } = require('~/db/models');
const idSchema = z.string().uuid();

@@ -135,10 +135,11 @@ const tokenValues = Object.assign(
'grok-2-1212': { prompt: 2.0, completion: 10.0 },
'grok-2-latest': { prompt: 2.0, completion: 10.0 },
'grok-2': { prompt: 2.0, completion: 10.0 },
'grok-3-mini-fast': { prompt: 0.4, completion: 4 },
'grok-3-mini-fast': { prompt: 0.6, completion: 4 },
'grok-3-mini': { prompt: 0.3, completion: 0.5 },
'grok-3-fast': { prompt: 5.0, completion: 25.0 },
'grok-3': { prompt: 3.0, completion: 15.0 },
'grok-4': { prompt: 3.0, completion: 15.0 },
'grok-beta': { prompt: 5.0, completion: 15.0 },
'mistral-large': { prompt: 2.0, completion: 6.0 },
'pixtral-large': { prompt: 2.0, completion: 6.0 },

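A worked example of the updated multipliers, assuming (as such pricing tables conventionally are) values in USD per 1M tokens — an assumption, not something this diff states:

// grok-4 is priced at prompt 3.0, completion 15.0 above.
// 10,000 prompt tokens:     10,000 * 3.0  / 1e6 = $0.03
//  2,000 completion tokens:  2,000 * 15.0 / 1e6 = $0.03
// Estimated request cost:                         $0.06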
@@ -636,6 +636,15 @@ describe('Grok Model Tests - Pricing', () => {
);
});
test('should return correct prompt and completion rates for Grok 4 model', () => {
expect(getMultiplier({ model: 'grok-4-0709', tokenType: 'prompt' })).toBe(
tokenValues['grok-4'].prompt,
);
expect(getMultiplier({ model: 'grok-4-0709', tokenType: 'completion' })).toBe(
tokenValues['grok-4'].completion,
);
});
test('should return correct prompt and completion rates for Grok 3 models with prefixes', () => {
expect(getMultiplier({ model: 'xai/grok-3', tokenType: 'prompt' })).toBe(
tokenValues['grok-3'].prompt,
@@ -662,6 +671,15 @@ describe('Grok Model Tests - Pricing', () => {
tokenValues['grok-3-mini-fast'].completion,
);
});
test('should return correct prompt and completion rates for Grok 4 model with prefixes', () => {
expect(getMultiplier({ model: 'xai/grok-4-0709', tokenType: 'prompt' })).toBe(
tokenValues['grok-4'].prompt,
);
expect(getMultiplier({ model: 'xai/grok-4-0709', tokenType: 'completion' })).toBe(
tokenValues['grok-4'].completion,
);
});
});
});

@@ -1,6 +1,6 @@
{
"name": "@librechat/backend",
"version": "v0.7.8",
"version": "v0.7.9-rc1",
"description": "",
"scripts": {
"start": "echo 'please run this from the root directory'",
@@ -44,19 +44,20 @@
"@googleapis/youtube": "^20.0.0",
"@keyv/redis": "^4.3.3",
"@langchain/community": "^0.3.47",
"@langchain/core": "^0.3.60",
"@langchain/core": "^0.3.62",
"@langchain/google-genai": "^0.2.13",
"@langchain/google-vertexai": "^0.2.13",
"@langchain/openai": "^0.5.18",
"@langchain/textsplitters": "^0.1.0",
"@librechat/agents": "^2.4.46",
"@librechat/agents": "^2.4.67",
"@librechat/api": "*",
"@librechat/data-schemas": "*",
"@node-saml/passport-saml": "^5.0.0",
"@waylaidwanderer/fetch-event-source": "^3.0.1",
"axios": "^1.8.2",
"bcryptjs": "^2.4.3",
"compression": "^1.7.4",
"connect-redis": "^7.1.0",
"compression": "^1.8.1",
"connect-redis": "^8.1.0",
"cookie": "^0.7.2",
"cookie-parser": "^1.4.7",
"cors": "^2.8.5",
@@ -66,10 +67,11 @@
"express": "^4.21.2",
"express-mongo-sanitize": "^2.2.0",
"express-rate-limit": "^7.4.1",
"express-session": "^1.18.1",
"express-session": "^1.18.2",
"express-static-gzip": "^2.2.0",
"file-type": "^18.7.0",
"firebase": "^11.0.2",
"form-data": "^4.0.4",
"googleapis": "^126.0.1",
"handlebars": "^4.7.7",
"https-proxy-agent": "^7.0.6",
@@ -87,12 +89,12 @@
"mime": "^3.0.0",
"module-alias": "^2.2.3",
"mongoose": "^8.12.1",
"multer": "^2.0.1",
"multer": "^2.0.2",
"nanoid": "^3.3.7",
"node-fetch": "^2.7.0",
"nodemailer": "^6.9.15",
"ollama": "^0.5.0",
"openai": "^4.96.2",
"openai": "^5.10.1",
"openai-chat-tokens": "^0.2.8",
"openid-client": "^6.5.0",
"passport": "^0.6.0",

@@ -24,17 +24,23 @@ const handleValidationError = (err, res) => {
}
};
// eslint-disable-next-line no-unused-vars
module.exports = (err, req, res, next) => {
module.exports = (err, _req, res, _next) => {
try {
if (err.name === 'ValidationError') {
return (err = handleValidationError(err, res));
return handleValidationError(err, res);
}
if (err.code && err.code == 11000) {
return (err = handleDuplicateKeyError(err, res));
return handleDuplicateKeyError(err, res);
}
} catch (err) {
// Special handling for errors like SyntaxError
if (err.statusCode && err.body) {
return res.status(err.statusCode).send(err.body);
}
logger.error('ErrorController => error', err);
res.status(500).send('An unknown error occurred.');
return res.status(500).send('An unknown error occurred.');
} catch (err) {
logger.error('ErrorController => processing error', err);
return res.status(500).send('Processing error in ErrorController.');
}
};

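For reference, a minimal sketch of how an Express error handler with this signature is mounted (app wiring illustrative):

const express = require('express');
const errorController = require('./ErrorController');

const app = express();
// ...routes registered here...
app.use(errorController); // error handlers must be registered after all routes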
@@ -0,0 +1,241 @@
const errorController = require('./ErrorController');
const { logger } = require('~/config');
// Mock the logger
jest.mock('~/config', () => ({
logger: {
error: jest.fn(),
},
}));
describe('ErrorController', () => {
let mockReq, mockRes, mockNext;
beforeEach(() => {
mockReq = {};
mockRes = {
status: jest.fn().mockReturnThis(),
send: jest.fn(),
};
mockNext = jest.fn();
logger.error.mockClear();
});
describe('ValidationError handling', () => {
it('should handle ValidationError with single error', () => {
const validationError = {
name: 'ValidationError',
errors: {
email: { message: 'Email is required', path: 'email' },
},
};
errorController(validationError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(400);
expect(mockRes.send).toHaveBeenCalledWith({
messages: '["Email is required"]',
fields: '["email"]',
});
expect(logger.error).toHaveBeenCalledWith('Validation error:', validationError.errors);
});
it('should handle ValidationError with multiple errors', () => {
const validationError = {
name: 'ValidationError',
errors: {
email: { message: 'Email is required', path: 'email' },
password: { message: 'Password is required', path: 'password' },
},
};
errorController(validationError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(400);
expect(mockRes.send).toHaveBeenCalledWith({
messages: '"Email is required Password is required"',
fields: '["email","password"]',
});
expect(logger.error).toHaveBeenCalledWith('Validation error:', validationError.errors);
});
it('should handle ValidationError with empty errors object', () => {
const validationError = {
name: 'ValidationError',
errors: {},
};
errorController(validationError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(400);
expect(mockRes.send).toHaveBeenCalledWith({
messages: '[]',
fields: '[]',
});
});
});
describe('Duplicate key error handling', () => {
it('should handle duplicate key error (code 11000)', () => {
const duplicateKeyError = {
code: 11000,
keyValue: { email: 'test@example.com' },
};
errorController(duplicateKeyError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(409);
expect(mockRes.send).toHaveBeenCalledWith({
messages: 'An document with that ["email"] already exists.',
fields: '["email"]',
});
expect(logger.error).toHaveBeenCalledWith('Duplicate key error:', duplicateKeyError.keyValue);
});
it('should handle duplicate key error with multiple fields', () => {
const duplicateKeyError = {
code: 11000,
keyValue: { email: 'test@example.com', username: 'testuser' },
};
errorController(duplicateKeyError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(409);
expect(mockRes.send).toHaveBeenCalledWith({
messages: 'An document with that ["email","username"] already exists.',
fields: '["email","username"]',
});
expect(logger.error).toHaveBeenCalledWith('Duplicate key error:', duplicateKeyError.keyValue);
});
it('should handle error with code 11000 as string', () => {
const duplicateKeyError = {
code: '11000',
keyValue: { email: 'test@example.com' },
};
errorController(duplicateKeyError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(409);
expect(mockRes.send).toHaveBeenCalledWith({
messages: 'An document with that ["email"] already exists.',
fields: '["email"]',
});
});
});
describe('SyntaxError handling', () => {
it('should handle errors with statusCode and body', () => {
const syntaxError = {
statusCode: 400,
body: 'Invalid JSON syntax',
};
errorController(syntaxError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(400);
expect(mockRes.send).toHaveBeenCalledWith('Invalid JSON syntax');
});
it('should handle errors with different statusCode and body', () => {
const customError = {
statusCode: 422,
body: { error: 'Unprocessable entity' },
};
errorController(customError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(422);
expect(mockRes.send).toHaveBeenCalledWith({ error: 'Unprocessable entity' });
});
it('should handle error with statusCode but no body', () => {
const partialError = {
statusCode: 400,
};
errorController(partialError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(500);
expect(mockRes.send).toHaveBeenCalledWith('An unknown error occurred.');
});
it('should handle error with body but no statusCode', () => {
const partialError = {
body: 'Some error message',
};
errorController(partialError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(500);
expect(mockRes.send).toHaveBeenCalledWith('An unknown error occurred.');
});
});
describe('Unknown error handling', () => {
it('should handle unknown errors', () => {
const unknownError = new Error('Some unknown error');
errorController(unknownError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(500);
expect(mockRes.send).toHaveBeenCalledWith('An unknown error occurred.');
expect(logger.error).toHaveBeenCalledWith('ErrorController => error', unknownError);
});
it('should handle errors with code other than 11000', () => {
const mongoError = {
code: 11100,
message: 'Some MongoDB error',
};
errorController(mongoError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(500);
expect(mockRes.send).toHaveBeenCalledWith('An unknown error occurred.');
expect(logger.error).toHaveBeenCalledWith('ErrorController => error', mongoError);
});
it('should handle null/undefined errors', () => {
errorController(null, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(500);
expect(mockRes.send).toHaveBeenCalledWith('Processing error in ErrorController.');
expect(logger.error).toHaveBeenCalledWith(
'ErrorController => processing error',
expect.any(Error),
);
});
});
describe('Catch block handling', () => {
beforeEach(() => {
// Restore logger mock to normal behavior for these tests
logger.error.mockRestore();
logger.error = jest.fn();
});
it('should handle errors when logger.error throws', () => {
// Create fresh mocks for this test
const freshMockRes = {
status: jest.fn().mockReturnThis(),
send: jest.fn(),
};
// Mock logger to throw on the first call, succeed on the second
logger.error
.mockImplementationOnce(() => {
throw new Error('Logger error');
})
.mockImplementation(() => {});
const testError = new Error('Test error');
errorController(testError, mockReq, freshMockRes, mockNext);
expect(freshMockRes.status).toHaveBeenCalledWith(500);
expect(freshMockRes.send).toHaveBeenCalledWith('Processing error in ErrorController.');
expect(logger.error).toHaveBeenCalledTimes(2);
});
});
});

@@ -1,11 +1,10 @@
const { logger } = require('@librechat/data-schemas');
const { CacheKeys, AuthType } = require('librechat-data-provider');
const { CacheKeys, AuthType, Constants } = require('librechat-data-provider');
const { getCustomConfig, getCachedTools } = require('~/server/services/Config');
const { getToolkitKey } = require('~/server/services/ToolService');
const { getMCPManager, getFlowStateManager } = require('~/config');
const { availableTools } = require('~/app/clients/tools');
const { getLogStores } = require('~/cache');
const { Constants } = require('librechat-data-provider');
/**
* Filters out duplicate plugins from the list of plugins.
@@ -98,7 +97,6 @@ function createServerToolsCallback() {
return;
}
await mcpToolsCache.set(serverName, serverTools);
logger.debug(`MCP tools for ${serverName} added to cache.`);
} catch (error) {
logger.error('Error retrieving MCP tools from cache:', error);
}
@@ -139,15 +137,21 @@ function createGetServerTools() {
*/
const getAvailableTools = async (req, res) => {
try {
const userId = req.user?.id;
const customConfig = await getCustomConfig();
const cache = getLogStores(CacheKeys.CONFIG_STORE);
const cachedTools = await cache.get(CacheKeys.TOOLS);
if (cachedTools) {
res.status(200).json(cachedTools);
const cachedToolsArray = await cache.get(CacheKeys.TOOLS);
const cachedUserTools = await getCachedTools({ userId });
const userPlugins = await convertMCPToolsToPlugins(cachedUserTools, customConfig, userId);
if (cachedToolsArray && userPlugins) {
const dedupedTools = filterUniquePlugins([...userPlugins, ...cachedToolsArray]);
res.status(200).json(dedupedTools);
return;
}
// If not in cache, build from manifest
let pluginManifest = availableTools;
const customConfig = await getCustomConfig();
if (customConfig?.mcpServers != null) {
const mcpManager = getMCPManager();
const flowsCache = getLogStores(CacheKeys.FLOWS);
@@ -173,7 +177,7 @@ const getAvailableTools = async (req, res) => {
}
});
const toolDefinitions = await getCachedTools({ includeGlobal: true });
const toolDefinitions = (await getCachedTools({ includeGlobal: true })) || {};
const toolsOutput = [];
for (const plugin of authenticatedPlugins) {
@@ -197,37 +201,150 @@ const getAvailableTools = async (req, res) => {
const serverName = parts[parts.length - 1];
const serverConfig = customConfig?.mcpServers?.[serverName];
if (!serverConfig?.customUserVars) {
if (!serverConfig) {
toolsOutput.push(toolToAdd);
continue;
}
const customVarKeys = Object.keys(serverConfig.customUserVars);
if (customVarKeys.length === 0) {
toolToAdd.authConfig = [];
toolToAdd.authenticated = true;
} else {
// Handle MCP servers with customUserVars (user-level auth required)
if (serverConfig.customUserVars) {
// Build authConfig for MCP tools
toolToAdd.authConfig = Object.entries(serverConfig.customUserVars).map(([key, value]) => ({
authField: key,
label: value.title || key,
description: value.description || '',
}));
toolToAdd.authenticated = false;
// Check actual connection status for MCP tools with auth requirements
if (userId) {
try {
const mcpManager = getMCPManager(userId);
const connectionStatus = await mcpManager.getUserConnectionStatus(userId, serverName);
toolToAdd.authenticated = connectionStatus.connected;
} catch (error) {
logger.error(
`[getAvailableTools] Error checking connection status for ${serverName}:`,
error,
);
toolToAdd.authenticated = false;
}
} else {
// For non-authenticated requests, default to false
toolToAdd.authenticated = false;
}
} else {
// Handle app-level MCP servers (no auth required)
toolToAdd.authConfig = [];
// Check if the app-level connection is active
try {
const mcpManager = getMCPManager();
const appConnection = mcpManager.getConnection(serverName);
if (appConnection) {
const connectionState = appConnection.getConnectionState();
// For app-level connections, consider them authenticated if they're in 'connected' state
// This is more reliable than isConnected() which does network calls
toolToAdd.authenticated = connectionState === 'connected';
} else {
logger.warn(`[getAvailableTools] No app-level connection found for ${serverName}`);
toolToAdd.authenticated = false;
}
} catch (error) {
logger.error(
`[getAvailableTools] Error checking app-level connection status for ${serverName}:`,
error,
);
toolToAdd.authenticated = false;
}
}
toolsOutput.push(toolToAdd);
}
const finalTools = filterUniquePlugins(toolsOutput);
await cache.set(CacheKeys.TOOLS, finalTools);
res.status(200).json(finalTools);
const dedupedTools = filterUniquePlugins([...userPlugins, ...finalTools]);
res.status(200).json(dedupedTools);
} catch (error) {
logger.error('[getAvailableTools]', error);
res.status(500).json({ message: error.message });
}
};
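To make the resulting payload concrete, here is a minimal sketch of the plugin entry this controller emits for an MCP server that declares one custom user variable (the server name, tool name, and variable key below are hypothetical):
// Hypothetical librechat.yaml fragment:
// mcpServers:
//   github:
//     customUserVars:
//       GITHUB_TOKEN: { title: 'GitHub Token', description: 'Personal access token' }
//
// Corresponding entry in the tools payload:
const examplePluginEntry = {
  name: 'search_issues',
  pluginKey: `search_issues${Constants.mcp_delimiter}github`,
  description: 'Search GitHub issues',
  authConfig: [
    { authField: 'GITHUB_TOKEN', label: 'GitHub Token', description: 'Personal access token' },
  ],
  // true only when getUserConnectionStatus reports an active per-user connection
  authenticated: false,
};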
/**
 * Converts MCP function format tools to plugin format
 * @param {Object} functionTools - Object with function format tools
 * @param {Object} customConfig - Custom configuration for MCP servers
 * @param {string | null} [userId] - Optional user ID for per-user connection status checks
 * @returns {Promise<Array>} Array of plugin objects
 */
async function convertMCPToolsToPlugins(functionTools, customConfig, userId = null) {
const plugins = [];
// Cached tools may be null (see the `|| {}` fallback applied to getCachedTools above)
for (const [toolKey, toolData] of Object.entries(functionTools ?? {})) {
if (!toolData.function || !toolKey.includes(Constants.mcp_delimiter)) {
continue;
}
const functionData = toolData.function;
const parts = toolKey.split(Constants.mcp_delimiter);
const serverName = parts[parts.length - 1];
const plugin = {
name: parts[0], // Use the tool name without server suffix
pluginKey: toolKey,
description: functionData.description || '',
authenticated: false, // Default to false, will be updated based on connection status
icon: undefined,
};
// Build authConfig for MCP tools
const serverConfig = customConfig?.mcpServers?.[serverName];
if (!serverConfig?.customUserVars) {
plugin.authConfig = [];
plugin.authenticated = true; // No auth required
plugins.push(plugin);
continue;
}
const customVarKeys = Object.keys(serverConfig.customUserVars);
if (customVarKeys.length === 0) {
plugin.authConfig = [];
plugin.authenticated = true; // No auth required
} else {
plugin.authConfig = Object.entries(serverConfig.customUserVars).map(([key, value]) => ({
authField: key,
label: value.title || key,
description: value.description || '',
}));
// Check actual connection status for MCP tools with auth requirements
if (userId) {
try {
const mcpManager = getMCPManager(userId);
const connectionStatus = await mcpManager.getUserConnectionStatus(userId, serverName);
plugin.authenticated = connectionStatus.connected;
} catch (error) {
logger.error(
`[convertMCPToolsToPlugins] Error checking connection status for ${serverName}:`,
error,
);
plugin.authenticated = false;
}
} else {
plugin.authenticated = false;
}
}
plugins.push(plugin);
}
return plugins;
}
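A usage sketch for the helper above (cache contents and ids are hypothetical); with no customUserVars configured for the server, the tool is reported as authenticated with an empty authConfig:
// Hypothetical cached entry, as returned by getCachedTools({ userId }):
const exampleCachedTools = {
  [`list_files${Constants.mcp_delimiter}drive`]: {
    function: { name: 'list_files', description: 'List files in Drive' },
  },
};
// Inside an async context:
// -> [{ name: 'list_files', pluginKey: 'list_files..._drive',
//       description: 'List files in Drive', authConfig: [], authenticated: true }]
const plugins = await convertMCPToolsToPlugins(exampleCachedTools, customConfig, 'user_123');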
module.exports = {
getAvailableTools,
getAvailablePluginsController,

@@ -1,11 +1,5 @@
const {
Tools,
Constants,
FileSources,
webSearchKeys,
extractWebSearchEnvVars,
} = require('librechat-data-provider');
const { logger } = require('@librechat/data-schemas');
const { webSearchKeys, extractWebSearchEnvVars } = require('@librechat/api');
const {
getFiles,
updateUser,
@@ -20,6 +14,7 @@ const { updateUserPluginAuth, deleteUserPluginAuth } = require('~/server/service
const { updateUserPluginsService, deleteUserKey } = require('~/server/services/UserService');
const { verifyEmail, resendVerificationEmail } = require('~/server/services/AuthService');
const { needsRefresh, getNewS3URL } = require('~/server/services/Files/S3/crud');
const { Tools, Constants, FileSources } = require('librechat-data-provider');
const { processDeleteRequest } = require('~/server/services/Files/process');
const { Transaction, Balance, User } = require('~/db/models');
const { deleteToolCalls } = require('~/models/ToolCall');
@@ -180,14 +175,18 @@ const updateUserPluginsController = async (req, res) => {
try {
const mcpManager = getMCPManager(user.id);
if (mcpManager) {
// Extract server name from pluginKey (e.g., "mcp_myserver" -> "myserver")
const serverName = pluginKey.replace(Constants.mcp_prefix, '');
logger.info(
`[updateUserPluginsController] Disconnecting MCP connections for user ${user.id} after plugin auth update for ${pluginKey}.`,
`[updateUserPluginsController] Disconnecting MCP connection for user ${user.id} and server ${serverName} after plugin auth update for ${pluginKey}.`,
);
await mcpManager.disconnectUserConnections(user.id);
// Don't kill the server connection on revoke anymore; the user can just reinitialize the server if that's what they want
await mcpManager.disconnectUserConnection(user.id, serverName);
}
} catch (disconnectError) {
logger.error(
`[updateUserPluginsController] Error disconnecting MCP connections for user ${user.id} after plugin auth update:`,
`[updateUserPluginsController] Error disconnecting MCP connection for user ${user.id} after plugin auth update:`,
disconnectError,
);
// Do not fail the request for this, but log it.
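A condensed sketch of the new per-server teardown (user id and server name are hypothetical; this assumes Constants.mcp_prefix is the 'mcp_' prefix used in MCP plugin keys):
// 'mcp_github' -> 'github'
const serverName = 'mcp_github'.replace(Constants.mcp_prefix, '');
// Tears down only this user's connection to this one server;
// connections to other MCP servers are left intact.
await mcpManager.disconnectUserConnection('user_123', serverName);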

@@ -0,0 +1,195 @@
const { duplicateAgent } = require('../v1');
const { getAgent, createAgent } = require('~/models/Agent');
const { getActions } = require('~/models/Action');
const { nanoid } = require('nanoid');
jest.mock('~/models/Agent');
jest.mock('~/models/Action');
jest.mock('nanoid');
describe('duplicateAgent', () => {
let req, res;
beforeEach(() => {
req = {
params: { id: 'agent_123' },
user: { id: 'user_456' },
};
res = {
status: jest.fn().mockReturnThis(),
json: jest.fn(),
};
jest.clearAllMocks();
});
it('should duplicate an agent successfully', async () => {
const mockAgent = {
id: 'agent_123',
name: 'Test Agent',
description: 'Test Description',
instructions: 'Test Instructions',
provider: 'openai',
model: 'gpt-4',
tools: ['file_search'],
actions: [],
author: 'user_789',
versions: [{ name: 'Test Agent', version: 1 }],
__v: 0,
};
const mockNewAgent = {
id: 'agent_new_123',
name: 'Test Agent (1/2/23, 12:34)',
description: 'Test Description',
instructions: 'Test Instructions',
provider: 'openai',
model: 'gpt-4',
tools: ['file_search'],
actions: [],
author: 'user_456',
versions: [
{
name: 'Test Agent (1/2/23, 12:34)',
description: 'Test Description',
instructions: 'Test Instructions',
provider: 'openai',
model: 'gpt-4',
tools: ['file_search'],
actions: [],
createdAt: new Date(),
updatedAt: new Date(),
},
],
};
getAgent.mockResolvedValue(mockAgent);
getActions.mockResolvedValue([]);
nanoid.mockReturnValue('new_123');
createAgent.mockResolvedValue(mockNewAgent);
await duplicateAgent(req, res);
expect(getAgent).toHaveBeenCalledWith({ id: 'agent_123' });
expect(getActions).toHaveBeenCalledWith({ agent_id: 'agent_123' }, true);
expect(createAgent).toHaveBeenCalledWith(
expect.objectContaining({
id: 'agent_new_123',
author: 'user_456',
name: expect.stringContaining('Test Agent ('),
description: 'Test Description',
instructions: 'Test Instructions',
provider: 'openai',
model: 'gpt-4',
tools: ['file_search'],
actions: [],
}),
);
expect(createAgent).toHaveBeenCalledWith(
expect.not.objectContaining({
versions: expect.anything(),
__v: expect.anything(),
}),
);
expect(res.status).toHaveBeenCalledWith(201);
expect(res.json).toHaveBeenCalledWith({
agent: mockNewAgent,
actions: [],
});
});
it('should ensure duplicated agent has clean versions array without nested fields', async () => {
const mockAgent = {
id: 'agent_123',
name: 'Test Agent',
description: 'Test Description',
versions: [
{
name: 'Test Agent',
versions: [{ name: 'Nested' }],
__v: 1,
},
],
__v: 2,
};
const mockNewAgent = {
id: 'agent_new_123',
name: 'Test Agent (1/2/23, 12:34)',
description: 'Test Description',
versions: [
{
name: 'Test Agent (1/2/23, 12:34)',
description: 'Test Description',
createdAt: new Date(),
updatedAt: new Date(),
},
],
};
getAgent.mockResolvedValue(mockAgent);
getActions.mockResolvedValue([]);
nanoid.mockReturnValue('new_123');
createAgent.mockResolvedValue(mockNewAgent);
await duplicateAgent(req, res);
expect(mockNewAgent.versions).toHaveLength(1);
const firstVersion = mockNewAgent.versions[0];
expect(firstVersion).not.toHaveProperty('versions');
expect(firstVersion).not.toHaveProperty('__v');
expect(mockNewAgent).not.toHaveProperty('__v');
expect(res.status).toHaveBeenCalledWith(201);
});
it('should return 404 if agent not found', async () => {
getAgent.mockResolvedValue(null);
await duplicateAgent(req, res);
expect(res.status).toHaveBeenCalledWith(404);
expect(res.json).toHaveBeenCalledWith({
error: 'Agent not found',
status: 'error',
});
});
it('should handle tool_resources.ocr correctly', async () => {
const mockAgent = {
id: 'agent_123',
name: 'Test Agent',
tool_resources: {
ocr: { enabled: true, config: 'test' },
other: { should: 'not be copied' },
},
};
getAgent.mockResolvedValue(mockAgent);
getActions.mockResolvedValue([]);
nanoid.mockReturnValue('new_123');
createAgent.mockResolvedValue({ id: 'agent_new_123' });
await duplicateAgent(req, res);
expect(createAgent).toHaveBeenCalledWith(
expect.objectContaining({
tool_resources: {
ocr: { enabled: true, config: 'test' },
},
}),
);
});
it('should handle errors gracefully', async () => {
getAgent.mockRejectedValue(new Error('Database error'));
await duplicateAgent(req, res);
expect(res.status).toHaveBeenCalledWith(500);
expect(res.json).toHaveBeenCalledWith({ error: 'Database error' });
});
});

@@ -1,19 +1,23 @@
require('events').EventEmitter.defaultMaxListeners = 100;
const { logger } = require('@librechat/data-schemas');
const { DynamicStructuredTool } = require('@langchain/core/tools');
const { getBufferString, HumanMessage } = require('@langchain/core/messages');
const {
sendEvent,
createRun,
Tokenizer,
checkAccess,
memoryInstructions,
formatContentStrings,
createMemoryProcessor,
} = require('@librechat/api');
const {
Callback,
Providers,
GraphEvents,
TitleMethod,
formatMessage,
formatAgentMessages,
formatContentStrings,
getTokenCountForMessage,
createMetadataAggregator,
} = require('@librechat/agents');
@@ -23,24 +27,26 @@ const {
VisionModes,
ContentTypes,
EModelEndpoint,
KnownEndpoints,
PermissionTypes,
isAgentsEndpoint,
AgentCapabilities,
bedrockInputSchema,
removeNullishValues,
} = require('librechat-data-provider');
const { DynamicStructuredTool } = require('@langchain/core/tools');
const { getBufferString, HumanMessage } = require('@langchain/core/messages');
const { createGetMCPAuthMap, checkCapability } = require('~/server/services/Config');
const {
findPluginAuthsByKeys,
getFormattedMemories,
deleteMemory,
setMemory,
} = require('~/models');
const { getMCPAuthMap, checkCapability, hasCustomUserVars } = require('~/server/services/Config');
const { addCacheControl, createContextHandlers } = require('~/app/clients/prompts');
const { initializeAgent } = require('~/server/services/Endpoints/agents/agent');
const { spendTokens, spendStructuredTokens } = require('~/models/spendTokens');
const { getFormattedMemories, deleteMemory, setMemory } = require('~/models');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const { getProviderConfig } = require('~/server/services/Endpoints');
const { checkAccess } = require('~/server/middleware/roles/access');
const BaseClient = require('~/app/clients/BaseClient');
const { getRoleByName } = require('~/models/Role');
const { loadAgent } = require('~/models/Agent');
const { getMCPManager } = require('~/config');
@@ -53,6 +59,7 @@ const omitTitleOptions = new Set([
'thinkingBudget',
'includeThoughts',
'maxOutputTokens',
'additionalModelRequestFields',
]);
/**
@@ -69,8 +76,6 @@ const payloadParser = ({ req, agent, endpoint }) => {
return req.body.endpointOption.model_parameters;
};
const legacyContentEndpoints = new Set([KnownEndpoints.groq, KnownEndpoints.deepseek]);
const noSystemModelRegex = [/\b(o1-preview|o1-mini|amazon\.titan-text)\b/gi];
function createTokenCounter(encoding) {
@@ -401,7 +406,12 @@ class AgentClient extends BaseClient {
if (user.personalization?.memories === false) {
return;
}
const hasAccess = await checkAccess(user, PermissionTypes.MEMORIES, [Permissions.USE]);
const hasAccess = await checkAccess({
user,
permissionType: PermissionTypes.MEMORIES,
permissions: [Permissions.USE],
getRoleByName,
});
if (!hasAccess) {
logger.debug(
@@ -446,6 +456,12 @@ class AgentClient extends BaseClient {
res: this.options.res,
agent: prelimAgent,
allowedProviders,
endpointOption: {
endpoint:
prelimAgent.id !== Constants.EPHEMERAL_AGENT_ID
? EModelEndpoint.agents
: memoryConfig.agent?.provider,
},
});
if (!agent) {
@@ -519,7 +535,10 @@ class AgentClient extends BaseClient {
messagesToProcess = [...messages.slice(-messageWindowSize)];
}
}
return await this.processMemory(messagesToProcess);
const bufferString = getBufferString(messagesToProcess);
const bufferMessage = new HumanMessage(`# Current Chat:\n\n${bufferString}`);
return await this.processMemory([bufferMessage]);
} catch (error) {
logger.error('Memory Agent failed to process memory', error);
}
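For reference, getBufferString flattens the message window into a single role-prefixed transcript, which is then wrapped in one HumanMessage so the memory agent receives the whole chat as a single turn; a sketch with hypothetical messages:
const { getBufferString, HumanMessage, AIMessage } = require('@langchain/core/messages');

const windowExample = [new HumanMessage('Hi'), new AIMessage('Hello! How can I help?')];
// Produces roughly: 'Human: Hi\nAI: Hello! How can I help?'
const bufferString = getBufferString(windowExample);
const bufferMessage = new HumanMessage(`# Current Chat:\n\n${bufferString}`);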
@@ -691,17 +710,12 @@ class AgentClient extends BaseClient {
version: 'v2',
};
const getUserMCPAuthMap = await createGetMCPAuthMap();
const toolSet = new Set((this.options.agent.tools ?? []).map((tool) => tool && tool.name));
let { messages: initialMessages, indexTokenCountMap } = formatAgentMessages(
payload,
this.indexTokenCountMap,
toolSet,
);
if (legacyContentEndpoints.has(this.options.agent.endpoint?.toLowerCase())) {
initialMessages = formatContentStrings(initialMessages);
}
/**
*
@@ -765,6 +779,9 @@ class AgentClient extends BaseClient {
}
let messages = _messages;
if (agent.useLegacyContent === true) {
messages = formatContentStrings(messages);
}
if (
agent.model_parameters?.clientOptions?.defaultHeaders?.['anthropic-beta']?.includes(
'prompt-caching',
@@ -813,10 +830,11 @@ class AgentClient extends BaseClient {
}
try {
if (getUserMCPAuthMap) {
config.configurable.userMCPAuthMap = await getUserMCPAuthMap({
if (await hasCustomUserVars()) {
config.configurable.userMCPAuthMap = await getMCPAuthMap({
tools: agent.tools,
userId: this.options.req.user.id,
findPluginAuthsByKeys,
});
}
} catch (err) {
@@ -992,7 +1010,7 @@ class AgentClient extends BaseClient {
}
const { handleLLMEnd, collected: collectedMetadata } = createMetadataAggregator();
const { req, res, agent } = this.options;
const endpoint = agent.endpoint;
let endpoint = agent.endpoint;
/** @type {import('@librechat/agents').ClientOptions} */
let clientOptions = {
@@ -1000,17 +1018,32 @@ class AgentClient extends BaseClient {
model: agent.model_parameters.model,
};
const { getOptions, overrideProvider, customEndpointConfig } =
await getProviderConfig(endpoint);
let titleProviderConfig = await getProviderConfig(endpoint);
/** @type {TEndpoint | undefined} */
const endpointConfig = req.app.locals[endpoint] ?? customEndpointConfig;
const endpointConfig =
req.app.locals.all ?? req.app.locals[endpoint] ?? titleProviderConfig.customEndpointConfig;
if (!endpointConfig) {
logger.warn(
'[api/server/controllers/agents/client.js #titleConvo] Error getting endpoint config',
);
}
if (endpointConfig?.titleEndpoint && endpointConfig.titleEndpoint !== endpoint) {
try {
titleProviderConfig = await getProviderConfig(endpointConfig.titleEndpoint);
endpoint = endpointConfig.titleEndpoint;
} catch (error) {
logger.warn(
`[api/server/controllers/agents/client.js #titleConvo] Error getting title endpoint config for ${endpointConfig.titleEndpoint}, falling back to default`,
error,
);
// Fall back to original provider config
endpoint = agent.endpoint;
titleProviderConfig = await getProviderConfig(endpoint);
}
}
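For context, a hedged sketch of the endpoint config consumed here (keys match the options exercised by the tests below; values are illustrative). When titleEndpoint names a different provider, the title run is delegated to it, with a fallback to the original endpoint if that provider config cannot be loaded:
// e.g. req.app.locals[endpoint] or req.app.locals.all:
const endpointConfigExample = {
  titleConvo: true,
  titleModel: 'gpt-4o-mini',            // model for the title run
  titleMethod: 'completion',            // 'completion' | 'functions' | 'structured'
  titlePrompt: 'Write a short, descriptive title',
  titlePromptTemplate: 'Conversation: {{content}}',
  titleEndpoint: EModelEndpoint.anthropic, // delegate the title run to another provider
};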
if (
endpointConfig &&
endpointConfig.titleModel &&
@@ -1019,7 +1052,7 @@ class AgentClient extends BaseClient {
clientOptions.model = endpointConfig.titleModel;
}
const options = await getOptions({
const options = await titleProviderConfig.getOptions({
req,
res,
optionsOnly: true,
@@ -1028,12 +1061,18 @@ class AgentClient extends BaseClient {
endpointOption: { model_parameters: clientOptions },
});
let provider = options.provider ?? overrideProvider ?? agent.provider;
let provider = options.provider ?? titleProviderConfig.overrideProvider ?? agent.provider;
if (
endpoint === EModelEndpoint.azureOpenAI &&
options.llmConfig?.azureOpenAIApiInstanceName == null
) {
provider = Providers.OPENAI;
} else if (
endpoint === EModelEndpoint.azureOpenAI &&
options.llmConfig?.azureOpenAIApiInstanceName != null &&
provider !== Providers.AZURE
) {
provider = Providers.AZURE;
}
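This mirrors the Azure cases in the tests below: a serverless Azure config resolves without azureOpenAIApiInstanceName and is routed through the OpenAI-compatible provider, while an instance-based config is pinned to the Azure provider. Condensed (assuming the endpoint is azureOpenAI):
const isServerless = options.llmConfig?.azureOpenAIApiInstanceName == null;
provider = isServerless ? Providers.OPENAI : Providers.AZURE;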
/** @type {import('@librechat/agents').ClientOptions} */
@@ -1055,16 +1094,23 @@ class AgentClient extends BaseClient {
),
);
if (provider === Providers.GOOGLE) {
if (
provider === Providers.GOOGLE &&
(endpointConfig?.titleMethod === TitleMethod.FUNCTIONS ||
endpointConfig?.titleMethod === TitleMethod.STRUCTURED)
) {
clientOptions.json = true;
}
try {
const titleResult = await this.run.generateTitle({
provider,
clientOptions,
inputText: text,
contentParts: this.contentParts,
clientOptions,
titleMethod: endpointConfig?.titleMethod,
titlePrompt: endpointConfig?.titlePrompt,
titlePromptTemplate: endpointConfig?.titlePromptTemplate,
chainOptions: {
signal: abortController.signal,
callbacks: [
@@ -1112,8 +1158,52 @@ class AgentClient extends BaseClient {
}
}
/** Silent method, as `recordCollectedUsage` is used instead */
async recordTokenUsage() {}
/**
* @param {object} params
* @param {number} params.promptTokens
* @param {number} params.completionTokens
* @param {OpenAIUsageMetadata} [params.usage]
* @param {string} [params.model]
* @param {string} [params.context='message']
* @returns {Promise<void>}
*/
async recordTokenUsage({ model, promptTokens, completionTokens, usage, context = 'message' }) {
try {
await spendTokens(
{
model,
context,
conversationId: this.conversationId,
user: this.user ?? this.options.req.user?.id,
endpointTokenConfig: this.options.endpointTokenConfig,
},
{ promptTokens, completionTokens },
);
if (
usage &&
typeof usage === 'object' &&
'reasoning_tokens' in usage &&
typeof usage.reasoning_tokens === 'number'
) {
await spendTokens(
{
model,
context: 'reasoning',
conversationId: this.conversationId,
user: this.user ?? this.options.req.user?.id,
endpointTokenConfig: this.options.endpointTokenConfig,
},
{ completionTokens: usage.reasoning_tokens },
);
}
} catch (error) {
logger.error(
'[api/server/controllers/agents/client.js #recordTokenUsage] Error recording token usage',
error,
);
}
}
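Usage sketch for the restored method (values are hypothetical): one 'message' spend is always recorded, and a second 'reasoning' spend is recorded only when the provider's usage metadata reports reasoning_tokens:
await client.recordTokenUsage({
  model: 'o1-mini',
  promptTokens: 1200,
  completionTokens: 300,
  usage: { reasoning_tokens: 180 }, // triggers the extra 'reasoning' transaction
});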
getEncoding() {
return 'o200k_base';

@@ -0,0 +1,730 @@
const { Providers } = require('@librechat/agents');
const { Constants, EModelEndpoint } = require('librechat-data-provider');
const AgentClient = require('./client');
jest.mock('@librechat/agents', () => ({
...jest.requireActual('@librechat/agents'),
createMetadataAggregator: () => ({
handleLLMEnd: jest.fn(),
collected: [],
}),
}));
describe('AgentClient - titleConvo', () => {
let client;
let mockRun;
let mockReq;
let mockRes;
let mockAgent;
let mockOptions;
beforeEach(() => {
// Reset all mocks
jest.clearAllMocks();
// Mock run object
mockRun = {
generateTitle: jest.fn().mockResolvedValue({
title: 'Generated Title',
}),
};
// Mock agent - with both endpoint and provider
mockAgent = {
id: 'agent-123',
endpoint: EModelEndpoint.openAI, // Use a valid provider as endpoint for getProviderConfig
provider: EModelEndpoint.openAI, // Add provider property
model_parameters: {
model: 'gpt-4',
},
};
// Mock request and response
mockReq = {
app: {
locals: {
[EModelEndpoint.openAI]: {
// Match the agent endpoint
titleModel: 'gpt-3.5-turbo',
titlePrompt: 'Custom title prompt',
titleMethod: 'structured',
titlePromptTemplate: 'Template: {{content}}',
},
},
},
user: {
id: 'user-123',
},
body: {
model: 'gpt-4',
endpoint: EModelEndpoint.openAI,
key: null,
},
};
mockRes = {};
// Mock options
mockOptions = {
req: mockReq,
res: mockRes,
agent: mockAgent,
endpointTokenConfig: {},
};
// Create client instance
client = new AgentClient(mockOptions);
client.run = mockRun;
client.responseMessageId = 'response-123';
client.conversationId = 'convo-123';
client.contentParts = [{ type: 'text', text: 'Test content' }];
client.recordCollectedUsage = jest.fn().mockResolvedValue(); // Mock as async function that resolves
});
describe('titleConvo method', () => {
it('should throw error if run is not initialized', async () => {
client.run = null;
await expect(
client.titleConvo({ text: 'Test', abortController: new AbortController() }),
).rejects.toThrow('Run not initialized');
});
it('should use titlePrompt from endpoint config', async () => {
const text = 'Test conversation text';
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
expect(mockRun.generateTitle).toHaveBeenCalledWith(
expect.objectContaining({
titlePrompt: 'Custom title prompt',
}),
);
});
it('should use titlePromptTemplate from endpoint config', async () => {
const text = 'Test conversation text';
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
expect(mockRun.generateTitle).toHaveBeenCalledWith(
expect.objectContaining({
titlePromptTemplate: 'Template: {{content}}',
}),
);
});
it('should use titleMethod from endpoint config', async () => {
const text = 'Test conversation text';
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
expect(mockRun.generateTitle).toHaveBeenCalledWith(
expect.objectContaining({
provider: Providers.OPENAI,
titleMethod: 'structured',
}),
);
});
it('should use titleModel from endpoint config when provided', async () => {
const text = 'Test conversation text';
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
// Check that generateTitle was called with correct clientOptions
const generateTitleCall = mockRun.generateTitle.mock.calls[0][0];
expect(generateTitleCall.clientOptions.model).toBe('gpt-3.5-turbo');
});
it('should handle missing endpoint config gracefully', async () => {
// Remove endpoint config
mockReq.app.locals[EModelEndpoint.openAI] = undefined;
const text = 'Test conversation text';
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
expect(mockRun.generateTitle).toHaveBeenCalledWith(
expect.objectContaining({
titlePrompt: undefined,
titlePromptTemplate: undefined,
titleMethod: undefined,
}),
);
});
it('should use agent model when titleModel is not provided', async () => {
// Remove titleModel from config
delete mockReq.app.locals[EModelEndpoint.openAI].titleModel;
const text = 'Test conversation text';
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
const generateTitleCall = mockRun.generateTitle.mock.calls[0][0];
expect(generateTitleCall.clientOptions.model).toBe('gpt-4'); // Should use agent's model
});
it('should not use titleModel when it equals CURRENT_MODEL constant', async () => {
mockReq.app.locals[EModelEndpoint.openAI].titleModel = Constants.CURRENT_MODEL;
const text = 'Test conversation text';
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
const generateTitleCall = mockRun.generateTitle.mock.calls[0][0];
expect(generateTitleCall.clientOptions.model).toBe('gpt-4'); // Should use agent's model
});
it('should pass all required parameters to generateTitle', async () => {
const text = 'Test conversation text';
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
expect(mockRun.generateTitle).toHaveBeenCalledWith({
provider: expect.any(String),
inputText: text,
contentParts: client.contentParts,
clientOptions: expect.objectContaining({
model: 'gpt-3.5-turbo',
}),
titlePrompt: 'Custom title prompt',
titlePromptTemplate: 'Template: {{content}}',
titleMethod: 'structured',
chainOptions: expect.objectContaining({
signal: abortController.signal,
}),
});
});
it('should record collected usage after title generation', async () => {
const text = 'Test conversation text';
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
expect(client.recordCollectedUsage).toHaveBeenCalledWith({
model: 'gpt-3.5-turbo',
context: 'title',
collectedUsage: expect.any(Array),
});
});
it('should return the generated title', async () => {
const text = 'Test conversation text';
const abortController = new AbortController();
const result = await client.titleConvo({ text, abortController });
expect(result).toBe('Generated Title');
});
it('should handle errors gracefully and return undefined', async () => {
mockRun.generateTitle.mockRejectedValue(new Error('Title generation failed'));
const text = 'Test conversation text';
const abortController = new AbortController();
const result = await client.titleConvo({ text, abortController });
expect(result).toBeUndefined();
});
it('should pass titleEndpoint configuration to generateTitle', async () => {
// Mock the API key just for this test
const originalApiKey = process.env.ANTHROPIC_API_KEY;
process.env.ANTHROPIC_API_KEY = 'test-api-key';
// Add titleEndpoint to the config
mockReq.app.locals[EModelEndpoint.openAI].titleEndpoint = EModelEndpoint.anthropic;
mockReq.app.locals[EModelEndpoint.openAI].titleMethod = 'structured';
mockReq.app.locals[EModelEndpoint.openAI].titlePrompt = 'Custom title prompt';
mockReq.app.locals[EModelEndpoint.openAI].titlePromptTemplate = 'Custom template';
const text = 'Test conversation text';
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
// Verify generateTitle was called with the custom configuration
expect(mockRun.generateTitle).toHaveBeenCalledWith(
expect.objectContaining({
titleMethod: 'structured',
provider: Providers.ANTHROPIC,
titlePrompt: 'Custom title prompt',
titlePromptTemplate: 'Custom template',
}),
);
// Restore the original API key
if (originalApiKey) {
process.env.ANTHROPIC_API_KEY = originalApiKey;
} else {
delete process.env.ANTHROPIC_API_KEY;
}
});
it('should use all config when endpoint config is missing', async () => {
// Remove endpoint-specific config
delete mockReq.app.locals[EModelEndpoint.openAI].titleModel;
delete mockReq.app.locals[EModelEndpoint.openAI].titlePrompt;
delete mockReq.app.locals[EModelEndpoint.openAI].titleMethod;
delete mockReq.app.locals[EModelEndpoint.openAI].titlePromptTemplate;
// Set 'all' config
mockReq.app.locals.all = {
titleModel: 'gpt-4o-mini',
titlePrompt: 'All config title prompt',
titleMethod: 'completion',
titlePromptTemplate: 'All config template: {{content}}',
};
const text = 'Test conversation text';
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
// Verify generateTitle was called with 'all' config values
expect(mockRun.generateTitle).toHaveBeenCalledWith(
expect.objectContaining({
titleMethod: 'completion',
titlePrompt: 'All config title prompt',
titlePromptTemplate: 'All config template: {{content}}',
}),
);
// Check that the model was set from 'all' config
const generateTitleCall = mockRun.generateTitle.mock.calls[0][0];
expect(generateTitleCall.clientOptions.model).toBe('gpt-4o-mini');
});
it('should prioritize all config over endpoint config for title settings', async () => {
// Set both endpoint and 'all' config
mockReq.app.locals[EModelEndpoint.openAI].titleModel = 'gpt-3.5-turbo';
mockReq.app.locals[EModelEndpoint.openAI].titlePrompt = 'Endpoint title prompt';
mockReq.app.locals[EModelEndpoint.openAI].titleMethod = 'structured';
// Remove titlePromptTemplate from endpoint config to test fallback
delete mockReq.app.locals[EModelEndpoint.openAI].titlePromptTemplate;
mockReq.app.locals.all = {
titleModel: 'gpt-4o-mini',
titlePrompt: 'All config title prompt',
titleMethod: 'completion',
titlePromptTemplate: 'All config template',
};
const text = 'Test conversation text';
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
// Verify 'all' config takes precedence over endpoint config
expect(mockRun.generateTitle).toHaveBeenCalledWith(
expect.objectContaining({
titleMethod: 'completion',
titlePrompt: 'All config title prompt',
titlePromptTemplate: 'All config template',
}),
);
// Check that the model was set from 'all' config
const generateTitleCall = mockRun.generateTitle.mock.calls[0][0];
expect(generateTitleCall.clientOptions.model).toBe('gpt-4o-mini');
});
it('should use all config with titleEndpoint and verify provider switch', async () => {
// Mock the API key for the titleEndpoint provider
const originalApiKey = process.env.ANTHROPIC_API_KEY;
process.env.ANTHROPIC_API_KEY = 'test-anthropic-key';
// Remove endpoint-specific config to test 'all' config
delete mockReq.app.locals[EModelEndpoint.openAI];
// Set comprehensive 'all' config with all new title options
mockReq.app.locals.all = {
titleConvo: true,
titleModel: 'claude-3-haiku-20240307',
titleMethod: 'completion', // Testing the new default method
titlePrompt: 'Generate a concise, descriptive title for this conversation',
titlePromptTemplate: 'Conversation summary: {{content}}',
titleEndpoint: EModelEndpoint.anthropic, // Should switch provider to Anthropic
};
const text = 'Test conversation about AI and machine learning';
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
// Verify all config values were used
expect(mockRun.generateTitle).toHaveBeenCalledWith(
expect.objectContaining({
provider: Providers.ANTHROPIC, // Critical: Verify provider switched to Anthropic
titleMethod: 'completion',
titlePrompt: 'Generate a concise, descriptive title for this conversation',
titlePromptTemplate: 'Conversation summary: {{content}}',
inputText: text,
contentParts: client.contentParts,
}),
);
// Verify the model was set from 'all' config
const generateTitleCall = mockRun.generateTitle.mock.calls[0][0];
expect(generateTitleCall.clientOptions.model).toBe('claude-3-haiku-20240307');
// Verify other client options are set correctly
expect(generateTitleCall.clientOptions).toMatchObject({
model: 'claude-3-haiku-20240307',
// Note: Anthropic's getOptions may set its own maxTokens value
});
// Restore the original API key
if (originalApiKey) {
process.env.ANTHROPIC_API_KEY = originalApiKey;
} else {
delete process.env.ANTHROPIC_API_KEY;
}
});
it('should test all titleMethod options from all config', async () => {
// Test each titleMethod: 'completion', 'functions', 'structured'
const titleMethods = ['completion', 'functions', 'structured'];
for (const method of titleMethods) {
// Clear previous calls
mockRun.generateTitle.mockClear();
// Remove endpoint config
delete mockReq.app.locals[EModelEndpoint.openAI];
// Set 'all' config with specific titleMethod
mockReq.app.locals.all = {
titleModel: 'gpt-4o-mini',
titleMethod: method,
titlePrompt: `Testing ${method} method`,
titlePromptTemplate: `Template for ${method}: {{content}}`,
};
const text = `Test conversation for ${method} method`;
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
// Verify the correct titleMethod was used
expect(mockRun.generateTitle).toHaveBeenCalledWith(
expect.objectContaining({
titleMethod: method,
titlePrompt: `Testing ${method} method`,
titlePromptTemplate: `Template for ${method}: {{content}}`,
}),
);
}
});
describe('Azure-specific title generation', () => {
let originalEnv;
beforeEach(() => {
// Reset mocks
jest.clearAllMocks();
// Save original environment variables
originalEnv = { ...process.env };
// Mock Azure API keys
process.env.AZURE_OPENAI_API_KEY = 'test-azure-key';
process.env.AZURE_API_KEY = 'test-azure-key';
process.env.EASTUS_API_KEY = 'test-eastus-key';
process.env.EASTUS2_API_KEY = 'test-eastus2-key';
});
afterEach(() => {
// Restore environment variables
process.env = originalEnv;
});
it('should use OPENAI provider for Azure serverless endpoints', async () => {
// Set up Azure endpoint with serverless config
mockAgent.endpoint = EModelEndpoint.azureOpenAI;
mockAgent.provider = EModelEndpoint.azureOpenAI;
mockReq.app.locals[EModelEndpoint.azureOpenAI] = {
titleConvo: true,
titleModel: 'grok-3',
titleMethod: 'completion',
titlePrompt: 'Azure serverless title prompt',
streamRate: 35,
modelGroupMap: {
'grok-3': {
group: 'Azure AI Foundry',
deploymentName: 'grok-3',
},
},
groupMap: {
'Azure AI Foundry': {
apiKey: '${AZURE_API_KEY}',
baseURL: 'https://test.services.ai.azure.com/models',
version: '2024-05-01-preview',
serverless: true,
models: {
'grok-3': {
deploymentName: 'grok-3',
},
},
},
},
};
mockReq.body.endpoint = EModelEndpoint.azureOpenAI;
mockReq.body.model = 'grok-3';
const text = 'Test Azure serverless conversation';
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
// Verify provider was switched to OPENAI for serverless
expect(mockRun.generateTitle).toHaveBeenCalledWith(
expect.objectContaining({
provider: Providers.OPENAI, // Should be OPENAI for serverless
titleMethod: 'completion',
titlePrompt: 'Azure serverless title prompt',
}),
);
});
it('should use AZURE provider for Azure endpoints with instanceName', async () => {
// Set up Azure endpoint
mockAgent.endpoint = EModelEndpoint.azureOpenAI;
mockAgent.provider = EModelEndpoint.azureOpenAI;
mockReq.app.locals[EModelEndpoint.azureOpenAI] = {
titleConvo: true,
titleModel: 'gpt-4o',
titleMethod: 'structured',
titlePrompt: 'Azure instance title prompt',
streamRate: 35,
modelGroupMap: {
'gpt-4o': {
group: 'eastus',
deploymentName: 'gpt-4o',
},
},
groupMap: {
eastus: {
apiKey: '${EASTUS_API_KEY}',
instanceName: 'region-instance',
version: '2024-02-15-preview',
models: {
'gpt-4o': {
deploymentName: 'gpt-4o',
},
},
},
},
};
mockReq.body.endpoint = EModelEndpoint.azureOpenAI;
mockReq.body.model = 'gpt-4o';
const text = 'Test Azure instance conversation';
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
// Verify provider remains AZURE with instanceName
expect(mockRun.generateTitle).toHaveBeenCalledWith(
expect.objectContaining({
provider: Providers.AZURE,
titleMethod: 'structured',
titlePrompt: 'Azure instance title prompt',
}),
);
});
it('should handle Azure titleModel with CURRENT_MODEL constant', async () => {
// Set up Azure endpoint
mockAgent.endpoint = EModelEndpoint.azureOpenAI;
mockAgent.provider = EModelEndpoint.azureOpenAI;
mockAgent.model_parameters.model = 'gpt-4o-latest';
mockReq.app.locals[EModelEndpoint.azureOpenAI] = {
titleConvo: true,
titleModel: Constants.CURRENT_MODEL,
titleMethod: 'functions',
streamRate: 35,
modelGroupMap: {
'gpt-4o-latest': {
group: 'region-eastus',
deploymentName: 'gpt-4o-mini',
version: '2024-02-15-preview',
},
},
groupMap: {
'region-eastus': {
apiKey: '${EASTUS2_API_KEY}',
instanceName: 'test-instance',
version: '2024-12-01-preview',
models: {
'gpt-4o-latest': {
deploymentName: 'gpt-4o-mini',
version: '2024-02-15-preview',
},
},
},
},
};
mockReq.body.endpoint = EModelEndpoint.azureOpenAI;
mockReq.body.model = 'gpt-4o-latest';
const text = 'Test Azure current model';
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
// Verify it uses the correct model when titleModel is CURRENT_MODEL
const generateTitleCall = mockRun.generateTitle.mock.calls[0][0];
// When CURRENT_MODEL is used with Azure, the model gets mapped to the deployment name
// In this case, 'gpt-4o-latest' is mapped to 'gpt-4o-mini' deployment
expect(generateTitleCall.clientOptions.model).toBe('gpt-4o-mini');
// Also verify that CURRENT_MODEL constant was not passed as the model
expect(generateTitleCall.clientOptions.model).not.toBe(Constants.CURRENT_MODEL);
});
it('should handle Azure with multiple model groups', async () => {
// Set up Azure endpoint
mockAgent.endpoint = EModelEndpoint.azureOpenAI;
mockAgent.provider = EModelEndpoint.azureOpenAI;
mockReq.app.locals[EModelEndpoint.azureOpenAI] = {
titleConvo: true,
titleModel: 'o1-mini',
titleMethod: 'completion',
streamRate: 35,
modelGroupMap: {
'gpt-4o': {
group: 'eastus',
deploymentName: 'gpt-4o',
},
'o1-mini': {
group: 'region-eastus',
deploymentName: 'o1-mini',
},
'codex-mini': {
group: 'codex-mini',
deploymentName: 'codex-mini',
},
},
groupMap: {
eastus: {
apiKey: '${EASTUS_API_KEY}',
instanceName: 'region-eastus',
version: '2024-02-15-preview',
models: {
'gpt-4o': {
deploymentName: 'gpt-4o',
},
},
},
'region-eastus': {
apiKey: '${EASTUS2_API_KEY}',
instanceName: 'region-eastus2',
version: '2024-12-01-preview',
models: {
'o1-mini': {
deploymentName: 'o1-mini',
},
},
},
'codex-mini': {
apiKey: '${AZURE_API_KEY}',
baseURL: 'https://example.cognitiveservices.azure.com/openai/',
version: '2025-04-01-preview',
serverless: true,
models: {
'codex-mini': {
deploymentName: 'codex-mini',
},
},
},
},
};
mockReq.body.endpoint = EModelEndpoint.azureOpenAI;
mockReq.body.model = 'o1-mini';
const text = 'Test Azure multi-group conversation';
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
// Verify correct model and provider are used
expect(mockRun.generateTitle).toHaveBeenCalledWith(
expect.objectContaining({
provider: Providers.AZURE,
titleMethod: 'completion',
}),
);
const generateTitleCall = mockRun.generateTitle.mock.calls[0][0];
expect(generateTitleCall.clientOptions.model).toBe('o1-mini');
expect(generateTitleCall.clientOptions.maxTokens).toBeUndefined(); // o1 models shouldn't have maxTokens
});
it('should use all config as fallback for Azure endpoints', async () => {
// Set up Azure endpoint with minimal config
mockAgent.endpoint = EModelEndpoint.azureOpenAI;
mockAgent.provider = EModelEndpoint.azureOpenAI;
mockReq.body.endpoint = EModelEndpoint.azureOpenAI;
mockReq.body.model = 'gpt-4';
// Remove Azure-specific config
delete mockReq.app.locals[EModelEndpoint.azureOpenAI];
// Set 'all' config as fallback with a serverless Azure config
mockReq.app.locals.all = {
titleConvo: true,
titleModel: 'gpt-4',
titleMethod: 'structured',
titlePrompt: 'Fallback title prompt from all config',
titlePromptTemplate: 'Template: {{content}}',
modelGroupMap: {
'gpt-4': {
group: 'default-group',
deploymentName: 'gpt-4',
},
},
groupMap: {
'default-group': {
apiKey: '${AZURE_API_KEY}',
baseURL: 'https://default.openai.azure.com/',
version: '2024-02-15-preview',
serverless: true,
models: {
'gpt-4': {
deploymentName: 'gpt-4',
},
},
},
},
};
const text = 'Test Azure with all config fallback';
const abortController = new AbortController();
await client.titleConvo({ text, abortController });
// Verify all config is used
expect(mockRun.generateTitle).toHaveBeenCalledWith(
expect.objectContaining({
provider: Providers.OPENAI, // Should be OPENAI when no instanceName
titleMethod: 'structured',
titlePrompt: 'Fallback title prompt from all config',
titlePromptTemplate: 'Template: {{content}}',
}),
);
});
});
});
});

@@ -12,10 +12,14 @@ const { saveMessage } = require('~/models');
const AgentController = async (req, res, next, initializeClient, addTitle) => {
let {
text,
isRegenerate,
endpointOption,
conversationId,
isContinued = false,
editedContent = null,
parentMessageId = null,
overrideParentMessageId = null,
responseMessageId: editedResponseMessageId = null,
} = req.body;
let sender;
@@ -67,7 +71,7 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
handler();
}
} catch (e) {
// Ignore cleanup errors
logger.error('[AgentController] Error in cleanup handler', e);
}
}
}
@@ -155,7 +159,7 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
try {
res.removeListener('close', closeHandler);
} catch (e) {
// Ignore
logger.error('[AgentController] Error removing close listener', e);
}
});
@@ -163,10 +167,15 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
user: userId,
onStart,
getReqData,
isContinued,
isRegenerate,
editedContent,
conversationId,
parentMessageId,
abortController,
overrideParentMessageId,
isEdited: !!editedContent,
responseMessageId: editedResponseMessageId,
progressOptions: {
res,
},

@@ -1,6 +1,8 @@
const { z } = require('zod');
const fs = require('fs').promises;
const { nanoid } = require('nanoid');
const { logger } = require('@librechat/data-schemas');
const { agentCreateSchema, agentUpdateSchema } = require('@librechat/api');
const {
Tools,
Constants,
@@ -8,6 +10,7 @@ const {
SystemRoles,
EToolResources,
actionDelimiter,
removeNullishValues,
} = require('librechat-data-provider');
const {
getAgent,
@@ -30,6 +33,7 @@ const { deleteFileByFilter } = require('~/models/File');
const systemTools = {
[Tools.execute_code]: true,
[Tools.file_search]: true,
[Tools.web_search]: true,
};
/**
@@ -42,9 +46,13 @@ const systemTools = {
*/
const createAgentHandler = async (req, res) => {
try {
const { tools = [], provider, name, description, instructions, model, ...agentData } = req.body;
const validatedData = agentCreateSchema.parse(req.body);
const { tools = [], ...agentData } = removeNullishValues(validatedData);
const { id: userId } = req.user;
agentData.id = `agent_${nanoid()}`;
agentData.author = userId;
agentData.tools = [];
const availableTools = await getCachedTools({ includeGlobal: true });
@@ -58,19 +66,13 @@ const createAgentHandler = async (req, res) => {
}
}
Object.assign(agentData, {
author: userId,
name,
description,
instructions,
provider,
model,
});
agentData.id = `agent_${nanoid()}`;
const agent = await createAgent(agentData);
res.status(201).json(agent);
} catch (error) {
if (error instanceof z.ZodError) {
logger.error('[/Agents] Validation error', error.errors);
return res.status(400).json({ error: 'Invalid request data', details: error.errors });
}
logger.error('[/Agents] Error creating agent', error);
res.status(500).json({ error: error.message });
}
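A hedged sketch of what this validation layer does to a hostile payload (the schema internals live in @librechat/api and are assumed here, consistent with the mass-assignment tests later in this diff): unknown fields are stripped by the schema, nullish values are dropped, and author/id are then set server-side:
const body = {
  provider: 'openai',
  model: 'gpt-4',
  name: 'My Agent',
  author: 'attacker-id', // not part of agentCreateSchema -> stripped
  description: null,     // nullish -> removed by removeNullishValues
};
const validatedData = agentCreateSchema.parse(body); // throws ZodError on invalid shapes
const agentData = removeNullishValues(validatedData);
// agentData: { provider: 'openai', model: 'gpt-4', name: 'My Agent' }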
@@ -154,14 +156,16 @@ const getAgentHandler = async (req, res) => {
const updateAgentHandler = async (req, res) => {
try {
const id = req.params.id;
const { projectIds, removeProjectIds, ...updateData } = req.body;
const validatedData = agentUpdateSchema.parse(req.body);
const { projectIds, removeProjectIds, ...updateData } = removeNullishValues(validatedData);
const isAdmin = req.user.role === SystemRoles.ADMIN;
const existingAgent = await getAgent({ id });
const isAuthor = existingAgent.author.toString() === req.user.id;
if (!existingAgent) {
return res.status(404).json({ error: 'Agent not found' });
}
const isAuthor = existingAgent.author.toString() === req.user.id;
const hasEditPermission = existingAgent.isCollaborative || isAdmin || isAuthor;
if (!hasEditPermission) {
@@ -200,6 +204,11 @@ const updateAgentHandler = async (req, res) => {
return res.json(updatedAgent);
} catch (error) {
if (error instanceof z.ZodError) {
logger.error('[/Agents/:id] Validation error', error.errors);
return res.status(400).json({ error: 'Invalid request data', details: error.errors });
}
logger.error('[/Agents/:id] Error updating Agent', error);
if (error.statusCode === 409) {
@@ -242,6 +251,8 @@ const duplicateAgentHandler = async (req, res) => {
createdAt: _createdAt,
updatedAt: _updatedAt,
tool_resources: _tool_resources = {},
versions: _versions,
__v: _v,
...cloneData
} = agent;
cloneData.name = `${agent.name} (${new Date().toLocaleString('en-US', {
@@ -380,6 +391,22 @@ const uploadAgentAvatarHandler = async (req, res) => {
return res.status(400).json({ message: 'Agent ID is required' });
}
const isAdmin = req.user.role === SystemRoles.ADMIN;
const existingAgent = await getAgent({ id: agent_id });
if (!existingAgent) {
return res.status(404).json({ error: 'Agent not found' });
}
const isAuthor = existingAgent.author.toString() === req.user.id;
const hasEditPermission = existingAgent.isCollaborative || isAdmin || isAuthor;
if (!hasEditPermission) {
return res.status(403).json({
error: 'You do not have permission to modify this non-collaborative agent',
});
}
const buffer = await fs.readFile(req.file.path);
const fileStrategy = req.app.locals.fileStrategy;
@@ -402,14 +429,7 @@ const uploadAgentAvatarHandler = async (req, res) => {
source: fileStrategy,
};
let _avatar;
try {
const agent = await getAgent({ id: agent_id });
_avatar = agent.avatar;
} catch (error) {
logger.error('[/:agent_id/avatar] Error fetching agent', error);
_avatar = {};
}
let _avatar = existingAgent.avatar;
if (_avatar && _avatar.source) {
const { deleteFile } = getStrategyFunctions(_avatar.source);
@@ -431,7 +451,7 @@ const uploadAgentAvatarHandler = async (req, res) => {
};
promises.push(
await updateAgent({ id: agent_id, author: req.user.id }, data, {
await updateAgent({ id: agent_id }, data, {
updatingUserId: req.user.id,
}),
);

@@ -0,0 +1,659 @@
const mongoose = require('mongoose');
const { v4: uuidv4 } = require('uuid');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { agentSchema } = require('@librechat/data-schemas');
// Only mock the dependencies that are not database-related
jest.mock('~/server/services/Config', () => ({
getCachedTools: jest.fn().mockResolvedValue({
web_search: true,
execute_code: true,
file_search: true,
}),
}));
jest.mock('~/models/Project', () => ({
getProjectByName: jest.fn().mockResolvedValue(null),
}));
jest.mock('~/server/services/Files/strategies', () => ({
getStrategyFunctions: jest.fn(),
}));
jest.mock('~/server/services/Files/images/avatar', () => ({
resizeAvatar: jest.fn(),
}));
jest.mock('~/server/services/Files/S3/crud', () => ({
refreshS3Url: jest.fn(),
}));
jest.mock('~/server/services/Files/process', () => ({
filterFile: jest.fn(),
}));
jest.mock('~/models/Action', () => ({
updateAction: jest.fn(),
getActions: jest.fn().mockResolvedValue([]),
}));
jest.mock('~/models/File', () => ({
deleteFileByFilter: jest.fn(),
}));
const { createAgent: createAgentHandler, updateAgent: updateAgentHandler } = require('./v1');
/**
* @type {import('mongoose').Model<import('@librechat/data-schemas').IAgent>}
*/
let Agent;
describe('Agent Controllers - Mass Assignment Protection', () => {
let mongoServer;
let mockReq;
let mockRes;
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
await mongoose.connect(mongoUri);
Agent = mongoose.models.Agent || mongoose.model('Agent', agentSchema);
}, 20000);
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
await Agent.deleteMany({});
// Reset all mocks
jest.clearAllMocks();
// Setup mock request and response objects
mockReq = {
user: {
id: new mongoose.Types.ObjectId().toString(),
role: 'USER',
},
body: {},
params: {},
app: {
locals: {
fileStrategy: 'local',
},
},
};
mockRes = {
status: jest.fn().mockReturnThis(),
json: jest.fn().mockReturnThis(),
};
});
describe('createAgentHandler', () => {
test('should create agent with allowed fields only', async () => {
const validData = {
name: 'Test Agent',
description: 'A test agent',
instructions: 'Be helpful',
provider: 'openai',
model: 'gpt-4',
tools: ['web_search'],
model_parameters: { temperature: 0.7 },
tool_resources: {
file_search: { file_ids: ['file1', 'file2'] },
},
};
mockReq.body = validData;
await createAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(201);
expect(mockRes.json).toHaveBeenCalled();
const createdAgent = mockRes.json.mock.calls[0][0];
expect(createdAgent.name).toBe('Test Agent');
expect(createdAgent.description).toBe('A test agent');
expect(createdAgent.provider).toBe('openai');
expect(createdAgent.model).toBe('gpt-4');
expect(createdAgent.author.toString()).toBe(mockReq.user.id);
expect(createdAgent.tools).toContain('web_search');
// Verify in database
const agentInDb = await Agent.findOne({ id: createdAgent.id });
expect(agentInDb).toBeDefined();
expect(agentInDb.name).toBe('Test Agent');
expect(agentInDb.author.toString()).toBe(mockReq.user.id);
});
test('should reject creation with unauthorized fields (mass assignment protection)', async () => {
const maliciousData = {
// Required fields
provider: 'openai',
model: 'gpt-4',
name: 'Malicious Agent',
// Unauthorized fields that should be stripped
author: new mongoose.Types.ObjectId().toString(), // Should not be able to set author
authorName: 'Hacker', // Should be stripped
isCollaborative: true, // Should be stripped on creation
versions: [], // Should be stripped
_id: new mongoose.Types.ObjectId(), // Should be stripped
id: 'custom_agent_id', // Should be overridden
createdAt: new Date('2020-01-01'), // Should be stripped
updatedAt: new Date('2020-01-01'), // Should be stripped
};
mockReq.body = maliciousData;
await createAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(201);
const createdAgent = mockRes.json.mock.calls[0][0];
// Verify unauthorized fields were not set
expect(createdAgent.author.toString()).toBe(mockReq.user.id); // Should be the request user, not the malicious value
expect(createdAgent.authorName).toBeUndefined();
expect(createdAgent.isCollaborative).toBeFalsy();
expect(createdAgent.versions).toHaveLength(1); // Should have exactly 1 version from creation
expect(createdAgent.id).not.toBe('custom_agent_id'); // Should have generated ID
expect(createdAgent.id).toMatch(/^agent_/); // Should have proper prefix
// Verify timestamps are recent (not the malicious dates)
const createdTime = new Date(createdAgent.createdAt).getTime();
const now = Date.now();
expect(now - createdTime).toBeLessThan(5000); // Created within last 5 seconds
// Verify in database
const agentInDb = await Agent.findOne({ id: createdAgent.id });
expect(agentInDb.author.toString()).toBe(mockReq.user.id);
expect(agentInDb.authorName).toBeUndefined();
});
test('should validate required fields', async () => {
const invalidData = {
name: 'Missing Required Fields',
// Missing provider and model
};
mockReq.body = invalidData;
await createAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(400);
expect(mockRes.json).toHaveBeenCalledWith(
expect.objectContaining({
error: 'Invalid request data',
details: expect.any(Array),
}),
);
// Verify nothing was created in database
const count = await Agent.countDocuments();
expect(count).toBe(0);
});
test('should handle tool_resources validation', async () => {
const dataWithInvalidToolResources = {
provider: 'openai',
model: 'gpt-4',
name: 'Agent with Tool Resources',
tool_resources: {
// Valid resources
file_search: {
file_ids: ['file1', 'file2'],
vector_store_ids: ['vs1'],
},
execute_code: {
file_ids: ['file3'],
},
// Invalid resource (should be stripped by schema)
invalid_resource: {
file_ids: ['file4'],
},
},
};
mockReq.body = dataWithInvalidToolResources;
await createAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(201);
const createdAgent = mockRes.json.mock.calls[0][0];
expect(createdAgent.tool_resources).toBeDefined();
expect(createdAgent.tool_resources.file_search).toBeDefined();
expect(createdAgent.tool_resources.execute_code).toBeDefined();
expect(createdAgent.tool_resources.invalid_resource).toBeUndefined(); // Should be stripped
// Verify in database
const agentInDb = await Agent.findOne({ id: createdAgent.id });
expect(agentInDb.tool_resources.invalid_resource).toBeUndefined();
});
test('should handle avatar validation', async () => {
const dataWithAvatar = {
provider: 'openai',
model: 'gpt-4',
name: 'Agent with Avatar',
avatar: {
filepath: 'https://example.com/avatar.png',
source: 's3',
},
};
mockReq.body = dataWithAvatar;
await createAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(201);
const createdAgent = mockRes.json.mock.calls[0][0];
expect(createdAgent.avatar).toEqual({
filepath: 'https://example.com/avatar.png',
source: 's3',
});
});
test('should handle invalid avatar format', async () => {
const dataWithInvalidAvatar = {
provider: 'openai',
model: 'gpt-4',
name: 'Agent with Invalid Avatar',
avatar: 'just-a-string', // Invalid format
};
mockReq.body = dataWithInvalidAvatar;
await createAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(400);
expect(mockRes.json).toHaveBeenCalledWith(
expect.objectContaining({
error: 'Invalid request data',
}),
);
});
});
describe('updateAgentHandler', () => {
let existingAgentId;
let existingAgentAuthorId;
beforeEach(async () => {
// Create an existing agent for update tests
existingAgentAuthorId = new mongoose.Types.ObjectId();
const agent = await Agent.create({
id: `agent_${uuidv4()}`,
name: 'Original Agent',
provider: 'openai',
model: 'gpt-3.5-turbo',
author: existingAgentAuthorId,
description: 'Original description',
isCollaborative: false,
versions: [
{
name: 'Original Agent',
provider: 'openai',
model: 'gpt-3.5-turbo',
description: 'Original description',
createdAt: new Date(),
updatedAt: new Date(),
},
],
});
existingAgentId = agent.id;
});
test('should update agent with allowed fields only', async () => {
mockReq.user.id = existingAgentAuthorId.toString(); // Set as author
mockReq.params.id = existingAgentId;
mockReq.body = {
name: 'Updated Agent',
description: 'Updated description',
model: 'gpt-4',
isCollaborative: true, // This IS allowed in updates
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.status).not.toHaveBeenCalledWith(400);
expect(mockRes.status).not.toHaveBeenCalledWith(403);
expect(mockRes.json).toHaveBeenCalled();
const updatedAgent = mockRes.json.mock.calls[0][0];
expect(updatedAgent.name).toBe('Updated Agent');
expect(updatedAgent.description).toBe('Updated description');
expect(updatedAgent.model).toBe('gpt-4');
expect(updatedAgent.isCollaborative).toBe(true);
expect(updatedAgent.author).toBe(existingAgentAuthorId.toString());
// Verify in database
const agentInDb = await Agent.findOne({ id: existingAgentId });
expect(agentInDb.name).toBe('Updated Agent');
expect(agentInDb.isCollaborative).toBe(true);
});
test('should reject update with unauthorized fields (mass assignment protection)', async () => {
mockReq.user.id = existingAgentAuthorId.toString();
mockReq.params.id = existingAgentId;
mockReq.body = {
name: 'Updated Name',
// Unauthorized fields that should be stripped
author: new mongoose.Types.ObjectId().toString(), // Should not be able to change author
authorName: 'Hacker', // Should be stripped
id: 'different_agent_id', // Should be stripped
_id: new mongoose.Types.ObjectId(), // Should be stripped
versions: [], // Should be stripped
createdAt: new Date('2020-01-01'), // Should be stripped
updatedAt: new Date('2020-01-01'), // Should be stripped
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.json).toHaveBeenCalled();
const updatedAgent = mockRes.json.mock.calls[0][0];
// Verify unauthorized fields were not changed
expect(updatedAgent.author).toBe(existingAgentAuthorId.toString()); // Should not have changed
expect(updatedAgent.authorName).toBeUndefined();
expect(updatedAgent.id).toBe(existingAgentId); // Should not have changed
expect(updatedAgent.name).toBe('Updated Name'); // Only this should have changed
// Verify in database
const agentInDb = await Agent.findOne({ id: existingAgentId });
expect(agentInDb.author.toString()).toBe(existingAgentAuthorId.toString());
expect(agentInDb.id).toBe(existingAgentId);
});
test('should reject update from non-author when not collaborative', async () => {
const differentUserId = new mongoose.Types.ObjectId().toString();
mockReq.user.id = differentUserId; // Different user
mockReq.params.id = existingAgentId;
mockReq.body = {
name: 'Unauthorized Update',
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(403);
expect(mockRes.json).toHaveBeenCalledWith({
error: 'You do not have permission to modify this non-collaborative agent',
});
// Verify agent was not modified in database
const agentInDb = await Agent.findOne({ id: existingAgentId });
expect(agentInDb.name).toBe('Original Agent');
});
test('should allow update from non-author when collaborative', async () => {
// First make the agent collaborative
await Agent.updateOne({ id: existingAgentId }, { isCollaborative: true });
const differentUserId = new mongoose.Types.ObjectId().toString();
mockReq.user.id = differentUserId; // Different user
mockReq.params.id = existingAgentId;
mockReq.body = {
name: 'Collaborative Update',
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.status).not.toHaveBeenCalledWith(403);
expect(mockRes.json).toHaveBeenCalled();
const updatedAgent = mockRes.json.mock.calls[0][0];
expect(updatedAgent.name).toBe('Collaborative Update');
// Author field should be removed for non-author
expect(updatedAgent.author).toBeUndefined();
// Verify in database
const agentInDb = await Agent.findOne({ id: existingAgentId });
expect(agentInDb.name).toBe('Collaborative Update');
});
test('should allow admin to update any agent', async () => {
const adminUserId = new mongoose.Types.ObjectId().toString();
mockReq.user.id = adminUserId;
mockReq.user.role = 'ADMIN'; // Set as admin
mockReq.params.id = existingAgentId;
mockReq.body = {
name: 'Admin Update',
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.status).not.toHaveBeenCalledWith(403);
expect(mockRes.json).toHaveBeenCalled();
const updatedAgent = mockRes.json.mock.calls[0][0];
expect(updatedAgent.name).toBe('Admin Update');
});
test('should handle projectIds updates', async () => {
mockReq.user.id = existingAgentAuthorId.toString();
mockReq.params.id = existingAgentId;
const projectId1 = new mongoose.Types.ObjectId().toString();
const projectId2 = new mongoose.Types.ObjectId().toString();
mockReq.body = {
projectIds: [projectId1, projectId2],
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.json).toHaveBeenCalled();
const updatedAgent = mockRes.json.mock.calls[0][0];
expect(updatedAgent).toBeDefined();
// Note: updateAgentProjects requires more setup, so we just verify the handler doesn't crash
});
test('should validate tool_resources in updates', async () => {
mockReq.user.id = existingAgentAuthorId.toString();
mockReq.params.id = existingAgentId;
mockReq.body = {
tool_resources: {
ocr: {
file_ids: ['ocr1', 'ocr2'],
},
execute_code: {
file_ids: ['img1'],
},
// Invalid tool resource
invalid_tool: {
file_ids: ['invalid'],
},
},
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.json).toHaveBeenCalled();
const updatedAgent = mockRes.json.mock.calls[0][0];
expect(updatedAgent.tool_resources).toBeDefined();
expect(updatedAgent.tool_resources.ocr).toBeDefined();
expect(updatedAgent.tool_resources.execute_code).toBeDefined();
expect(updatedAgent.tool_resources.invalid_tool).toBeUndefined();
});
test('should return 404 for non-existent agent', async () => {
mockReq.user.id = existingAgentAuthorId.toString();
mockReq.params.id = `agent_${uuidv4()}`; // Non-existent ID
mockReq.body = {
name: 'Update Non-existent',
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(404);
expect(mockRes.json).toHaveBeenCalledWith({ error: 'Agent not found' });
});
test('should handle validation errors properly', async () => {
mockReq.user.id = existingAgentAuthorId.toString();
mockReq.params.id = existingAgentId;
mockReq.body = {
model_parameters: 'invalid-not-an-object', // Should be an object
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(400);
expect(mockRes.json).toHaveBeenCalledWith(
expect.objectContaining({
error: 'Invalid request data',
details: expect.any(Array),
}),
);
});
});
describe('Mass Assignment Attack Scenarios', () => {
test('should prevent setting system fields during creation', async () => {
const systemFields = {
provider: 'openai',
model: 'gpt-4',
name: 'System Fields Test',
// System fields that should never be settable by users
__v: 99,
_id: new mongoose.Types.ObjectId(),
versions: [
{
name: 'Fake Version',
provider: 'fake',
model: 'fake-model',
},
],
};
mockReq.body = systemFields;
await createAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(201);
const createdAgent = mockRes.json.mock.calls[0][0];
// Verify system fields were not affected
expect(createdAgent.__v).not.toBe(99);
expect(createdAgent.versions).toHaveLength(1); // Should only have the auto-created version
expect(createdAgent.versions[0].name).toBe('System Fields Test'); // From actual creation
expect(createdAgent.versions[0].provider).toBe('openai'); // From actual creation
// Verify in database
const agentInDb = await Agent.findOne({ id: createdAgent.id });
expect(agentInDb.__v).not.toBe(99);
});
test('should prevent privilege escalation through isCollaborative', async () => {
// Create a non-collaborative agent
const authorId = new mongoose.Types.ObjectId();
const agent = await Agent.create({
id: `agent_${uuidv4()}`,
name: 'Private Agent',
provider: 'openai',
model: 'gpt-4',
author: authorId,
isCollaborative: false,
versions: [
{
name: 'Private Agent',
provider: 'openai',
model: 'gpt-4',
createdAt: new Date(),
updatedAt: new Date(),
},
],
});
// Try to make it collaborative as a different user
const attackerId = new mongoose.Types.ObjectId().toString();
mockReq.user.id = attackerId;
mockReq.params.id = agent.id;
mockReq.body = {
isCollaborative: true, // Trying to escalate privileges
};
await updateAgentHandler(mockReq, mockRes);
// Should be rejected
expect(mockRes.status).toHaveBeenCalledWith(403);
// Verify in database that it's still not collaborative
const agentInDb = await Agent.findOne({ id: agent.id });
expect(agentInDb.isCollaborative).toBe(false);
});
test('should prevent author hijacking', async () => {
const originalAuthorId = new mongoose.Types.ObjectId();
const attackerId = new mongoose.Types.ObjectId();
// Admin creates an agent
mockReq.user.id = originalAuthorId.toString();
mockReq.user.role = 'ADMIN';
mockReq.body = {
provider: 'openai',
model: 'gpt-4',
name: 'Admin Agent',
author: attackerId.toString(), // Trying to set different author
};
await createAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(201);
const createdAgent = mockRes.json.mock.calls[0][0];
// Author should be the actual user, not the attempted value
expect(createdAgent.author.toString()).toBe(originalAuthorId.toString());
expect(createdAgent.author.toString()).not.toBe(attackerId.toString());
// Verify in database
const agentInDb = await Agent.findOne({ id: createdAgent.id });
expect(agentInDb.author.toString()).toBe(originalAuthorId.toString());
});
test('should strip unknown fields to prevent future vulnerabilities', async () => {
mockReq.body = {
provider: 'openai',
model: 'gpt-4',
name: 'Future Proof Test',
// Unknown fields that might be added in future
superAdminAccess: true,
bypassAllChecks: true,
internalFlag: 'secret',
futureFeature: 'exploit',
};
await createAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(201);
const createdAgent = mockRes.json.mock.calls[0][0];
// Verify unknown fields were stripped
expect(createdAgent.superAdminAccess).toBeUndefined();
expect(createdAgent.bypassAllChecks).toBeUndefined();
expect(createdAgent.internalFlag).toBeUndefined();
expect(createdAgent.futureFeature).toBeUndefined();
// Also check in database
const agentInDb = await Agent.findOne({ id: createdAgent.id }).lean();
expect(agentInDb.superAdminAccess).toBeUndefined();
expect(agentInDb.bypassAllChecks).toBeUndefined();
expect(agentInDb.internalFlag).toBeUndefined();
expect(agentInDb.futureFeature).toBeUndefined();
});
});
});
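The stripping behavior these tests assert is consistent with an allow-list validation pass over req.body before any write. A minimal sketch of the idea, assuming a Zod schema — the schema name and field list here are illustrative, not the project's actual definitions:

const { z } = require('zod');

// Hypothetical allow-list: only fields users may set survive parsing.
// z.object() strips unrecognized keys (author, versions, _id,
// superAdminAccess, ...) by default; .strip() makes that explicit.
const agentUpdateSchema = z
  .object({
    name: z.string().optional(),
    description: z.string().optional(),
    isCollaborative: z.boolean().optional(),
    projectIds: z.array(z.string()).optional(),
    model_parameters: z.record(z.unknown()).optional(),
  })
  .strip();

// Inside the handler: validation failures become the 400 the tests expect.
const result = agentUpdateSchema.safeParse(req.body);
if (!result.success) {
  return res.status(400).json({
    error: 'Invalid request data',
    details: result.error.errors,
  });
}
const updateData = result.data; // safe to hand to the model layer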

View File

@@ -1,21 +1,21 @@
const { nanoid } = require('nanoid');
const { EnvVar } = require('@librechat/agents');
const { logger } = require('@librechat/data-schemas');
const { checkAccess, loadWebSearchAuth } = require('@librechat/api');
const {
Tools,
AuthType,
Permissions,
ToolCallTypes,
PermissionTypes,
loadWebSearchAuth,
} = require('librechat-data-provider');
const { processFileURL, uploadImageBuffer } = require('~/server/services/Files/process');
const { processCodeOutput } = require('~/server/services/Files/Code/process');
const { createToolCall, getToolCallsByConvo } = require('~/models/ToolCall');
const { loadAuthValues } = require('~/server/services/Tools/credentials');
const { loadTools } = require('~/app/clients/tools/util');
const { checkAccess } = require('~/server/middleware');
const { getRoleByName } = require('~/models/Role');
const { getMessage } = require('~/models/Message');
const { logger } = require('~/config');
const fieldsMap = {
[Tools.execute_code]: [EnvVar.CODE_API_KEY],
@@ -79,6 +79,7 @@ const verifyToolAuth = async (req, res) => {
throwError: false,
});
} catch (error) {
logger.error('Error loading auth values', error);
res.status(200).json({ authenticated: false, message: AuthType.USER_PROVIDED });
return;
}
@@ -132,7 +133,12 @@ const callTool = async (req, res) => {
logger.debug(`[${toolId}/call] User: ${req.user.id}`);
let hasAccess = true;
if (toolAccessPermType[toolId]) {
hasAccess = await checkAccess(req.user, toolAccessPermType[toolId], [Permissions.USE]);
hasAccess = await checkAccess({
user: req.user,
permissionType: toolAccessPermType[toolId],
permissions: [Permissions.USE],
getRoleByName,
});
}
if (!hasAccess) {
logger.warn(

View File

@@ -16,7 +16,7 @@ const { connectDb, indexSync } = require('~/db');
const validateImageRequest = require('./middleware/validateImageRequest');
const { jwtLogin, ldapLogin, passportLogin } = require('~/strategies');
const errorController = require('./controllers/ErrorController');
const initializeMCP = require('./services/initializeMCP');
const initializeMCPs = require('./services/initializeMCPs');
const configureSocialLogins = require('./socialLogins');
const AppService = require('./services/AppService');
const staticCache = require('./utils/staticCache');
@@ -55,7 +55,6 @@ const startServer = async () => {
/* Middleware */
app.use(noIndex);
app.use(errorController);
app.use(express.json({ limit: '3mb' }));
app.use(express.urlencoded({ extended: true, limit: '3mb' }));
app.use(mongoSanitize());
@@ -121,6 +120,9 @@ const startServer = async () => {
app.use('/api/tags', routes.tags);
app.use('/api/mcp', routes.mcp);
// Add the error controller one more time after all routes
app.use(errorController);
app.use((req, res) => {
res.set({
'Cache-Control': process.env.INDEX_CACHE_CONTROL || 'no-cache, no-store, must-revalidate',
@@ -144,7 +146,7 @@ const startServer = async () => {
logger.info(`Server listening at http://${host == '0.0.0.0' ? 'localhost' : host}:${port}`);
}
initializeMCP(app);
initializeMCPs(app);
});
};

View File

@@ -1,5 +1,4 @@
const fs = require('fs');
const path = require('path');
const request = require('supertest');
const { MongoMemoryServer } = require('mongodb-memory-server');
const mongoose = require('mongoose');
@@ -59,6 +58,30 @@ describe('Server Configuration', () => {
expect(response.headers['pragma']).toBe('no-cache');
expect(response.headers['expires']).toBe('0');
});
it('should return 500 for unknown errors via ErrorController', async () => {
// Testing the error handling here on top of unit tests to ensure the middleware is correctly integrated
// Mock MongoDB operations to fail
const originalFindOne = mongoose.models.User.findOne;
const mockError = new Error('MongoDB operation failed');
mongoose.models.User.findOne = jest.fn().mockImplementation(() => {
throw mockError;
});
try {
const response = await request(app).post('/api/auth/login').send({
email: 'test@example.com',
password: 'password123',
});
expect(response.status).toBe(500);
expect(response.text).toBe('An unknown error occurred.');
} finally {
// Restore original function
mongoose.models.User.findOne = originalFindOne;
}
});
});
// Polls the /health endpoint every 30ms for up to 10 seconds to wait for the server to start completely
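For reference, the new 500 assertion above pins down error-handling behavior of roughly this shape — a hypothetical sketch only, since the real ErrorController handles more cases:

// Express treats four-argument middleware as an error handler.
const errorController = (err, req, res, next) => {
  if (res.headersSent) {
    return next(err);
  }
  return res.status(500).send('An unknown error occurred.');
};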

View File

@@ -1,3 +1,4 @@
const { handleError } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const {
EndpointURLs,
@@ -14,7 +15,6 @@ const openAI = require('~/server/services/Endpoints/openAI');
const agents = require('~/server/services/Endpoints/agents');
const custom = require('~/server/services/Endpoints/custom');
const google = require('~/server/services/Endpoints/google');
const { handleError } = require('~/server/utils');
const buildFunction = {
[EModelEndpoint.openAI]: openAI.buildOptions,

View File

@@ -18,7 +18,6 @@ const message = 'Your account has been temporarily banned due to violations of o
* @function
* @param {Object} req - Express Request object.
* @param {Object} res - Express Response object.
* @param {String} errorMessage - Error message to be displayed in case of /api/ask or /api/edit request.
*
* @returns {Promise<Object>} - Returns a Promise that, when resolved, sends a 403 status with a specific message if the request is not an /api/ask or /api/edit request; if it is, calls the `denyRequest()` function.

*/
@@ -135,6 +134,7 @@ const checkBan = async (req, res, next = () => {}) => {
return await banResponse(req, res);
} catch (error) {
logger.error('Error in checkBan middleware:', error);
return next(error);
}
};

View File

@@ -1,4 +1,4 @@
const { Time, CacheKeys } = require('librechat-data-provider');
const { Time, CacheKeys, ViolationTypes } = require('librechat-data-provider');
const clearPendingReq = require('~/cache/clearPendingReq');
const { logViolation, getLogStores } = require('~/cache');
const { isEnabled } = require('~/server/utils');
@@ -37,7 +37,7 @@ const concurrentLimiter = async (req, res, next) => {
const userId = req.user?.id ?? req.user?._id ?? '';
const limit = Math.max(CONCURRENT_MESSAGE_MAX, 1);
const type = 'concurrent';
const type = ViolationTypes.CONCURRENT;
const key = `${isEnabled(USE_REDIS) ? namespace : ''}:${userId}`;
const pendingRequests = +((await cache.get(key)) ?? 0);

View File

@@ -0,0 +1,79 @@
const rateLimit = require('express-rate-limit');
const { ViolationTypes } = require('librechat-data-provider');
const { limiterCache } = require('~/cache/cacheFactory');
const logViolation = require('~/cache/logViolation');
const getEnvironmentVariables = () => {
const FORK_IP_MAX = parseInt(process.env.FORK_IP_MAX) || 30;
const FORK_IP_WINDOW = parseInt(process.env.FORK_IP_WINDOW) || 1;
const FORK_USER_MAX = parseInt(process.env.FORK_USER_MAX) || 7;
const FORK_USER_WINDOW = parseInt(process.env.FORK_USER_WINDOW) || 1;
const FORK_VIOLATION_SCORE = process.env.FORK_VIOLATION_SCORE;
const forkIpWindowMs = FORK_IP_WINDOW * 60 * 1000;
const forkIpMax = FORK_IP_MAX;
const forkIpWindowInMinutes = forkIpWindowMs / 60000;
const forkUserWindowMs = FORK_USER_WINDOW * 60 * 1000;
const forkUserMax = FORK_USER_MAX;
const forkUserWindowInMinutes = forkUserWindowMs / 60000;
return {
forkIpWindowMs,
forkIpMax,
forkIpWindowInMinutes,
forkUserWindowMs,
forkUserMax,
forkUserWindowInMinutes,
forkViolationScore: FORK_VIOLATION_SCORE,
};
};
const createForkHandler = (ip = true) => {
const {
forkIpMax,
forkUserMax,
forkViolationScore,
forkIpWindowInMinutes,
forkUserWindowInMinutes,
} = getEnvironmentVariables();
return async (req, res) => {
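// Note: reuses the generic file-upload violation type for scoring, matching the import limiter's handler below.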
const type = ViolationTypes.FILE_UPLOAD_LIMIT;
const errorMessage = {
type,
max: ip ? forkIpMax : forkUserMax,
limiter: ip ? 'ip' : 'user',
windowInMinutes: ip ? forkIpWindowInMinutes : forkUserWindowInMinutes,
};
await logViolation(req, res, type, errorMessage, forkViolationScore);
res.status(429).json({ message: 'Too many conversation fork requests. Try again later' });
};
};
const createForkLimiters = () => {
const { forkIpWindowMs, forkIpMax, forkUserWindowMs, forkUserMax } = getEnvironmentVariables();
const ipLimiterOptions = {
windowMs: forkIpWindowMs,
max: forkIpMax,
handler: createForkHandler(),
store: limiterCache('fork_ip_limiter'),
};
const userLimiterOptions = {
windowMs: forkUserWindowMs,
max: forkUserMax,
handler: createForkHandler(false),
keyGenerator: function (req) {
return req.user?.id;
},
store: limiterCache('fork_user_limiter'),
};
const forkIpLimiter = rateLimit(ipLimiterOptions);
const forkUserLimiter = rateLimit(userLimiterOptions);
return { forkIpLimiter, forkUserLimiter };
};
module.exports = { createForkLimiters };
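Wiring mirrors the existing import limiters; the convos router diff further below applies these exactly as sketched here:

const { createForkLimiters } = require('~/server/middleware');

const { forkIpLimiter, forkUserLimiter } = createForkLimiters();

router.post('/fork', forkIpLimiter, forkUserLimiter, async (req, res) => {
  // ... fork the conversation
});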

View File

@@ -1,16 +1,14 @@
const rateLimit = require('express-rate-limit');
const { RedisStore } = require('rate-limit-redis');
const { ViolationTypes } = require('librechat-data-provider');
const ioredisClient = require('~/cache/ioredisClient');
const { limiterCache } = require('~/cache/cacheFactory');
const logViolation = require('~/cache/logViolation');
const { isEnabled } = require('~/server/utils');
const { logger } = require('~/config');
const getEnvironmentVariables = () => {
const IMPORT_IP_MAX = parseInt(process.env.IMPORT_IP_MAX) || 100;
const IMPORT_IP_WINDOW = parseInt(process.env.IMPORT_IP_WINDOW) || 15;
const IMPORT_USER_MAX = parseInt(process.env.IMPORT_USER_MAX) || 50;
const IMPORT_USER_WINDOW = parseInt(process.env.IMPORT_USER_WINDOW) || 15;
const IMPORT_VIOLATION_SCORE = process.env.IMPORT_VIOLATION_SCORE;
const importIpWindowMs = IMPORT_IP_WINDOW * 60 * 1000;
const importIpMax = IMPORT_IP_MAX;
@@ -27,12 +25,18 @@ const getEnvironmentVariables = () => {
importUserWindowMs,
importUserMax,
importUserWindowInMinutes,
importViolationScore: IMPORT_VIOLATION_SCORE,
};
};
const createImportHandler = (ip = true) => {
const { importIpMax, importIpWindowInMinutes, importUserMax, importUserWindowInMinutes } =
getEnvironmentVariables();
const {
importIpMax,
importUserMax,
importViolationScore,
importIpWindowInMinutes,
importUserWindowInMinutes,
} = getEnvironmentVariables();
return async (req, res) => {
const type = ViolationTypes.FILE_UPLOAD_LIMIT;
@@ -43,7 +47,7 @@ const createImportHandler = (ip = true) => {
windowInMinutes: ip ? importIpWindowInMinutes : importUserWindowInMinutes,
};
await logViolation(req, res, type, errorMessage);
await logViolation(req, res, type, errorMessage, importViolationScore);
res.status(429).json({ message: 'Too many conversation import requests. Try again later' });
};
};
@@ -56,6 +60,7 @@ const createImportLimiters = () => {
windowMs: importIpWindowMs,
max: importIpMax,
handler: createImportHandler(),
store: limiterCache('import_ip_limiter'),
};
const userLimiterOptions = {
windowMs: importUserWindowMs,
@@ -64,23 +69,9 @@ const createImportLimiters = () => {
keyGenerator: function (req) {
return req.user?.id; // Use the user ID or NULL if not available
},
store: limiterCache('import_user_limiter'),
};
if (isEnabled(process.env.USE_REDIS) && ioredisClient) {
logger.debug('Using Redis for import rate limiters.');
const sendCommand = (...args) => ioredisClient.call(...args);
const ipStore = new RedisStore({
sendCommand,
prefix: 'import_ip_limiter:',
});
const userStore = new RedisStore({
sendCommand,
prefix: 'import_user_limiter:',
});
ipLimiterOptions.store = ipStore;
userLimiterOptions.store = userStore;
}
const importIpLimiter = rateLimit(ipLimiterOptions);
const importUserLimiter = rateLimit(userLimiterOptions);
return { importIpLimiter, importUserLimiter };
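The Redis blocks removed above are what limiterCache now centralizes across every limiter in this set. A minimal sketch of such a factory, inferred from the deleted code; the actual cacheFactory implementation may differ:

const { RedisStore } = require('rate-limit-redis');
const ioredisClient = require('~/cache/ioredisClient');
const { isEnabled } = require('~/server/utils');

// Returns a Redis-backed store for express-rate-limit when USE_REDIS is
// enabled; otherwise undefined, so the limiter falls back to its default
// in-memory store.
const limiterCache = (prefix) => {
  if (!isEnabled(process.env.USE_REDIS) || !ioredisClient) {
    return undefined;
  }
  return new RedisStore({
    sendCommand: (...args) => ioredisClient.call(...args),
    prefix: `${prefix}:`,
  });
};

module.exports = { limiterCache };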

View File

@@ -4,6 +4,7 @@ const createSTTLimiters = require('./sttLimiters');
const loginLimiter = require('./loginLimiter');
const importLimiters = require('./importLimiters');
const uploadLimiters = require('./uploadLimiters');
const forkLimiters = require('./forkLimiters');
const registerLimiter = require('./registerLimiter');
const toolCallLimiter = require('./toolCallLimiter');
const messageLimiters = require('./messageLimiters');
@@ -14,6 +15,7 @@ module.exports = {
...uploadLimiters,
...importLimiters,
...messageLimiters,
...forkLimiters,
loginLimiter,
registerLimiter,
toolCallLimiter,

View File

@@ -1,9 +1,8 @@
const rateLimit = require('express-rate-limit');
const { RedisStore } = require('rate-limit-redis');
const { removePorts, isEnabled } = require('~/server/utils');
const ioredisClient = require('~/cache/ioredisClient');
const { ViolationTypes } = require('librechat-data-provider');
const { removePorts } = require('~/server/utils');
const { limiterCache } = require('~/cache/cacheFactory');
const { logViolation } = require('~/cache');
const { logger } = require('~/config');
const { LOGIN_WINDOW = 5, LOGIN_MAX = 7, LOGIN_VIOLATION_SCORE: score } = process.env;
const windowMs = LOGIN_WINDOW * 60 * 1000;
@@ -12,7 +11,7 @@ const windowInMinutes = windowMs / 60000;
const message = `Too many login attempts, please try again after ${windowInMinutes} minutes.`;
const handler = async (req, res) => {
const type = 'logins';
const type = ViolationTypes.LOGINS;
const errorMessage = {
type,
max,
@@ -28,17 +27,9 @@ const limiterOptions = {
max,
handler,
keyGenerator: removePorts,
store: limiterCache('login_limiter'),
};
if (isEnabled(process.env.USE_REDIS) && ioredisClient) {
logger.debug('Using Redis for login rate limiter.');
const store = new RedisStore({
sendCommand: (...args) => ioredisClient.call(...args),
prefix: 'login_limiter:',
});
limiterOptions.store = store;
}
const loginLimiter = rateLimit(limiterOptions);
module.exports = loginLimiter;

View File

@@ -1,16 +1,15 @@
const rateLimit = require('express-rate-limit');
const { RedisStore } = require('rate-limit-redis');
const { ViolationTypes } = require('librechat-data-provider');
const denyRequest = require('~/server/middleware/denyRequest');
const ioredisClient = require('~/cache/ioredisClient');
const { isEnabled } = require('~/server/utils');
const { limiterCache } = require('~/cache/cacheFactory');
const { logViolation } = require('~/cache');
const { logger } = require('~/config');
const {
MESSAGE_IP_MAX = 40,
MESSAGE_IP_WINDOW = 1,
MESSAGE_USER_MAX = 40,
MESSAGE_USER_WINDOW = 1,
MESSAGE_VIOLATION_SCORE: score,
} = process.env;
const ipWindowMs = MESSAGE_IP_WINDOW * 60 * 1000;
@@ -31,7 +30,7 @@ const userWindowInMinutes = userWindowMs / 60000;
*/
const createHandler = (ip = true) => {
return async (req, res) => {
const type = 'message_limit';
const type = ViolationTypes.MESSAGE_LIMIT;
const errorMessage = {
type,
max: ip ? ipMax : userMax,
@@ -39,7 +38,7 @@ const createHandler = (ip = true) => {
windowInMinutes: ip ? ipWindowInMinutes : userWindowInMinutes,
};
await logViolation(req, res, type, errorMessage);
await logViolation(req, res, type, errorMessage, score);
return await denyRequest(req, res, errorMessage);
};
};
@@ -51,6 +50,7 @@ const ipLimiterOptions = {
windowMs: ipWindowMs,
max: ipMax,
handler: createHandler(),
store: limiterCache('message_ip_limiter'),
};
const userLimiterOptions = {
@@ -60,23 +60,9 @@ const userLimiterOptions = {
keyGenerator: function (req) {
return req.user?.id; // Use the user ID or NULL if not available
},
store: limiterCache('message_user_limiter'),
};
if (isEnabled(process.env.USE_REDIS) && ioredisClient) {
logger.debug('Using Redis for message rate limiters.');
const sendCommand = (...args) => ioredisClient.call(...args);
const ipStore = new RedisStore({
sendCommand,
prefix: 'message_ip_limiter:',
});
const userStore = new RedisStore({
sendCommand,
prefix: 'message_user_limiter:',
});
ipLimiterOptions.store = ipStore;
userLimiterOptions.store = userStore;
}
/**
* Message request rate limiter by IP
*/

View File

@@ -1,9 +1,8 @@
const rateLimit = require('express-rate-limit');
const { RedisStore } = require('rate-limit-redis');
const { removePorts, isEnabled } = require('~/server/utils');
const ioredisClient = require('~/cache/ioredisClient');
const { ViolationTypes } = require('librechat-data-provider');
const { removePorts } = require('~/server/utils');
const { limiterCache } = require('~/cache/cacheFactory');
const { logViolation } = require('~/cache');
const { logger } = require('~/config');
const { REGISTER_WINDOW = 60, REGISTER_MAX = 5, REGISTRATION_VIOLATION_SCORE: score } = process.env;
const windowMs = REGISTER_WINDOW * 60 * 1000;
@@ -12,7 +11,7 @@ const windowInMinutes = windowMs / 60000;
const message = `Too many accounts created, please try again after ${windowInMinutes} minutes`;
const handler = async (req, res) => {
const type = 'registrations';
const type = ViolationTypes.REGISTRATIONS;
const errorMessage = {
type,
max,
@@ -28,17 +27,9 @@ const limiterOptions = {
max,
handler,
keyGenerator: removePorts,
store: limiterCache('register_limiter'),
};
if (isEnabled(process.env.USE_REDIS) && ioredisClient) {
logger.debug('Using Redis for register rate limiter.');
const store = new RedisStore({
sendCommand: (...args) => ioredisClient.call(...args),
prefix: 'register_limiter:',
});
limiterOptions.store = store;
}
const registerLimiter = rateLimit(limiterOptions);
module.exports = registerLimiter;

View File

@@ -1,10 +1,8 @@
const rateLimit = require('express-rate-limit');
const { RedisStore } = require('rate-limit-redis');
const { ViolationTypes } = require('librechat-data-provider');
const { removePorts, isEnabled } = require('~/server/utils');
const ioredisClient = require('~/cache/ioredisClient');
const { removePorts } = require('~/server/utils');
const { limiterCache } = require('~/cache/cacheFactory');
const { logViolation } = require('~/cache');
const { logger } = require('~/config');
const {
RESET_PASSWORD_WINDOW = 2,
@@ -33,17 +31,9 @@ const limiterOptions = {
max,
handler,
keyGenerator: removePorts,
store: limiterCache('reset_password_limiter'),
};
if (isEnabled(process.env.USE_REDIS) && ioredisClient) {
logger.debug('Using Redis for reset password rate limiter.');
const store = new RedisStore({
sendCommand: (...args) => ioredisClient.call(...args),
prefix: 'reset_password_limiter:',
});
limiterOptions.store = store;
}
const resetPasswordLimiter = rateLimit(limiterOptions);
module.exports = resetPasswordLimiter;

View File

@@ -1,16 +1,14 @@
const rateLimit = require('express-rate-limit');
const { RedisStore } = require('rate-limit-redis');
const { ViolationTypes } = require('librechat-data-provider');
const ioredisClient = require('~/cache/ioredisClient');
const { limiterCache } = require('~/cache/cacheFactory');
const logViolation = require('~/cache/logViolation');
const { isEnabled } = require('~/server/utils');
const { logger } = require('~/config');
const getEnvironmentVariables = () => {
const STT_IP_MAX = parseInt(process.env.STT_IP_MAX) || 100;
const STT_IP_WINDOW = parseInt(process.env.STT_IP_WINDOW) || 1;
const STT_USER_MAX = parseInt(process.env.STT_USER_MAX) || 50;
const STT_USER_WINDOW = parseInt(process.env.STT_USER_WINDOW) || 1;
const STT_VIOLATION_SCORE = process.env.STT_VIOLATION_SCORE;
const sttIpWindowMs = STT_IP_WINDOW * 60 * 1000;
const sttIpMax = STT_IP_MAX;
@@ -27,11 +25,12 @@ const getEnvironmentVariables = () => {
sttUserWindowMs,
sttUserMax,
sttUserWindowInMinutes,
sttViolationScore: STT_VIOLATION_SCORE,
};
};
const createSTTHandler = (ip = true) => {
const { sttIpMax, sttIpWindowInMinutes, sttUserMax, sttUserWindowInMinutes } =
const { sttIpMax, sttIpWindowInMinutes, sttUserMax, sttUserWindowInMinutes, sttViolationScore } =
getEnvironmentVariables();
return async (req, res) => {
@@ -43,7 +42,7 @@ const createSTTHandler = (ip = true) => {
windowInMinutes: ip ? sttIpWindowInMinutes : sttUserWindowInMinutes,
};
await logViolation(req, res, type, errorMessage);
await logViolation(req, res, type, errorMessage, sttViolationScore);
res.status(429).json({ message: 'Too many STT requests. Try again later' });
};
};
@@ -55,6 +54,7 @@ const createSTTLimiters = () => {
windowMs: sttIpWindowMs,
max: sttIpMax,
handler: createSTTHandler(),
store: limiterCache('stt_ip_limiter'),
};
const userLimiterOptions = {
@@ -64,23 +64,9 @@ const createSTTLimiters = () => {
keyGenerator: function (req) {
return req.user?.id; // Use the user ID or NULL if not available
},
store: limiterCache('stt_user_limiter'),
};
if (isEnabled(process.env.USE_REDIS) && ioredisClient) {
logger.debug('Using Redis for STT rate limiters.');
const sendCommand = (...args) => ioredisClient.call(...args);
const ipStore = new RedisStore({
sendCommand,
prefix: 'stt_ip_limiter:',
});
const userStore = new RedisStore({
sendCommand,
prefix: 'stt_user_limiter:',
});
ipLimiterOptions.store = ipStore;
userLimiterOptions.store = userStore;
}
const sttIpLimiter = rateLimit(ipLimiterOptions);
const sttUserLimiter = rateLimit(userLimiterOptions);

View File

@@ -1,10 +1,9 @@
const rateLimit = require('express-rate-limit');
const { RedisStore } = require('rate-limit-redis');
const { ViolationTypes } = require('librechat-data-provider');
const ioredisClient = require('~/cache/ioredisClient');
const { limiterCache } = require('~/cache/cacheFactory');
const logViolation = require('~/cache/logViolation');
const { isEnabled } = require('~/server/utils');
const { logger } = require('~/config');
const { TOOL_CALL_VIOLATION_SCORE: score } = process.env;
const handler = async (req, res) => {
const type = ViolationTypes.TOOL_CALL_LIMIT;
@@ -15,7 +14,7 @@ const handler = async (req, res) => {
windowInMinutes: 1,
};
await logViolation(req, res, type, errorMessage, 0);
await logViolation(req, res, type, errorMessage, score);
res.status(429).json({ message: 'Too many tool call requests. Try again later' });
};
@@ -26,17 +25,9 @@ const limiterOptions = {
keyGenerator: function (req) {
return req.user?.id;
},
store: limiterCache('tool_call_limiter'),
};
if (isEnabled(process.env.USE_REDIS) && ioredisClient) {
logger.debug('Using Redis for tool call rate limiter.');
const store = new RedisStore({
sendCommand: (...args) => ioredisClient.call(...args),
prefix: 'tool_call_limiter:',
});
limiterOptions.store = store;
}
const toolCallLimiter = rateLimit(limiterOptions);
module.exports = toolCallLimiter;

View File

@@ -1,16 +1,14 @@
const rateLimit = require('express-rate-limit');
const { RedisStore } = require('rate-limit-redis');
const { ViolationTypes } = require('librechat-data-provider');
const ioredisClient = require('~/cache/ioredisClient');
const logViolation = require('~/cache/logViolation');
const { isEnabled } = require('~/server/utils');
const { logger } = require('~/config');
const { limiterCache } = require('~/cache/cacheFactory');
const getEnvironmentVariables = () => {
const TTS_IP_MAX = parseInt(process.env.TTS_IP_MAX) || 100;
const TTS_IP_WINDOW = parseInt(process.env.TTS_IP_WINDOW) || 1;
const TTS_USER_MAX = parseInt(process.env.TTS_USER_MAX) || 50;
const TTS_USER_WINDOW = parseInt(process.env.TTS_USER_WINDOW) || 1;
const TTS_VIOLATION_SCORE = process.env.TTS_VIOLATION_SCORE;
const ttsIpWindowMs = TTS_IP_WINDOW * 60 * 1000;
const ttsIpMax = TTS_IP_MAX;
@@ -27,11 +25,12 @@ const getEnvironmentVariables = () => {
ttsUserWindowMs,
ttsUserMax,
ttsUserWindowInMinutes,
ttsViolationScore: TTS_VIOLATION_SCORE,
};
};
const createTTSHandler = (ip = true) => {
const { ttsIpMax, ttsIpWindowInMinutes, ttsUserMax, ttsUserWindowInMinutes } =
const { ttsIpMax, ttsIpWindowInMinutes, ttsUserMax, ttsUserWindowInMinutes, ttsViolationScore } =
getEnvironmentVariables();
return async (req, res) => {
@@ -43,7 +42,7 @@ const createTTSHandler = (ip = true) => {
windowInMinutes: ip ? ttsIpWindowInMinutes : ttsUserWindowInMinutes,
};
await logViolation(req, res, type, errorMessage);
await logViolation(req, res, type, errorMessage, ttsViolationScore);
res.status(429).json({ message: 'Too many TTS requests. Try again later' });
};
};
@@ -55,32 +54,19 @@ const createTTSLimiters = () => {
windowMs: ttsIpWindowMs,
max: ttsIpMax,
handler: createTTSHandler(),
store: limiterCache('tts_ip_limiter'),
};
const userLimiterOptions = {
windowMs: ttsUserWindowMs,
max: ttsUserMax,
handler: createTTSHandler(false),
store: limiterCache('tts_user_limiter'),
keyGenerator: function (req) {
return req.user?.id; // Use the user ID or NULL if not available
},
};
if (isEnabled(process.env.USE_REDIS) && ioredisClient) {
logger.debug('Using Redis for TTS rate limiters.');
const sendCommand = (...args) => ioredisClient.call(...args);
const ipStore = new RedisStore({
sendCommand,
prefix: 'tts_ip_limiter:',
});
const userStore = new RedisStore({
sendCommand,
prefix: 'tts_user_limiter:',
});
ipLimiterOptions.store = ipStore;
userLimiterOptions.store = userStore;
}
const ttsIpLimiter = rateLimit(ipLimiterOptions);
const ttsUserLimiter = rateLimit(userLimiterOptions);

View File

@@ -1,16 +1,14 @@
const rateLimit = require('express-rate-limit');
const { RedisStore } = require('rate-limit-redis');
const { ViolationTypes } = require('librechat-data-provider');
const ioredisClient = require('~/cache/ioredisClient');
const { limiterCache } = require('~/cache/cacheFactory');
const logViolation = require('~/cache/logViolation');
const { isEnabled } = require('~/server/utils');
const { logger } = require('~/config');
const getEnvironmentVariables = () => {
const FILE_UPLOAD_IP_MAX = parseInt(process.env.FILE_UPLOAD_IP_MAX) || 100;
const FILE_UPLOAD_IP_WINDOW = parseInt(process.env.FILE_UPLOAD_IP_WINDOW) || 15;
const FILE_UPLOAD_USER_MAX = parseInt(process.env.FILE_UPLOAD_USER_MAX) || 50;
const FILE_UPLOAD_USER_WINDOW = parseInt(process.env.FILE_UPLOAD_USER_WINDOW) || 15;
const FILE_UPLOAD_VIOLATION_SCORE = process.env.FILE_UPLOAD_VIOLATION_SCORE;
const fileUploadIpWindowMs = FILE_UPLOAD_IP_WINDOW * 60 * 1000;
const fileUploadIpMax = FILE_UPLOAD_IP_MAX;
@@ -27,6 +25,7 @@ const getEnvironmentVariables = () => {
fileUploadUserWindowMs,
fileUploadUserMax,
fileUploadUserWindowInMinutes,
fileUploadViolationScore: FILE_UPLOAD_VIOLATION_SCORE,
};
};
@@ -36,6 +35,7 @@ const createFileUploadHandler = (ip = true) => {
fileUploadIpWindowInMinutes,
fileUploadUserMax,
fileUploadUserWindowInMinutes,
fileUploadViolationScore,
} = getEnvironmentVariables();
return async (req, res) => {
@@ -47,7 +47,7 @@ const createFileUploadHandler = (ip = true) => {
windowInMinutes: ip ? fileUploadIpWindowInMinutes : fileUploadUserWindowInMinutes,
};
await logViolation(req, res, type, errorMessage);
await logViolation(req, res, type, errorMessage, fileUploadViolationScore);
res.status(429).json({ message: 'Too many file upload requests. Try again later' });
};
};
@@ -60,6 +60,7 @@ const createFileLimiters = () => {
windowMs: fileUploadIpWindowMs,
max: fileUploadIpMax,
handler: createFileUploadHandler(),
store: limiterCache('file_upload_ip_limiter'),
};
const userLimiterOptions = {
@@ -69,23 +70,9 @@ const createFileLimiters = () => {
keyGenerator: function (req) {
return req.user?.id; // Use the user ID or NULL if not available
},
store: limiterCache('file_upload_user_limiter'),
};
if (isEnabled(process.env.USE_REDIS) && ioredisClient) {
logger.debug('Using Redis for file upload rate limiters.');
const sendCommand = (...args) => ioredisClient.call(...args);
const ipStore = new RedisStore({
sendCommand,
prefix: 'file_upload_ip_limiter:',
});
const userStore = new RedisStore({
sendCommand,
prefix: 'file_upload_user_limiter:',
});
ipLimiterOptions.store = ipStore;
userLimiterOptions.store = userStore;
}
const fileUploadIpLimiter = rateLimit(ipLimiterOptions);
const fileUploadUserLimiter = rateLimit(userLimiterOptions);

View File

@@ -1,10 +1,8 @@
const rateLimit = require('express-rate-limit');
const { RedisStore } = require('rate-limit-redis');
const { ViolationTypes } = require('librechat-data-provider');
const { removePorts, isEnabled } = require('~/server/utils');
const ioredisClient = require('~/cache/ioredisClient');
const { removePorts } = require('~/server/utils');
const { limiterCache } = require('~/cache/cacheFactory');
const { logViolation } = require('~/cache');
const { logger } = require('~/config');
const {
VERIFY_EMAIL_WINDOW = 2,
@@ -33,17 +31,9 @@ const limiterOptions = {
max,
handler,
keyGenerator: removePorts,
store: limiterCache('verify_email_limiter'),
};
if (isEnabled(process.env.USE_REDIS) && ioredisClient) {
logger.debug('Using Redis for verify email rate limiter.');
const store = new RedisStore({
sendCommand: (...args) => ioredisClient.call(...args),
prefix: 'verify_email_limiter:',
});
limiterOptions.store = store;
}
const verifyEmailLimiter = rateLimit(limiterOptions);
module.exports = verifyEmailLimiter;

View File

@@ -1,78 +0,0 @@
const { getRoleByName } = require('~/models/Role');
const { logger } = require('~/config');
/**
* Core function to check if a user has one or more required permissions
*
* @param {object} user - The user object
* @param {PermissionTypes} permissionType - The type of permission to check
* @param {Permissions[]} permissions - The list of specific permissions to check
* @param {Record<Permissions, string[]>} [bodyProps] - An optional object where keys are permissions and values are arrays of properties to check
* @param {object} [checkObject] - The object to check properties against
* @returns {Promise<boolean>} Whether the user has the required permissions
*/
const checkAccess = async (user, permissionType, permissions, bodyProps = {}, checkObject = {}) => {
if (!user) {
return false;
}
const role = await getRoleByName(user.role);
if (role && role.permissions && role.permissions[permissionType]) {
const hasAnyPermission = permissions.some((permission) => {
if (role.permissions[permissionType][permission]) {
return true;
}
if (bodyProps[permission] && checkObject) {
return bodyProps[permission].some((prop) =>
Object.prototype.hasOwnProperty.call(checkObject, prop),
);
}
return false;
});
return hasAnyPermission;
}
return false;
};
/**
* Middleware to check if a user has one or more required permissions, optionally based on `req.body` properties.
*
* @param {PermissionTypes} permissionType - The type of permission to check.
* @param {Permissions[]} permissions - The list of specific permissions to check.
* @param {Record<Permissions, string[]>} [bodyProps] - An optional object where keys are permissions and values are arrays of `req.body` properties to check.
* @returns {(req: ServerRequest, res: ServerResponse, next: NextFunction) => Promise<void>} Express middleware function.
*/
const generateCheckAccess = (permissionType, permissions, bodyProps = {}) => {
return async (req, res, next) => {
try {
const hasAccess = await checkAccess(
req.user,
permissionType,
permissions,
bodyProps,
req.body,
);
if (hasAccess) {
return next();
}
logger.warn(
`[${permissionType}] Forbidden: Insufficient permissions for User ${req.user.id}: ${permissions.join(', ')}`,
);
return res.status(403).json({ message: 'Forbidden: Insufficient permissions' });
} catch (error) {
logger.error(error);
return res.status(500).json({ message: `Server error: ${error.message}` });
}
};
};
module.exports = {
checkAccess,
generateCheckAccess,
};
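The replacement now lives in @librechat/api and takes an options object with getRoleByName injected, which keeps the package free of direct model imports. Call sites in the agent routes below follow this shape:

const { generateCheckAccess } = require('@librechat/api');
const { PermissionTypes, Permissions } = require('librechat-data-provider');
const { getRoleByName } = require('~/models/Role');

const checkAgentCreate = generateCheckAccess({
  permissionType: PermissionTypes.AGENTS,
  permissions: [Permissions.USE, Permissions.CREATE],
  getRoleByName,
});

// handler is the route's async (req, res) function
router.post('/:agent_id', checkAgentCreate, handler);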

View File

@@ -1,8 +1,5 @@
const checkAdmin = require('./admin');
const { checkAccess, generateCheckAccess } = require('./access');
module.exports = {
checkAdmin,
checkAccess,
generateCheckAccess,
};

View File

@@ -1,5 +1,6 @@
const uap = require('ua-parser-js');
const { handleError } = require('../utils');
const { ViolationTypes } = require('librechat-data-provider');
const { handleError } = require('@librechat/api');
const { logViolation } = require('../../cache');
/**
@@ -21,7 +22,7 @@ async function uaParser(req, res, next) {
const ua = uap(req.headers['user-agent']);
if (!ua.browser.name) {
const type = 'non_browser';
const type = ViolationTypes.NON_BROWSER;
await logViolation(req, res, type, { type }, score);
return handleError(res, { message: 'Illegal request' });
}

View File

@@ -1,4 +1,4 @@
const { handleError } = require('../utils');
const { handleError } = require('@librechat/api');
function validateEndpoint(req, res, next) {
const { endpoint: _endpoint, endpointType } = req.body;

View File

@@ -1,6 +1,6 @@
const { handleError } = require('@librechat/api');
const { ViolationTypes } = require('librechat-data-provider');
const { getModelsConfig } = require('~/server/controllers/ModelController');
const { handleError } = require('~/server/utils');
const { logViolation } = require('~/cache');
/**
* Validates the model of the request.

View File

@@ -0,0 +1,162 @@
const fs = require('fs');
const path = require('path');
const express = require('express');
const request = require('supertest');
const zlib = require('zlib');
// Create test setup
const mockTestDir = path.join(__dirname, 'test-static-route');
// Mock the paths module to point to our test directory
jest.mock('~/config/paths', () => ({
imageOutput: mockTestDir,
}));
describe('Static Route Integration', () => {
let app;
let staticRoute;
let testDir;
let testImagePath;
beforeAll(() => {
// Create a test directory and files
testDir = mockTestDir;
testImagePath = path.join(testDir, 'test-image.jpg');
if (!fs.existsSync(testDir)) {
fs.mkdirSync(testDir, { recursive: true });
}
// Create a test image file
fs.writeFileSync(testImagePath, 'fake-image-data');
// Create a gzipped version of the test image (for gzip scanning tests)
fs.writeFileSync(testImagePath + '.gz', zlib.gzipSync('fake-image-data'));
});
afterAll(() => {
// Clean up test files
if (fs.existsSync(testDir)) {
fs.rmSync(testDir, { recursive: true, force: true });
}
});
// Helper function to set up static route with specific config
const setupStaticRoute = (skipGzipScan = false) => {
if (skipGzipScan) {
delete process.env.ENABLE_IMAGE_OUTPUT_GZIP_SCAN;
} else {
process.env.ENABLE_IMAGE_OUTPUT_GZIP_SCAN = 'true';
}
staticRoute = require('../static');
app.use('/images', staticRoute);
};
beforeEach(() => {
// Clear the module cache to get fresh imports
jest.resetModules();
app = express();
// Clear environment variables
delete process.env.ENABLE_IMAGE_OUTPUT_GZIP_SCAN;
delete process.env.NODE_ENV;
});
describe('route functionality', () => {
it('should serve static image files', async () => {
process.env.NODE_ENV = 'production';
setupStaticRoute();
const response = await request(app).get('/images/test-image.jpg').expect(200);
expect(response.body.toString()).toBe('fake-image-data');
});
it('should return 404 for non-existent files', async () => {
setupStaticRoute();
const response = await request(app).get('/images/nonexistent.jpg');
expect(response.status).toBe(404);
});
});
describe('cache behavior', () => {
it('should set cache headers for images in production', async () => {
process.env.NODE_ENV = 'production';
setupStaticRoute();
const response = await request(app).get('/images/test-image.jpg').expect(200);
expect(response.headers['cache-control']).toBe('public, max-age=172800, s-maxage=86400');
});
it('should not set cache headers in development', async () => {
process.env.NODE_ENV = 'development';
setupStaticRoute();
const response = await request(app).get('/images/test-image.jpg').expect(200);
// Our middleware should not set the production cache-control header in development
expect(response.headers['cache-control']).not.toBe('public, max-age=172800, s-maxage=86400');
});
});
describe('gzip compression behavior', () => {
beforeEach(() => {
process.env.NODE_ENV = 'production';
});
it('should serve gzipped files when gzip scanning is enabled', async () => {
setupStaticRoute(false); // Enable gzip scanning
const response = await request(app)
.get('/images/test-image.jpg')
.set('Accept-Encoding', 'gzip')
.expect(200);
expect(response.headers['content-encoding']).toBe('gzip');
expect(response.body.toString()).toBe('fake-image-data');
});
it('should not serve gzipped files when gzip scanning is disabled', async () => {
setupStaticRoute(true); // Disable gzip scanning
const response = await request(app)
.get('/images/test-image.jpg')
.set('Accept-Encoding', 'gzip')
.expect(200);
expect(response.headers['content-encoding']).toBeUndefined();
expect(response.body.toString()).toBe('fake-image-data');
});
});
describe('path configuration', () => {
it('should use the configured imageOutput path', async () => {
setupStaticRoute();
const response = await request(app).get('/images/test-image.jpg').expect(200);
expect(response.body.toString()).toBe('fake-image-data');
});
it('should serve from subdirectories', async () => {
// Create a subdirectory with a file
const subDir = path.join(testDir, 'thumbs');
fs.mkdirSync(subDir, { recursive: true });
const thumbPath = path.join(subDir, 'thumb.jpg');
fs.writeFileSync(thumbPath, 'thumbnail-data');
setupStaticRoute();
const response = await request(app).get('/images/thumbs/thumb.jpg').expect(200);
expect(response.body.toString()).toBe('thumbnail-data');
// Clean up
fs.rmSync(subDir, { recursive: true, force: true });
});
});
});
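The behavior exercised above is consistent with a route of roughly this shape — a sketch only; paths.imageOutput is the mocked config, and the real implementation may differ:

const fs = require('fs');
const path = require('path');
const express = require('express');
const paths = require('~/config/paths');

const router = express.Router();

// Optional gzip scan: prefer a precompressed `.gz` sibling when the client
// accepts gzip and the sibling exists on disk.
if (process.env.ENABLE_IMAGE_OUTPUT_GZIP_SCAN === 'true') {
  router.get('/*', (req, res, next) => {
    const gzPath = path.join(paths.imageOutput, `${req.path}.gz`);
    if (req.acceptsEncodings('gzip') && fs.existsSync(gzPath)) {
      res.set('Content-Encoding', 'gzip');
      res.type(path.extname(req.path));
      return res.sendFile(gzPath);
    }
    return next();
  });
}

// Long-lived cache headers only in production, matching the tests above.
router.use(
  express.static(paths.imageOutput, {
    setHeaders: (res) => {
      if (process.env.NODE_ENV === 'production') {
        res.set('Cache-Control', 'public, max-age=172800, s-maxage=86400');
      }
    },
  }),
);

module.exports = router;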

View File

@@ -1,14 +1,28 @@
const express = require('express');
const { nanoid } = require('nanoid');
const { actionDelimiter, SystemRoles, removeNullishValues } = require('librechat-data-provider');
const { logger } = require('@librechat/data-schemas');
const { generateCheckAccess } = require('@librechat/api');
const {
SystemRoles,
Permissions,
PermissionTypes,
actionDelimiter,
removeNullishValues,
} = require('librechat-data-provider');
const { encryptMetadata, domainParser } = require('~/server/services/ActionService');
const { updateAction, getActions, deleteAction } = require('~/models/Action');
const { isActionDomainAllowed } = require('~/server/services/domains');
const { getAgent, updateAgent } = require('~/models/Agent');
const { logger } = require('~/config');
const { getRoleByName } = require('~/models/Role');
const router = express.Router();
const checkAgentCreate = generateCheckAccess({
permissionType: PermissionTypes.AGENTS,
permissions: [Permissions.USE, Permissions.CREATE],
getRoleByName,
});
// If the user has the ADMIN role,
// then editing actions is possible even if they are not the owner of the assistant
const isAdmin = (req) => {
@@ -41,7 +55,7 @@ router.get('/', async (req, res) => {
* @param {ActionMetadata} req.body.metadata - Metadata for the action.
* @returns {Object} 200 - success response - application/json
*/
router.post('/:agent_id', async (req, res) => {
router.post('/:agent_id', checkAgentCreate, async (req, res) => {
try {
const { agent_id } = req.params;
@@ -149,7 +163,7 @@ router.post('/:agent_id', async (req, res) => {
* @param {string} req.params.action_id - The ID of the action to delete.
* @returns {Object} 200 - success response - application/json
*/
router.delete('/:agent_id/:action_id', async (req, res) => {
router.delete('/:agent_id/:action_id', checkAgentCreate, async (req, res) => {
try {
const { agent_id, action_id } = req.params;
const admin = isAdmin(req);

View File

@@ -1,22 +1,28 @@
const express = require('express');
const { generateCheckAccess, skipAgentCheck } = require('@librechat/api');
const { PermissionTypes, Permissions } = require('librechat-data-provider');
const {
setHeaders,
moderateText,
// validateModel,
generateCheckAccess,
validateConvoAccess,
buildEndpointOption,
} = require('~/server/middleware');
const { initializeClient } = require('~/server/services/Endpoints/agents');
const AgentController = require('~/server/controllers/agents/request');
const addTitle = require('~/server/services/Endpoints/agents/title');
const { getRoleByName } = require('~/models/Role');
const router = express.Router();
router.use(moderateText);
const checkAgentAccess = generateCheckAccess(PermissionTypes.AGENTS, [Permissions.USE]);
const checkAgentAccess = generateCheckAccess({
permissionType: PermissionTypes.AGENTS,
permissions: [Permissions.USE],
skipCheck: skipAgentCheck,
getRoleByName,
});
router.use(checkAgentAccess);
router.use(validateConvoAccess);

View File

@@ -1,29 +1,36 @@
const express = require('express');
const { generateCheckAccess } = require('@librechat/api');
const { PermissionTypes, Permissions } = require('librechat-data-provider');
const { requireJwtAuth, generateCheckAccess } = require('~/server/middleware');
const { requireJwtAuth } = require('~/server/middleware');
const v1 = require('~/server/controllers/agents/v1');
const { getRoleByName } = require('~/models/Role');
const actions = require('./actions');
const tools = require('./tools');
const router = express.Router();
const avatar = express.Router();
const checkAgentAccess = generateCheckAccess(PermissionTypes.AGENTS, [Permissions.USE]);
const checkAgentCreate = generateCheckAccess(PermissionTypes.AGENTS, [
Permissions.USE,
Permissions.CREATE,
]);
const checkAgentAccess = generateCheckAccess({
permissionType: PermissionTypes.AGENTS,
permissions: [Permissions.USE],
getRoleByName,
});
const checkAgentCreate = generateCheckAccess({
permissionType: PermissionTypes.AGENTS,
permissions: [Permissions.USE, Permissions.CREATE],
getRoleByName,
});
const checkGlobalAgentShare = generateCheckAccess(
PermissionTypes.AGENTS,
[Permissions.USE, Permissions.CREATE],
{
const checkGlobalAgentShare = generateCheckAccess({
permissionType: PermissionTypes.AGENTS,
permissions: [Permissions.USE, Permissions.CREATE],
bodyProps: {
[Permissions.SHARED_GLOBAL]: ['projectIds', 'removeProjectIds'],
},
);
getRoleByName,
});
router.use(requireJwtAuth);
router.use(checkAgentAccess);
/**
* Agent actions route.

View File

@@ -106,6 +106,7 @@ router.get('/', async function (req, res) {
const serverConfig = config.mcpServers[serverName];
payload.mcpServers[serverName] = {
customUserVars: serverConfig?.customUserVars || {},
requiresOAuth: req.app.locals.mcpOAuthRequirements?.[serverName] || false,
};
}
}
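Each server's entry in this route's response now also advertises whether it requires an OAuth flow, so the client can render the right initialization UI. An illustrative fragment (the server name is hypothetical):

// Response fragment (illustrative)
{
  mcpServers: {
    'my-mcp-server': {
      customUserVars: {},
      requiresOAuth: true,
    },
  },
}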

View File

@@ -1,16 +1,17 @@
const multer = require('multer');
const express = require('express');
const { sleep } = require('@librechat/agents');
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { CacheKeys, EModelEndpoint } = require('librechat-data-provider');
const { getConvosByCursor, deleteConvos, getConvo, saveConvo } = require('~/models/Conversation');
const { forkConversation, duplicateConversation } = require('~/server/utils/import/fork');
const { createImportLimiters, createForkLimiters } = require('~/server/middleware');
const { storage, importFileFilter } = require('~/server/routes/files/multer');
const requireJwtAuth = require('~/server/middleware/requireJwtAuth');
const { importConversations } = require('~/server/utils/import');
const { createImportLimiters } = require('~/server/middleware');
const { deleteToolCalls } = require('~/models/ToolCall');
const { isEnabled, sleep } = require('~/server/utils');
const getLogStores = require('~/cache/getLogStores');
const { logger } = require('~/config');
const assistantClients = {
[EModelEndpoint.azureAssistants]: require('~/server/services/Endpoints/azureAssistants'),
@@ -43,6 +44,7 @@ router.get('/', async (req, res) => {
});
res.status(200).json(result);
} catch (error) {
logger.error('Error fetching conversations', error);
res.status(500).json({ error: 'Error fetching conversations' });
}
});
@@ -156,6 +158,7 @@ router.post('/update', async (req, res) => {
});
const { importIpLimiter, importUserLimiter } = createImportLimiters();
const { forkIpLimiter, forkUserLimiter } = createForkLimiters();
const upload = multer({ storage: storage, fileFilter: importFileFilter });
/**
@@ -189,7 +192,7 @@ router.post(
* @param {express.Response<TForkConvoResponse>} res - Express response object.
* @returns {Promise<void>} - The response after forking the conversation.
*/
router.post('/fork', async (req, res) => {
router.post('/fork', forkIpLimiter, forkUserLimiter, async (req, res) => {
try {
/** @type {TForkConvoRequest} */
const { conversationId, messageId, option, splitAtTarget, latestMessageId } = req.body;

View File

@@ -0,0 +1,282 @@
const express = require('express');
const request = require('supertest');
const mongoose = require('mongoose');
const { v4: uuidv4 } = require('uuid');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { GLOBAL_PROJECT_NAME } = require('librechat-data-provider').Constants;
// Mock dependencies
jest.mock('~/server/services/Files/process', () => ({
processDeleteRequest: jest.fn().mockResolvedValue({}),
filterFile: jest.fn(),
processFileUpload: jest.fn(),
processAgentFileUpload: jest.fn(),
}));
jest.mock('~/server/services/Files/strategies', () => ({
getStrategyFunctions: jest.fn(() => ({})),
}));
jest.mock('~/server/controllers/assistants/helpers', () => ({
getOpenAIClient: jest.fn(),
}));
jest.mock('~/server/services/Tools/credentials', () => ({
loadAuthValues: jest.fn(),
}));
jest.mock('~/server/services/Files/S3/crud', () => ({
refreshS3FileUrls: jest.fn(),
}));
jest.mock('~/cache', () => ({
getLogStores: jest.fn(() => ({
get: jest.fn(),
set: jest.fn(),
})),
}));
jest.mock('~/config', () => ({
logger: {
error: jest.fn(),
warn: jest.fn(),
debug: jest.fn(),
},
}));
const { createFile } = require('~/models/File');
const { createAgent } = require('~/models/Agent');
const { getProjectByName } = require('~/models/Project');
// Import the router after mocks
const router = require('./files');
describe('File Routes - Agent Files Endpoint', () => {
let app;
let mongoServer;
let authorId;
let otherUserId;
let agentId;
let fileId1;
let fileId2;
let fileId3;
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
await mongoose.connect(mongoServer.getUri());
// Initialize models
require('~/db/models');
app = express();
app.use(express.json());
// Mock authentication middleware
app.use((req, res, next) => {
req.user = { id: otherUserId || 'default-user' };
req.app = { locals: {} };
next();
});
app.use('/files', router);
});
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
jest.clearAllMocks();
// Clear database
const collections = mongoose.connection.collections;
for (const key in collections) {
await collections[key].deleteMany({});
}
authorId = new mongoose.Types.ObjectId().toString();
otherUserId = new mongoose.Types.ObjectId().toString();
agentId = uuidv4();
fileId1 = uuidv4();
fileId2 = uuidv4();
fileId3 = uuidv4();
// Create files
await createFile({
user: authorId,
file_id: fileId1,
filename: 'agent-file1.txt',
filepath: `/uploads/${authorId}/${fileId1}`,
bytes: 1024,
type: 'text/plain',
});
await createFile({
user: authorId,
file_id: fileId2,
filename: 'agent-file2.txt',
filepath: `/uploads/${authorId}/${fileId2}`,
bytes: 2048,
type: 'text/plain',
});
await createFile({
user: otherUserId,
file_id: fileId3,
filename: 'user-file.txt',
filepath: `/uploads/${otherUserId}/${fileId3}`,
bytes: 512,
type: 'text/plain',
});
// Create an agent with files attached
await createAgent({
id: agentId,
name: 'Test Agent',
author: authorId,
model: 'gpt-4',
provider: 'openai',
isCollaborative: true,
tool_resources: {
file_search: {
file_ids: [fileId1, fileId2],
},
},
});
// Share the agent globally
const globalProject = await getProjectByName(GLOBAL_PROJECT_NAME, '_id');
if (globalProject) {
const { updateAgent } = require('~/models/Agent');
await updateAgent({ id: agentId }, { projectIds: [globalProject._id] });
}
});
describe('GET /files/agent/:agent_id', () => {
it('should return files accessible through the agent for non-author', async () => {
const response = await request(app).get(`/files/agent/${agentId}`);
expect(response.status).toBe(200);
expect(response.body).toHaveLength(2); // Only agent files, not user-owned files
const fileIds = response.body.map((f) => f.file_id);
expect(fileIds).toContain(fileId1);
expect(fileIds).toContain(fileId2);
expect(fileIds).not.toContain(fileId3); // User's own file not included
});
it('should return 400 when agent_id is not provided', async () => {
const response = await request(app).get('/files/agent/');
expect(response.status).toBe(404); // Express returns 404 for missing route parameter
});
it('should return empty array for non-existent agent', async () => {
const response = await request(app).get('/files/agent/non-existent-agent');
expect(response.status).toBe(200);
expect(response.body).toEqual([]); // Empty array for non-existent agent
});
it('should return empty array when agent is not collaborative', async () => {
// Create a non-collaborative agent
const nonCollabAgentId = uuidv4();
await createAgent({
id: nonCollabAgentId,
name: 'Non-Collaborative Agent',
author: authorId,
model: 'gpt-4',
provider: 'openai',
isCollaborative: false,
tool_resources: {
file_search: {
file_ids: [fileId1],
},
},
});
// Share it globally
const globalProject = await getProjectByName(GLOBAL_PROJECT_NAME, '_id');
if (globalProject) {
const { updateAgent } = require('~/models/Agent');
await updateAgent({ id: nonCollabAgentId }, { projectIds: [globalProject._id] });
}
const response = await request(app).get(`/files/agent/${nonCollabAgentId}`);
expect(response.status).toBe(200);
expect(response.body).toEqual([]); // Empty array when not collaborative
});
it('should return agent files for agent author', async () => {
// Create a new app instance with author authentication
const authorApp = express();
authorApp.use(express.json());
authorApp.use((req, res, next) => {
req.user = { id: authorId };
req.app = { locals: {} };
next();
});
authorApp.use('/files', router);
const response = await request(authorApp).get(`/files/agent/${agentId}`);
expect(response.status).toBe(200);
expect(response.body).toHaveLength(2); // Agent files for author
const fileIds = response.body.map((f) => f.file_id);
expect(fileIds).toContain(fileId1);
expect(fileIds).toContain(fileId2);
expect(fileIds).not.toContain(fileId3); // User's own file not included
});
it('should return files uploaded by other users to shared agent for author', async () => {
// Create a file uploaded by another user
const otherUserFileId = uuidv4();
const anotherUserId = new mongoose.Types.ObjectId().toString();
await createFile({
user: anotherUserId,
file_id: otherUserFileId,
filename: 'other-user-file.txt',
filepath: `/uploads/${anotherUserId}/${otherUserFileId}`,
bytes: 4096,
type: 'text/plain',
});
// Update agent to include the file uploaded by another user
const { updateAgent } = require('~/models/Agent');
await updateAgent(
{ id: agentId },
{
tool_resources: {
file_search: {
file_ids: [fileId1, fileId2, otherUserFileId],
},
},
},
);
// Create app instance with author authentication
const authorApp = express();
authorApp.use(express.json());
authorApp.use((req, res, next) => {
req.user = { id: authorId };
req.app = { locals: {} };
next();
});
authorApp.use('/files', router);
const response = await request(authorApp).get(`/files/agent/${agentId}`);
expect(response.status).toBe(200);
expect(response.body).toHaveLength(3); // Including file from another user
const fileIds = response.body.map((f) => f.file_id);
expect(fileIds).toContain(fileId1);
expect(fileIds).toContain(fileId2);
expect(fileIds).toContain(otherUserFileId); // File uploaded by another user
});
});
});

View File

@@ -5,6 +5,7 @@ const {
Time,
isUUID,
CacheKeys,
Constants,
FileSources,
EModelEndpoint,
isAgentsEndpoint,
@@ -16,11 +17,12 @@ const {
processDeleteRequest,
processAgentFileUpload,
} = require('~/server/services/Files/process');
const { getFiles, batchUpdateFiles, hasAccessToFilesViaAgent } = require('~/models/File');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { getOpenAIClient } = require('~/server/controllers/assistants/helpers');
const { loadAuthValues } = require('~/server/services/Tools/credentials');
const { refreshS3FileUrls } = require('~/server/services/Files/S3/crud');
const { getFiles, batchUpdateFiles } = require('~/models/File');
const { getProjectByName } = require('~/models/Project');
const { getAssistant } = require('~/models/Assistant');
const { getAgent } = require('~/models/Agent');
const { getLogStores } = require('~/cache');
@@ -50,6 +52,68 @@ router.get('/', async (req, res) => {
}
});
/**
* Get files specific to an agent
* @route GET /files/agent/:agent_id
* @param {string} agent_id - The agent ID to get files for
* @returns {Promise<TFile[]>} Array of files attached to the agent
*/
router.get('/agent/:agent_id', async (req, res) => {
try {
const { agent_id } = req.params;
const userId = req.user.id;
if (!agent_id) {
return res.status(400).json({ error: 'Agent ID is required' });
}
// Get the agent to check ownership and attached files
const agent = await getAgent({ id: agent_id });
if (!agent) {
// No agent found, return empty array
return res.status(200).json([]);
}
// Check if user has access to the agent
if (agent.author.toString() !== userId) {
// Non-authors need the agent to be globally shared and collaborative
const globalProject = await getProjectByName(Constants.GLOBAL_PROJECT_NAME, '_id');
if (
!globalProject ||
!agent.projectIds.some((pid) => pid.toString() === globalProject._id.toString()) ||
!agent.isCollaborative
) {
return res.status(200).json([]);
}
}
// Collect all file IDs from agent's tool resources
const agentFileIds = [];
if (agent.tool_resources) {
for (const [, resource] of Object.entries(agent.tool_resources)) {
if (resource?.file_ids && Array.isArray(resource.file_ids)) {
agentFileIds.push(...resource.file_ids);
}
}
}
// If no files attached to agent, return empty array
if (agentFileIds.length === 0) {
return res.status(200).json([]);
}
// Get only the files attached to this agent
const files = await getFiles({ file_id: { $in: agentFileIds } }, null, { text: 0 });
res.status(200).json(files);
} catch (error) {
logger.error('[/files/agent/:agent_id] Error fetching agent files:', error);
res.status(500).json({ error: 'Failed to fetch agent files' });
}
});
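// Illustrative usage (agent id invented): GET /files/agent/agent_123 -> 200 with the agent's files,
// or 200 [] when the agent is missing, not shared to the global project, or not collaborative.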
router.get('/config', async (req, res) => {
try {
res.status(200).json(req.app.locals.fileConfig);
@@ -86,11 +150,62 @@ router.delete('/', async (req, res) => {
const fileIds = files.map((file) => file.file_id);
const dbFiles = await getFiles({ file_id: { $in: fileIds } });
const unauthorizedFiles = dbFiles.filter((file) => file.user.toString() !== req.user.id);
const ownedFiles = [];
const nonOwnedFiles = [];
const fileMap = new Map();
for (const file of dbFiles) {
fileMap.set(file.file_id, file);
if (file.user.toString() === req.user.id) {
ownedFiles.push(file);
} else {
nonOwnedFiles.push(file);
}
}
// If all files are owned by the user, no need for further checks
if (nonOwnedFiles.length === 0) {
await processDeleteRequest({ req, files: ownedFiles });
logger.debug(
`[/files] Files deleted successfully: ${ownedFiles
.filter((f) => f.file_id)
.map((f) => f.file_id)
.join(', ')}`,
);
res.status(200).json({ message: 'Files deleted successfully' });
return;
}
// Check access for non-owned files
let authorizedFiles = [...ownedFiles];
let unauthorizedFiles = [];
if (req.body.agent_id && nonOwnedFiles.length > 0) {
// Batch check access for all non-owned files
const nonOwnedFileIds = nonOwnedFiles.map((f) => f.file_id);
const accessMap = await hasAccessToFilesViaAgent(
req.user.id,
nonOwnedFileIds,
req.body.agent_id,
);
// Separate authorized and unauthorized files
for (const file of nonOwnedFiles) {
if (accessMap.get(file.file_id)) {
authorizedFiles.push(file);
} else {
unauthorizedFiles.push(file);
}
}
} else {
// No agent context, all non-owned files are unauthorized
unauthorizedFiles = nonOwnedFiles;
}
if (unauthorizedFiles.length > 0) {
return res.status(403).json({
message: 'You can only delete your own files',
message: 'You can only delete files you have access to',
unauthorizedFiles: unauthorizedFiles.map((f) => f.file_id),
});
}
@@ -131,10 +246,10 @@ router.delete('/', async (req, res) => {
.json({ message: 'File associations removed successfully from Azure Assistant' });
}
await processDeleteRequest({ req, files: dbFiles });
await processDeleteRequest({ req, files: authorizedFiles });
logger.debug(
`[/files] Files deleted successfully: ${files
`[/files] Files deleted successfully: ${authorizedFiles
.filter((f) => f.file_id)
.map((f) => f.file_id)
.join(', ')}`,
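Taken together, the delete route now honors agent-mediated access. A minimal client-side sketch, assuming the router is mounted at /files with cookie-based auth; the ids are invented:

await fetch('/files', {
  method: 'DELETE',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    agent_id: 'agent_123', // grants access to non-owned files shared via this collaborative agent
    files: [{ file_id: 'uuid-1', filepath: '/uploads/user/uuid-1' }],
  }),
});
// Any file that is neither owned nor reachable through the agent yields a 403
// listing the offending ids in `unauthorizedFiles`.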

View File

@@ -0,0 +1,302 @@
const express = require('express');
const request = require('supertest');
const mongoose = require('mongoose');
const { v4: uuidv4 } = require('uuid');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { GLOBAL_PROJECT_NAME } = require('librechat-data-provider').Constants;
// Mock dependencies
jest.mock('~/server/services/Files/process', () => ({
processDeleteRequest: jest.fn().mockResolvedValue({}),
filterFile: jest.fn(),
processFileUpload: jest.fn(),
processAgentFileUpload: jest.fn(),
}));
jest.mock('~/server/services/Files/strategies', () => ({
getStrategyFunctions: jest.fn(() => ({})),
}));
jest.mock('~/server/controllers/assistants/helpers', () => ({
getOpenAIClient: jest.fn(),
}));
jest.mock('~/server/services/Tools/credentials', () => ({
loadAuthValues: jest.fn(),
}));
jest.mock('~/server/services/Files/S3/crud', () => ({
refreshS3FileUrls: jest.fn(),
}));
jest.mock('~/cache', () => ({
getLogStores: jest.fn(() => ({
get: jest.fn(),
set: jest.fn(),
})),
}));
jest.mock('~/config', () => ({
logger: {
error: jest.fn(),
warn: jest.fn(),
debug: jest.fn(),
},
}));
const { createFile } = require('~/models/File');
const { createAgent } = require('~/models/Agent');
const { getProjectByName } = require('~/models/Project');
const { processDeleteRequest } = require('~/server/services/Files/process');
// Import the router after mocks
const router = require('./files');
describe('File Routes - Delete with Agent Access', () => {
let app;
let mongoServer;
let authorId;
let otherUserId;
let agentId;
let fileId;
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
await mongoose.connect(mongoServer.getUri());
// Initialize models
require('~/db/models');
app = express();
app.use(express.json());
// Mock authentication middleware
app.use((req, res, next) => {
req.user = { id: otherUserId || 'default-user' };
req.app = { locals: {} };
next();
});
app.use('/files', router);
});
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
jest.clearAllMocks();
// Clear database
const collections = mongoose.connection.collections;
for (const key in collections) {
await collections[key].deleteMany({});
}
authorId = new mongoose.Types.ObjectId().toString();
otherUserId = new mongoose.Types.ObjectId().toString();
fileId = uuidv4();
// Create a file owned by the author
await createFile({
user: authorId,
file_id: fileId,
filename: 'test.txt',
filepath: `/uploads/${authorId}/${fileId}`,
bytes: 1024,
type: 'text/plain',
});
// Create an agent with the file attached
const agent = await createAgent({
id: uuidv4(),
name: 'Test Agent',
author: authorId,
model: 'gpt-4',
provider: 'openai',
isCollaborative: true,
tool_resources: {
file_search: {
file_ids: [fileId],
},
},
});
agentId = agent.id;
// Share the agent globally
const globalProject = await getProjectByName(GLOBAL_PROJECT_NAME, '_id');
if (globalProject) {
const { updateAgent } = require('~/models/Agent');
await updateAgent({ id: agentId }, { projectIds: [globalProject._id] });
}
});
describe('DELETE /files', () => {
it('should allow deleting files owned by the user', async () => {
// Create a file owned by the current user
const userFileId = uuidv4();
await createFile({
user: otherUserId,
file_id: userFileId,
filename: 'user-file.txt',
filepath: `/uploads/${otherUserId}/${userFileId}`,
bytes: 1024,
type: 'text/plain',
});
const response = await request(app)
.delete('/files')
.send({
files: [
{
file_id: userFileId,
filepath: `/uploads/${otherUserId}/${userFileId}`,
},
],
});
expect(response.status).toBe(200);
expect(response.body.message).toBe('Files deleted successfully');
expect(processDeleteRequest).toHaveBeenCalled();
});
it('should prevent deleting files not owned by user without agent context', async () => {
const response = await request(app)
.delete('/files')
.send({
files: [
{
file_id: fileId,
filepath: `/uploads/${authorId}/${fileId}`,
},
],
});
expect(response.status).toBe(403);
expect(response.body.message).toBe('You can only delete files you have access to');
expect(response.body.unauthorizedFiles).toContain(fileId);
expect(processDeleteRequest).not.toHaveBeenCalled();
});
it('should allow deleting files accessible through shared agent', async () => {
const response = await request(app)
.delete('/files')
.send({
agent_id: agentId,
files: [
{
file_id: fileId,
filepath: `/uploads/${authorId}/${fileId}`,
},
],
});
expect(response.status).toBe(200);
expect(response.body.message).toBe('Files deleted successfully');
expect(processDeleteRequest).toHaveBeenCalled();
});
it('should prevent deleting files not attached to the specified agent', async () => {
// Create another file not attached to the agent
const unattachedFileId = uuidv4();
await createFile({
user: authorId,
file_id: unattachedFileId,
filename: 'unattached.txt',
filepath: `/uploads/${authorId}/${unattachedFileId}`,
bytes: 1024,
type: 'text/plain',
});
const response = await request(app)
.delete('/files')
.send({
agent_id: agentId,
files: [
{
file_id: unattachedFileId,
filepath: `/uploads/${authorId}/${unattachedFileId}`,
},
],
});
expect(response.status).toBe(403);
expect(response.body.message).toBe('You can only delete files you have access to');
expect(response.body.unauthorizedFiles).toContain(unattachedFileId);
});
it('should handle mixed authorized and unauthorized files', async () => {
// Create a file owned by the current user
const userFileId = uuidv4();
await createFile({
user: otherUserId,
file_id: userFileId,
filename: 'user-file.txt',
filepath: `/uploads/${otherUserId}/${userFileId}`,
bytes: 1024,
type: 'text/plain',
});
// Create an unauthorized file
const unauthorizedFileId = uuidv4();
await createFile({
user: authorId,
file_id: unauthorizedFileId,
filename: 'unauthorized.txt',
filepath: `/uploads/${authorId}/${unauthorizedFileId}`,
bytes: 1024,
type: 'text/plain',
});
const response = await request(app)
.delete('/files')
.send({
agent_id: agentId,
files: [
{
file_id: fileId, // Authorized through agent
filepath: `/uploads/${authorId}/${fileId}`,
},
{
file_id: userFileId, // Owned by user
filepath: `/uploads/${otherUserId}/${userFileId}`,
},
{
file_id: unauthorizedFileId, // Not authorized
filepath: `/uploads/${authorId}/${unauthorizedFileId}`,
},
],
});
expect(response.status).toBe(403);
expect(response.body.message).toBe('You can only delete files you have access to');
expect(response.body.unauthorizedFiles).toContain(unauthorizedFileId);
expect(response.body.unauthorizedFiles).not.toContain(fileId);
expect(response.body.unauthorizedFiles).not.toContain(userFileId);
});
it('should prevent deleting files when agent is not collaborative', async () => {
// Update the agent to be non-collaborative
const { updateAgent } = require('~/models/Agent');
await updateAgent({ id: agentId }, { isCollaborative: false });
const response = await request(app)
.delete('/files')
.send({
agent_id: agentId,
files: [
{
file_id: fileId,
filepath: `/uploads/${authorId}/${fileId}`,
},
],
});
expect(response.status).toBe(403);
expect(response.body.message).toBe('You can only delete files you have access to');
expect(response.body.unauthorizedFiles).toContain(fileId);
expect(processDeleteRequest).not.toHaveBeenCalled();
});
});
});

View File

@@ -477,7 +477,9 @@ describe('Multer Configuration', () => {
done(new Error('Expected mkdirSync to throw an error but no error was thrown'));
} catch (error) {
// This is the expected behavior - mkdirSync throws synchronously for invalid paths
expect(error.code).toBe('EACCES');
// On Linux, this typically returns EACCES (permission denied)
// On macOS/Darwin, this returns ENOENT (no such file or directory)
expect(['EACCES', 'ENOENT']).toContain(error.code);
done();
}
});

View File

@@ -1,13 +1,18 @@
const { Router } = require('express');
const { MCPOAuthHandler } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { CacheKeys } = require('librechat-data-provider');
const { MCPOAuthHandler } = require('@librechat/api');
const { CacheKeys, Constants } = require('librechat-data-provider');
const { findToken, updateToken, createToken, deleteTokens } = require('~/models');
const { setCachedTools, getCachedTools, loadCustomConfig } = require('~/server/services/Config');
const { getUserPluginAuthValueByPlugin } = require('~/server/services/PluginService');
const { getMCPManager, getFlowStateManager } = require('~/config');
const { requireJwtAuth } = require('~/server/middleware');
const { getFlowStateManager } = require('~/config');
const { getLogStores } = require('~/cache');
const router = Router();
const suppressLogging = true;
/**
* Initiate OAuth flow
* This endpoint is called when the user clicks the auth link in the UI
@@ -202,4 +207,472 @@ router.get('/oauth/status/:flowId', async (req, res) => {
}
});
/**
* Get connection status for all MCP servers
* This endpoint returns the actual connection status from MCPManager
*/
router.get('/connection/status', requireJwtAuth, async (req, res) => {
try {
const user = req.user;
if (!user?.id) {
return res.status(401).json({ error: 'User not authenticated' });
}
const mcpManager = getMCPManager(null, true); // Skip idle checks to avoid ping spam
const connectionStatus = {};
// Get all MCP server names from custom config
const config = await loadCustomConfig(suppressLogging);
const mcpConfig = config?.mcpServers;
if (mcpConfig) {
for (const [serverName, config] of Object.entries(mcpConfig)) {
try {
// Check if this is an app-level connection (exists in mcpManager.connections)
const appConnection = mcpManager.getConnection(serverName);
const hasAppConnection = !!appConnection;
// Check if this is a user-level connection (exists in mcpManager.userConnections)
const userConnection = mcpManager.getUserConnectionIfExists(user.id, serverName);
const hasUserConnection = !!userConnection;
// Use lightweight connection state check instead of ping-based isConnected()
let connected = false;
if (hasAppConnection) {
// Check connection state without ping to avoid rate limits
connected = appConnection.connectionState === 'connected';
} else if (hasUserConnection) {
// Check connection state without ping to avoid rate limits
connected = userConnection.connectionState === 'connected';
}
// Determine if this server requires user authentication
const hasAuthConfig =
config.customUserVars && Object.keys(config.customUserVars).length > 0;
const requiresOAuth = req.app.locals.mcpOAuthRequirements?.[serverName] || false;
connectionStatus[serverName] = {
connected,
hasAuthConfig,
hasConnection: hasAppConnection || hasUserConnection,
isAppLevel: hasAppConnection,
isUserLevel: hasUserConnection,
requiresOAuth,
};
} catch (error) {
logger.error(
`[MCP Connection Status] Error checking connection for ${serverName}:`,
error,
);
connectionStatus[serverName] = {
connected: false,
hasAuthConfig: config.customUserVars && Object.keys(config.customUserVars).length > 0,
hasConnection: false,
isAppLevel: false,
isUserLevel: false,
requiresOAuth: req.app.locals.mcpOAuthRequirements?.[serverName] || false,
error: error.message,
};
}
}
}
res.json({
success: true,
connectionStatus,
});
} catch (error) {
logger.error('[MCP Connection Status] Failed to get connection status', error);
res.status(500).json({ error: 'Failed to get connection status' });
}
});
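/**
 * Illustrative response shape (server name and values invented):
 * {
 *   "success": true,
 *   "connectionStatus": {
 *     "github": { "connected": true, "hasAuthConfig": true, "hasConnection": true,
 *                 "isAppLevel": false, "isUserLevel": true, "requiresOAuth": true }
 *   }
 * }
 */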
/**
* Check which authentication values exist for a specific MCP server
* This endpoint returns only boolean flags indicating if values are set, not the actual values
*/
router.get('/:serverName/auth-values', requireJwtAuth, async (req, res) => {
try {
const { serverName } = req.params;
const user = req.user;
if (!user?.id) {
return res.status(401).json({ error: 'User not authenticated' });
}
const config = await loadCustomConfig(suppressLogging);
if (!config || !config.mcpServers || !config.mcpServers[serverName]) {
return res.status(404).json({
error: `MCP server '${serverName}' not found in configuration`,
});
}
const serverConfig = config.mcpServers[serverName];
const pluginKey = `${Constants.mcp_prefix}${serverName}`;
const authValueFlags = {};
// Check existence of saved values for each custom user variable (don't fetch actual values)
if (serverConfig.customUserVars && typeof serverConfig.customUserVars === 'object') {
for (const varName of Object.keys(serverConfig.customUserVars)) {
try {
const value = await getUserPluginAuthValueByPlugin(user.id, varName, pluginKey, false);
// Only store boolean flag indicating if value exists
authValueFlags[varName] = !!(value && value.length > 0);
} catch (err) {
logger.error(
`[MCP Auth Value Flags] Error checking ${varName} for user ${user.id}:`,
err,
);
// Default to false if we can't check
authValueFlags[varName] = false;
}
}
}
res.json({
success: true,
serverName,
authValueFlags,
});
} catch (error) {
logger.error(
`[MCP Auth Value Flags] Failed to check auth value flags for ${req.params.serverName}`,
error,
);
res.status(500).json({ error: 'Failed to check auth value flags' });
}
});
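/**
 * Illustrative response shape (server and variable names invented):
 * { "success": true, "serverName": "github", "authValueFlags": { "API_KEY": true, "API_TOKEN": false } }
 */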
/**
* Check if a specific MCP server requires OAuth
* This endpoint checks if a specific MCP server requires OAuth authentication
*/
router.get('/:serverName/oauth/required', requireJwtAuth, async (req, res) => {
try {
const { serverName } = req.params;
const user = req.user;
if (!user?.id) {
return res.status(401).json({ error: 'User not authenticated' });
}
const mcpManager = getMCPManager();
const requiresOAuth = await mcpManager.isOAuthRequired(serverName);
res.json({
success: true,
serverName,
requiresOAuth,
});
} catch (error) {
logger.error(
`[MCP OAuth Required] Failed to check OAuth requirement for ${req.params.serverName}`,
error,
);
res.status(500).json({ error: 'Failed to check OAuth requirement' });
}
});
/**
* Complete MCP server reinitialization after OAuth
* This endpoint completes the reinitialization process after OAuth authentication
*/
router.post('/:serverName/reinitialize/complete', requireJwtAuth, async (req, res) => {
let responseSent = false;
try {
const { serverName } = req.params;
const user = req.user;
if (!user?.id) {
responseSent = true;
return res.status(401).json({ error: 'User not authenticated' });
}
logger.info(`[MCP Complete Reinitialize] Starting completion for ${serverName}`);
const mcpManager = getMCPManager();
// Wait for the connection to be established via an event-driven approach
const userConnection = await new Promise((resolve, reject) => {
// Set a reasonable timeout (10 seconds)
const timeout = setTimeout(() => {
mcpManager.removeListener('connectionEstablished', connectionHandler);
reject(new Error('Timeout waiting for connection establishment'));
}, 10000);
const connectionHandler = ({
userId: eventUserId,
serverName: eventServerName,
connection,
}) => {
if (eventUserId === user.id && eventServerName === serverName) {
clearTimeout(timeout);
mcpManager.removeListener('connectionEstablished', connectionHandler);
resolve(connection);
}
};
// Check if connection already exists
const existingConnection = mcpManager.getUserConnectionIfExists(user.id, serverName);
if (existingConnection) {
clearTimeout(timeout);
resolve(existingConnection);
return;
}
// Listen for the connection establishment event
mcpManager.on('connectionEstablished', connectionHandler);
});
if (!userConnection) {
responseSent = true;
return res.status(404).json({ error: 'User connection not found' });
}
const userTools = (await getCachedTools({ userId: user.id })) || {};
// Remove any old tools from this server in the user's cache
const mcpDelimiter = Constants.mcp_delimiter;
for (const key of Object.keys(userTools)) {
if (key.endsWith(`${mcpDelimiter}${serverName}`)) {
delete userTools[key];
}
}
// Add the new tools from this server
const tools = await userConnection.fetchTools();
for (const tool of tools) {
const name = `${tool.name}${Constants.mcp_delimiter}${serverName}`;
userTools[name] = {
type: 'function',
['function']: {
name,
description: tool.description,
parameters: tool.inputSchema,
},
};
}
// Save the updated user tool cache
await setCachedTools(userTools, { userId: user.id });
responseSent = true;
res.json({
success: true,
message: `MCP server '${serverName}' reinitialized successfully`,
serverName,
});
} catch (error) {
logger.error(
`[MCP Complete Reinitialize] Error completing reinitialization for ${req.params.serverName}:`,
error,
);
if (!responseSent) {
res.status(500).json({
success: false,
message: 'Failed to complete MCP server reinitialization',
serverName: req.params.serverName,
});
}
}
});
/**
* Reinitialize MCP server
* This endpoint allows reinitializing a specific MCP server
*/
router.post('/:serverName/reinitialize', requireJwtAuth, async (req, res) => {
let responseSent = false;
try {
const { serverName } = req.params;
const user = req.user;
if (!user?.id) {
responseSent = true;
return res.status(401).json({ error: 'User not authenticated' });
}
logger.info(`[MCP Reinitialize] Reinitializing server: ${serverName}`);
const config = await loadCustomConfig(suppressLogging);
if (!config || !config.mcpServers || !config.mcpServers[serverName]) {
responseSent = true;
return res.status(404).json({
error: `MCP server '${serverName}' not found in configuration`,
});
}
const flowsCache = getLogStores(CacheKeys.FLOWS);
const flowManager = getFlowStateManager(flowsCache);
const mcpManager = getMCPManager();
// Clean up any stale OAuth flows for this server
try {
const flowId = MCPOAuthHandler.generateFlowId(user.id, serverName);
const existingFlow = await flowManager.getFlowState(flowId, 'mcp_oauth');
if (existingFlow && existingFlow.status === 'PENDING') {
logger.info(`[MCP Reinitialize] Cleaning up stale OAuth flow for ${serverName}`);
await flowManager.failFlow(flowId, 'mcp_oauth', new Error('OAuth flow interrupted'));
}
} catch (error) {
logger.warn(
`[MCP Reinitialize] Error cleaning up stale OAuth flow for ${serverName}:`,
error,
);
}
await mcpManager.disconnectServer(serverName);
logger.info(`[MCP Reinitialize] Disconnected existing server: ${serverName}`);
const serverConfig = config.mcpServers[serverName];
mcpManager.mcpConfigs[serverName] = serverConfig;
let customUserVars = {};
if (serverConfig.customUserVars && typeof serverConfig.customUserVars === 'object') {
for (const varName of Object.keys(serverConfig.customUserVars)) {
try {
const pluginKey = `${Constants.mcp_prefix}${serverName}`;
const value = await getUserPluginAuthValueByPlugin(user.id, varName, pluginKey, false);
if (value) {
customUserVars[varName] = value;
}
} catch (err) {
logger.error(`[MCP Reinitialize] Error fetching ${varName} for user ${user.id}:`, err);
}
}
}
let userConnection = null;
try {
userConnection = await mcpManager.getUserConnection({
user,
serverName,
flowManager,
customUserVars,
tokenMethods: {
findToken,
updateToken,
createToken,
deleteTokens,
},
oauthStart: (authURL) => {
// This will be called if OAuth is required
responseSent = true;
logger.info(`[MCP Reinitialize] OAuth required for ${serverName}, auth URL: ${authURL}`);
// Get the flow ID for polling
const flowId = MCPOAuthHandler.generateFlowId(user.id, serverName);
// Return the OAuth response immediately - client will poll for completion
res.json({
success: false,
oauthRequired: true,
authURL,
flowId,
message: `OAuth authentication required for MCP server '${serverName}'`,
serverName,
});
},
oauthEnd: () => {
// This will be called when OAuth flow completes
logger.info(`[MCP Reinitialize] OAuth flow completed for ${serverName}`);
},
});
// If response was already sent for OAuth, don't continue
if (responseSent) {
return;
}
} catch (err) {
logger.error(`[MCP Reinitialize] Error initializing MCP server ${serverName} for user:`, err);
// Check if this is an OAuth error
if (err.message && err.message.includes('OAuth required')) {
// Try to get the OAuth URL from the flow manager
try {
const flowId = MCPOAuthHandler.generateFlowId(user.id, serverName);
const existingFlow = await flowManager.getFlowState(flowId, 'mcp_oauth');
if (existingFlow && existingFlow.metadata) {
const { serverUrl, oauth: oauthConfig } = existingFlow.metadata;
if (serverUrl && oauthConfig) {
const { authorizationUrl: authUrl } = await MCPOAuthHandler.initiateOAuthFlow(
serverName,
serverUrl,
user.id,
oauthConfig,
);
return res.json({
success: false,
oauthRequired: true,
authURL: authUrl,
flowId,
message: `OAuth authentication required for MCP server '${serverName}'`,
serverName,
});
}
}
} catch (oauthErr) {
logger.error(`[MCP Reinitialize] Error getting OAuth URL for ${serverName}:`, oauthErr);
}
responseSent = true;
return res.status(401).json({
success: false,
oauthRequired: true,
message: `OAuth authentication required for MCP server '${serverName}'`,
serverName,
});
}
responseSent = true;
return res.status(500).json({ error: 'Failed to reinitialize MCP server for user' });
}
const userTools = (await getCachedTools({ userId: user.id })) || {};
// Remove any old tools from this server in the user's cache
const mcpDelimiter = Constants.mcp_delimiter;
for (const key of Object.keys(userTools)) {
if (key.endsWith(`${mcpDelimiter}${serverName}`)) {
delete userTools[key];
}
}
// Add the new tools from this server
const tools = await userConnection.fetchTools();
for (const tool of tools) {
const name = `${tool.name}${Constants.mcp_delimiter}${serverName}`;
userTools[name] = {
type: 'function',
['function']: {
name,
description: tool.description,
parameters: tool.inputSchema,
},
};
}
// Save the updated user tool cache
await setCachedTools(userTools, { userId: user.id });
responseSent = true;
res.json({
success: true,
message: `MCP server '${serverName}' reinitialized successfully`,
serverName,
});
} catch (error) {
logger.error('[MCP Reinitialize] Unexpected error', error);
if (!responseSent) {
res.status(500).json({ error: 'Internal server error' });
}
}
});
module.exports = router;
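
End to end, a hedged sketch of the client flow these endpoints enable (the /api/mcp mount path, poll interval, and the oauth/status payload shape are assumptions; the endpoint paths come from this router):

async function reinitializeMCPServer(serverName) {
  let res = await fetch(`/api/mcp/${serverName}/reinitialize`, { method: 'POST' });
  let data = await res.json();
  if (!data.oauthRequired) {
    return data; // { success: true, message: "... reinitialized successfully", serverName }
  }
  window.open(data.authURL, '_blank'); // user completes OAuth in a separate tab
  let flow;
  do {
    await new Promise((r) => setTimeout(r, 2000)); // poll every 2s (interval assumed)
    flow = await (await fetch(`/api/mcp/oauth/status/${data.flowId}`)).json();
  } while (flow.status === 'PENDING');
  // once the flow completes, rebuild the user's tool cache for this server
  res = await fetch(`/api/mcp/${serverName}/reinitialize/complete`, { method: 'POST' });
  return res.json();
}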

View File

@@ -1,37 +1,43 @@
const express = require('express');
const { Tokenizer } = require('@librechat/api');
const { Tokenizer, generateCheckAccess } = require('@librechat/api');
const { PermissionTypes, Permissions } = require('librechat-data-provider');
const {
getAllUserMemories,
toggleUserMemories,
createMemory,
setMemory,
deleteMemory,
setMemory,
} = require('~/models');
const { requireJwtAuth, generateCheckAccess } = require('~/server/middleware');
const { requireJwtAuth } = require('~/server/middleware');
const { getRoleByName } = require('~/models/Role');
const router = express.Router();
const checkMemoryRead = generateCheckAccess(PermissionTypes.MEMORIES, [
Permissions.USE,
Permissions.READ,
]);
const checkMemoryCreate = generateCheckAccess(PermissionTypes.MEMORIES, [
Permissions.USE,
Permissions.CREATE,
]);
const checkMemoryUpdate = generateCheckAccess(PermissionTypes.MEMORIES, [
Permissions.USE,
Permissions.UPDATE,
]);
const checkMemoryDelete = generateCheckAccess(PermissionTypes.MEMORIES, [
Permissions.USE,
Permissions.UPDATE,
]);
const checkMemoryOptOut = generateCheckAccess(PermissionTypes.MEMORIES, [
Permissions.USE,
Permissions.OPT_OUT,
]);
const checkMemoryRead = generateCheckAccess({
permissionType: PermissionTypes.MEMORIES,
permissions: [Permissions.USE, Permissions.READ],
getRoleByName,
});
const checkMemoryCreate = generateCheckAccess({
permissionType: PermissionTypes.MEMORIES,
permissions: [Permissions.USE, Permissions.CREATE],
getRoleByName,
});
const checkMemoryUpdate = generateCheckAccess({
permissionType: PermissionTypes.MEMORIES,
permissions: [Permissions.USE, Permissions.UPDATE],
getRoleByName,
});
const checkMemoryDelete = generateCheckAccess({
permissionType: PermissionTypes.MEMORIES,
permissions: [Permissions.USE, Permissions.UPDATE],
getRoleByName,
});
const checkMemoryOptOut = generateCheckAccess({
permissionType: PermissionTypes.MEMORIES,
permissions: [Permissions.USE, Permissions.OPT_OUT],
getRoleByName,
});
router.use(requireJwtAuth);
@@ -166,40 +172,68 @@ router.patch('/preferences', checkMemoryOptOut, async (req, res) => {
/**
* PATCH /memories/:key
* Updates the value of an existing memory entry for the authenticated user.
* Body: { value: string }
* Body: { key?: string, value: string }
* Returns 200 and { updated: true, memory: <updatedDoc> } when successful.
*/
router.patch('/:key', checkMemoryUpdate, async (req, res) => {
const { key } = req.params;
const { value } = req.body || {};
const { key: urlKey } = req.params;
const { key: bodyKey, value } = req.body || {};
if (typeof value !== 'string' || value.trim() === '') {
return res.status(400).json({ error: 'Value is required and must be a non-empty string.' });
}
// Use the key from the body if provided, otherwise use the key from the URL
const newKey = bodyKey || urlKey;
try {
const tokenCount = Tokenizer.getTokenCount(value, 'o200k_base');
const memories = await getAllUserMemories(req.user.id);
const existingMemory = memories.find((m) => m.key === key);
const existingMemory = memories.find((m) => m.key === urlKey);
if (!existingMemory) {
return res.status(404).json({ error: 'Memory not found.' });
}
const result = await setMemory({
userId: req.user.id,
key,
value,
tokenCount,
});
// If the key is changing, we need to handle it specially
if (newKey !== urlKey) {
const keyExists = memories.find((m) => m.key === newKey);
if (keyExists) {
return res.status(409).json({ error: 'Memory with this key already exists.' });
}
if (!result.ok) {
return res.status(500).json({ error: 'Failed to update memory.' });
const createResult = await createMemory({
userId: req.user.id,
key: newKey,
value,
tokenCount,
});
if (!createResult.ok) {
return res.status(500).json({ error: 'Failed to create new memory.' });
}
const deleteResult = await deleteMemory({ userId: req.user.id, key: urlKey });
if (!deleteResult.ok) {
return res.status(500).json({ error: 'Failed to delete old memory.' });
}
} else {
// Key is not changing, just update the value
const result = await setMemory({
userId: req.user.id,
key: newKey,
value,
tokenCount,
});
if (!result.ok) {
return res.status(500).json({ error: 'Failed to update memory.' });
}
}
const updatedMemories = await getAllUserMemories(req.user.id);
const updatedMemory = updatedMemories.find((m) => m.key === key);
const updatedMemory = updatedMemories.find((m) => m.key === newKey);
res.json({ updated: true, memory: updatedMemory });
} catch (error) {
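In practice the handler now supports in-place renames. A hedged request example (mount path /api/memories and the keys are invented):

await fetch('/api/memories/diet', {
  method: 'PATCH',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ key: 'nutrition', value: 'Prefers vegetarian meals' }),
});
// 409 if "nutrition" already exists; otherwise a new entry is created under
// "nutrition" and the old "diet" entry is deleted.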

View File

@@ -235,12 +235,13 @@ router.put('/:conversationId/:messageId', validateMessageReq, async (req, res) =
return res.status(400).json({ error: 'Content part not found' });
}
if (updatedContent[index].type !== ContentTypes.TEXT) {
const currentPartType = updatedContent[index].type;
if (currentPartType !== ContentTypes.TEXT && currentPartType !== ContentTypes.THINK) {
return res.status(400).json({ error: 'Cannot update non-text content' });
}
const oldText = updatedContent[index].text;
updatedContent[index] = { type: ContentTypes.TEXT, text };
const oldText = updatedContent[index][currentPartType];
updatedContent[index] = { type: currentPartType, [currentPartType]: text };
let tokenCount = message.tokenCount;
if (tokenCount !== undefined) {
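// Net effect: editing a reasoning part now preserves its type, e.g.
//   before: { type: 'think', think: 'old reasoning' }
//   after:  { type: 'think', think: '<edited text>' }
// (shape assumed from ContentTypes) instead of coercing every edit to a text part.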

View File

@@ -1,5 +1,7 @@
const express = require('express');
const { PermissionTypes, Permissions, SystemRoles } = require('librechat-data-provider');
const { logger } = require('@librechat/data-schemas');
const { generateCheckAccess } = require('@librechat/api');
const { Permissions, SystemRoles, PermissionTypes } = require('librechat-data-provider');
const {
getPrompt,
getPrompts,
@@ -14,24 +16,30 @@ const {
// updatePromptLabels,
makePromptProduction,
} = require('~/models/Prompt');
const { requireJwtAuth, generateCheckAccess } = require('~/server/middleware');
const { logger } = require('~/config');
const { requireJwtAuth } = require('~/server/middleware');
const { getRoleByName } = require('~/models/Role');
const router = express.Router();
const checkPromptAccess = generateCheckAccess(PermissionTypes.PROMPTS, [Permissions.USE]);
const checkPromptCreate = generateCheckAccess(PermissionTypes.PROMPTS, [
Permissions.USE,
Permissions.CREATE,
]);
const checkPromptAccess = generateCheckAccess({
permissionType: PermissionTypes.PROMPTS,
permissions: [Permissions.USE],
getRoleByName,
});
const checkPromptCreate = generateCheckAccess({
permissionType: PermissionTypes.PROMPTS,
permissions: [Permissions.USE, Permissions.CREATE],
getRoleByName,
});
const checkGlobalPromptShare = generateCheckAccess(
PermissionTypes.PROMPTS,
[Permissions.USE, Permissions.CREATE],
{
const checkGlobalPromptShare = generateCheckAccess({
permissionType: PermissionTypes.PROMPTS,
permissions: [Permissions.USE, Permissions.CREATE],
bodyProps: {
[Permissions.SHARED_GLOBAL]: ['projectIds', 'removeProjectIds'],
},
);
getRoleByName,
});
router.use(requireJwtAuth);
router.use(checkPromptAccess);

View File

@@ -1,8 +1,11 @@
const express = require('express');
const staticCache = require('../utils/staticCache');
const paths = require('~/config/paths');
const { isEnabled } = require('~/server/utils');
const skipGzipScan = !isEnabled(process.env.ENABLE_IMAGE_OUTPUT_GZIP_SCAN);
const router = express.Router();
router.use(staticCache(paths.imageOutput));
router.use(staticCache(paths.imageOutput, { skipGzipScan }));
module.exports = router;

View File

@@ -1,18 +1,24 @@
const express = require('express');
const { logger } = require('@librechat/data-schemas');
const { generateCheckAccess } = require('@librechat/api');
const { PermissionTypes, Permissions } = require('librechat-data-provider');
const {
getConversationTags,
updateTagsForConversation,
updateConversationTag,
createConversationTag,
deleteConversationTag,
updateTagsForConversation,
getConversationTags,
} = require('~/models/ConversationTag');
const { requireJwtAuth, generateCheckAccess } = require('~/server/middleware');
const { logger } = require('~/config');
const { requireJwtAuth } = require('~/server/middleware');
const { getRoleByName } = require('~/models/Role');
const router = express.Router();
const checkBookmarkAccess = generateCheckAccess(PermissionTypes.BOOKMARKS, [Permissions.USE]);
const checkBookmarkAccess = generateCheckAccess({
permissionType: PermissionTypes.BOOKMARKS,
permissions: [Permissions.USE],
getRoleByName,
});
router.use(requireJwtAuth);
router.use(checkBookmarkAccess);

View File

@@ -1,12 +1,11 @@
const { agentsConfigSetup, loadWebSearchConfig } = require('@librechat/api');
const {
FileSources,
loadOCRConfig,
EModelEndpoint,
loadMemoryConfig,
getConfigDefaults,
loadWebSearchConfig,
} = require('librechat-data-provider');
const { agentsConfigSetup } = require('@librechat/api');
const {
checkHealth,
checkConfig,
@@ -158,6 +157,10 @@ const AppService = async (app) => {
}
});
if (endpoints?.all) {
endpointLocals.all = endpoints.all;
}
app.locals = {
...defaultLocals,
fileConfig: config?.fileConfig,

View File

@@ -152,12 +152,14 @@ describe('AppService', () => {
filteredTools: undefined,
includedTools: undefined,
webSearch: {
safeSearch: 1,
jinaApiKey: '${JINA_API_KEY}',
cohereApiKey: '${COHERE_API_KEY}',
serperApiKey: '${SERPER_API_KEY}',
searxngApiKey: '${SEARXNG_API_KEY}',
firecrawlApiKey: '${FIRECRAWL_API_KEY}',
firecrawlApiUrl: '${FIRECRAWL_API_URL}',
jinaApiKey: '${JINA_API_KEY}',
safeSearch: 1,
serperApiKey: '${SERPER_API_KEY}',
searxngInstanceUrl: '${SEARXNG_INSTANCE_URL}',
},
memory: undefined,
agents: {
@@ -541,6 +543,206 @@ describe('AppService', () => {
expect(process.env.IMPORT_USER_MAX).toEqual('initialUserMax');
expect(process.env.IMPORT_USER_WINDOW).toEqual('initialUserWindow');
});
it('should correctly configure endpoint with titlePrompt, titleMethod, and titlePromptTemplate', async () => {
require('./Config/loadCustomConfig').mockImplementationOnce(() =>
Promise.resolve({
endpoints: {
[EModelEndpoint.openAI]: {
titleConvo: true,
titleModel: 'gpt-3.5-turbo',
titleMethod: 'structured',
titlePrompt: 'Custom title prompt for conversation',
titlePromptTemplate: 'Summarize this conversation: {{conversation}}',
},
[EModelEndpoint.assistants]: {
titleMethod: 'functions',
titlePrompt: 'Generate a title for this assistant conversation',
titlePromptTemplate: 'Assistant conversation template: {{messages}}',
},
[EModelEndpoint.azureOpenAI]: {
groups: azureGroups,
titleConvo: true,
titleMethod: 'completion',
titleModel: 'gpt-4',
titlePrompt: 'Azure title prompt',
titlePromptTemplate: 'Azure conversation: {{context}}',
},
},
}),
);
await AppService(app);
// Check OpenAI endpoint configuration
expect(app.locals).toHaveProperty(EModelEndpoint.openAI);
expect(app.locals[EModelEndpoint.openAI]).toEqual(
expect.objectContaining({
titleConvo: true,
titleModel: 'gpt-3.5-turbo',
titleMethod: 'structured',
titlePrompt: 'Custom title prompt for conversation',
titlePromptTemplate: 'Summarize this conversation: {{conversation}}',
}),
);
// Check Assistants endpoint configuration
expect(app.locals).toHaveProperty(EModelEndpoint.assistants);
expect(app.locals[EModelEndpoint.assistants]).toMatchObject({
titleMethod: 'functions',
titlePrompt: 'Generate a title for this assistant conversation',
titlePromptTemplate: 'Assistant conversation template: {{messages}}',
});
// Check Azure OpenAI endpoint configuration
expect(app.locals).toHaveProperty(EModelEndpoint.azureOpenAI);
expect(app.locals[EModelEndpoint.azureOpenAI]).toEqual(
expect.objectContaining({
titleConvo: true,
titleMethod: 'completion',
titleModel: 'gpt-4',
titlePrompt: 'Azure title prompt',
titlePromptTemplate: 'Azure conversation: {{context}}',
}),
);
});
it('should configure Agent endpoint with title generation settings', async () => {
require('./Config/loadCustomConfig').mockImplementationOnce(() =>
Promise.resolve({
endpoints: {
[EModelEndpoint.agents]: {
disableBuilder: false,
titleConvo: true,
titleModel: 'gpt-4',
titleMethod: 'structured',
titlePrompt: 'Generate a descriptive title for this agent conversation',
titlePromptTemplate: 'Agent conversation summary: {{content}}',
recursionLimit: 15,
capabilities: [AgentCapabilities.tools, AgentCapabilities.actions],
},
},
}),
);
await AppService(app);
expect(app.locals).toHaveProperty(EModelEndpoint.agents);
expect(app.locals[EModelEndpoint.agents]).toMatchObject({
disableBuilder: false,
titleConvo: true,
titleModel: 'gpt-4',
titleMethod: 'structured',
titlePrompt: 'Generate a descriptive title for this agent conversation',
titlePromptTemplate: 'Agent conversation summary: {{content}}',
recursionLimit: 15,
capabilities: expect.arrayContaining([AgentCapabilities.tools, AgentCapabilities.actions]),
});
});
it('should handle missing title configuration options with defaults', async () => {
require('./Config/loadCustomConfig').mockImplementationOnce(() =>
Promise.resolve({
endpoints: {
[EModelEndpoint.openAI]: {
titleConvo: true,
// titlePrompt and titlePromptTemplate are not provided
},
},
}),
);
await AppService(app);
expect(app.locals).toHaveProperty(EModelEndpoint.openAI);
expect(app.locals[EModelEndpoint.openAI]).toMatchObject({
titleConvo: true,
});
// Check that the optional fields are undefined when not provided
expect(app.locals[EModelEndpoint.openAI].titlePrompt).toBeUndefined();
expect(app.locals[EModelEndpoint.openAI].titlePromptTemplate).toBeUndefined();
expect(app.locals[EModelEndpoint.openAI].titleMethod).toBeUndefined();
});
it('should correctly configure titleEndpoint when specified', async () => {
require('./Config/loadCustomConfig').mockImplementationOnce(() =>
Promise.resolve({
endpoints: {
[EModelEndpoint.openAI]: {
titleConvo: true,
titleModel: 'gpt-3.5-turbo',
titleEndpoint: EModelEndpoint.anthropic,
titlePrompt: 'Generate a concise title',
},
[EModelEndpoint.agents]: {
titleEndpoint: 'custom-provider',
titleMethod: 'structured',
},
},
}),
);
await AppService(app);
// Check OpenAI endpoint has titleEndpoint
expect(app.locals).toHaveProperty(EModelEndpoint.openAI);
expect(app.locals[EModelEndpoint.openAI]).toMatchObject({
titleConvo: true,
titleModel: 'gpt-3.5-turbo',
titleEndpoint: EModelEndpoint.anthropic,
titlePrompt: 'Generate a concise title',
});
// Check Agents endpoint has titleEndpoint
expect(app.locals).toHaveProperty(EModelEndpoint.agents);
expect(app.locals[EModelEndpoint.agents]).toMatchObject({
titleEndpoint: 'custom-provider',
titleMethod: 'structured',
});
});
it('should correctly configure all endpoint when specified', async () => {
require('./Config/loadCustomConfig').mockImplementationOnce(() =>
Promise.resolve({
endpoints: {
all: {
titleConvo: true,
titleModel: 'gpt-4o-mini',
titleMethod: 'structured',
titlePrompt: 'Default title prompt for all endpoints',
titlePromptTemplate: 'Default template: {{conversation}}',
titleEndpoint: EModelEndpoint.anthropic,
streamRate: 50,
},
[EModelEndpoint.openAI]: {
titleConvo: true,
titleModel: 'gpt-3.5-turbo',
},
},
}),
);
await AppService(app);
// Check that 'all' endpoint config is loaded
expect(app.locals).toHaveProperty('all');
expect(app.locals.all).toMatchObject({
titleConvo: true,
titleModel: 'gpt-4o-mini',
titleMethod: 'structured',
titlePrompt: 'Default title prompt for all endpoints',
titlePromptTemplate: 'Default template: {{conversation}}',
titleEndpoint: EModelEndpoint.anthropic,
streamRate: 50,
});
// Check that OpenAI endpoint has its own config
expect(app.locals).toHaveProperty(EModelEndpoint.openAI);
expect(app.locals[EModelEndpoint.openAI]).toMatchObject({
titleConvo: true,
titleModel: 'gpt-3.5-turbo',
});
});
});
describe('AppService updating app.locals and issuing warnings', () => {

View File

@@ -1,4 +1,5 @@
const bcrypt = require('bcryptjs');
const jwt = require('jsonwebtoken');
const { webcrypto } = require('node:crypto');
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
@@ -499,6 +500,18 @@ const resendVerificationEmail = async (req) => {
};
}
};
/**
* Generate a short-lived JWT token
* @param {String} userId - The ID of the user
* @param {String} [expireIn='5m'] - The expiration time for the token (default is 5 minutes)
* @returns {String} - The generated JWT token
*/
const generateShortLivedToken = (userId, expireIn = '5m') => {
return jwt.sign({ id: userId }, process.env.JWT_SECRET, {
expiresIn: expireIn,
algorithm: 'HS256',
});
};
module.exports = {
logoutUser,
@@ -506,7 +519,8 @@ module.exports = {
registerUser,
setAuthTokens,
resetPassword,
setOpenIDAuthTokens,
requestPasswordReset,
resendVerificationEmail,
setOpenIDAuthTokens,
generateShortLivedToken,
};
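
A quick consumption sketch; the require path and the userId variable are assumptions, while the signing options mirror the helper above:

const jwt = require('jsonwebtoken');
const { generateShortLivedToken } = require('~/server/services/AuthService'); // path assumed

const token = generateShortLivedToken(userId); // defaults to a 5-minute expiry
const payload = jwt.verify(token, process.env.JWT_SECRET, { algorithms: ['HS256'] });
// payload.id === userId; jwt.verify throws TokenExpiredError once the token lapses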

View File

@@ -1,10 +1,9 @@
const { logger } = require('@librechat/data-schemas');
const { getUserMCPAuthMap } = require('@librechat/api');
const { isEnabled, getUserMCPAuthMap } = require('@librechat/api');
const { CacheKeys, EModelEndpoint } = require('librechat-data-provider');
const { normalizeEndpointName, isEnabled } = require('~/server/utils');
const { normalizeEndpointName } = require('~/server/utils');
const loadCustomConfig = require('./loadCustomConfig');
const { getCachedTools } = require('./getCachedTools');
const { findPluginAuthsByKeys } = require('~/models');
const getLogStores = require('~/cache/getLogStores');
/**
@@ -55,46 +54,48 @@ const getCustomEndpointConfig = async (endpoint) => {
);
};
async function createGetMCPAuthMap() {
/**
* @param {Object} params
* @param {string} params.userId
* @param {GenericTool[]} [params.tools]
* @param {import('@librechat/data-schemas').PluginAuthMethods['findPluginAuthsByKeys']} params.findPluginAuthsByKeys
* @returns {Promise<Record<string, Record<string, string>> | undefined>}
*/
async function getMCPAuthMap({ userId, tools, findPluginAuthsByKeys }) {
try {
if (!tools || tools.length === 0) {
return;
}
const appTools = await getCachedTools({
userId,
});
return await getUserMCPAuthMap({
tools,
userId,
appTools,
findPluginAuthsByKeys,
});
} catch (err) {
logger.error(
`[api/server/controllers/agents/client.js #chatCompletion] Error getting custom user vars for agent`,
err,
);
}
}
/**
* @returns {Promise<boolean>}
*/
async function hasCustomUserVars() {
const customConfig = await getCustomConfig();
const mcpServers = customConfig?.mcpServers;
const hasCustomUserVars = Object.values(mcpServers ?? {}).some((server) => server.customUserVars);
if (!hasCustomUserVars) {
return;
}
/**
* @param {Object} params
* @param {GenericTool[]} [params.tools]
* @param {string} params.userId
* @returns {Promise<Record<string, Record<string, string>> | undefined>}
*/
return async function ({ tools, userId }) {
try {
if (!tools || tools.length === 0) {
return;
}
const appTools = await getCachedTools({
userId,
});
return await getUserMCPAuthMap({
tools,
userId,
appTools,
findPluginAuthsByKeys,
});
} catch (err) {
logger.error(
`[api/server/controllers/agents/client.js #chatCompletion] Error getting custom user vars for agent`,
err,
);
}
};
return Object.values(mcpServers ?? {}).some((server) => server.customUserVars);
}
module.exports = {
getMCPAuthMap,
getCustomConfig,
getBalanceConfig,
createGetMCPAuthMap,
hasCustomUserVars,
getCustomEndpointConfig,
};
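
With the factory gone, call sites pass their dependencies directly. A hedged call-site sketch (require paths assumed; `tools` assumed in scope):

const { getMCPAuthMap } = require('~/server/services/Config');
const { findPluginAuthsByKeys } = require('~/models');

const authMap = await getMCPAuthMap({ userId: req.user.id, tools, findPluginAuthsByKeys });
// resolves to the user's custom-variable map for the given tools, or undefined when `tools` is empty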

View File

@@ -1,4 +1,10 @@
const { CacheKeys, EModelEndpoint, orderEndpointsConfig } = require('librechat-data-provider');
const {
CacheKeys,
EModelEndpoint,
isAgentsEndpoint,
orderEndpointsConfig,
defaultAgentCapabilities,
} = require('librechat-data-provider');
const loadDefaultEndpointsConfig = require('./loadDefaultEConfig');
const loadConfigEndpoints = require('./loadConfigEndpoints');
const getLogStores = require('~/cache/getLogStores');
@@ -80,8 +86,12 @@ async function getEndpointsConfig(req) {
* @returns {Promise<boolean>}
*/
const checkCapability = async (req, capability) => {
const isAgents = isAgentsEndpoint(req.body?.original_endpoint || req.body?.endpoint);
const endpointsConfig = await getEndpointsConfig(req);
const capabilities = endpointsConfig?.[EModelEndpoint.agents]?.capabilities ?? [];
const capabilities =
isAgents || endpointsConfig?.[EModelEndpoint.agents]?.capabilities != null
? (endpointsConfig?.[EModelEndpoint.agents]?.capabilities ?? [])
: defaultAgentCapabilities;
return capabilities.includes(capability);
};
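
The practical change: with no agents endpoint configured, non-agents requests now check against defaultAgentCapabilities rather than an empty list. A minimal sketch, assuming checkCapability is exported and AgentCapabilities comes from librechat-data-provider:

// Previously false without explicit config; now true whenever `tools`
// is among the library's default agent capabilities.
const allowed = await checkCapability(req, AgentCapabilities.tools);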

View File

@@ -1,5 +1,7 @@
const path = require('path');
const { logger } = require('@librechat/data-schemas');
const { loadServiceKey, isUserProvided } = require('@librechat/api');
const { EModelEndpoint } = require('librechat-data-provider');
const { isUserProvided } = require('~/server/utils');
const { config } = require('./EndpointService');
const { openAIApiKey, azureOpenAIApiKey, useAzurePlugins, userProvidedOpenAI, googleKey } = config;
@@ -9,37 +11,41 @@ const { openAIApiKey, azureOpenAIApiKey, useAzurePlugins, userProvidedOpenAI, go
* @param {Express.Request} req - The request object
*/
async function loadAsyncEndpoints(req) {
let i = 0;
let serviceKey, googleUserProvides;
try {
serviceKey = require('~/data/auth.json');
} catch (e) {
if (i === 0) {
i++;
/** Check if GOOGLE_KEY is provided at all (including 'user_provided') */
const isGoogleKeyProvided = googleKey && googleKey.trim() !== '';
if (isGoogleKeyProvided) {
/** If GOOGLE_KEY is provided, check if it's user_provided */
googleUserProvides = isUserProvided(googleKey);
} else {
/** Only attempt to load service key if GOOGLE_KEY is not provided */
const serviceKeyPath =
process.env.GOOGLE_SERVICE_KEY_FILE || path.join(__dirname, '../../..', 'data', 'auth.json');
try {
serviceKey = await loadServiceKey(serviceKeyPath);
} catch (error) {
logger.error('Error loading service key', error);
serviceKey = null;
}
}
if (isUserProvided(googleKey)) {
googleUserProvides = true;
if (i <= 1) {
i++;
}
}
const google = serviceKey || googleKey ? { userProvide: googleUserProvides } : false;
const google = serviceKey || isGoogleKeyProvided ? { userProvide: googleUserProvides } : false;
const useAzure = req.app.locals[EModelEndpoint.azureOpenAI]?.plugins;
const gptPlugins =
useAzure || openAIApiKey || azureOpenAIApiKey
? {
availableAgents: ['classic', 'functions'],
userProvide: useAzure ? false : userProvidedOpenAI,
userProvideURL: useAzure
? false
: config[EModelEndpoint.openAI]?.userProvideURL ||
availableAgents: ['classic', 'functions'],
userProvide: useAzure ? false : userProvidedOpenAI,
userProvideURL: useAzure
? false
: config[EModelEndpoint.openAI]?.userProvideURL ||
config[EModelEndpoint.azureOpenAI]?.userProvideURL,
azure: useAzurePlugins || useAzure,
}
azure: useAzurePlugins || useAzure,
}
: false;
return { google, gptPlugins };

View File

@@ -23,9 +23,10 @@ let i = 0;
* Load custom configuration files and caches the object if the `cache` field at root is true.
* Validation via parsing the config file with the config schema.
* @function loadCustomConfig
* @param {boolean} [suppressLogging=false] - If true, suppresses the verbose config logging of the entire config when called.
* @returns {Promise<TCustomConfig | null>} A promise that resolves to null or the custom config object.
* */
async function loadCustomConfig() {
async function loadCustomConfig(suppressLogging = false) {
// Use CONFIG_PATH if set, otherwise fall back to defaultConfigPath
const configPath = process.env.CONFIG_PATH || defaultConfigPath;
@@ -108,9 +109,11 @@ https://www.librechat.ai/docs/configuration/stt_tts`);
return null;
} else {
logger.info('Custom config file loaded:');
logger.info(JSON.stringify(customConfig, null, 2));
logger.debug('Custom config:', customConfig);
if (!suppressLogging) {
logger.info('Custom config file loaded:');
logger.info(JSON.stringify(customConfig, null, 2));
logger.debug('Custom config:', customConfig);
}
}
(customConfig.endpoints?.custom ?? [])

View File

@@ -8,11 +8,12 @@ const {
ErrorTypes,
EModelEndpoint,
EToolResources,
isAgentsEndpoint,
replaceSpecialVars,
providerEndpointMap,
} = require('librechat-data-provider');
const { getProviderConfig } = require('~/server/services/Endpoints');
const generateArtifactsPrompt = require('~/app/clients/prompts/artifacts');
const { getProviderConfig } = require('~/server/services/Endpoints');
const { processFiles } = require('~/server/services/Files/process');
const { getFiles, getToolFilesByIds } = require('~/models/File');
const { getConvoFiles } = require('~/models/Conversation');
@@ -42,7 +43,11 @@ const initializeAgent = async ({
allowedProviders,
isInitialAgent = false,
}) => {
if (allowedProviders.size > 0 && !allowedProviders.has(agent.provider)) {
if (
isAgentsEndpoint(endpointOption?.endpoint) &&
allowedProviders.size > 0 &&
!allowedProviders.has(agent.provider)
) {
throw new Error(
`{ "type": "${ErrorTypes.INVALID_AGENT_PROVIDER}", "info": "${agent.provider}" }`,
);
@@ -82,10 +87,11 @@ const initializeAgent = async ({
attachments: currentFiles,
tool_resources: agent.tool_resources,
requestFileSet: new Set(requestFiles?.map((file) => file.file_id)),
agentId: agent.id,
});
const provider = agent.provider;
const { tools, toolContextMap } =
const { tools: structuredTools, toolContextMap } =
(await loadTools?.({
req,
res,
@@ -98,7 +104,7 @@ const initializeAgent = async ({
agent.endpoint = provider;
const { getOptions, overrideProvider } = await getProviderConfig(provider);
if (overrideProvider) {
if (overrideProvider !== agent.provider) {
agent.provider = overrideProvider;
}
@@ -140,6 +146,24 @@ const initializeAgent = async ({
agent.provider = options.provider;
}
/** @type {import('@librechat/agents').GenericTool[]} */
let tools = options.tools?.length ? options.tools : structuredTools;
if (
(agent.provider === Providers.GOOGLE || agent.provider === Providers.VERTEXAI) &&
options.tools?.length &&
structuredTools?.length
) {
throw new Error(`{ "type": "${ErrorTypes.GOOGLE_TOOL_CONFLICT}"}`);
} else if (
(agent.provider === Providers.OPENAI ||
agent.provider === Providers.AZURE ||
agent.provider === Providers.ANTHROPIC) &&
options.tools?.length &&
structuredTools?.length
) {
tools = structuredTools.concat(options.tools);
}
/** @type {import('@librechat/agents').ClientOptions} */
agent.model_parameters = { ...options.llmConfig };
if (options.configOptions) {
@@ -166,6 +190,7 @@ const initializeAgent = async ({
attachments,
resendFiles,
toolContextMap,
useLegacyContent: !!options.useLegacyContent,
maxContextTokens: (agentMaxContextTokens - maxTokens) * 0.9,
};
};
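
In plain terms: Google and Vertex AI agents cannot mix provider-built tools (options.tools) with structured tools, so that combination throws GOOGLE_TOOL_CONFLICT; OpenAI, Azure, and Anthropic concatenate the two sets; every other case simply prefers options.tools when present.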

View File

@@ -78,7 +78,17 @@ function getLLMConfig(apiKey, options = {}) {
requestOptions.anthropicApiUrl = options.reverseProxyUrl;
}
const tools = [];
if (mergedOptions.web_search) {
tools.push({
type: 'web_search_20250305',
name: 'web_search',
});
}
return {
tools,
/** @type {AnthropicClientOptions} */
llmConfig: removeNullishValues(requestOptions),
};
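
So with web_search enabled, the helper now returns Anthropic's server-side web-search tool alongside the LLM config; an illustrative shape:

// getLLMConfig(apiKey, options) with web_search enabled ~>
// {
//   tools: [{ type: 'web_search_20250305', name: 'web_search' }],
//   llmConfig: { /* request options with nullish values stripped */ },
// }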

View File

@@ -1,5 +1,5 @@
const OpenAI = require('openai');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { ProxyAgent } = require('undici');
const { ErrorTypes, EModelEndpoint } = require('librechat-data-provider');
const {
getUserKeyValues,
@@ -59,7 +59,10 @@ const initializeClient = async ({ req, res, endpointOption, version, initAppClie
}
if (PROXY) {
opts.httpAgent = new HttpsProxyAgent(PROXY);
const proxyAgent = new ProxyAgent(PROXY);
opts.fetchOptions = {
dispatcher: proxyAgent,
};
}
if (OPENAI_ORGANIZATION) {
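
The same swap recurs in the Azure and assistants initializers below: undici's ProxyAgent replaces https-proxy-agent on the SDK's fetch transport. A standalone sketch, assuming an openai package version that accepts fetchOptions (as this diff implies):

const OpenAI = require('openai');
const { ProxyAgent } = require('undici');

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  // route every SDK fetch() call through the configured proxy
  fetchOptions: { dispatcher: new ProxyAgent(process.env.PROXY) },
});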

View File

@@ -1,5 +1,5 @@
// const OpenAI = require('openai');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { ProxyAgent } = require('undici');
const { ErrorTypes } = require('librechat-data-provider');
const { getUserKey, getUserKeyExpiry, getUserKeyValues } = require('~/server/services/UserService');
const initializeClient = require('./initalize');
@@ -107,6 +107,7 @@ describe('initializeClient', () => {
const res = {};
const { openai } = await initializeClient({ req, res });
expect(openai.httpAgent).toBeInstanceOf(HttpsProxyAgent);
expect(openai.fetchOptions).toBeDefined();
expect(openai.fetchOptions.dispatcher).toBeInstanceOf(ProxyAgent);
});
});

View File

@@ -1,5 +1,5 @@
const OpenAI = require('openai');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { ProxyAgent } = require('undici');
const { constructAzureURL, isUserProvided, resolveHeaders } = require('@librechat/api');
const { ErrorTypes, EModelEndpoint, mapModelToAzureConfig } = require('librechat-data-provider');
const {
@@ -158,7 +158,10 @@ const initializeClient = async ({ req, res, version, endpointOption, initAppClie
}
if (PROXY) {
opts.httpAgent = new HttpsProxyAgent(PROXY);
const proxyAgent = new ProxyAgent(PROXY);
opts.fetchOptions = {
dispatcher: proxyAgent,
};
}
if (OPENAI_ORGANIZATION) {

View File

@@ -1,5 +1,5 @@
// const OpenAI = require('openai');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { ProxyAgent } = require('undici');
const { ErrorTypes } = require('librechat-data-provider');
const { getUserKey, getUserKeyExpiry, getUserKeyValues } = require('~/server/services/UserService');
const initializeClient = require('./initialize');
@@ -107,6 +107,7 @@ describe('initializeClient', () => {
const res = {};
const { openai } = await initializeClient({ req, res });
expect(openai.httpAgent).toBeInstanceOf(HttpsProxyAgent);
expect(openai.fetchOptions).toBeDefined();
expect(openai.fetchOptions.dispatcher).toBeInstanceOf(ProxyAgent);
});
});

View File

@@ -139,6 +139,9 @@ const initializeClient = async ({ req, res, endpointOption, optionsOnly, overrid
);
clientOptions.modelOptions.user = req.user.id;
const options = getOpenAIConfig(apiKey, clientOptions, endpoint);
if (options != null) {
options.useLegacyContent = true;
}
if (!customOptions.streamRate) {
return options;
}
@@ -156,6 +159,7 @@ const initializeClient = async ({ req, res, endpointOption, optionsOnly, overrid
}
return {
useLegacyContent: true,
llmConfig: modelOptions,
};
}

View File

@@ -1,5 +1,6 @@
-const { getGoogleConfig, isEnabled } = require('@librechat/api');
+const path = require('path');
 const { EModelEndpoint, AuthKeys } = require('librechat-data-provider');
+const { getGoogleConfig, isEnabled, loadServiceKey } = require('@librechat/api');
 const { getUserKey, checkUserKeyExpiry } = require('~/server/services/UserService');
 const { GoogleClient } = require('~/app');
@@ -15,10 +16,25 @@ const initializeClient = async ({ req, res, endpointOption, overrideModel, optio
   }
 
   let serviceKey = {};
-  try {
-    serviceKey = require('~/data/auth.json');
-  } catch (_e) {
-    // Do nothing
+  /** Check if GOOGLE_KEY is provided at all (including 'user_provided') */
+  const isGoogleKeyProvided =
+    (GOOGLE_KEY && GOOGLE_KEY.trim() !== '') || (isUserProvided && userKey != null);
+
+  if (!isGoogleKeyProvided) {
+    /** Only attempt to load service key if GOOGLE_KEY is not provided */
+    try {
+      const serviceKeyPath =
+        process.env.GOOGLE_SERVICE_KEY_FILE ||
+        path.join(__dirname, '../../../..', 'data', 'auth.json');
+      serviceKey = await loadServiceKey(serviceKeyPath);
+      if (!serviceKey) {
+        serviceKey = {};
+      }
+    } catch (_e) {
+      // Service key loading failed, but that's okay if not required
+      serviceKey = {};
+    }
   }
 
   const credentials = isUserProvided
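
The rewritten block only reads a service-account file when no `GOOGLE_KEY` is available, preferring `GOOGLE_SERVICE_KEY_FILE` before falling back to the bundled `data/auth.json` path. A minimal sketch of that fallback using only Node built-ins; `readJsonKey` is a stand-in for `loadServiceKey`, not its actual implementation:

const fs = require('fs/promises');
const path = require('path');

// Read and parse a JSON service key; a missing or malformed file yields null.
async function readJsonKey(filePath) {
  try {
    return JSON.parse(await fs.readFile(filePath, 'utf8'));
  } catch {
    return null;
  }
}

async function resolveServiceKey() {
  const keyPath =
    process.env.GOOGLE_SERVICE_KEY_FILE || path.join(__dirname, 'data', 'auth.json');
  return (await readJsonKey(keyPath)) ?? {};
}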


@@ -7,6 +7,16 @@ const initCustom = require('~/server/services/Endpoints/custom/initialize');
 const initGoogle = require('~/server/services/Endpoints/google/initialize');
 const { getCustomEndpointConfig } = require('~/server/services/Config');
 
+/** Check if the provider is a known custom provider
+ * @param {string | undefined} [provider] - The provider string
+ * @returns {boolean} - True if the provider is a known custom provider, false otherwise
+ */
+function isKnownCustomProvider(provider) {
+  return [Providers.XAI, Providers.OLLAMA, Providers.DEEPSEEK, Providers.OPENROUTER].includes(
+    provider?.toLowerCase() || '',
+  );
+}
+
 const providerConfigMap = {
   [Providers.XAI]: initCustom,
   [Providers.OLLAMA]: initCustom,
@@ -24,13 +34,13 @@ const providerConfigMap = {
  * @param {string} provider - The provider string
  * @returns {Promise<{
  *   getOptions: Function,
- *   overrideProvider?: string,
+ *   overrideProvider: string,
  *   customEndpointConfig?: TEndpoint
  * }>}
  */
 async function getProviderConfig(provider) {
   let getOptions = providerConfigMap[provider];
-  let overrideProvider;
+  let overrideProvider = provider;
 
   /** @type {TEndpoint | undefined} */
   let customEndpointConfig;
@@ -46,6 +56,13 @@ async function getProviderConfig(provider) {
     overrideProvider = Providers.OPENAI;
   }
 
+  if (isKnownCustomProvider(overrideProvider) && !customEndpointConfig) {
+    customEndpointConfig = await getCustomEndpointConfig(provider);
+    if (!customEndpointConfig) {
+      throw new Error(`Provider ${provider} not supported`);
+    }
+  }
+
   return {
     getOptions,
     overrideProvider,
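
Because `isKnownCustomProvider` lower-cases its input and tolerates undefined, the check is safe for mixed-case provider strings coming off request payloads. A standalone usage sketch; the Providers values are assumed to be lowercase string constants, as in librechat-data-provider:

const Providers = { XAI: 'xai', OLLAMA: 'ollama', DEEPSEEK: 'deepseek', OPENROUTER: 'openrouter' };

function isKnownCustomProvider(provider) {
  return [Providers.XAI, Providers.OLLAMA, Providers.DEEPSEEK, Providers.OPENROUTER].includes(
    provider?.toLowerCase() || '',
  );
}

console.log(isKnownCustomProvider('OpenRouter')); // true
console.log(isKnownCustomProvider('openAI')); // false
console.log(isKnownCustomProvider(undefined)); // false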


@@ -65,19 +65,20 @@ const initializeClient = async ({
   const isAzureOpenAI = endpoint === EModelEndpoint.azureOpenAI;
   /** @type {false | TAzureConfig} */
   const azureConfig = isAzureOpenAI && req.app.locals[EModelEndpoint.azureOpenAI];
+  let serverless = false;
   if (isAzureOpenAI && azureConfig) {
     const { modelGroupMap, groupMap } = azureConfig;
     const {
       azureOptions,
       baseURL,
       headers = {},
-      serverless,
+      serverless: _serverless,
     } = mapModelToAzureConfig({
       modelName,
       modelGroupMap,
       groupMap,
     });
+    serverless = _serverless;
     clientOptions.reverseProxyUrl = baseURL ?? clientOptions.reverseProxyUrl;
     clientOptions.headers = resolveHeaders(
@@ -143,6 +144,9 @@ const initializeClient = async ({
   clientOptions = Object.assign({ modelOptions }, clientOptions);
   clientOptions.modelOptions.user = req.user.id;
   const options = getOpenAIConfig(apiKey, clientOptions);
+  if (options != null && serverless === true) {
+    options.useLegacyContent = true;
+  }
   const streamRate = clientOptions.streamRate;
   if (!streamRate) {
     return options;
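
The `serverless: _serverless` rename exists only to hoist the destructured value into the outer `serverless` binding, so the Azure serverless flag survives the if-block and can gate `useLegacyContent` further down. The pattern in isolation, with illustrative data:

let serverless = false;

const azureGroup = { serverless: true }; // stand-in for mapModelToAzureConfig output
if (azureGroup) {
  // Renamed on destructuring: a plain `const { serverless } = ...` would shadow the outer binding.
  const { serverless: _serverless } = azureGroup;
  serverless = _serverless;
}

console.log(serverless); // true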


@@ -152,6 +152,7 @@ async function getSessionInfo(fileIdentifier, apiKey) {
  * @param {Object} options
  * @param {ServerRequest} options.req
  * @param {Agent['tool_resources']} options.tool_resources
+ * @param {string} [options.agentId] - The agent ID for file access control
  * @param {string} apiKey
  * @returns {Promise<{
  *   files: Array<{ id: string; session_id: string; name: string }>,
@@ -159,11 +160,18 @@ async function getSessionInfo(fileIdentifier, apiKey) {
  * }>}
  */
 const primeFiles = async (options, apiKey) => {
-  const { tool_resources } = options;
+  const { tool_resources, req, agentId } = options;
   const file_ids = tool_resources?.[EToolResources.execute_code]?.file_ids ?? [];
   const agentResourceIds = new Set(file_ids);
   const resourceFiles = tool_resources?.[EToolResources.execute_code]?.files ?? [];
-  const dbFiles = ((await getFiles({ file_id: { $in: file_ids } })) ?? []).concat(resourceFiles);
+  const dbFiles = (
+    (await getFiles(
+      { file_id: { $in: file_ids } },
+      null,
+      { text: 0 },
+      { userId: req?.user?.id, agentId },
+    )) ?? []
+  ).concat(resourceFiles);
   const files = [];
   const sessions = new Map();
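
The expanded `getFiles` call (filter, sort, projection, access context) drops the heavy `text` field and scopes results to the requesting user and agent; the exact signature is inferred from this diff. A rough mongoose equivalent of the filter-plus-projection part, with an illustrative model:

const mongoose = require('mongoose');

const File = mongoose.model(
  'File',
  new mongoose.Schema({ file_id: String, name: String, text: String }),
);

// Fetch files by id while omitting the large `text` field via projection.
async function findFilesWithoutText(fileIds) {
  return File.find({ file_id: { $in: fileIds } }, { text: 0 }).lean();
}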


@@ -1,10 +1,11 @@
 const fs = require('fs');
 const path = require('path');
 const axios = require('axios');
+const { logger } = require('@librechat/data-schemas');
 const { EModelEndpoint } = require('librechat-data-provider');
+const { generateShortLivedToken } = require('~/server/services/AuthService');
 const { getBufferMetadata } = require('~/server/utils');
 const paths = require('~/config/paths');
-const { logger } = require('~/config');
 
 /**
  * Saves a file to a specified output path with a new filename.
@@ -206,7 +207,7 @@ const deleteLocalFile = async (req, file) => {
   const cleanFilepath = file.filepath.split('?')[0];
 
   if (file.embedded && process.env.RAG_API_URL) {
-    const jwtToken = req.headers.authorization.split(' ')[1];
+    const jwtToken = generateShortLivedToken(req.user.id);
     axios.delete(`${process.env.RAG_API_URL}/documents`, {
       headers: {
         Authorization: `Bearer ${jwtToken}`,
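
Minting a short-lived token for the RAG API call, rather than forwarding the user's own Authorization header, keeps the delete working when the original request carries no bearer token (e.g. cookie-based sessions) and narrows the window in which a leaked token is useful. A hedged sketch with the jsonwebtoken package; the secret source and TTL are illustrative, and LibreChat's actual generateShortLivedToken lives in AuthService:

const jwt = require('jsonwebtoken');

// Sign a minimal claim set with a tight expiry for a service-to-service call.
function generateShortLivedToken(userId, secret = process.env.JWT_SECRET) {
  return jwt.sign({ id: userId }, secret, { expiresIn: '5m' });
}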

Some files were not shown because too many files have changed in this diff.