Compare commits

...

206 Commits

Author SHA1 Message Date
Dustin Healy
dcd943b370 fix: add missing appConfig reference that was causing error on upload as text OCR path 2025-08-28 23:27:31 -07:00
Danny Avila
62315be197 🔧 fix: Add missing configMiddleware to Convo Import Routes 2025-08-28 23:12:58 -04:00
Danny Avila
a26597a696 📇 refactor: Improve State mgmt. for File uploads and Tool Auth (#9359)
* 🔧 fix: Ensure loading state is correctly set when files are empty or in progress

* 🔧 fix: Update ephemeral agent state on file upload error for execute code tool resource

* 🔧 fix: Reset ephemeral agent state for tool when authentication fails

* refactor: Pass conversation prop to FileFormChat and AttachFileChat components
2025-08-28 23:11:16 -04:00
Danny Avila
8772b04d1d 🗃️ refactor: File Access via Agent; Deny Deletion if not Editor, Allow Viewer (#9357) 2025-08-28 21:16:23 -04:00
Danny Avila
7742b18c9c 🔧 fix: Upload Audio as Text missing Param (#9356) 2025-08-28 21:07:30 -04:00
Arthur Barrett
b75b799e34 🔧 fix: Handle Web API Streams in File Download Route for OpenAI Assistants (#9200) 2025-08-28 12:39:35 -04:00
Danny Avila
43add11b05 🎯 refactor: Custom Endpoint Request-based Header Resolution (#9344)
* refactor: resolve request-based headers for custom endpoints right before LLM request

* ci: clarify request-based header resolution in initializeClient test
2025-08-28 12:33:08 -04:00
Danny Avila
1764de53a5 fix: use appConfig correctly in getVoices 2025-08-28 00:51:22 -04:00
Danny Avila
c0511b9a5f 🔧 fix: MCP Selection Persist and UI Flicker Issues (#9324)
* refactor: useMCPSelect

    - Add useGetMCPTools to use in useMCPSelect and elsewhere hooks for fetching MCP tools
    - remove memoized key
    - remove use of `useChatContext` and require conversationId as prop

* feat: Add MCPPanelContext and integrate conversationId as prop for useMCPSelect across components

- Introduced MCPPanelContext to manage conversationId state.
- Updated MCPSelect, MCPSubMenu, and MCPConfigDialog to accept conversationId as a prop.
- Modified ToolsDropdown and BadgeRow to pass conversationId to relevant components.
- Refactored MCPPanel to utilize MCPPanelProvider for context management.

* fix: remove nested ternary in ServerInitializationSection

- Replaced conditional operator with if-else statements for better readability in determining button text based on server initialization state and reinitialization status.

* refactor: wrap setValueWrap in useCallback for performance optimization

* refactor: streamline useMCPSelect by consolidating storageKey definition

* fix: prevent clearing selections on page refresh by tracking initial load completion

* refactor: simplify concern of useMCPSelect hook

* refactor: move ConfigFieldDetail interface to common types for better reusability, isolate usage of `useGetMCPTools`

* refactor: integrate mcpServerNames into BadgeRowContext and update ToolsDropdown and MCPSelect components
2025-08-28 00:44:49 -04:00
Danny Avila
2483623c88 🔧 fix: type checking for process.browser in api-endpoints.ts 2025-08-27 20:27:57 -04:00
Danny Avila
229d6f2dfe 📦 chore: Update librechat-data-provider to v0.8.006 2025-08-27 20:23:18 -04:00
github-actions[bot]
d5ec838218 🌍 i18n: Update translation.json with latest translations (#9321)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-27 20:15:38 -04:00
Danny Avila
15d7a3d221 🎵 feat: Cumulative Transcription Support for External STT (#9318)
* 🔧 fix: TTS and STT Services to use AppConfig

- Updated `getProviderSchema` and `getProvider` methods to accept an optional `appConfig` parameter, allowing for more flexible configuration retrieval.
- Improved error handling by ensuring that the app configuration is checked before accessing TTS and STT schemas.
- Refactored `processTextToSpeech` and `streamAudio` methods to utilize the new `appConfig` parameter for better clarity and maintainability.

* feat: Cumulative Transcription Support for STT External

* style: fix medium-sized styling for admin settings dialogs
2025-08-27 18:56:04 -04:00
Danny Avila
c3e88b97c8 🎤 feat: Cumulative Transcription Support for AudioRecorder (#9316)
- Added useRef to maintain existing text during audio recording.
- Updated setText to prepend existing text to new transcriptions.
- Modified handleStartRecording and handleStopRecording to manage existing text state.
- Improved spinner icon styling for better visibility.
2025-08-27 18:00:59 -04:00
Danny Avila
ba424666f8 🔐 feat: Add Configurable Min. Password Length (#9315)
- Added support for a minimum password length defined by the MIN_PASSWORD_LENGTH environment variable.
- Updated login, registration, and reset password forms to utilize the configured minimum length.
- Enhanced validation schemas to reflect the new minimum password length requirement.
- Included tests to ensure the minimum password length functionality works as expected.
2025-08-27 16:30:56 -04:00
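
As a rough sketch of how an environment-driven minimum could feed the validation schemas (the default of 8 and the schema shape here are illustrative assumptions, not LibreChat's exact code):

```ts
import { z } from 'zod';

// Assumed fallback of 8 when MIN_PASSWORD_LENGTH is unset; the real default may differ.
const minPasswordLength = Number(process.env.MIN_PASSWORD_LENGTH) || 8;

// Illustrative registration schema that respects the configured minimum.
const registrationSchema = z.object({
  email: z.string().email(),
  password: z
    .string()
    .min(minPasswordLength, `Password must be at least ${minPasswordLength} characters`)
    .max(128),
});

// Example: with MIN_PASSWORD_LENGTH=8, a short password is rejected, this one passes.
registrationSchema.parse({ email: 'user@example.com', password: 'longenoughpassword' });
```
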
MarcAmick
ea3b671182 🔧 feat: Alternative DNS Lookup for AWS ElastiCache TLS Connections (#9264)
* added REDIS_USE_ALTERNATIVE_DNS_LOOKUP env variable to modify the Redis connection by adding a dnsLookup function
this is required when connecting to ElastiCache with ioredis
see "Special Note: Aws Elasticache Clusters with TLS" at https://www.npmjs.com/package/ioredis

---------

Co-authored-by: Marc Amick <MarcAmick@jhu.edu>
2025-08-27 16:09:07 -04:00
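
For reference, the ioredis-documented workaround looks roughly like this when gated behind the new flag (the cluster host below is a placeholder):

```ts
import Redis from 'ioredis';

// Workaround documented by ioredis for AWS ElastiCache clusters with TLS:
// pass the address through unchanged instead of resolving it via DNS, so the
// hostnames match the certificates presented by the cluster nodes.
const useAlternativeDns = process.env.REDIS_USE_ALTERNATIVE_DNS_LOOKUP === 'true';

const cluster = new Redis.Cluster(
  [{ host: 'clustercfg.example.cache.amazonaws.com', port: 6379 }],
  {
    dnsLookup: useAlternativeDns ? (address, callback) => callback(null, address) : undefined,
    redisOptions: { tls: {} },
  },
);
```
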
Dustin Healy
f209f616c9 🌍 i18n: Add Slovenian Language (#9313) 2025-08-27 14:02:22 -04:00
colinlin-stripe
961af515d5 🧹 chore: [stripe] remove dangerously set html (#9288) 2025-08-27 13:58:07 -04:00
Danny Avila
a362963017 🐛 fix: String Interpolation in Messages Endpoint from #9155 (#9312)
* feat: move buildTree function for message hierarchy to data provider

* refactor: consolidate buildTree import from utils to data provider

* fix: correct string interpolation in messages function, which caused message search requests to fail
2025-08-27 13:48:48 -04:00
Danny Avila
78d735f35c 📧 fix: Missing Email fallback in openIdJwtLogin (#9311)
* 📧 fix: Missing Email fallback in `openIdJwtLogin`

* chore: Add auth module export to index
2025-08-27 12:59:40 -04:00
Danny Avila
48f6f8f2f8 📎 feat: Upload as Text Support for Plaintext, STT, RAG, and Token Limits (#8868)
* 🪶 feat: Add Support for Uploading Plaintext Files

feat: delineate between OCR and text handling in fileConfig field of config file

- also adds support for passing in mimetypes as just plain file extensions

feat: add showLabel bool to support future synthetic component DynamicDropdownInput

feat: add new combination dropdown-input component in params panel to support file type token limits

refactor: move hovercard to side to align with other hovercards

chore: clean up autogenerated comments

feat: add delineation to file upload path between text and ocr configured filetypes

feat: add token limit checks during file upload

refactor: move textParsing out of ocrEnabled logic

refactor: clean up types for filetype config

refactor: finish decoupling DynamicDropdownInput from fileTokenLimits

fix: move image token cost function into file to fix circular dependency causing unit test to fail and remove unused var for linter

chore: remove out of scope code following review

refactor: make fileTokenLimit conform to existing styles

chore: remove unused localization string

chore: undo changes to DynamicInput and other strays

feat: add fileTokenLimit to all provider config panels

fix: move textParsing back into ocr tool_resource block for now so that it doesn't interfere with other upload types

* 📤 feat: Add RAG API Endpoint Support for Text Parsing (#8849)

* feat: implement RAG API integration for text parsing with fallback to native parsing

* chore: remove TODO now that placeholder and fallback are implemented

* ✈️ refactor: Migrate Text Parsing to TS (#8892)

* refactor: move generateShortLivedToken to packages/api

* refactor: move textParsing logic into packages/api

* refactor: reduce nesting and dry code with createTextFile

* fix: add proper source handling

* fix: mock new parseText and parseTextNative functions in jest file

* ci: add test coverage for textParser

* 💬 feat: Add Audio File Support to Upload as Text (#8893)

* feat: add STT support for Upload as Text

* refactor: move processAudioFile to packages/api

* refactor: move textParsing from utils to files

* fix: remove audio/mp3 from unsupported mimetypes test since it is now supported

* ✂️ feat: Configurable File Token Limits and Truncation (#8911)

* feat: add configurable fileTokenLimit default value

* fix: add stt to fileConfig merge logic

* fix: add fileTokenLimit to mergeFileConfig logic so configurable value is actually respected from yaml

* feat: add token limiting to parsed text files

* fix: add extraction logic and update tests so fileTokenLimit isn't sent to LLM providers

* fix: address comments

* refactor: rename textTokenLimiter.ts to text.ts

* chore: update form-data package to address CVE-2025-7783 and update package-lock

* feat: use default supported mime types for ocr on frontend file validation

* fix: should be using logger.debug not console.debug

* fix: mock existsSync in text.spec.ts

* fix: mock logger rather than every one of its function calls

* fix: reorganize imports and streamline file upload processing logic

* refactor: update createTextFile function to use destructured parameters and improve readability

* chore: update file validation to use EToolResources for improved type safety

* chore: update import path for types in audio processing module

* fix: update file configuration access and replace console.debug with logger.debug for improved logging

---------

Co-authored-by: Dustin Healy <dustinhealy1@gmail.com>
Co-authored-by: Dustin Healy <54083382+dustinhealy@users.noreply.github.com>
2025-08-27 03:44:39 -04:00
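
A very rough sketch of what a fileTokenLimit check on parsed text might look like; the 4-characters-per-token heuristic and the function name are assumptions for illustration, since the real code uses an actual tokenizer:

```ts
// Approximate token counting for the sketch only; real code would use a tokenizer.
const CHARS_PER_TOKEN = 4;

function truncateToTokenLimit(text: string, fileTokenLimit: number): { text: string; truncated: boolean } {
  const estimatedTokens = Math.ceil(text.length / CHARS_PER_TOKEN);
  if (estimatedTokens <= fileTokenLimit) {
    return { text, truncated: false };
  }
  // Cut the parsed text so roughly fileTokenLimit tokens remain.
  return { text: text.slice(0, fileTokenLimit * CHARS_PER_TOKEN), truncated: true };
}

// Example: a 100k-character parsed file with fileTokenLimit=1000 is cut to ~4k characters.
```
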
ethanlaj
74bc0440f0 🐋 chore: switch from ankane/pgvector to pgvector/pgvector (#9245) 2025-08-27 02:04:58 -04:00
José Pedro Silva
18d5a75cdc 🌐 feat: Add support to SubDirectory hosting (#9155)
* feat: Add support to SubDirectory hosting

* fix: address linting and failing test

* fix: browser context validation

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-08-27 02:00:18 -04:00
Danny Avila
a820863e8b 💲 fix: Prevent Single-dollar LaTeX for abbrev. Currency (K, M, B) (#9293) 2025-08-26 23:33:56 -04:00
Danny Avila
9a210971f5 🛜 refactor: Streamline App Config Usage (#9234)
* WIP: app.locals refactoring

WIP: appConfig

fix: update memory configuration retrieval to use getAppConfig based on user role

fix: update comment for AppConfig interface to clarify purpose

🏷️ refactor: Update tests to use getAppConfig for endpoint configurations

ci: Update AppService tests to initialize app config instead of app.locals

ci: Integrate getAppConfig into remaining tests

refactor: Update multer storage destination to use promise-based getAppConfig and improve error handling in tests

refactor: Rename initializeAppConfig to setAppConfig and update related tests

ci: Mock getAppConfig in various tests to provide default configurations

refactor: Update convertMCPToolsToPlugins to use mcpManager for server configuration and adjust related tests

chore: rename `Config/getAppConfig` -> `Config/app`

fix: streamline OpenAI image tools configuration by removing direct appConfig dependency and using function parameters

chore: correct parameter documentation for imageOutputType in ToolService.js

refactor: remove `getCustomConfig` dependency in config route

refactor: update domain validation to use appConfig for allowed domains

refactor: use appConfig registration property

chore: remove app parameter from AppService invocation

refactor: update AppConfig interface to correct registration and turnstile configurations

refactor: remove getCustomConfig dependency and use getAppConfig in PluginController, multer, and MCP services

refactor: replace getCustomConfig with getAppConfig in STTService, TTSService, and related files

refactor: replace getCustomConfig with getAppConfig in Conversation and Message models, update tempChatRetention functions to use AppConfig type

refactor: update getAppConfig calls in Conversation and Message models to include user role for temporary chat expiration

ci: update related tests

refactor: update getAppConfig call in getCustomConfigSpeech to include user role

fix: update appConfig usage to access allowedDomains from actions instead of registration

refactor: enhance AppConfig to include fileStrategies and update related file strategy logic

refactor: update imports to use normalizeEndpointName from @librechat/api and remove redundant definitions

chore: remove deprecated unused RunManager

refactor: get balance config primarily from appConfig

refactor: remove customConfig dependency for appConfig and streamline loadConfigModels logic

refactor: remove getCustomConfig usage and use app config in file citations

refactor: consolidate endpoint loading logic into loadEndpoints function

refactor: update appConfig access to use endpoints structure across various services

refactor: implement custom endpoints configuration and streamline endpoint loading logic

refactor: update getAppConfig call to include user role parameter

refactor: streamline endpoint configuration and enhance appConfig usage across services

refactor: replace getMCPAuthMap with getUserMCPAuthMap and remove unused getCustomConfig file

refactor: add type annotation for loadedEndpoints in loadEndpoints function

refactor: move /services/Files/images/parse to TS API

chore: add missing FILE_CITATIONS permission to IRole interface

refactor: restructure toolkits to TS API

refactor: separate manifest logic into its own module

refactor: consolidate tool loading logic into a new tools module for startup logic

refactor: move interface config logic to TS API

refactor: migrate checkEmailConfig to TypeScript and update imports

refactor: add FunctionTool interface and availableTools to AppConfig

refactor: decouple caching and DB operations from AppService, make part of consolidated `getAppConfig`

WIP: fix tests

* fix: rebase conflicts

* refactor: remove app.locals references

* refactor: replace getBalanceConfig with getAppConfig in various strategies and middleware

* refactor: replace appConfig?.balance with getBalanceConfig in various controllers and clients

* test: add balance configuration to titleConvo method in AgentClient tests

* chore: remove unused `openai-chat-tokens` package

* chore: remove unused imports in initializeMCPs.js

* refactor: update balance configuration to use getAppConfig instead of getBalanceConfig

* refactor: integrate configMiddleware for centralized configuration handling

* refactor: optimize email domain validation by removing unnecessary async calls

* refactor: simplify multer storage configuration by removing async calls

* refactor: reorder imports for better readability in user.js

* refactor: replace getAppConfig calls with req.config for improved performance

* chore: replace getAppConfig calls with req.config in tests for centralized configuration handling

* chore: remove unused override config

* refactor: add configMiddleware to endpoint route and replace getAppConfig with req.config

* chore: remove customConfig parameter from TTSService constructor

* refactor: pass appConfig from request to processFileCitations for improved configuration handling

* refactor: remove configMiddleware from endpoint route and retrieve appConfig directly in getEndpointsConfig if not in `req.config`

* test: add mockAppConfig to processFileCitations tests for improved configuration handling

* fix: pass req.config to hasCustomUserVars and call without await after synchronous refactor

* fix: type safety in useExportConversation

* refactor: retrieve appConfig using getAppConfig in PluginController and remove configMiddleware from plugins route, to avoid always retrieving when plugins are cached

* chore: change `MongoUser` typedef to `IUser`

* fix: Add `user` and `config` fields to ServerRequest and update JSDoc type annotations from Express.Request to ServerRequest

* fix: remove unused setAppConfig mock from Server configuration tests
2025-08-26 12:10:18 -04:00
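
A minimal sketch of the configMiddleware / req.config pattern described above, with a simplified AppConfig shape and a stand-in getAppConfig (both are assumptions for illustration):

```ts
import type { NextFunction, Request, Response } from 'express';

// Illustrative shape only; the real AppConfig interface is much larger.
interface AppConfig {
  endpoints?: Record<string, unknown>;
  balance?: { enabled?: boolean };
}

// Stand-in for the project's getAppConfig; assume it resolves (and caches) the merged config.
async function getAppConfig(options?: { role?: string }): Promise<AppConfig> {
  return { balance: { enabled: false } };
}

// Resolve the config once per request and expose it as req.config so downstream
// handlers read it instead of re-fetching.
export async function configMiddleware(
  req: Request & { config?: AppConfig; user?: { role?: string } },
  _res: Response,
  next: NextFunction,
) {
  try {
    req.config = await getAppConfig({ role: req.user?.role });
    next();
  } catch (err) {
    next(err);
  }
}
```
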
Danny Avila
e1ad235f17 🌍 i18n: Add missing com_ui_no_changes 2025-08-25 17:59:46 -04:00
Danny Avila
4a0b329e3e v0.8.0-rc3 (#9269)
* chore: bump data-provider to v0.8.004

* 📦 chore: bump @librechat/data-schemas version to 0.0.20

* 📦 chore: bump @librechat/api version to 1.3.4

*  v0.8.0-rc3

* docs: update README

* docs: update README

* docs: enhance multilingual UI section in README
2025-08-25 17:35:21 -04:00
github-actions[bot]
a22359de5e 🌍 i18n: Update translation.json with latest translations (#9267)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-25 17:14:51 -04:00
Danny Avila
bbfe4002eb 🏷️ chore: Add Missing Localizations for Agents, Categories, Bookmarks (#9266)
* fix: error when updating bookmarks if no query data

* feat: localize bookmark dialog, form labels and validation messages, also improve validation

* feat: add localization for EmptyPromptPreview component and update translation.json

* chore: add missing localizations for static UI text

* chore: update AgentPanelContextType and useGetAgentsConfig to support null configurations

* refactor: update agent categories to support localization and custom properties, improve related typing

* ci: add localization for 'All' category and update tab names in accessibility tests

* chore: remove unused AgentCategoryDisplay component and its tests

* chore: add localization handling for agent category selector

* chore: enhance AgentCard to support localized category labels and add related tests

* chore: enhance i18n unused keys detection to include additional source directories and improve handling for agent category keys
2025-08-25 13:54:13 -04:00
Marco Beretta
94426a3cae 🎭 refactor: Avatar Loading UX and Fix Initials Rendering Bugs (#9261)
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-08-25 12:06:00 -04:00
Danny Avila
e559f0f4dc 📜 chore: Add Timestamp to Error logs (#9262) 2025-08-25 11:05:15 -04:00
github-actions[bot]
15c9c7e1f4 🌍 i18n: Update translation.json with latest translations (#9250)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-25 10:52:24 -04:00
Danny Avila
ac641e7cba 🗄️ refactor: Resource Migration Scripts for DocumentDB Compatibility (#9249)
* refactor: Resource Migration Scripts for DocumentDB compatibility

* fix: Correct type annotation for `db` parameter in ensureCollectionExists function
2025-08-25 03:01:50 -04:00
Danny Avila
1915d7b195 🧮 fix: Properly Escape Currency and Prevent Code Block LaTeX Bugs (#9248)
* fix(latex): prevent LaTeX conversion when closing $ is preceded by backtick

When text contained patterns like "$lookup namespace" followed by "`$lookup`",
the regex would match from the first $ to the backtick's $, treating the entire
span as a LaTeX expression. This caused programming constructs to be incorrectly
converted to double dollars.

- Added negative lookbehind (?<!`) to single dollar regex
- Prevents matching when closing $ immediately follows a backtick
- Fixes issues with inline code blocks containing $ symbols

* fix(latex): detect currency amounts with 4+ digits without commas

The currency regex pattern \d{1,3} only matched amounts with 1-3 initial digits,
causing amounts like $1157.90 to be interpreted as LaTeX instead of currency.
This resulted in text like "$1157.90 (text) + $500 (text) = $1657.90" being
incorrectly converted to a single LaTeX expression.

- Changed pattern from \d{1,3} to \d+ to match any number of initial digits
- Now properly escapes $1000, $10000, $123456, etc. without requiring commas
- Maintains support for comma-formatted amounts like $1,234.56

* fix(latex): support currency with unlimited decimal places

The currency regex limited decimal places to 1-2 digits (\.\d{1,2}), which
failed to properly escape amounts with more precision like cryptocurrency
values ($0.00001234), gas prices ($3.999), or exchange rates ($1.23456).

- Changed decimal pattern from \.\d{1,2} to \.\d+
- Now supports any number of decimal places
- Handles edge cases like scientific calculations and high-precision values
2025-08-25 02:44:13 -04:00
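
The two regex ideas described above, sketched in isolation (simplified patterns, not the exact ones shipped in the LaTeX utilities): the closing $ of an inline "$...$" span must not directly follow a backtick, and the currency matcher accepts any number of integer and decimal digits.

```ts
const singleDollarLatex = /\$([^$\n]+?)(?<!`)\$/g;
const currency = /\$\d+(?:,\d{3})*(?:\.\d+)?/g;

// Both amounts are now recognized as currency rather than LaTeX delimiters.
console.log('$1157.90 + $0.00001234'.match(currency)); // [ '$1157.90', '$0.00001234' ]

// The $ immediately after a backtick can no longer terminate a LaTeX span.
console.log('use the $lookup stage via `$lookup`'.match(singleDollarLatex)); // null
```
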
github-actions[bot]
c2f4b383f2 🌍 i18n: Update translation.json with latest translations (#9228)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-24 12:42:57 -04:00
Danny Avila
939af59950 🏄‍♂️ refactor: Improve Cancelled Stream Handling for Pending Authentication (#9235) 2025-08-24 12:42:34 -04:00
Danny Avila
7d08da1a8a 🪜 fix: userMCPAuthMap Sequential Agents assignment 2025-08-24 11:37:51 -04:00
Danny Avila
543b617e1c 📦 chore: bump cipher-base and sha.js (#9227) 2025-08-23 03:32:40 -04:00
Danny Avila
c827fdd10e 🚦 feat: Auto-reinitialize MCP Servers on Request (#9226) 2025-08-23 03:27:05 -04:00
Marco Beretta
ac608ded46 feat: Add cursor pagination utilities and refine user/group/role types in @librechat/data-schemas (#9218)
* feat: Add pagination interfaces and update user and group types for better data handling

* fix: Update data-schemas version to 0.0.19
2025-08-23 00:18:31 -04:00
mattmueller-stripe
0e00f357a6 🔗 chore: Remove <link href='#' /> from index.html (#9222) 2025-08-23 00:16:32 -04:00
Danny Avila
c465d7b732 🪪 fix: Preserve Existing Interface Permissions When Updating Config (#9199) 2025-08-21 12:49:45 -04:00
Danny Avila
aba0a93d1d 🧰 fix: Available Tools Retrieval with correct MCP Caching (#9181)
* fix: available tools retrieval with correct mcp caching and conversion

* test: Enhance PluginController tests with MCP tool mocking and conversion

* refactor: Simplify PluginController tests by removing unused mocks and enhancing test clarity
2025-08-20 22:33:54 -04:00
Danny Avila
a49b2b2833 🛣️ feat: directEndpoint Fetch Override for Custom Endpoints (#9179)
* feat: Add directEndpoint option to OpenAIConfigOptions and update fetch logic to override /chat/completions URL

* feat: Add directEndpoint support to fetchModels and update loadConfigModels logic
2025-08-20 15:55:32 -04:00
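
Illustrative sketch of the directEndpoint idea, assuming a small helper that decides whether to append the OpenAI-style path (the helper name is hypothetical):

```ts
// When a custom endpoint is marked direct, send requests to the configured URL as-is
// instead of appending /chat/completions.
function resolveCompletionsUrl(baseURL: string, directEndpoint?: boolean): string {
  return directEndpoint ? baseURL : `${baseURL.replace(/\/$/, '')}/chat/completions`;
}

console.log(resolveCompletionsUrl('https://example.com/my-proxy', true)); // https://example.com/my-proxy
console.log(resolveCompletionsUrl('https://example.com/v1', false)); // https://example.com/v1/chat/completions
```
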
Danny Avila
e0ebb7097e fix: AbortSignal Cleanup Logic for New Chats (#9177)
* chore: import paths for isEnabled and logger in title.js

*  fix: `AbortSignal` Cleanup Logic for New Chats

* test: Add `isNewConvo` parameter to onStart expectation in BaseClient tests
2025-08-20 14:56:07 -04:00
Dustin Healy
9a79635012 🌍 i18n: Add Bosnian and Norsk Bokmål Languages (#9176) 2025-08-20 14:17:23 -04:00
Danny Avila
ce19abc968 🆔 feat: Add User ID to Anthropic API Payload as Metadata (#9174) 2025-08-20 13:42:08 -04:00
Danny Avila
49cd3894aa 🛠️ fix: Restrict Editable Content Types & Consolidate Typing (#9173)
* fix: only allow editing expected content types & align typing app-wide

* chore: update TPayload to use TEditedContent type for editedContent
2025-08-20 13:21:47 -04:00
Danny Avila
da4aa37493 🛠️ refactor: Consolidate MCP Tool Caching (#9172)
* 🛠️ refactor: Consolidate MCP Tool Caching

* 🐍 fix: Correctly mock and utilize updateMCPUserTools in MCP route tests
2025-08-20 12:19:29 -04:00
github-actions[bot]
5a14ee9c6a 🌍 i18n: Update translation.json with latest translations (#9151)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-19 12:05:49 -04:00
Danny Avila
3394aa5030 🗃️ fix: Only Unlink Temp files on Error for Firebase File Uploads (#9152) 2025-08-19 12:05:27 -04:00
Danny Avila
cee0579e0e 📜 chore: update librechat.example.yaml 2025-08-19 11:26:47 -04:00
Helge Wiethoff
d6c173c94b 🐍 fix: Use Standard Mongoose Module Resolution in Config Scripts (#9143) 2025-08-19 11:15:09 -04:00
Andrés Restrepo
beff848a3f 🛡️ fix: Add Null Checks to Parameter Settings to Prevent Undefined Access (#9108)
Fixes "Cannot read properties of undefined (reading 'key')" error in parameter panels by filtering out null/undefined values before mapping operations.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-19 11:12:30 -04:00
Ihsan Soydemir
b9bc3123d6 🐛 fix: Correct Next Refill Date Logic for Balance Settings (#9121) 2025-08-19 11:11:33 -04:00
Dustin Healy
639c7ad6ad 📬 feat: Agent Support Email Address Validation (#9128)
* fix: email-regex realtime updates and looser validation

* feat: add zod validation to email address input

* refactor: emailValidation to email
2025-08-19 11:07:01 -04:00
Danny Avila
822e2310ce 🚮 fix: Remove Filtering Logic Before MCP Initialization (#9149) 2025-08-19 11:03:11 -04:00
Federico Ruggi
2a0a8f6beb 📛 refactor: Decouple MCP Dialog UI from BadgeRowContext (#8920)
* decouple MCP dialog from BadgeRowContext

* chore: import order and style according to guidelines

---------

Co-authored-by: Dustin Healy <dustinhealy1@gmail.com>
2025-08-18 11:20:45 -04:00
Danny Avila
a6fd32a15a 🏷️ refactor: Normalize Request Headers in setRequestHeaders (#9106) 2025-08-17 13:23:46 -04:00
github-actions[bot]
80a1a57fde 🌍 i18n: Update translation.json with latest translations (#9104)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-17 12:25:49 -04:00
Danny Avila
3576391482 📦 chore: Bump Package Versions and Update Data Provider Workflow (#9103)
* 📦 chore: Bump `@librechat/api` to v1.3.3

* 📦 chore: Bump `librechat-data-provider` to v0.8.003

* 📦 chore: Bump `@librechat/data-schemas` to v0.0.18

* 📦 chore: Bump `@librechat/client` to v0.2.6

* 📦 chore: Update workflow name for `librechat-data-provider` and add manual dispatch

* 📦 chore: Update Node.js version to 20 in data provider workflow
2025-08-17 12:07:42 -04:00
Danny Avila
55557f7cc8 🌀 chore: Resolve primeResources Typescript Warning 2025-08-16 21:00:16 -04:00
Danny Avila
d7d02766ea 🏷️ feat: Request Placeholders for Custom Endpoint & MCP Headers (#9095)
* feat: Add conversation ID support to custom endpoint headers

- Add LIBRECHAT_CONVERSATION_ID to customUserVars when provided
- Pass conversation ID to header resolution for dynamic headers
- Add comprehensive test coverage

Enables custom endpoints to access conversation context using {{LIBRECHAT_CONVERSATION_ID}} placeholder.

* fix: filter out unresolved placeholders from headers (thanks @MrunmayS)

* feat: add support for request body placeholders in custom endpoint headers

- Add {{LIBRECHAT_BODY_*}} placeholders for conversationId, parentMessageId, messageId
- Update tests to reflect new body placeholder functionality

* refactor resolveHeaders

* style: minor styling cleanup

* fix: type error in unit test

* feat: add body to other endpoints

* feat: add body for mcp tool calls

* chore: remove changes that unnecessarily increase scope after clarification of requirements

* refactor: move http.ts to packages/api and have RequestBody intersect with Express request body

* refactor: processMCPEnv now uses single object argument pattern

* refactor: update processMCPEnv to use 'options' parameter and align types across MCP connection classes

* feat: enhance MCP connection handling with dynamic request headers to pass request body fields

---------

Co-authored-by: Gopal Sharma <gopalsharma@gopal.sharma1>
Co-authored-by: s10gopal <36487439+s10gopal@users.noreply.github.com>
Co-authored-by: Dustin Healy <dustinhealy1@gmail.com>
2025-08-16 20:45:55 -04:00
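
A minimal sketch of the placeholder substitution this enables; the exact placeholder spellings for the body fields and the real resolveHeaders signature may differ:

```ts
type RequestBody = { conversationId?: string; parentMessageId?: string; messageId?: string };

function resolvePlaceholders(headers: Record<string, string>, body: RequestBody): Record<string, string> {
  // Illustrative variable map; the project resolves more sources (user vars, etc.).
  const vars: Record<string, string | undefined> = {
    LIBRECHAT_CONVERSATION_ID: body.conversationId,
    LIBRECHAT_BODY_CONVERSATIONID: body.conversationId,
    LIBRECHAT_BODY_PARENTMESSAGEID: body.parentMessageId,
    LIBRECHAT_BODY_MESSAGEID: body.messageId,
  };
  const resolved: Record<string, string> = {};
  for (const [key, value] of Object.entries(headers)) {
    const out = value.replace(/\{\{([A-Z0-9_]+)\}\}/g, (match, name) => vars[name] ?? match);
    // Drop headers whose placeholders could not be resolved, per the fix above.
    if (!/\{\{[A-Z0-9_]+\}\}/.test(out)) {
      resolved[key] = out;
    }
  }
  return resolved;
}
```
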
Marco Beretta
627f0bffe5 🔧 chore: Update package versions for data-provider and data-schemas (#9096) 2025-08-16 19:24:54 -04:00
Danny Avila
8d1d95371f ®️ chore: Remove Unnecessary sourcemapPathTransform in Rollup Config 2025-08-16 16:53:43 -04:00
Danny Avila
8bcdc041b2 ⚙️ refactor: Only register OpenID Strategy if Config Succeeded (#9094)
* fix: register the openId strategy only if setupOpenId succeeded

* chore: linting and update imports

* refactor: extract OpenID configuration into a separate function

---------

Co-authored-by: Denis <denis.sheremetov@gmail.com>
2025-08-16 14:49:03 -04:00
Luís André
9b6395d955 🌐 feat: Configurable Redis Cluster Mode with Single URI Support (#9039)
*  feat: Add support for enabling Redis cluster configuration

*  feat: Enhance Redis client initialization to support cluster configuration without multiple URIs

*  feat: Add tests for USE_REDIS_CLUSTER configuration and validation

* 🐞 fix: Remove unnecessary blank line in cacheConfig tests
2025-08-16 14:41:53 -04:00
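
Sketch of choosing cluster vs. single-client mode from one URI behind the USE_REDIS_CLUSTER flag (simplified; the actual cacheConfig handles more options):

```ts
import Redis, { Cluster } from 'ioredis';

function createRedisClient(uri: string): Redis | Cluster {
  if (process.env.USE_REDIS_CLUSTER === 'true') {
    // Cluster mode bootstrapped from a single URI; ioredis discovers the remaining nodes.
    const { hostname, port } = new URL(uri);
    return new Redis.Cluster([{ host: hostname, port: Number(port) || 6379 }]);
  }
  return new Redis(uri);
}
```
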
Danny Avila
ad1503abdc 🧪 feat: Claude Sonnet 4 - 1M Context Window (Beta Header) (#9093)
* adding beta header context-1m-2025-08-07 to claude sonnet 4 to increase context window to 1M tokens

* adding context-1m beta header to test cases

* ci: Update Anthropic `getLLMConfig` tests and headers for model variations

- Refactored test cases to ensure proper handling of model variations for 'claude-sonnet-4'.
- Cleaned up unused mock implementations in tests for better clarity and performance.

* refactor: regex in header retrieval for 'claude-sonnet-4' models

* refactor: default tokens for 'claude-sonnet-4' to `1,000,000`

* refactor: add token value retrieval and pattern matching to model tests

---------

Co-authored-by: Dirk Petersen <no-reply@nowhere.com>
2025-08-16 13:36:46 -04:00
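
Sketch of attaching the beta header for matching models; the model regex and the way headers are merged into the client options are simplified here, though the header value comes from the commit above:

```ts
function getAnthropicHeaders(model: string): Record<string, string> {
  const headers: Record<string, string> = {};
  // Opt matching Sonnet 4 models into the 1M-token context window beta.
  if (/claude-sonnet-4/.test(model)) {
    headers['anthropic-beta'] = 'context-1m-2025-08-07';
  }
  return headers;
}

console.log(getAnthropicHeaders('claude-sonnet-4-20250514')); // { 'anthropic-beta': 'context-1m-2025-08-07' }
```
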
Jón Levy
a6d7ebf22e 🛠️ fix: Workaround for Federated OpenID Nonce Validation Issues (#9067)
* fix: generate nonce for federated providers

* fix: lint issues

* chore: comments

---------

Co-authored-by: Danny Avila <danacordially@gmail.com>
2025-08-16 13:14:01 -04:00
Danny Avila
cebf140bce 🔄 fix: Add Azure to Recognized and Content array providers for MCP Tool Calls (#9092) 2025-08-16 12:44:12 -04:00
Danny Avila
cc0cf359a2 🔧 fix: Remove Unused Duplicate Group Methods (#9091) 2025-08-16 12:17:12 -04:00
Danny Avila
3547873bc4 🧑‍💻 refactor: Secure Field Selection for 2FA & API Build Sourcemap (#9087)
* refactor: `packages/api` build scripts for better inline debugging

* refactor: Explicitly select secure fields as no longer returned by default, exclude backupCodes from user data retrieval in authentication and 2FA processes

* refactor: Backup Codes UI to not expect backup codes, only regeneration

* refactor: Ensure secure fields are deleted from user data in getUserController
2025-08-15 18:55:49 -04:00
Danny Avila
50b7bd6643 🔄 fix: Ensure lastRefill Date for Existing Users & Refactor Balance Middleware (#9086)
- Deleted setBalanceConfig middleware and its associated file.
- Introduced createSetBalanceConfig factory function to create middleware for synchronizing user balance settings.
- Updated auth and oauth routes to use the new balance configuration middleware.
- Added comprehensive tests for the new balance middleware functionality.
- Updated package versions and dependencies in package.json and package-lock.json.
- Added balance types and updated middleware index to export new balance middleware.
2025-08-15 17:02:49 -04:00
Danny Avila
81186312ef 🪙 refactor: Remove Title maxTokens & Support New LMStudio/Ollama Reasoning Format (#9085)
* 📦 chore: bump `@librechat/agents` to v2.4.76

* refactor: remove default maxTokens from title `clientOptions`
2025-08-15 15:29:16 -04:00
Marco Beretta
4ec7bcb60f 🪟 style: Agent Marketplace UI Responsiveness, a11y, and Navigation (#9068)
* refactor: Agent Marketplace Button with access control

* fix(agent-marketplace): update marketplace UI and access control

* fix(agent-card): handle optional agent description for accessibility

* fix(agent-card): remove unnecessary icon checks from tests

* chore: remove unused keys

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-08-15 14:59:10 -04:00
Marco Beretta
c78fd0fc83 🔐 feat: Group schema support, refine user schema security, and improve types (#9070)
* feat: Add groupSchema and update userSchema to hide sensitive fields

* chore: Bump version of @librechat/data-schemas to 0.0.16
2025-08-14 22:58:54 -04:00
Danny Avila
d711fc7852 🤝 refactor: Clarify labels for Sharing Permissions 2025-08-14 22:56:49 -04:00
Danny Avila
6af7efd0f4 🤖 fix: Active Tab Logic for Promoted Agents in Agent Marketplace (#9069)
* 🤖 fix: Active Tab Logic in AgentMarketplace for Promoted Agents

* test: increase render time threshold for Virtual Scrolling Performance test
2025-08-14 22:56:39 -04:00
Danny Avila
d57e7aec73 🥷 fix: Correct Agents Handling for Marketplace Users (#9065)
* refactor: Introduce ModelSelectorChatContext and integrate with ModelSelector

* fix: agents handling in ModelSelector to show expected agents if user has marketplace access
2025-08-14 19:13:48 -04:00
Danny Avila
e4e25aaf2b 🔎 feat: Add Prompt and Agent Permissions Migration Checks (#9063)
* chore: fix mock typing in packages/api tests

* chore: improve imports, type handling and method signatures for MCPServersRegistry

* chore: use enum in migration scripts

* chore: ParsedServerConfig type to enhance server configuration handling

* feat: Implement agent permissions migration check and logging

* feat: Integrate migration checks into server initialization process

* feat: Add prompt permissions migration check and logging to server initialization

* chore: move prompt formatting functions to dedicated prompts dir
2025-08-14 17:20:00 -04:00
Danny Avila
e8ddd279fd 🎏 refactor: Streamline Role Permissions from Interface Config 2025-08-14 02:15:33 -04:00
Danny Avila
b742c8c7f9 👁️‍🗨️ refactor: use PermissionBits.VIEW in useAgentsMap for requiredPermission 2025-08-13 16:24:26 -04:00
Danny Avila
803ade8601 🔄 refactor: Principal Type Handling in Search Principals to use Array 2025-08-13 16:24:26 -04:00
Danny Avila
dcd96c29c5 🗨️ refactor: Optimize Prompt Queries
feat: Refactor prompt and prompt group schemas; move types to separate file

feat: Implement paginated access to prompt groups with filtering and public visibility

refactor: Add PromptGroups context provider and integrate it into relevant components

refactor: Optimize filter change handling and query invalidation in usePromptGroupsNav hook

refactor: Simplify context usage in FilterPrompts and GroupSidePanel components
2025-08-13 16:24:25 -04:00
Danny Avila
53c31b85d0 🗞️ refactor: Apply Role Permissions at Startup only if Missing or Configured 2025-08-13 16:24:25 -04:00
Danny Avila
d07c2b3475 🛒 feat: Implement Marketplace Permissions Management UI
- Added MarketplaceAdminSettings component for managing marketplace permissions.
- Updated roles.js to include marketplace permissions in the API.
- Refactored interface.js to streamline marketplace permissions handling.
- Enhanced Marketplace component to integrate admin settings.
- Updated localization files to include new marketplace-related keys.
- Added new API endpoint for updating marketplace permissions in data-service.
2025-08-13 16:24:24 -04:00
Danny Avila
a434d28579 🧑‍🤝‍🧑 feat: Add People Picker Permissions Management UI 2025-08-13 16:24:24 -04:00
Marco Beretta
d82a63642d 🖼️ style: Improve Marketplace & Sharing Dialog UI
feat: Enhance CategoryTabs and Marketplace components for better responsiveness and navigation

feat: Refactor AgentCard and AgentGrid components for improved layout and accessibility

feat: Implement animated category transitions in AgentMarketplace and update NewChat component layout

feat: Refactor UI components for improved styling and accessibility in sharing dialogs

refactor: remove GenericManagePermissionsDialog and GrantAccessDialog components

- Deleted GenericManagePermissionsDialog and GrantAccessDialog components to streamline sharing functionality.
- Updated ManagePermissionsDialog to utilize AccessRolesPicker directly.
- Introduced UnifiedPeopleSearch for improved people selection experience.
- Enhanced PublicSharingToggle with InfoHoverCard for better user guidance.
- Adjusted AgentPanel to change error status to warning for duplicate agent versions.
- Updated translations to include new keys for search and access management.

feat: Add responsive design for SelectedPrincipalsList and improve layout in GenericGrantAccessDialog

feat: Enhance styling in SelectedPrincipalsList and SearchPicker components for improved UI consistency

feat: Improve PublicSharingToggle component with enhanced styling and accessibility features

feat: Introduce InfoHoverCard component and refactor enums for better organization

feat: Implement infinite scroll for agent grids and enhance performance

- Added `useInfiniteScroll` hook to manage infinite scrolling behavior in agent grids.
- Integrated infinite scroll functionality into `AgentGrid` and `VirtualizedAgentGrid` components.
- Updated `AgentMarketplace` to pass the scroll container to the agent grid components.
- Refactored loading indicators to show a spinner instead of a "Load More" button.
- Created `VirtualizedAgentGrid` component for optimized rendering of agent cards using virtualization.
- Added performance tests for `VirtualizedAgentGrid` to ensure efficient handling of large datasets.
- Updated translations to include new messages for end-of-results scenarios.

chore: Remove unused permission-related UI localization keys

ci: Update Agent model tests to handle duplicate support_contact updates

- Modified tests to ensure that updating an agent with the same support_contact does not create a new version and returns successfully.
- Enhanced verification for partial changes in support_contact, confirming no new version is created when content remains the same.

chore: Address ESLint, clean up unused imports and improve prop definitions in various components

ci: fix tests

ci: update tests

chore: remove unused search localization keys
2025-08-13 16:24:24 -04:00
Danny Avila
9585db14ba 🤖 fix: Edge Case for Agent List Access Control
- Refactored `getListAgentsByAccess` to streamline query construction for accessible agents.
- Added comprehensive security tests for `getListAgentsByAccess` and `getListAgentsHandler` to ensure proper access control and filtering based on user permissions.
- Enhanced test coverage for various scenarios, including pagination, category filtering, and handling of non-existent IDs.
2025-08-13 16:24:23 -04:00
Danny Avila
c191af6c9b 🎨 style: Theming in SharePointPickerDialog, PrincipalAvatar, and PeoplePickerSearchItem 2025-08-13 16:24:23 -04:00
Danny Avila
39346d6b8e 🛂 feat: Role as Permission Principal Type
WIP: Role as Permission Principal Type

WIP: add user role check optimization to user principal check, update type comparisons

WIP: cover edge cases for string vs ObjectId handling in permission granting and checking

chore: Update people picker access middleware to use PrincipalType constants

feat: Enhance people picker access control to include roles permissions

chore: add missing default role schema values for people picker perms, cleanup typing

feat: Enhance PeoplePicker component with role-specific UI and localization updates

chore: Add missing `VIEW_ROLES` permission to role schema
2025-08-13 16:24:23 -04:00
Danny Avila
28d63dab71 🔧 refactor: Integrate PrincipalModel Enum for Principal Handling
- Replaced string literals for principal models ('User', 'Group') with the new PrincipalModel enum across various models, services, and tests to enhance type safety and consistency.
- Updated permission handling in multiple files to utilize the PrincipalModel enum, improving maintainability and reducing potential errors.
- Ensured all relevant tests reflect these changes to maintain coverage and functionality.
2025-08-13 16:24:22 -04:00
Danny Avila
49d1cefe71 🔧 refactor: Add and use PrincipalType Enum
- Replaced string literals for principal types ('user', 'group', 'public') with the new PrincipalType enum across various models, services, and tests for improved type safety and consistency.
- Updated permission handling in multiple files to utilize the PrincipalType enum, enhancing maintainability and reducing potential errors.
- Ensured all relevant tests reflect these changes to maintain coverage and functionality.
2025-08-13 16:24:22 -04:00
Danny Avila
0262c25989 🌏 chore: Remove unused localization keys from translation.json
- Deleted keys related to sharing prompts and public access that are no longer in use, streamlining the localization file.
2025-08-13 16:24:21 -04:00
Danny Avila
90b037a67f 🧪 ci: Update PermissionService tests for PromptGroup resource type
- Refactor tests to use PromptGroup roles instead of Project roles.
- Initialize models and seed default roles in test setup.
- Update error handling for non-existent resource types.
- Ensure proper cleanup of test data while retaining seeded roles.
2025-08-13 16:24:21 -04:00
Danny Avila
fc8fd489d6 🔗 fix: File Citation Processing to Use Tool Artifacts 2025-08-13 16:24:21 -04:00
Danny Avila
81b32e400a 🔧 refactor: Organize Sharing/Agent Components and Improve Type Safety
refactor: organize Sharing/Agent components, improve type safety for resource types and access role ids, rename enums to PascalCase

refactor: organize Sharing/Agent components, improve type safety for resource types and access role ids

chore: move sharing related components to dedicated "Sharing" directory

chore: remove PublicSharingToggle component and update index exports

chore: move non-sidepanel agent components to `~/components/Agents`

chore: move AgentCategoryDisplay component with tests

chore: remove commented out code

refactor: change PERMISSION_BITS from const to enum for better type safety

refactor: reorganize imports in GenericGrantAccessDialog and update index exports for hooks

refactor: update type definitions to use ACCESS_ROLE_IDS for improved type safety

refactor: remove unused canAccessPromptResource middleware and related code

refactor: remove unused prompt access roles from createAccessRoleMethods

refactor: update resourceType in AclEntry type definition to remove unused 'prompt' value

refactor: introduce ResourceType enum and update resourceType usage across data provider files for improved type safety

refactor: update resourceType usage to ResourceType enum across sharing and permissions components for improved type safety

refactor: standardize resourceType usage to ResourceType enum across agent and prompt models, permissions controller, and middleware for enhanced type safety

refactor: update resourceType references from PROMPT_GROUP to PROMPTGROUP for consistency across models, middleware, and components

refactor: standardize access role IDs and resource type usage across agent, file, and prompt models for improved type safety and consistency

chore: add typedefs for TUpdateResourcePermissionsRequest and TUpdateResourcePermissionsResponse to enhance type definitions

chore: move SearchPicker to PeoplePicker dir

refactor: implement debouncing for query changes in SearchPicker for improved performance

chore: fix typing, import order for agent admin settings

fix: agent admin settings, prevent agent form submission

refactor: rename `ACCESS_ROLE_IDS` to `AccessRoleIds`

refactor: replace PermissionBits with PERMISSION_BITS

refactor: replace PERMISSION_BITS with PermissionBits
2025-08-13 16:24:20 -04:00
Danny Avila
ae732b2ebc 🗨️ feat: Granular Prompt Permissions via ACL and Permission Bits
feat: Implement prompt permissions management and access control middleware

fix: agent deletion process to remove associated permissions and ACL entries

fix: Import Permissions for enhanced access control in GrantAccessDialog

feat: use PromptGroup for access control

- Added migration script for PromptGroup permissions, categorizing groups into global view access and private groups.
- Created unit tests for the migration script to ensure correct categorization and permission granting.
- Introduced middleware for checking access permissions on PromptGroups and prompts via their groups.
- Updated routes to utilize new access control middleware for PromptGroups.
- Enhanced access role definitions to include roles specific to PromptGroups.
- Modified ACL entry schema and types to accommodate PromptGroup resource type.
- Updated data provider to include new access role identifiers for PromptGroups.

feat: add generic access management dialogs and hooks for resource permissions

fix: remove duplicate imports in FileContext component

fix: remove duplicate mongoose dependency in package.json

feat: add access permissions handling for dynamic resource types and add promptGroup roles

feat: implement centralized role localization and update access role types

refactor: simplify author handling in prompt group routes and enhance ACL checks

feat: implement addPromptToGroup functionality and update PromptForm to use it

feat: enhance permission handling in ChatGroupItem, DashGroupItem, and PromptForm components

chore: rename migration script for prompt group permissions and update package.json scripts

chore: update prompt tests
2025-08-13 16:24:20 -04:00
Danny Avila
7e7e75714e 🧹 chore: Add Back Agent-Specific File Retrieval and Deletion Permissions 2025-08-13 16:24:19 -04:00
Praneeth
ff54cbffd9 🔒 feat: Implement Granular File Storage Strategies and Access Control Middleware 2025-08-13 16:24:19 -04:00
Danny Avila
74e029e78f 🧪 ci: Update Test Files & fix ESLint issues 2025-08-13 16:24:19 -04:00
Danny Avila
75324e1c7e 🎨 style: Theming and Consistency Improvements for Agent Marketplace
style: AccessRolesPicker to use DropdownPopup, theming, import order, localization

refactor: Update localization keys for Agent Marketplace in NewChat component, remove duplicate key

style: Adjust layout and font size in NewChat component for Agent Marketplace button

style: theming in AgentGrid

style: Update theming and text colors across Agent components for improved consistency

chore: import order

style: Replace Dialog with OGDialog and update content components in AgentDetail

refactor: Simplify AgentDetail component by removing dropdown menu and replacing it with a copy link button

style: Enhance scrollbar visibility and layout in AgentMarketplace and CategoryTabs components

style: Adjust layout in AgentMarketplace component by removing unnecessary padding from the container

style: Refactor layout in AgentMarketplace component by reorganizing hero section and sticky wrapper for improved structure with collapsible header effect

style: Improve responsiveness and layout in AgentMarketplace component by adjusting header visibility and modifying container styles based on screen size

fix: Update localization key for no categories message in CategoryTabs component and corresponding test

style: Add className prop to OpenSidebar component for improved styling flexibility and update Header to utilize it for responsive design

style: Enhance layout and scrolling behavior in CategoryTabs component by adding scroll snap properties and adjusting class names for improved user experience

style: Update AgentGrid component layout and skeleton structure for improved visual consistency and responsiveness
2025-08-13 16:24:18 -04:00
Praneeth
949682ef0f 🏪 feat: Agent Marketplace
bugfix: Enhance Agent and AgentCategory schemas with new fields for category, support contact, and promotion status

refactored and moved agent category methods and schema to data-schema package

🔧 fix: Merge and Rebase Conflicts

- Move AgentCategory from api/models to @packages/data-schemas structure
  - Add schema, types, methods, and model following codebase conventions
  - Implement auto-seeding of default categories during AppService startup
  - Update marketplace controller to use new data-schemas methods
  - Remove old model file and standalone seed script

refactor: unify agent marketplace to single endpoint with cursor pagination

  - Replace multiple marketplace routes with unified /marketplace endpoint
  - Add query string controls: category, search, limit, cursor, promoted, requiredPermission
  - Implement cursor-based pagination replacing page-based system
  - Integrate ACL permissions for proper access control
  - Fix ObjectId constructor error in Agent model
  - Update React components to use unified useGetMarketplaceAgentsQuery hook
  - Enhance type safety and remove deprecated useDynamicAgentQuery
  - Update tests for new marketplace architecture
  - Known issues: "see more" button after category switching + unit tests

feat: add icon property to ProcessedAgentCategory interface

- Add useMarketplaceAgentsInfiniteQuery and useGetAgentCategoriesQuery to client/src/data-provider/Agents/
  - Replace manual pagination in AgentGrid with infinite query pattern
  - Update imports to use local data provider instead of librechat-data-provider
  - Add proper permission handling with PERMISSION_BITS.VIEW/EDIT constants
  - Improve agent access control by adding requiredPermission validation in backend
  - Remove manual cursor/state management in favor of infinite query built-ins
  - Maintain existing search and category filtering functionality

refactor: consolidate agent marketplace endpoints into main agents API and improve data management consistency

  - Remove dedicated marketplace controller and routes, merging functionality into main agents v1 API
  - Add countPromotedAgents function to Agent model for promoted agents count
  - Enhance getListAgents handler with marketplace filtering (category, search, promoted status)
  - Move getAgentCategories from marketplace to v1 controller with same functionality
  - Update agent mutations to invalidate marketplace queries and handle multiple permission levels
  - Improve cache management by updating all agent query variants (VIEW/EDIT permissions)
  - Consolidate agent data access patterns for better maintainability and consistency
  - Remove duplicate marketplace route definitions and middleware

selected view-only agents injected in the dropdown

fix: remove minlength validation for support contact name in agent schema

feat: add validation and error messages for agent name in AgentConfig and AgentPanel

fix: update agent permission check logic in AgentPanel to simplify condition

Fix linting WIP

Fix Unit tests WIP

ESLint fixes

eslint fix

refactor: enhance isDuplicateVersion function in Agent model for improved comparison logic

- Introduced handling for undefined/null values in array and object comparisons.
- Normalized array comparisons to treat undefined/null as empty arrays.
- Added deep comparison for objects and improved handling of primitive values.
- Enhanced projectIds comparison to ensure consistent MongoDB ObjectId handling.

refactor: remove redundant properties from IAgent interface in agent schema

chore: update localization for agent detail component and clean up imports

ci: update access middleware tests

chore: remove unused PermissionTypes import from Role model

ci: update AclEntry model tests

ci: update button accessibility labels in AgentDetail tests

refactor: update exhaustive dep. lint warning

🔧 fix: Fixed agent actions access

feat: Add role-level permissions for agent sharing people picker

  - Add PEOPLE_PICKER permission type with VIEW_USERS and VIEW_GROUPS permissions
  - Create custom middleware for query-aware permission validation
  - Implement permission-based type filtering in PeoplePicker component
  - Hide people picker UI when user lacks permissions, show only public toggle
  - Support granular access: users-only, groups-only, or mixed search modes

refactor: Replace marketplace interface config with permission-based system

  - Add MARKETPLACE permission type to handle marketplace access control
  - Update interface configuration to use role-based marketplace settings (admin/user)
  - Replace direct marketplace boolean config with permission-based checks
  - Modify frontend components to use marketplace permissions instead of interface config
  - Update agent query hooks to use marketplace permissions for determining permission levels
  - Add marketplace configuration structure similar to peoplePicker in YAML config
  - Backend now sets MARKETPLACE permissions based on interface configuration
  - When marketplace enabled: users get agents with EDIT permissions in dropdown lists  (builder mode)
  - When marketplace disabled: users get agents with VIEW permissions  in dropdown lists (browse mode)

🔧 fix: Redirect to New Chat if No Marketplace Access and Required Agent Name Placeholder (#8213)

* Fix: Fix the redirect to new chat page if access to marketplace is denied

* Fixed the required agent name placeholder

---------

Co-authored-by: Atef Bellaaj <slalom.bellaaj@external.daimlertruck.com>

chore: fix tests, remove unnecessary imports

refactor: Implement permission checks for file access via agents

- Updated `hasAccessToFilesViaAgent` to utilize permission checks for VIEW and EDIT access.
- Replaced project-based access validation with permission-based checks.
- Enhanced tests to cover new permission logic and ensure proper access control for files associated with agents.
- Cleaned up imports and initialized models in test files for consistency.

refactor: Enhance test setup and cleanup for file access control

- Introduced modelsToCleanup array to track models added during tests for proper cleanup.
- Updated afterAll hooks in test files to ensure all collections are cleared and only added models are deleted.
- Improved consistency in model initialization across test files.
- Added comments for clarity on cleanup processes and test data management.

chore: Update Jest configuration and test setup for improved timeout handling

- Added a global test timeout of 30 seconds in jest.config.js.
- Configured jest.setTimeout in jestSetup.js to allow individual test overrides if needed.
- Enhanced test reliability by ensuring consistent timeout settings across all tests.

refactor: Implement file access filtering based on agent permissions

- Introduced `filterFilesByAgentAccess` function to filter files based on user access through agents.
- Updated `getFiles` and `primeFiles` functions to utilize the new filtering logic.
- Moved `hasAccessToFilesViaAgent` function from the File model to permission services, adjusting imports accordingly
- Enhanced tests to ensure proper access control and filtering behavior for files associated with agents.

fix: make support_contact field a nested object rather than a sub-document

refactor: Update support_contact field initialization in agent model

- Removed handling for empty support_contact object in createAgent function.
- Changed default value of support_contact in agent schema to undefined.

test: Add comprehensive tests for support_contact field handling and versioning

refactor: remove unused avatar upload mutation field and add informational toast for success

chore: add missing SidePanelProvider for AgentMarketplace and organize imports

fix: resolve agent selection race condition in marketplace HandleStartChat
- Set agent in localStorage before newConversation to prevent useSelectorEffects from auto-selecting previous agent

fix: resolve agent dropdown showing raw ID instead of agent info from URL

  - Add proactive agent fetching when agent_id is present in URL parameters
  - Inject fetched agent into agents cache so dropdowns display proper name/avatar
  - Use useAgentsMap dependency to ensure proper cache initialization timing
  - Prevents raw agent IDs from showing in UI when visiting shared agent links

Fix: Agents endpoint renamed to "My Agent" for less confusion with the Marketplace agents.

chore: fix ESLint issues and Test Mocks

ci: update permissions structure in loadDefaultInterface tests

- Refactored permissions for MEMORY and added new permissions for MARKETPLACE and PEOPLE_PICKER.
- Ensured consistent structure for permissions across different types.

feat: support_contact validation to allow empty email strings
2025-08-13 16:24:18 -04:00
Danny Avila
66bd419baa 🔐 feat: Granular Role-based Permissions + Entra ID Group Discovery (#7804)
WIP: pre-granular-permissions commit

feat: Add category and support contact fields to Agent schema and UI components

Revert "feat: Add category and support contact fields to Agent schema and UI components"

This reverts commit c43a52b4c9.

Fix: Update import for renderHook in useAgentCategories.spec.tsx

fix: Update icon rendering in AgentCategoryDisplay tests to use empty spans

refactor: Improve category synchronization logic and clean up AgentConfig component

refactor: Remove unused UI flow translations from translation.json

feat: agent marketplace features

🔐 feat: Granular Role-based Permissions + Entra ID Group Discovery (#7804)
2025-08-13 16:24:17 -04:00
Jordi Higuera
aa42759ffd 🍃 feat: Add MongoDB Connection Pool Configuration Options (#8537)
* 🔧 Feat: Add MongoDB connection pool configuration options to environment variables

* 🔧 feat: Add environment variables for automatic index creation and collection creation in MongoDB connection

---------

Co-authored-by: Atef Bellaaj <slalom.bellaaj@external.daimlertruck.com>
2025-08-13 16:24:17 -04:00
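For context, the MONGO_* variables added here correspond by name to MongoDB driver / Mongoose connection options. A hypothetical mapping is sketched below; it is not the PR's actual implementation.

```ts
// Hypothetical helper: map the new MONGO_* env vars onto Mongoose/driver options of the same names.
import mongoose from 'mongoose';

const toInt = (value?: string) => (value ? parseInt(value, 10) : undefined);
const toBool = (value?: string) => (value ? value.toLowerCase() !== 'false' : undefined);

export async function connectToMongo() {
  await mongoose.connect(process.env.MONGO_URI ?? 'mongodb://127.0.0.1:27017/LibreChat', {
    maxPoolSize: toInt(process.env.MONGO_MAX_POOL_SIZE),
    minPoolSize: toInt(process.env.MONGO_MIN_POOL_SIZE),
    maxConnecting: toInt(process.env.MONGO_MAX_CONNECTING),
    maxIdleTimeMS: toInt(process.env.MONGO_MAX_IDLE_TIME_MS),
    waitQueueTimeoutMS: toInt(process.env.MONGO_WAIT_QUEUE_TIMEOUT_MS),
    autoIndex: toBool(process.env.MONGO_AUTO_INDEX),   // disable automatic index creation when "false"
    autoCreate: toBool(process.env.MONGO_AUTO_CREATE), // disable automatic createCollection() when "false"
  });
}
```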
Danny Avila
52e59e40be 📚 feat: Add Source Citations for File Search in Agents (#8652)
* feat: Source Citations for file_search in Agents

* Fix: Added citation limits and relevance score to app service. Removed duplicate tests

*  feat: implement Role-level toggle to optionally disable file Source Citation in Agents

* 🐛 fix: update mock for librechat-data-provider to include PermissionTypes and SystemRoles

---------

Co-authored-by: Praneeth <praneeth.goparaju@slalom.com>
2025-08-13 16:24:16 -04:00
Danny Avila
a955097faf 📁 feat: Integrate SharePoint File Picker and Download Workflow (#8651)
* feat(sharepoint): integrate SharePoint file picker and download workflow
Introduces end‑to‑end SharePoint import support:
* Token exchange with Microsoft Graph and scope management (`useSharePointToken`)
* Re‑usable hooks: `useSharePointPicker`, `useSharePointDownload`,
  `useSharePointFileHandling`
* FileSearch dropdown now offers **From Local Machine** / **From SharePoint**
  sources and gracefully falls back when SharePoint is disabled
* Agent upload model, `AttachFileMenu`, and `DropdownPopup` extended for
  SharePoint files and sub‑menus
* Blurry overlay with progress indicator and `maxSelectionCount` limit during
  downloads
* Cache‑flush utility (`config/flush-cache.js`) supporting Redis & filesystem,
  with dry‑run and npm script
* Updated `SharePointIcon` (uses `currentColor`) and new i18n keys
* Bug fixes: placeholder syntax in progress message, picker event‑listener
  cleanup
* Misc style and performance optimizations

* Fix ESLint warnings

---------

Co-authored-by: Atef Bellaaj <slalom.bellaaj@external.daimlertruck.com>
2025-08-13 16:24:16 -04:00
Theo N. Truong
b6413b06bc 🐞 fix: Update MCP server initialization to skip non-startup and oauth servers (#9049) 2025-08-13 16:19:55 -04:00
Danny Avila
e6cebdf2b6 🚌 fix: MCP Runtime Errors while Initializing (#9046)
* chore: Remove eslint-plugin-perfectionist from dependencies

* 🚌 fix: MCP Runtime Errors while Initializing

* chore: Bump @librechat/api version to 1.3.1

* chore: import order

* chore: import order
2025-08-13 14:41:38 -04:00
github-actions[bot]
3eb6debe6a 🌍 i18n: Update translation.json with latest translations (#9020)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-13 11:47:59 -04:00
Theo N. Truong
8780a78165 ♻️ refactor: MCPManager for Scalability, Fix App-Level Detection, Add Lazy Connections (#8930)
* feat: MCP Connection management overhaul - Making MCPManager manageable

Refactor the monolithic MCPManager into focused, single-responsibility classes:

• MCPServersRegistry: Server configuration discovery and metadata management
• UserConnectionManager: Manages user-level connections
• ConnectionsRepository: Low-level connection pool with lazy loading
• MCPConnectionFactory: Handles MCP connection creation with OAuth support

New Features:
• Lazy loading of app-level connections for horizontal scaling
• Automatic reconnection for app-level connections
• Enhanced OAuth detection with explicit requiresOAuth flag
• Centralized MCP configuration management

Bug Fixes:
• App-level connection detection in MCPManager.callTool
• MCP Connection Reinitialization route behavior

Optimizations:
• MCPConnection.isConnected() caching to reduce overhead
• Concurrent server metadata retrieval instead of sequential

This refactoring addresses scalability bottlenecks and improves reliability
while maintaining backward compatibility with existing configurations.

* feat: Enabled import order in eslint.

* # Moved tests to __tests__ folder
# added tests for MCPServersRegistry.ts

* # Add unit tests for ConnectionsRepository functionality

* # Add unit tests for MCPConnectionFactory functionality

* # Reorganize MCP connection tests and improve error handling

* # reordering imports

* # Update testPathIgnorePatterns in jest.config.mjs to exclude development TypeScript files

* # removed mcp/manager.ts
2025-08-13 11:45:06 -04:00
Joseph Licata
9dbf153489 🐞 fix: Prevent Type Error in Successful Bookmark Deletion (#9014) 2025-08-12 22:38:20 -04:00
Theo N. Truong
4799593e1a 🔧 fix: Redis cluster connection errors and configuration (#9016)
- Fix ioredis cluster initialization to use proper node array format
- Fix reconnectOnError to return numeric delay instead of boolean
- Fix keyv redis cluster configuration to use proper URL objects
- Resolves "undefinedms" reconnection delay errors
2025-08-12 22:23:29 -04:00
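A rough sketch of the node-array cluster initialization described above, with a retry strategy that returns a numeric delay in milliseconds; the URI parsing and option values are illustrative assumptions, not the actual fix.

```ts
// Illustrative only: ioredis Cluster expects an array of { host, port } nodes.
import { Cluster } from 'ioredis';

const nodes = (process.env.REDIS_URI ?? 'redis://127.0.0.1:7001,redis://127.0.0.1:7002')
  .split(',')
  .map((uri) => {
    const { hostname, port } = new URL(uri.trim());
    return { host: hostname, port: Number(port) };
  });

export const cluster = new Cluster(nodes, {
  // Returning a number of milliseconds (never a boolean) avoids the
  // "undefinedms" reconnection-delay log mentioned in the commit.
  clusterRetryStrategy: (times) => Math.min(times * 100, 2_000),
});
```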
faustoFF
a199b87478 🐋 ci: Optimize Dockerfile Caching (#8480)
* 🐳 chore(docker): optimize Docker npm dependency caching

* Update Dockerfile

---------

Co-authored-by: Danny Avila <danacordially@gmail.com>
2025-08-12 20:00:59 -04:00
Danny Avila
007570b5c6 🎨 style: Add missing markdown font size variable to CSS (#9011) 2025-08-12 10:19:29 -04:00
Jeffrey Bulanadi
5943d5346c 📑 docs: Fix Typos in JSDoc and Doc Files (#8998)
- Fix grammar in translations README: 'if has not been ran' → 'if it has not been run'
- Fix spacing in JSDoc comments: 'at theend' → 'at the end' (2 instances)
2025-08-12 10:18:55 -04:00
Danny Avila
052e61b735 v0.8.0-rc2 (#9000) 2025-08-11 18:58:21 -04:00
Danny Avila
1ccac58403 🔒 fix: Provider Validation for Social, OpenID, SAML, and LDAP Logins (#8999)
* fix: social login provider crossover

* feat: Enhance OpenID login handling and add tests for provider validation

* refactor: authentication error handling to use ErrorTypes.AUTH_FAILED enum

* refactor: update authentication error handling in LDAP and SAML strategies to use ErrorTypes.AUTH_FAILED enum

* ci: Add validation for login with existing email and different provider in SAML strategy

chore: Add logging for existing users with different providers in LDAP, SAML, and Social Login strategies
2025-08-11 18:51:46 -04:00
Clay Rosenthal
04d74a7e07 🪖 ci: Helm OCI Publishing (#7256)
* Adding helm oci publishing (#3)

Signed-off-by: Clay Rosenthal <clayros@amazon.com>

* Update chart version

Signed-off-by: Clay Rosenthal <clayros@amazon.com>

* Update helm release

Signed-off-by: Clay Rosenthal <clayros@amazon.com>

* Update helm chart package path

Signed-off-by: Clay Rosenthal <clayros@amazon.com>

---------

Signed-off-by: Clay Rosenthal <clayros@amazon.com>
2025-08-11 16:21:05 -04:00
github-actions[bot]
0fdca8ddbd 🌍 i18n: Update translation.json with latest translations (#8996)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-11 16:19:37 -04:00
Danny Avila
c5ca621efd 🧑‍💼 feat: Add Agent Model Validation (#8995)
* fix: Update logger import to use data-schemas module

* feat: agent model validation

* fix: Remove invalid error messages from translation file
2025-08-11 14:26:28 -04:00
Danny Avila
8cefa566da 📸 fix: Avatar Handling for Social Login (#8993)
- Updated `handleExistingUser` function to improve avatar handling logic, including checks for manual flags and null/undefined avatars.
- Introduced a new test suite for `handleExistingUser` covering various scenarios, ensuring robust functionality for avatar updates in both local and non-local storage contexts.
2025-08-11 11:50:46 -04:00
Danny Avila
7e4c8a5d0d 🛡️ fix: OTP Verification For 2FA Disable Operation (#8975) 2025-08-10 15:05:16 -04:00
Danny Avila
edf33bedcb 🛂 feat: Payload limits and Validation for User-created Memories (#8974) 2025-08-10 14:46:16 -04:00
Dustin Healy
21e00168b1 🪙 fix: Max Output Tokens Refactor for Responses API (#8972)
🪙 fix: Max Output Tokens Refactor for Responses API (#8972)

chore: Remove `max_output_tokens` from model kwargs in `titleConvo` if provided
2025-08-10 14:08:35 -04:00
github-actions[bot]
da3730b7d6 🌍 i18n: Update translation.json with latest translations (#8957)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-09 12:04:56 -04:00
Danny Avila
770c766d50 🔧 refactor: Move Plugin-related Helpers to TS API and Add Tests (#8961) 2025-08-09 12:02:44 -04:00
alkshmir
5eb6926464 🪖 chore: Fix Typo in helm/librechat/values.yaml (#8960) 2025-08-09 12:00:37 -04:00
Danny Avila
e478ae1c28 📦 chore: Bump @librechat/agents to v2.4.75 (#8956) 2025-08-09 00:01:21 -04:00
Danny Avila
0c9284c8ae 🧠 refactor: Memory Timeout after Completion and Guarantee Stream Final Event (#8955) 2025-08-09 00:01:10 -04:00
Danny Avila
4eeadddfe6 🔮 fix: Artifacts readOnly to Re-render when Expected (#8954) 2025-08-08 22:44:58 -04:00
Dustin Healy
9ca1847535 🔧 refactor: customUserVar Error Normalization (#8950)
* fix: localization string had unused template var

* fix: add normalizeHttpError to hopefully stop UI hangs when an error is returned in UserController

- Ensures updateUserPluginsController always returns valid HTTP status codes instead of undefined
- Add normalizeHttpError() helper to safely extract status/message from errors
- Default to 400 status code when Error.status is undefined/invalid

* refactor: move normalizeHttpError to packages/api
2025-08-08 15:53:04 -04:00
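A hypothetical shape for such a helper, sketched from the bullet points above (per the last bullet, the real implementation lives in packages/api):

```ts
// Sketch: always produce a valid HTTP status code and message from an arbitrary error.
interface NormalizedHttpError {
  status: number;
  message: string;
}

export function normalizeHttpError(error: unknown, fallbackStatus = 400): NormalizedHttpError {
  const err = error as { status?: unknown; message?: unknown };
  const status =
    typeof err?.status === 'number' && err.status >= 400 && err.status <= 599
      ? err.status
      : fallbackStatus; // default to 400 when Error.status is undefined/invalid
  const message =
    typeof err?.message === 'string' && err.message.length > 0 ? err.message : 'Bad Request';
  return { status, message };
}

// Usage in a controller: const { status, message } = normalizeHttpError(err); res.status(status).json({ message });
```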
Danny Avila
5d0bc95193 🧪 fix: Editor Styling, Incomplete Artifact Editing, Optimize Artifact Context (#8953)
* refactor: optimize artifacts context for improved performance

* fix: layout classes for artifacts editor

* chore: linting

* fix: enhance artifact mutation handling in CodeEditor to prevent infinite retries

* fix: handle incomplete artifacts in replaceArtifactContent and add regression tests
2025-08-08 15:49:58 -04:00
github-actions[bot]
e7d6100fe4 🌍 i18n: Update translation.json with latest translations (#8934)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-08 12:21:27 -04:00
Danny Avila
01a95229f2 📦 chore: Bump @librechat/agents to v2.4.73 (#8949) 2025-08-08 12:19:36 -04:00
Danny Avila
0939250f07 🛣️ fix: Remove Title Tokens Limit for GPT-5 Models (#8948)
* 🛣️ fix: Remove Title Tokens Limit for GPT-5 Models

* 🛣️ fix: Remove max_completion_tokens from modelKwargs when maxTokens is disabled

* chore: Add test-image* to .gitignore for CI/CD data
2025-08-08 11:15:29 -04:00
Danny Avila
7147bce3c3 feat: Add OpenAI Verbosity Parameter (#8929)
* WIP: Verbosity OpenAI Parameter

* 🔧 chore: remove unused import of extractEnvVariable from parsers.ts

*  feat: add comprehensive tests for getOpenAIConfig and enhance verbosity handling

* fix: Handling for maxTokens in GPT-5+ models and add corresponding tests

* feat: Implement GPT-5+ model handling in processMemory function
2025-08-07 20:49:40 -04:00
github-actions[bot]
486fe34a2b 🌍 i18n: Update translation.json with latest translations (#8924)
* 🌍 i18n: Update translation.json with latest translations

* Update translation.json

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-08-07 20:42:44 -04:00
github-actions[bot]
922f43f520 🌍 i18n: Update translation.json with latest translations (#8907)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-07 16:25:24 -04:00
Marco Beretta
e6fa01d514 💬 style: Enhance Tooltip with HTML support and Improve Styling (#8915)
*  feat: Enhance Tooltip component with HTML support and styling improvements

*  feat: Integrate DOMPurify for HTML sanitization in Tooltip component
2025-08-07 16:24:42 -04:00
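A simplified illustration of the sanitization step, assuming a prop-based API that the actual Tooltip component may not match:

```tsx
// Sketch: sanitize tooltip HTML with DOMPurify before rendering it.
import DOMPurify from 'dompurify';

export function TooltipContent({ html }: { html: string }) {
  const clean = DOMPurify.sanitize(html); // strips scripts/attributes that could execute
  return <span dangerouslySetInnerHTML={{ __html: clean }} />;
}
```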
Danny Avila
8238fb49e0 📦 chore: bump @librechat/agents to v2.4.72 2025-08-07 16:23:12 -04:00
Danny Avila
430557676d feat: GPT-5 Token Limits, Rates, Icon, Reasoning Support 2025-08-07 16:23:11 -04:00
Danny Avila
8a5047c456 📦 chore: bump @librechat/agents to v2.4.71 2025-08-07 16:23:11 -04:00
Danny Avila
c787515894 🧠 feat: Add minimal Reasoning Effort option 2025-08-07 16:23:11 -04:00
Danny Avila
d95d8032cc feat: GPT-OSS models Token Limits & Rates 2025-08-07 16:23:10 -04:00
Joseph Licata
b9f72f4869 🎚️ refactor: Update Min. Values for OpenAI Parameters (#8922) 2025-08-07 14:38:08 -04:00
Danny Avila
429bb6653a 📦 chore: bump @librechat/agents to v2.4.70 (#8923) 2025-08-07 14:36:10 -04:00
Dustin Healy
47caafa8f8 🔧 fix: MCP Queries and Connections (#8870)
* fix: add refetchQueries on connection success so ToolSelectDialog doesn't require hard refresh

* fix: change hook so we only query connection status when mcpServers are configured

* fix: change refetchQueries to invalidateQueries for tools after server connection update

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-08-07 02:31:05 -04:00
Dustin Healy
8530594f37 🟢 fix: Incorrect customUserVars Set States (#8905) 2025-08-07 02:19:06 -04:00
Sebastien Bruel
0b071c06f6 🥞 refactor: Duplicate Agent Versions as Informational Instead of Errors (#8881)
* Fix error when updating an agent with no changes

* Add tests

* Revert translation file changes
2025-08-07 02:12:05 -04:00
Danny Avila
1092392ed8 📂 fix: File Cleanup for Uploaded "Agent" Files (#8900) 2025-08-06 19:45:57 -04:00
Danny Avila
36c8947029 🔄 refactor: Select OpenRouter LLM Class Dynamically by baseURL (#8898) 2025-08-06 19:26:40 -04:00
Danny Avila
4175a3ea19 v0.8.0-rc1 (#8846) 2025-08-04 15:25:07 -04:00
github-actions[bot]
02dc71f4b7 🌍 i18n: Update translation.json with latest translations (#8845)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-04 15:24:24 -04:00
github-actions[bot]
a6c99a3267 🌍 i18n: Update translation.json with latest translations (#8828)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-04 14:54:07 -04:00
SollalF
fcefc6eedf feat: Add OpenID Audience Parameter (#8837)
*  feat: Add OpenID audience parameter support in authorization requests

* Updated .env.example to include OPENID_AUDIENCE variable for configuration.
* Enhanced openidStrategy to set the audience parameter in authorization requests if specified, improving OpenID integration.

* Update .env.example

* Update openidStrategy.js

---------

Co-authored-by: Danny Avila <danacordially@gmail.com>
2025-08-04 14:49:36 -04:00
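A hedged sketch of conditionally adding the audience parameter to authorization requests; variable and object names are assumptions, not the PR's exact openidStrategy code.

```ts
// Sketch: only include `audience` in the authorization parameters when configured,
// so providers that reject unknown parameters are unaffected.
const audience = process.env.OPENID_AUDIENCE;

const authorizationParams: Record<string, string> = {
  scope: process.env.OPENID_SCOPE ?? 'openid profile email',
};

if (audience) {
  authorizationParams.audience = audience;
}

export { authorizationParams };
```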
Marco Beretta
dfdafdbd09 🖌️ feat: add animation styles for popovers and tooltips (#8831)
* feat: Update client version to 0.2.2 and add animation styles for popovers and tooltips

* refactor: Remove focus outline styles from Dropdown component

* feat: Update client version to 0.2.3 and add Select component export

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-08-04 14:44:00 -04:00
Danny Avila
33834cd484 🧹 feat: Automatic File Cleanup for Mistral OCR Uploads (#8827)
* chore: Handle optional token_endpoint in OAuth metadata discovery

* chore: Simplify permission typing logic in checkAccess function

* feat: Implement `deleteMistralFile` function and integrate file cleanup in `uploadMistralOCR`
2025-08-03 17:11:14 -04:00
Danny Avila
7ef2c626e2 🛠️ feat: Add Reset-Meili-Sync Script for MongoDB Flags (#8823) 2025-08-02 18:04:04 -04:00
Danny Avila
bc43423f58 🌍 i18n: Add Tibetan and Ukrainian languages to localization (#8819)
* 🌍 i18n: Add Tibetan and Ukrainian languages to localization

* feat: Update language selector to include Tibetan and Ukrainian options
* feat: Add translation files for Tibetan and Ukrainian languages
* chore: Update English translation.json with new language keys
* docs: Create localization guide for adding new languages

* Update README.md
2025-08-02 12:37:18 -04:00
Danny Avila
863401bcdf 🔧 fix: Assistants API SDK calls to match Updated Arguments (#8818)
* chore: remove comments in agents endpoint error handler

* chore: improve openai sdk typing

* chore: improve typing for azure asst init

* 🔧 fix: Assistants API SDK calls to match Updated Arguments
2025-08-02 12:19:58 -04:00
Danny Avila
33c8b87edd 📦 chore: Bump @modelcontextprotocol/sdk to v1.17.1 (#8809) 2025-08-01 16:10:12 -04:00
github-actions[bot]
077248a8a7 🌍 i18n: Update translation.json with latest translations (#8808)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-01 15:55:42 -04:00
Danny Avila
c6fb4686ef 🌐 ci: Bump Locize i18n Action Version 2025-08-01 15:52:00 -04:00
Dustin Healy
f1c6e4d55e 🧪 ci: Unit Tests for MCP Routes (#8803)
* refactor: remove unreachable if checks from mcp routes

* test: add tests for mcp routes
2025-08-01 14:47:00 -04:00
William Kim
e192c99c7d 🔧 fix: Apply Convo Export filename sanitization at export, not input (#8779)
Co-authored-by: Woosub Kim <woosub.kim@navercorp.com>
2025-07-31 07:28:33 -04:00
wartek69
056172f007 🔒 feat: MCP OAuth Config for Metadata Parameters (#8691)
* fix(mcp): add default metadata for pre-configured oauth

* removed lingering comment

* added configurable options & jest unit tests

* Update handler.test.ts

* Update handler.ts

---------

Co-authored-by: Alex <aleksander.chernyavskiy@seafar.eu>
Co-authored-by: Danny Avila <danacordially@gmail.com>
2025-07-31 07:24:49 -04:00
github-actions[bot]
5eed5009e9 🌍 i18n: Update translation.json with latest translations (#8771)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-30 15:59:27 -04:00
Danny Avila
6fc9abd4ad ✂️ fix: Remove Image Payloads from Memory Processing (#8770) 2025-07-30 15:54:33 -04:00
github-actions[bot]
03a924eaca 🌍 i18n: Update translation.json with latest translations (#8739)
* 🌍 i18n: Update translation.json with latest translations

* chore: Remove unused translation keys for MCP custom variables

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-07-30 15:23:12 -04:00
Danny Avila
25c993d93e 📦 chore: bump @librechat/agents to v2.4.69 (#8769) 2025-07-30 15:18:17 -04:00
Marco Beretta
09659c1040 🔨 style: Improve MCP UI (#8745)
* refactor: Enhance MCP components with improved UI elements and localization updates

* refactor: Clean up MCP components by removing unused imports and improving layout

* refactor: Update server status badge styling for improved UI consistency

* refactor: Move group up a level so 'X' and background highlight occur at same time for cancellation button

* refactor: Remove unused translation keys from the localization file

---------

Co-authored-by: Dustin Healy <dustinhealy1@gmail.com>
2025-07-30 14:56:22 -04:00
Josh Mullin
19a8f5c545 🐦 fix: Prioritize OIDC Username Claims to Prevent First Name Usernames (#8695)
Now prioritizes preferred_username claim, then the nonstandard
username claim, then email.

Removed given_name as a possible username choice to avoid exposing users’ first names as
usernames.

Updated openidStrategy.spec.js to reflect the new claim order.

Fixed mock OpenID server behavior where preferred_username was always
hardcoded, causing test failures.

Adjusted OpenID setup test to align with new username parameter
behavior.
2025-07-30 14:43:42 -04:00
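The claim priority described above, condensed into a hypothetical helper (not the actual openidStrategy code):

```ts
// Sketch: preferred_username first, then the nonstandard username claim, then email.
interface OpenIDClaims {
  preferred_username?: string;
  username?: string;
  email?: string;
  given_name?: string; // intentionally no longer considered, to avoid first-name usernames
}

export function resolveUsername(claims: OpenIDClaims): string | undefined {
  return claims.preferred_username || claims.username || claims.email;
}
```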
Dustin Healy
1050346915 feat: Add Support for customUserVar Replacement in 'args' Field (#8743) 2025-07-30 14:37:56 -04:00
Marco Beretta
8a1a38f346 🔑 fix: Update Conversation Mutation to use ID from Payload (#8758) 2025-07-30 14:34:30 -04:00
Danny Avila
32081245da 🪵 chore: Remove Unnecessary Comments 2025-07-29 14:59:58 -04:00
Dustin Healy
6fd3b569ac ⚒️ fix: MCP Initialization Flows (#8734)
* fix: add OAuth flow back in to success state

* feat: disable server clicks during initialization to prevent spam

* fix: correct new tab behavior for OAuth between one-click and normal initialization flows

* fix: stop polling on error during oauth (it was infinitely popping toasts because we didn't clear the interval)

* fix: cleanupServerState should be called after successful cancelOauth, not before

* fix: change from completeFlow to failFlow to avoid stale client IDs on OAuth after cancellation

* fix: add logic to differentiate between cancelled and failed flows when checking status for indicators (so the error triangle indicator doesn't show up on cancellation)
2025-07-29 14:54:07 -04:00
Jakub Hrozek
6671fcb714 🛂 refactor: Use discoverAuthorizationServerMetadata for MCP OAuth (#8723)
* Use discoverAuthorizationServerMetadata instead of discoverMetadata

Uses the discoverAuthorizationServerMetadata function from the upstream
TS SDK. This has the advantage of falling back to OIDC discovery
metadata if the OAuth discovery metadata doesn't exist, which is the case
with providers such as Keycloak.

* chore: import order

---------

Co-authored-by: Danny Avila <danacordially@gmail.com>
2025-07-29 09:09:52 -04:00
Dustin Healy
c4677ab3fb 🔑 refactor: MCP Settings Rendering Logic for OAuth Servers (#8718)
* feat: add OAuth servers to conditional rendering logic for MCPPanel in SideNav

* feat: add startup flag check to conditional rendering logic

* fix: correct improper handling of failure state in reinitialize endpoint

* fix: change MCP config components to better handle servers without customUserVars

- removes the subtle reinitialize button from config components of servers without customUserVars or OAuth
- adds a placeholder message for components where servers have no customUserVars configured

* style: swap CustomUserVarsSection and ServerInitializationSection positions

* style: fix coloring for light mode and align more with existing design patterns

* chore: remove extraneous comments

* chore: reorder imports and `isEnabled` from api package

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-07-29 09:08:46 -04:00
github-actions[bot]
ef9d9b1276 🌍 i18n: Update translation.json with latest translations (#8676)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-28 15:15:06 -04:00
Danny Avila
a4ca4b7d9d 🔀 fix: Rerender Edge Cases After Migration to Shared Package (#8713)
* fix: render issues in PromptForm by decoupling nested dependencies as a result of @librechat/client components

* fix: MemoryViewer flicker by moving EditMemoryButton and DeleteMemoryButton outside of rendering

* fix: CategorySelector to use DropdownPopup for improved mobile compatibility

* chore: imports
2025-07-28 15:14:37 -04:00
Danny Avila
8e6eef04ab 🔧 fix: Update Proxy Config for OpenAI Image Tools (#8712)
- Replaced HttpsProxyAgent with ProxyAgent from undici for improved proxy handling in DALLE3.js and OpenAIImageTools.js.
- Updated fetchOptions to use dispatcher for proxy configuration.
- Added new test suite for DALLE3 to verify proxy configuration behavior based on environment variables.
2025-07-28 15:12:29 -04:00
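A rough sketch of the undici-based proxy wiring described above; the environment variable name and function are placeholders, not the actual DALLE3/OpenAIImageTools code.

```ts
// Sketch: route outbound requests through a proxy via undici's ProxyAgent dispatcher.
import { ProxyAgent, fetch } from 'undici';

const proxyUrl = process.env.PROXY; // e.g. http://proxy.example.com:8080 (assumed variable name)
const dispatcher = proxyUrl ? new ProxyAgent(proxyUrl) : undefined;

export async function fetchImage(url: string) {
  // Passing a dispatcher sends the request through the proxy when one is configured.
  return fetch(url, { dispatcher });
}
```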
Danny Avila
ec3cbca6e3 feat: Enhance Redis Config and Error Handling (#8709)
*  feat: Enhance Redis Config and Error Handling

- Added new Redis configuration options: `REDIS_RETRY_MAX_DELAY`, `REDIS_RETRY_MAX_ATTEMPTS`, `REDIS_CONNECT_TIMEOUT`, and `REDIS_ENABLE_OFFLINE_QUEUE` to improve connection resilience.
- Implemented error handling for Redis cache creation and session store initialization in `cacheFactory.js`.
- Enhanced logging for Redis client events and errors in `redisClients.js`.
- Updated `README.md` to document new Redis configuration options.

* chore: Add JSDoc comments to Redis configuration options in cacheConfig.js for improved clarity and documentation

* ci: update cacheFactory tests

* refactor: remove fallback

* fix: Improve error handling in Redis cache creation, re-throw errors when expected
2025-07-28 14:21:39 -04:00
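A rough sketch of how the new REDIS_* variables could feed ioredis client options; the defaults and wiring here are assumptions, and the real logic lives in cacheConfig.js and redisClients.js per the commit.

```ts
// Sketch: connection resilience options driven by the new env vars.
import Redis from 'ioredis';

const maxDelay = Number(process.env.REDIS_RETRY_MAX_DELAY ?? 10_000);
const maxAttempts = Number(process.env.REDIS_RETRY_MAX_ATTEMPTS ?? 10);

export const client = new Redis(process.env.REDIS_URI ?? 'redis://127.0.0.1:6379', {
  connectTimeout: Number(process.env.REDIS_CONNECT_TIMEOUT ?? 10_000),
  enableOfflineQueue: process.env.REDIS_ENABLE_OFFLINE_QUEUE !== 'false',
  // Back off linearly up to maxDelay; give up after maxAttempts reconnect attempts.
  retryStrategy(times) {
    if (times > maxAttempts) return null; // stop retrying
    return Math.min(times * 200, maxDelay); // delay in ms before the next attempt
  },
});

client.on('error', (err) => console.error('Redis client error:', err));
```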
Dustin Healy
4639dc3255 🐜 fix: Forward Ref to MCPSubMenu and ArtifactsSubMenu (#8696)
ToolsDropdown uses a menu library that passes refs to submenu items. Function components can't receive refs by default, so we get "Function components cannot be given refs" warnings in the console. React.forwardRef() allows them to properly handle ref forwarding by wrapping the component and attaching the ref to the outer div element.
2025-07-28 12:26:11 -04:00
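A minimal illustration of the fix described above, with simplified props rather than the real MCPSubMenu API:

```tsx
// Sketch: wrapping the function component in forwardRef lets the menu library
// attach its ref to the outer div, silencing the "Function components cannot
// be given refs" warning.
import React, { forwardRef } from 'react';

type SubMenuProps = {
  label: string;
  children?: React.ReactNode;
};

const MCPSubMenu = forwardRef<HTMLDivElement, SubMenuProps>(({ label, children }, ref) => (
  <div ref={ref} role="menuitem">
    <span>{label}</span>
    {children}
  </div>
));

MCPSubMenu.displayName = 'MCPSubMenu';
export default MCPSubMenu;
```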
Dustin Healy
0ef3fefaec 🏹 feat: Concurrent MCP Initialization Support (#8677)
*  feat: Enhance MCP Connection Status Management

- Introduced new functions to retrieve and manage connection status for multiple MCP servers, including OAuth flow checks and server-specific status retrieval.
- Refactored the MCP connection status endpoints to support both all servers and individual server queries.
- Replaced the old server initialization hook with a new `useMCPServerManager` hook for improved state management and handling of multiple OAuth flows.
- Updated the MCPPanel component to utilize the new context provider for better state handling and UI updates.
- Fixed a number of UI bugs when initializing servers

* 🗣️ i18n: Remove unused strings from translation.json

* refactor: move helper functions out of the route module into mcp service file

* ci: add tests for newly added functions in mcp service file

* fix: memoize setMCPValues to avoid render loop
2025-07-28 12:25:34 -04:00
ryanh-ai
37aba18a96 🪟 feat: Context Window for amazon.nova-premier (#8689) 2025-07-28 12:24:08 -04:00
Marco Beretta
2ce6ac74f4 📻 feat: radio component (#8692)
* 📦 feat: Add Radio component

* 📦 feat: Integrate localization for 'No options available' message in Radio component

* 📦 feat: Bump version to 0.2.0 in package.json

* 📦 feat: Update client package version to 0.2.0 in package-lock.json
2025-07-28 12:18:59 -04:00
Danny Avila
9fddb0ff6a 🐋 ci: Include packages/client source files in Dockerfile.multi Client Build 2025-07-27 15:11:42 -04:00
Danny Avila
32f7dbd11f 🐳 ci: Build client package for Dockerfile.multi 2025-07-27 13:29:43 -04:00
Danny Avila
79197454f8 📦 feat: Move Shared Components to @librechat/client (#8685)
* feat: init @librechat/client

* feat: Add common types and interfaces for accessibility, agents, artifacts, assistants, and tools

* feat: Add jotai as a peer dependency

* fix build client package

* feat: cleanup unused types from common/index.ts

- Remove 104 unused type exports from packages/client/src/common/index.ts
- Keep only 7 actually used exports (93% reduction)
- Add cleanup script with enhanced import pattern detection
- Support both named imports and namespace imports (* as t)
- Create automatic backups and comprehensive documentation
- Maintain type safety with build verification
- No breaking changes to existing code

Kept exports:
- TShowToast, Option, OptionWithIcon, DropdownValueSetter
- MentionOption, NotificationSeverity, MenuItemProps

Scripts: cleanup-common-types-safe.js, README-CLEANUP.md

* fix: cleanup

* fix: package; refactor: tsconfig

* feat: add back `recoil`

* fix: move dependencies to peerDependencies in client package

* feat: add @librechat/client as a dependency in package.json and package-lock.json

* feat: update client package configuration and dependencies

- Added new dependencies for Rollup plugins and updated existing ones in package.json and package-lock.json.
- Introduced a new Rollup configuration file for building the client package.
- Refactored build scripts to include a dedicated build command for the client.
- Updated TypeScript configuration for improved module resolution and type declaration output.
- Integrated a Toast component from the client package into the main App component.

* feat: enhance Rollup configuration for client package

- Updated terser plugin settings to preserve directives like 'use client'.
- Added custom warning handler to ignore "use client" directive warnings during the build process.

* chore: rename package/client build script command

* feat: update client package dependencies and Rollup configuration

- Added rollup-plugin-postcss to package.json and updated package-lock.json.
- Enhanced Rollup configuration to include postcss plugin for CSS handling.
- Updated index.ts to export all components from the components directory for better modularity.

* feat: add client package directory to update configuration

- Included the 'client' package directory in the update.js configuration to ensure it is recognized during updates.

* feat: export Toast component in client package

- Added export for the Toast component in index.ts to enhance modularity and accessibility of components.

* feat: /client transition to @librechat/client

* chore: fixed formatting issues

* fix: update peer dependencies in @librechat/client to prevent bundling them

* fix: correct useSprings implementation in SplitText component

* fix: circular dependencies in DataTable

* fix: add remaining peer dependencies and match actual versions previously used in `client/package.json`

* fix: correct frontend:ci script to include client package build

* chore: enhance unused package detection for @librechat/client and improve dependency extraction

* fix: add missing peer dependency for @radix-ui/react-collapsible

* chore: include "packages/client" in unused i18next keys detection

* test: update AgentFooter tests to use document.querySelector for spinner checks
test: mock window.matchMedia in setupTests.js for consistent test environment

* feat: add react-hook-form dependency and update FormInput component to use its types

* chore: linting

* refactor: remove unused defaultSelectedValues prop from MCPSelect and MultiSelect components

* chore: linting

* feat: update GitHub Actions workflow to publish @librechat/client

* chore: update GitHub Actions workflow to install and build data-provider and client dependencies

* chore: add missing @testing-library/react dependency to client package

* chore: update tsconfig.json to exclude additional test files

* chore: fix build issues, resolve latest LC changes

* chore: move MCP components outside of `~/components/ui`

* feat: implement dynamic theme system with environment variable support and Tailwind CSS integration

* chore: remove unnecessary logging of sttExternal and ttsExternal in Speech component

* chore: squashed cleanup commits

chore: move @tanstack/react-virtual to dependencies and remove recoil from package.json

chore: move dependencies to peerDependencies in package.json

feat: update package.json and rollup.config.js to include jotai and enhance bundling configuration

feat: update package.json and rollup.config.js to include jotai and enhance bundling configuration

refactor: reorganize exports in index.ts for improved clarity

refactor: remove unused types and interfaces from common files

refactor: update peer dependencies and improve component typings

- Removed duplicate peer dependencies from package.json and organized them.
- Updated rollup.config.js to disable TypeScript checking during the build process.
- Modified AnimatedTabs component to use React.ReactNode for label and content types, and added TypeScript workarounds for compatibility.
- Enhanced Label and Separator components to accept an optional className prop and improved prop spreading.
- Updated Slider component to include an optional className prop and refined prop handling for better type safety.

refactor: clean up client workflow and update package dependencies

refactor: update package dependencies and improve PostCSS and Rollup configurations

chore: bump version to 0.1.2 in package.json

chore: bump client version to 0.1.2 in package-lock.json

chore: bump client version to 0.1.3 and update dependencies

chore: bump client version to 0.1.4 and update @react-spring dependencies

chore: update package version to 0.1.5 and adjust peer dependencies

- Bump version in package.json from 0.1.4 to 0.1.5.
- Update peer dependency for @tanstack/react-query to allow version 5.0.0.
- Add @tanstack/react-table and @tanstack/react-virtual as dependencies.
- Update various dependencies to their latest compatible versions.
- Simplify postcss.config.js by removing unnecessary options.
- Clean up rollup.config.js by removing ignored PostCSS warnings.
- Update CheckboxButton component to cast icon as React JSX element.
- Adjust Combobox component's class names for better styling.
- Change DropdownPopup component to use React's namespace import.
- Modify InputOTP component to use 'any' type for OTPInputContext.
- Ensure displayLabel and value in ModelParameters are converted to strings.
- Update MultiSearch component's placeholder to ensure it's a string.
- Cast selectIcon in MultiSelect as React JSX element for consistency.
- Update OGDialogTemplate to cast selectText as React JSX element.
- Initialize animationRef in PixelCard with undefined for clarity.
- Add TypeScript ignore comments in Select and SelectDropDown components for Radix UI type conflicts.
- Ensure title in SelectDropDown is a string and adjust rendering of options.
- Update useLocalize hook to cast options as any for compatibility.

refactor: code structure; chore: translations cleanup

chore: remove unused imports and clean up code in NewChat component

refactor: enhance Menu component to support custom render functions for menu items

style: update itemClassName in ToolsDropdown for improved UI consistency

fix: merge conflicts

chore: update @radix-ui/react-accordion to version 1.2.11

* refactor: remove unnecessary TypeScript type assertions in AnimatedTabs, Label, Separator, and Slider components

* feat: enhance theme system with localStorage persistence and new theme atoms

* chore: bump version of @librechat/client to 0.1.7

* chore: fix ci/cd warnings/errors related to linting and unused localization keys

* chore: update dependencies for class-variance-authority, clsx, and match-sorter

* chore: bump @librechat/client to v0.1.8

* feat: add utility colors for theme customization and remove unused tailwindConfig

* v0.1.9

---------

Co-authored-by: Marco Beretta <81851188+berry-13@users.noreply.github.com>
2025-07-27 12:19:01 -04:00
Danny Avila
97e1cdd224 🧗 refactor: Replace traverse package with Minimal Traversal for Logging (#8687)
* 📦 chore: Remove `keyv` from peerDependencies in package.json and package-lock.json for data-schemas

* refactor: replace traverse import with custom object-traverse utility for better control and error handling during logging

* chore(data-schemas): bump version to 0.0.15 and remove unused dependencies

* refactor: optimize message construction in debugTraverse

* chore: update Node.js version to 20.x in data-schemas workflow
2025-07-27 11:47:37 -04:00
Dustin Healy
d6a65f5a08 🐛 fix: Temporary Chats Still Visible in Sidebar (#8688)
* 🐛 fix: Fix import error causing temporary chats to still display in sidebar

* refactor: Update import path for `getCustomConfig` in Conversation and Message models

* chore: eslint warnings

* ci: add tests for Conversation and Message models

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-07-27 11:42:35 -04:00
Danny Avila
f4facb7d35 🪵 refactor: Dynamic getLogDirectory utility for Loggers (#8686) 2025-07-26 20:11:20 -04:00
Dustin Healy
545a909953 🗂️ refactor: Make MCPSubMenu consistent with MCPSelect (#8650)
- Refactored MCPSelect and MCPSubMenu components to utilize a new custom hook, `useMCPServerManager`, for improved state management and server initialization logic.
- Added functionality to handle simultaneous MCP server initialization requests, including cancellation and user notifications.
- Updated translation files to include new messages for initialization cancellation.
- Improved the configuration dialog handling for MCP servers, streamlining the user experience when managing server settings.
2025-07-25 14:51:42 -04:00
Danny Avila
cd436dc6a8 📦 chore: Update @modelcontextprotocol/sdk to v1.17.0 (#8674)
* 📦 chore: Update `@modelcontextprotocol/sdk` to v1.17.0

* refactor: unused package detection by extracting workspace dependencies in GitHub Actions workflow

* chore: Enhance unused package detection by including peerDependencies extraction in GitHub Actions workflow

* fix: Ensure safe extraction of dependencies and peerDependencies in unused package detection workflow
2025-07-25 14:06:16 -04:00
Danny Avila
e75beb92b3 🗑️ chore: Remove Workflows for Changelogs (#8673) 2025-07-25 13:45:22 -04:00
Danny Avila
5251246313 📱 refactor: Redis Client Error Logging and Ping only when Ready (#8671)
* 📱 refactor: Redis Client Error Logging and Ping only when Ready

* chore: intellisense for warning comment for Keyv Redis client regarding prefix support
2025-07-25 12:33:05 -04:00
Danny Avila
26f23c6aaf 📦 chore: Bump @node-saml/passport-saml to v5.1.0 (#8670) 2025-07-25 11:26:20 -04:00
Danny Avila
1636af1f27 📦 chore: Bump mongodb-memory-server to v10.1.4 (#8669) 2025-07-25 11:23:38 -04:00
Theo N. Truong
b050a0bf1e feat: Add Redis Ping Interval Configuration (#8648)
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-07-25 11:00:02 -04:00
github-actions[bot]
deb928bf80 🌍 i18n: Update translation.json with latest translations (#8664)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-07-25 10:36:14 -04:00
Theo N. Truong
21005b66cc feat: Add support for forced in-memory cache namespaces configuration (#8586)
*  feat: Add support for forced in-memory cache keys configuration

* refactor: Update cache keys to use uppercase constants and moved cache for `librechat.yaml` into its own cache namespace (STATIC_CONFIG) and with a more descriptive key (LIBRECHAT_YAML_CONFIG)
2025-07-25 10:32:55 -04:00
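A hypothetical sketch of parsing the comma-separated namespace list and deciding when a cache stays in memory; names here are assumptions, not the PR's actual cache code.

```ts
// Sketch: namespaces listed in FORCED_IN_MEMORY_CACHE_NAMESPACES stay in memory
// even when Redis is enabled.
const forcedInMemory = new Set(
  (process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES ?? '')
    .split(',')
    .map((key) => key.trim())
    .filter(Boolean),
);

export function shouldUseInMemoryCache(namespace: string, redisEnabled: boolean): boolean {
  return !redisEnabled || forcedInMemory.has(namespace);
}

// Example: shouldUseInMemoryCache('STATIC_CONFIG', true) → true when
// FORCED_IN_MEMORY_CACHE_NAMESPACES=STATIC_CONFIG,ROLES
```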
Dustin Healy
3dc9e85fab 🐛 fix: Display OAuth MCP servers according to Chat Menu Setting (#8643)
* fix: chatMenu not being respected in MCPSelect

* fix: chatMenu not being respected in MCPSubMenu
2025-07-25 10:21:10 -04:00
Sebastien Bruel
ec67cf2d3a 🚇 chore: Remove Overridden Transport Error Listener (#8656) 2025-07-25 10:17:33 -04:00
Dustin Healy
1fe977e48f 🐛 fix: MCP Name Normalization breaking User Provided Variables (#8644) 2025-07-24 10:44:58 -04:00
Danny Avila
01470ef9fd 🔄 refactor: Default Completion Title Prompt and Title Model Selection (#8646)
* refactor: prefer `agent.model` (user-facing value) over `agent.model_parameters.model` to ensure Azure mapping

* chore: update @librechat/agents to version 2.4.68 to use new default title prompt for completion title method
2025-07-24 10:38:26 -04:00
1231 changed files with 64149 additions and 13834 deletions

View File

@@ -15,6 +15,20 @@ HOST=localhost
PORT=3080
MONGO_URI=mongodb://127.0.0.1:27017/LibreChat
#The maximum number of connections in the connection pool. */
MONGO_MAX_POOL_SIZE=
#The minimum number of connections in the connection pool. */
MONGO_MIN_POOL_SIZE=
#The maximum number of connections that may be in the process of being established concurrently by the connection pool. */
MONGO_MAX_CONNECTING=
#The maximum number of milliseconds that a connection can remain idle in the pool before being removed and closed. */
MONGO_MAX_IDLE_TIME_MS=
#The maximum time in milliseconds that a thread can wait for a connection to become available. */
MONGO_WAIT_QUEUE_TIMEOUT_MS=
# Set to false to disable automatic index creation for all models associated with this connection. */
MONGO_AUTO_INDEX=
# Set to `false` to disable Mongoose automatically calling `createCollection()` on every model created on this connection. */
MONGO_AUTO_CREATE=
DOMAIN_CLIENT=http://localhost:3080
DOMAIN_SERVER=http://localhost:3080
@@ -26,6 +40,13 @@ NO_INDEX=true
# Defaulted to 1.
TRUST_PROXY=1
# Minimum password length for user authentication
# Default: 8
# Note: When using LDAP authentication, you may want to set this to 1
# to bypass local password validation, as LDAP servers handle their own
# password policies.
# MIN_PASSWORD_LENGTH=8
#===============#
# JSON Logging #
#===============#
@@ -442,6 +463,8 @@ OPENID_REQUIRED_ROLE_PARAMETER_PATH=
OPENID_USERNAME_CLAIM=
# Set to determine which user info property returned from OpenID Provider to store as the User's name
OPENID_NAME_CLAIM=
# Optional audience parameter for OpenID authorization requests
OPENID_AUDIENCE=
OPENID_BUTTON_LABEL=
OPENID_IMAGE_URL=
@@ -463,6 +486,21 @@ OPENID_ON_BEHALF_FLOW_USERINFO_SCOPE="user.read" # example for Scope Needed for
# Set to true to use the OpenID Connect end session endpoint for logout
OPENID_USE_END_SESSION_ENDPOINT=
#========================#
# SharePoint Integration #
#========================#
# Requires Entra ID (OpenID) authentication to be configured
# Enable SharePoint file picker in chat and agent panels
# ENABLE_SHAREPOINT_FILEPICKER=true
# SharePoint tenant base URL (e.g., https://yourtenant.sharepoint.com)
# SHAREPOINT_BASE_URL=https://yourtenant.sharepoint.com
# Microsoft Graph API And SharePoint scopes for file picker
# SHAREPOINT_PICKER_SHAREPOINT_SCOPE=https://yourtenant.sharepoint.com/AllSites.Read
# SHAREPOINT_PICKER_GRAPH_SCOPE=Files.Read.All
#========================#
# SAML
# Note: If OpenID is enabled, SAML authentication will be automatically disabled.
@@ -490,6 +528,21 @@ SAML_IMAGE_URL=
# SAML_USE_AUTHN_RESPONSE_SIGNED=
#===============================================#
# Microsoft Graph API / Entra ID Integration #
#===============================================#
# Enable Entra ID people search integration in permissions/sharing system
# When enabled, the people picker will search both local database and Entra ID
USE_ENTRA_ID_FOR_PEOPLE_SEARCH=false
# When enabled, entra id groups owners will be considered as members of the group
ENTRA_ID_INCLUDE_OWNERS_AS_MEMBERS=false
# Microsoft Graph API scopes needed for people/group search
# Default scopes provide access to user profiles and group memberships
OPENID_GRAPH_SCOPES=User.Read,People.Read,GroupMember.Read.All
# LDAP
LDAP_URL=
LDAP_BIND_DN=
@@ -614,6 +667,10 @@ HELP_AND_FAQ_URL=https://librechat.ai
# REDIS_URI=rediss://127.0.0.1:6380
# REDIS_CA=/path/to/ca-cert.pem
# Elasticache may need to use an alternate dnsLookup for TLS connections. see "Special Note: Aws Elasticache Clusters with TLS" on this webpage: https://www.npmjs.com/package/ioredis
# Enable alternative dnsLookup for redis
# REDIS_USE_ALTERNATIVE_DNS_LOOKUP=true
# Redis authentication (if required)
# REDIS_USERNAME=your_redis_username
# REDIS_PASSWORD=your_redis_password
@@ -627,6 +684,15 @@ HELP_AND_FAQ_URL=https://librechat.ai
# Redis connection limits
# REDIS_MAX_LISTENERS=40
# Redis ping interval in seconds (0 = disabled, >0 = enabled)
# When set to a positive integer, Redis clients will ping the server at this interval to keep connections alive
# When unset or 0, no pinging is performed (recommended for most use cases)
# REDIS_PING_INTERVAL=300
# Force specific cache namespaces to use in-memory storage even when Redis is enabled
# Comma-separated list of CacheKeys (e.g., STATIC_CONFIG,ROLES,MESSAGES)
# FORCED_IN_MEMORY_CACHE_NAMESPACES=STATIC_CONFIG,ROLES
#==================================================#
# Others #
#==================================================#
@@ -687,3 +753,16 @@ OPENWEATHER_API_KEY=
# JINA_API_KEY=your_jina_api_key
# or
# COHERE_API_KEY=your_cohere_api_key
#======================#
# MCP Configuration #
#======================#
# Treat 401/403 responses as OAuth requirement when no oauth metadata found
# MCP_OAUTH_ON_AUTH_ERROR=true
# Timeout for OAuth detection requests in milliseconds
# MCP_OAUTH_DETECTION_TIMEOUT=5000
# Cache connection status checks for this many milliseconds to avoid expensive verification
# MCP_CONNECTION_CHECK_TTL=60000

View File

@@ -147,7 +147,7 @@ Apply the following naming conventions to branches, labels, and other Git-relate
## 8. Module Import Conventions
- `npm` packages first,
- from shortest line (top) to longest (bottom)
- from longest line (top) to shortest (bottom)
- Followed by typescript types (pertains to data-provider and client workspaces)
- longest line (top) to shortest (bottom)
@@ -157,6 +157,8 @@ Apply the following naming conventions to branches, labels, and other Git-relate
- longest line (top) to shortest (bottom)
- imports with alias `~` treated the same as relative import with respect to line length
**Note:** ESLint will automatically enforce these import conventions when you run `npm run lint --fix` or through pre-commit hooks.
---
Please ensure that you adapt this summary to fit the specific context and nuances of your project.

View File

@@ -1,6 +1,11 @@
name: Publish `@librechat/client` to NPM
on:
push:
branches:
- main
paths:
- 'packages/client/package.json'
workflow_dispatch:
inputs:
reason:
@@ -17,16 +22,37 @@ jobs:
- name: Use Node.js
uses: actions/setup-node@v4
with:
node-version: '18.x'
- name: Check if client package exists
node-version: '20.x'
- name: Install client dependencies
run: cd packages/client && npm ci
- name: Build client
run: cd packages/client && npm run build
- name: Set up npm authentication
run: echo "//registry.npmjs.org/:_authToken=${{ secrets.PUBLISH_NPM_TOKEN }}" > ~/.npmrc
- name: Check version change
id: check
working-directory: packages/client
run: |
if [ -d "packages/client" ]; then
echo "Client package directory found"
PACKAGE_VERSION=$(node -p "require('./package.json').version")
PUBLISHED_VERSION=$(npm view @librechat/client version 2>/dev/null || echo "0.0.0")
if [ "$PACKAGE_VERSION" = "$PUBLISHED_VERSION" ]; then
echo "No version change, skipping publish"
echo "skip=true" >> $GITHUB_OUTPUT
else
echo "Client package directory not found - workflow ready for future use"
exit 0
echo "Version changed, proceeding with publish"
echo "skip=false" >> $GITHUB_OUTPUT
fi
- name: Placeholder for future publishing
run: echo "Client package publishing workflow is ready"
- name: Pack package
if: steps.check.outputs.skip != 'true'
working-directory: packages/client
run: npm pack
- name: Publish
if: steps.check.outputs.skip != 'true'
working-directory: packages/client
run: npm publish *.tgz --access public

View File

@@ -1,4 +1,4 @@
name: Node.js Package
name: Publish `librechat-data-provider` to NPM
on:
push:
@@ -6,6 +6,12 @@ on:
- main
paths:
- 'packages/data-provider/package.json'
workflow_dispatch:
inputs:
reason:
description: 'Reason for manual trigger'
required: false
default: 'Manual publish requested'
jobs:
build:
@@ -14,7 +20,7 @@ jobs:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 16
node-version: 20
- run: cd packages/data-provider && npm ci
- run: cd packages/data-provider && npm run build
@@ -25,7 +31,7 @@ jobs:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 16
node-version: 20
registry-url: 'https://registry.npmjs.org'
- run: cd packages/data-provider && npm ci
- run: cd packages/data-provider && npm run build

View File

@@ -22,7 +22,7 @@ jobs:
- name: Use Node.js
uses: actions/setup-node@v4
with:
node-version: '18.x'
node-version: '20.x'
- name: Install dependencies
run: cd packages/data-schemas && npm ci

View File

@@ -1,95 +0,0 @@
name: Generate Release Changelog PR
on:
push:
tags:
- 'v*.*.*'
workflow_dispatch:
jobs:
generate-release-changelog-pr:
permissions:
contents: write # Needed for pushing commits and creating branches.
pull-requests: write
runs-on: ubuntu-latest
steps:
# 1. Checkout the repository (with full history).
- name: Checkout Repository
uses: actions/checkout@v4
with:
fetch-depth: 0
# 2. Generate the release changelog using our custom configuration.
- name: Generate Release Changelog
id: generate_release
uses: mikepenz/release-changelog-builder-action@v5.1.0
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
configuration: ".github/configuration-release.json"
owner: ${{ github.repository_owner }}
repo: ${{ github.event.repository.name }}
outputFile: CHANGELOG-release.md
# 3. Update the main CHANGELOG.md:
# - If it doesn't exist, create it with a basic header.
# - Remove the "Unreleased" section (if present).
# - Prepend the new release changelog above previous releases.
# - Remove all temporary files before committing.
- name: Update CHANGELOG.md
run: |
# Determine the release tag, e.g. "v1.2.3"
TAG=${GITHUB_REF##*/}
echo "Using release tag: $TAG"
# Ensure CHANGELOG.md exists; if not, create a basic header.
if [ ! -f CHANGELOG.md ]; then
echo "# Changelog" > CHANGELOG.md
echo "" >> CHANGELOG.md
echo "All notable changes to this project will be documented in this file." >> CHANGELOG.md
echo "" >> CHANGELOG.md
fi
echo "Updating CHANGELOG.md…"
# Remove the "Unreleased" section (from "## [Unreleased]" until the first occurrence of '---') if it exists.
if grep -q "^## \[Unreleased\]" CHANGELOG.md; then
awk '/^## \[Unreleased\]/{flag=1} flag && /^---/{flag=0; next} !flag' CHANGELOG.md > CHANGELOG.cleaned
else
cp CHANGELOG.md CHANGELOG.cleaned
fi
# Split the cleaned file into:
# - header.md: content before the first release header ("## [v...").
# - tail.md: content from the first release header onward.
awk '/^## \[v/{exit} {print}' CHANGELOG.cleaned > header.md
awk 'f{print} /^## \[v/{f=1; print}' CHANGELOG.cleaned > tail.md
# Combine header, the new release changelog, and the tail.
echo "Combining updated changelog parts..."
cat header.md CHANGELOG-release.md > CHANGELOG.md.new
echo "" >> CHANGELOG.md.new
cat tail.md >> CHANGELOG.md.new
mv CHANGELOG.md.new CHANGELOG.md
# Remove temporary files.
rm -f CHANGELOG.cleaned header.md tail.md CHANGELOG-release.md
echo "Final CHANGELOG.md content:"
cat CHANGELOG.md
# 4. Create (or update) the Pull Request with the updated CHANGELOG.md.
- name: Create Pull Request
uses: peter-evans/create-pull-request@v7
with:
token: ${{ secrets.GITHUB_TOKEN }}
sign-commits: true
commit-message: "chore: update CHANGELOG for release ${{ github.ref_name }}"
base: main
branch: "changelog/${{ github.ref_name }}"
reviewers: danny-avila
title: "📜 docs: Changelog for release ${{ github.ref_name }}"
body: |
**Description**:
- This PR updates the CHANGELOG.md by removing the "Unreleased" section and adding new release notes for release ${{ github.ref_name }} above previous releases.

View File

@@ -1,107 +0,0 @@
name: Generate Unreleased Changelog PR
on:
schedule:
- cron: "0 0 * * 1" # Runs every Monday at 00:00 UTC
workflow_dispatch:
jobs:
generate-unreleased-changelog-pr:
permissions:
contents: write # Needed for pushing commits and creating branches.
pull-requests: write
runs-on: ubuntu-latest
steps:
# 1. Checkout the repository on main.
- name: Checkout Repository on Main
uses: actions/checkout@v4
with:
ref: main
fetch-depth: 0
# 4. Get the latest version tag.
- name: Get Latest Tag
id: get_latest_tag
run: |
LATEST_TAG=$(git describe --tags $(git rev-list --tags --max-count=1) || echo "none")
echo "Latest tag: $LATEST_TAG"
echo "tag=$LATEST_TAG" >> $GITHUB_OUTPUT
# 5. Generate the Unreleased changelog.
- name: Generate Unreleased Changelog
id: generate_unreleased
uses: mikepenz/release-changelog-builder-action@v5.1.0
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
configuration: ".github/configuration-unreleased.json"
owner: ${{ github.repository_owner }}
repo: ${{ github.event.repository.name }}
outputFile: CHANGELOG-unreleased.md
fromTag: ${{ steps.get_latest_tag.outputs.tag }}
toTag: main
# 7. Update CHANGELOG.md with the new Unreleased section.
- name: Update CHANGELOG.md
id: update_changelog
run: |
# Create CHANGELOG.md if it doesn't exist.
if [ ! -f CHANGELOG.md ]; then
echo "# Changelog" > CHANGELOG.md
echo "" >> CHANGELOG.md
echo "All notable changes to this project will be documented in this file." >> CHANGELOG.md
echo "" >> CHANGELOG.md
fi
echo "Updating CHANGELOG.md…"
# Extract content before the "## [Unreleased]" (or first version header if missing).
if grep -q "^## \[Unreleased\]" CHANGELOG.md; then
awk '/^## \[Unreleased\]/{exit} {print}' CHANGELOG.md > CHANGELOG_TMP.md
else
awk '/^## \[v/{exit} {print}' CHANGELOG.md > CHANGELOG_TMP.md
fi
# Append the generated Unreleased changelog.
echo "" >> CHANGELOG_TMP.md
cat CHANGELOG-unreleased.md >> CHANGELOG_TMP.md
echo "" >> CHANGELOG_TMP.md
# Append the remainder of the original changelog (starting from the first version header).
awk 'f{print} /^## \[v/{f=1; print}' CHANGELOG.md >> CHANGELOG_TMP.md
# Replace the old file with the updated file.
mv CHANGELOG_TMP.md CHANGELOG.md
# Remove the temporary generated file.
rm -f CHANGELOG-unreleased.md
echo "Final CHANGELOG.md:"
cat CHANGELOG.md
# 8. Check if CHANGELOG.md has any updates.
- name: Check for CHANGELOG.md changes
id: changelog_changes
run: |
if git diff --quiet CHANGELOG.md; then
echo "has_changes=false" >> $GITHUB_OUTPUT
else
echo "has_changes=true" >> $GITHUB_OUTPUT
fi
# 9. Create (or update) the Pull Request only if there are changes.
- name: Create Pull Request
if: steps.changelog_changes.outputs.has_changes == 'true'
uses: peter-evans/create-pull-request@v7
with:
token: ${{ secrets.GITHUB_TOKEN }}
base: main
branch: "changelog/unreleased-update"
sign-commits: true
commit-message: "action: update Unreleased changelog"
title: "📜 docs: Unreleased Changelog"
body: |
**Description**:
- This PR updates the Unreleased section in CHANGELOG.md.
- It compares the current main branch with the latest version tag (determined as ${{ steps.get_latest_tag.outputs.tag }}),
regenerates the Unreleased changelog, removes any old Unreleased block, and inserts the new content.

View File

@@ -4,12 +4,13 @@ name: Build Helm Charts on Tag
on:
push:
tags:
- "*"
- "chart-*"
jobs:
release:
permissions:
contents: write
packages: write
runs-on: ubuntu-latest
steps:
- name: Checkout
@@ -26,15 +27,49 @@ jobs:
uses: azure/setup-helm@v4
env:
GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
- name: Build Subchart Deps
run: |
cd helm/librechat-rag-api
helm dependency build
cd helm/librechat
helm dependency build
cd ../librechat-rag-api
helm dependency build
- name: Run chart-releaser
uses: helm/chart-releaser-action@v1.6.0
- name: Get Chart Version
id: chart-version
run: |
CHART_VERSION=$(echo "${{ github.ref_name }}" | cut -d'-' -f2)
echo "CHART_VERSION=${CHART_VERSION}" >> "$GITHUB_OUTPUT"
# Log in to GitHub Container Registry
- name: Log in to GitHub Container Registry
uses: docker/login-action@v3
with:
charts_dir: helm
skip_existing: true
env:
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# Run Helm OCI Charts Releaser
# This is for the librechat chart
- name: Release Helm OCI Charts for librechat
uses: appany/helm-oci-chart-releaser@v0.4.2
with:
name: librechat
repository: ${{ github.actor }}/librechat-chart
tag: ${{ steps.chart-version.outputs.CHART_VERSION }}
path: helm/librechat
registry: ghcr.io
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}
# this is for the librechat-rag-api chart
- name: Release Helm OCI Charts for librechat-rag-api
uses: appany/helm-oci-chart-releaser@v0.4.2
with:
name: librechat-rag-api
repository: ${{ github.actor }}/librechat-chart
tag: ${{ steps.chart-version.outputs.CHART_VERSION }}
path: helm/librechat-rag-api
registry: ghcr.io
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -1,11 +1,18 @@
name: Detect Unused i18next Strings
# This workflow checks for unused i18n keys in translation files.
# It has special handling for:
# - com_ui_special_var_* keys that are dynamically constructed
# - com_agents_category_* keys that are stored in the database and used dynamically
on:
pull_request:
paths:
- "client/src/**"
- "api/**"
- "packages/data-provider/src/**"
- "packages/client/**"
- "packages/data-schemas/src/**"
jobs:
detect-unused-i18n-keys:
@@ -23,7 +30,7 @@ jobs:
# Define paths
I18N_FILE="client/src/locales/en/translation.json"
SOURCE_DIRS=("client/src" "api" "packages/data-provider/src")
SOURCE_DIRS=("client/src" "api" "packages/data-provider/src" "packages/client" "packages/data-schemas/src")
# Check if translation file exists
if [[ ! -f "$I18N_FILE" ]]; then
@@ -51,6 +58,31 @@ jobs:
fi
done
# Also check if the key is directly used somewhere
if [[ "$FOUND" == false ]]; then
for DIR in "${SOURCE_DIRS[@]}"; do
if grep -r --include=\*.{js,jsx,ts,tsx} -q "$KEY" "$DIR"; then
FOUND=true
break
fi
done
fi
# Special case for agent category keys that are dynamically used from database
elif [[ "$KEY" == com_agents_category_* ]]; then
# Check if agent category localization is being used
for DIR in "${SOURCE_DIRS[@]}"; do
# Check for dynamic category label/description usage
if grep -r --include=\*.{js,jsx,ts,tsx} -E "category\.(label|description).*startsWith.*['\"]com_" "$DIR" > /dev/null 2>&1 || \
# Check for the method that defines these keys
grep -r --include=\*.{js,jsx,ts,tsx} "ensureDefaultCategories" "$DIR" > /dev/null 2>&1 || \
# Check for direct usage in agentCategory.ts
grep -r --include=\*.ts -E "label:.*['\"]$KEY['\"]" "$DIR" > /dev/null 2>&1 || \
grep -r --include=\*.ts -E "description:.*['\"]$KEY['\"]" "$DIR" > /dev/null 2>&1; then
FOUND=true
break
fi
done
# Also check if the key is directly used somewhere
if [[ "$FOUND" == false ]]; then
for DIR in "${SOURCE_DIRS[@]}"; do

View File

@@ -48,7 +48,7 @@ jobs:
# 2. Download translation files from locize.
- name: Download Translations from locize
uses: locize/download@v1
uses: locize/download@v2
with:
project-id: ${{ secrets.LOCIZE_PROJECT_ID }}
path: "client/src/locales"

View File

@@ -7,6 +7,7 @@ on:
- 'package-lock.json'
- 'client/**'
- 'api/**'
- 'packages/client/**'
jobs:
detect-unused-packages:
@@ -28,7 +29,7 @@ jobs:
- name: Validate JSON files
run: |
for FILE in package.json client/package.json api/package.json; do
for FILE in package.json client/package.json api/package.json packages/client/package.json; do
if [[ -f "$FILE" ]]; then
jq empty "$FILE" || (echo "::error title=Invalid JSON::$FILE is invalid" && exit 1)
fi
@@ -63,12 +64,31 @@ jobs:
local folder=$1
local output_file=$2
if [[ -d "$folder" ]]; then
grep -rEho "require\\(['\"]([a-zA-Z0-9@/._-]+)['\"]\\)" "$folder" --include=\*.{js,ts,mjs,cjs} | \
# Extract require() statements
grep -rEho "require\\(['\"]([a-zA-Z0-9@/._-]+)['\"]\\)" "$folder" --include=\*.{js,ts,tsx,jsx,mjs,cjs} | \
sed -E "s/require\\(['\"]([a-zA-Z0-9@/._-]+)['\"]\\)/\1/" > "$output_file"
grep -rEho "import .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]" "$folder" --include=\*.{js,ts,mjs,cjs} | \
# Extract ES6 imports - various patterns
# import x from 'module'
grep -rEho "import .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]" "$folder" --include=\*.{js,ts,tsx,jsx,mjs,cjs} | \
sed -E "s/import .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]/\1/" >> "$output_file"
# import 'module' (side-effect imports)
grep -rEho "import ['\"]([a-zA-Z0-9@/._-]+)['\"]" "$folder" --include=\*.{js,ts,tsx,jsx,mjs,cjs} | \
sed -E "s/import ['\"]([a-zA-Z0-9@/._-]+)['\"]/\1/" >> "$output_file"
# export { x } from 'module' or export * from 'module'
grep -rEho "export .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]" "$folder" --include=\*.{js,ts,tsx,jsx,mjs,cjs} | \
sed -E "s/export .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]/\1/" >> "$output_file"
# import type { x } from 'module' (TypeScript)
grep -rEho "import type .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]" "$folder" --include=\*.{ts,tsx} | \
sed -E "s/import type .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]/\1/" >> "$output_file"
# Remove subpath imports but keep the base package
# e.g., '@tanstack/react-query/devtools' becomes '@tanstack/react-query'
sed -i -E 's|^(@?[a-zA-Z0-9-]+(/[a-zA-Z0-9-]+)?)/.*|\1|' "$output_file"
sort -u "$output_file" -o "$output_file"
else
touch "$output_file"
@@ -78,13 +98,80 @@ jobs:
extract_deps_from_code "." root_used_code.txt
extract_deps_from_code "client" client_used_code.txt
extract_deps_from_code "api" api_used_code.txt
# Extract dependencies used by @librechat/client package
extract_deps_from_code "packages/client" packages_client_used_code.txt
- name: Get @librechat/client dependencies
id: get-librechat-client-deps
run: |
if [[ -f "packages/client/package.json" ]]; then
# Get all dependencies from @librechat/client (dependencies, devDependencies, and peerDependencies)
DEPS=$(jq -r '.dependencies // {} | keys[]' packages/client/package.json 2>/dev/null || echo "")
DEV_DEPS=$(jq -r '.devDependencies // {} | keys[]' packages/client/package.json 2>/dev/null || echo "")
PEER_DEPS=$(jq -r '.peerDependencies // {} | keys[]' packages/client/package.json 2>/dev/null || echo "")
# Combine all dependencies
echo "$DEPS" > librechat_client_deps.txt
echo "$DEV_DEPS" >> librechat_client_deps.txt
echo "$PEER_DEPS" >> librechat_client_deps.txt
# Also include dependencies that are imported in packages/client
cat packages_client_used_code.txt >> librechat_client_deps.txt
# Remove empty lines and sort
grep -v '^$' librechat_client_deps.txt | sort -u > temp_deps.txt
mv temp_deps.txt librechat_client_deps.txt
else
touch librechat_client_deps.txt
fi
- name: Extract Workspace Dependencies
id: extract-workspace-deps
run: |
# Function to get dependencies from a workspace package that are used by another package
get_workspace_package_deps() {
local package_json=$1
local output_file=$2
# Get all workspace dependencies (starting with @librechat/)
if [[ -f "$package_json" ]]; then
local workspace_deps=$(jq -r '.dependencies // {} | to_entries[] | select(.key | startswith("@librechat/")) | .key' "$package_json" 2>/dev/null || echo "")
# For each workspace dependency, get its dependencies
for dep in $workspace_deps; do
# Convert @librechat/api to packages/api
local workspace_path=$(echo "$dep" | sed 's/@librechat\//packages\//')
local workspace_package_json="${workspace_path}/package.json"
if [[ -f "$workspace_package_json" ]]; then
# Extract all dependencies from the workspace package
jq -r '.dependencies // {} | keys[]' "$workspace_package_json" 2>/dev/null >> "$output_file"
# Also extract peerDependencies
jq -r '.peerDependencies // {} | keys[]' "$workspace_package_json" 2>/dev/null >> "$output_file"
fi
done
fi
if [[ -f "$output_file" ]]; then
sort -u "$output_file" -o "$output_file"
else
touch "$output_file"
fi
}
# Get workspace dependencies for each package
get_workspace_package_deps "package.json" root_workspace_deps.txt
get_workspace_package_deps "client/package.json" client_workspace_deps.txt
get_workspace_package_deps "api/package.json" api_workspace_deps.txt
- name: Run depcheck for root package.json
id: check-root
run: |
if [[ -f "package.json" ]]; then
UNUSED=$(depcheck --json | jq -r '.dependencies | join("\n")' || echo "")
UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat root_used_deps.txt root_used_code.txt | sort) || echo "")
# Exclude dependencies used in scripts, code, and workspace packages
UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat root_used_deps.txt root_used_code.txt root_workspace_deps.txt | sort) || echo "")
echo "ROOT_UNUSED<<EOF" >> $GITHUB_ENV
echo "$UNUSED" >> $GITHUB_ENV
echo "EOF" >> $GITHUB_ENV
@@ -97,7 +184,8 @@ jobs:
chmod -R 755 client
cd client
UNUSED=$(depcheck --json | jq -r '.dependencies | join("\n")' || echo "")
UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat ../client_used_deps.txt ../client_used_code.txt | sort) || echo "")
# Exclude dependencies used in scripts, code, and workspace packages
UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat ../client_used_deps.txt ../client_used_code.txt ../client_workspace_deps.txt | sort) || echo "")
# Filter out false positives
UNUSED=$(echo "$UNUSED" | grep -v "^micromark-extension-llm-math$" || echo "")
echo "CLIENT_UNUSED<<EOF" >> $GITHUB_ENV
@@ -113,7 +201,8 @@ jobs:
chmod -R 755 api
cd api
UNUSED=$(depcheck --json | jq -r '.dependencies | join("\n")' || echo "")
UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat ../api_used_deps.txt ../api_used_code.txt | sort) || echo "")
# Exclude dependencies used in scripts, code, and workspace packages
UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat ../api_used_deps.txt ../api_used_code.txt ../api_workspace_deps.txt | sort) || echo "")
echo "API_UNUSED<<EOF" >> $GITHUB_ENV
echo "$UNUSED" >> $GITHUB_ENV
echo "EOF" >> $GITHUB_ENV


.gitignore vendored
View File

@@ -13,6 +13,9 @@ pids
*.seed
.git
# CI/CD data
test-image*
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
@@ -134,3 +137,4 @@ helm/**/.values.yaml
/.openai/
/.tabnine/
/.codeium
*.local.md

.vscode/launch.json vendored
View File

@@ -8,7 +8,8 @@
"skipFiles": ["<node_internals>/**"],
"program": "${workspaceFolder}/api/server/index.js",
"env": {
"NODE_ENV": "production"
"NODE_ENV": "production",
"NODE_TLS_REJECT_UNAUTHORIZED": "0"
},
"console": "integratedTerminal",
"envFile": "${workspaceFolder}/.env"

View File

@@ -1,4 +1,4 @@
# v0.7.9
# v0.8.0-rc3
# Base node image
FROM node:20-alpine AS node
@@ -19,7 +19,12 @@ WORKDIR /app
USER node
COPY --chown=node:node . .
COPY --chown=node:node package.json package-lock.json ./
COPY --chown=node:node api/package.json ./api/package.json
COPY --chown=node:node client/package.json ./client/package.json
COPY --chown=node:node packages/data-provider/package.json ./packages/data-provider/package.json
COPY --chown=node:node packages/data-schemas/package.json ./packages/data-schemas/package.json
COPY --chown=node:node packages/api/package.json ./packages/api/package.json
RUN \
# Allow mounting of these files, which have no default
@@ -29,7 +34,11 @@ RUN \
npm config set fetch-retry-maxtimeout 600000 ; \
npm config set fetch-retries 5 ; \
npm config set fetch-retry-mintimeout 15000 ; \
npm install --no-audit; \
npm ci --no-audit
COPY --chown=node:node . .
RUN \
# React client build
NODE_OPTIONS="--max-old-space-size=2048" npm run frontend; \
npm prune --production; \
@@ -47,4 +56,4 @@ CMD ["npm", "run", "backend"]
# WORKDIR /usr/share/nginx/html
# COPY --from=node /app/client/dist /usr/share/nginx/html
# COPY client/nginx.conf /etc/nginx/conf.d/default.conf
# ENTRYPOINT ["nginx", "-g", "daemon off;"]
# ENTRYPOINT ["nginx", "-g", "daemon off;"]

View File

@@ -1,5 +1,5 @@
# Dockerfile.multi
# v0.7.9
# v0.8.0-rc3
# Base for all builds
FROM node:20-alpine AS base-min
@@ -16,6 +16,7 @@ COPY package*.json ./
COPY packages/data-provider/package*.json ./packages/data-provider/
COPY packages/api/package*.json ./packages/api/
COPY packages/data-schemas/package*.json ./packages/data-schemas/
COPY packages/client/package*.json ./packages/client/
COPY client/package*.json ./client/
COPY api/package*.json ./api/
@@ -45,11 +46,19 @@ COPY --from=data-provider-build /app/packages/data-provider/dist /app/packages/d
COPY --from=data-schemas-build /app/packages/data-schemas/dist /app/packages/data-schemas/dist
RUN npm run build
# Build `client` package
FROM base AS client-package-build
WORKDIR /app/packages/client
COPY packages/client ./
RUN npm run build
# Client build
FROM base AS client-build
WORKDIR /app/client
COPY client ./
COPY --from=data-provider-build /app/packages/data-provider/dist /app/packages/data-provider/dist
COPY --from=client-package-build /app/packages/client/dist /app/packages/client/dist
COPY --from=client-package-build /app/packages/client/src /app/packages/client/src
ENV NODE_OPTIONS="--max-old-space-size=2048"
RUN npm run build

View File

@@ -65,8 +65,10 @@
- 🔦 **Agents & Tools Integration**:
- **[LibreChat Agents](https://www.librechat.ai/docs/features/agents)**:
- No-Code Custom Assistants: Build specialized, AI-driven helpers without coding
- Flexible & Extensible: Use MCP Servers, tools, file search, code execution, and more
- No-Code Custom Assistants: Build specialized, AI-driven helpers
- Agent Marketplace: Discover and deploy community-built agents
- Collaborative Sharing: Share agents with specific users and groups
- Flexible & Extensible: Use MCP Servers, tools, file search, code execution, and more
- Compatible with Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, Google, Vertex AI, Responses API, and more
- [Model Context Protocol (MCP) Support](https://modelcontextprotocol.io/clients#librechat) for Tools
@@ -87,15 +89,18 @@
- Create, Save, & Share Custom Presets
- Switch between AI Endpoints and Presets mid-chat
- Edit, Resubmit, and Continue Messages with Conversation branching
- Create and share prompts with specific users and groups
- [Fork Messages & Conversations](https://www.librechat.ai/docs/features/fork) for Advanced Context control
- 💬 **Multimodal & File Interactions**:
- Upload and analyze images with Claude 3, GPT-4.5, GPT-4o, o1, Llama-Vision, and Gemini 📸
- Chat with Files using Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, & Google 🗃️
- 🌎 **Multilingual UI**:
- English, 中文, Deutsch, Español, Français, Italiano, Polski, Português Brasileiro
- Русский, 日本語, Svenska, 한국어, Tiếng Việt, 繁體中文, العربية, Türkçe, Nederlands, עברית
- 🌎 **Multilingual UI**:
- English, 中文 (简体), 中文 (繁體), العربية, Deutsch, Español, Français, Italiano
- Polski, Português (PT), Português (BR), Русский, 日本語, Svenska, 한국어, Tiếng Việt
- Türkçe, Nederlands, עברית, Català, Čeština, Dansk, Eesti, فارسی
- Suomi, Magyar, Հայերեն, Bahasa Indonesia, ქართული, Latviešu, ไทย, ئۇيغۇرچە
- 🧠 **Reasoning UI**:
- Dynamic Reasoning UI for Chain-of-Thought/Reasoning AI models like DeepSeek-R1

View File

@@ -1,5 +1,7 @@
const crypto = require('crypto');
const fetch = require('node-fetch');
const { logger } = require('@librechat/data-schemas');
const { getBalanceConfig } = require('@librechat/api');
const {
supportsBalanceCheck,
isAgentsEndpoint,
@@ -15,7 +17,6 @@ const { checkBalance } = require('~/models/balanceMethods');
const { truncateToolCallOutputs } = require('./prompts');
const { getFiles } = require('~/models/File');
const TextStream = require('./TextStream');
const { logger } = require('~/config');
class BaseClient {
constructor(apiKey, options = {}) {
@@ -37,6 +38,8 @@ class BaseClient {
this.conversationId;
/** @type {string} */
this.responseMessageId;
/** @type {string} */
this.parentMessageId;
/** @type {TAttachment[]} */
this.attachments;
/** The key for the usage object's input tokens
@@ -110,13 +113,15 @@ class BaseClient {
* If a correction to the token usage is needed, the method should return an object with the corrected token counts.
* Should only be used if `recordCollectedUsage` was not used instead.
* @param {string} [model]
* @param {AppConfig['balance']} [balance]
* @param {number} promptTokens
* @param {number} completionTokens
* @returns {Promise<void>}
*/
async recordTokenUsage({ model, promptTokens, completionTokens }) {
async recordTokenUsage({ model, balance, promptTokens, completionTokens }) {
logger.debug('[BaseClient] `recordTokenUsage` not implemented.', {
model,
balance,
promptTokens,
completionTokens,
});
@@ -185,7 +190,8 @@ class BaseClient {
this.user = user;
const saveOptions = this.getSaveOptions();
this.abortController = opts.abortController ?? new AbortController();
const conversationId = overrideConvoId ?? opts.conversationId ?? crypto.randomUUID();
const requestConvoId = overrideConvoId ?? opts.conversationId;
const conversationId = requestConvoId ?? crypto.randomUUID();
const parentMessageId = opts.parentMessageId ?? Constants.NO_PARENT;
const userMessageId =
overrideUserMessageId ?? opts.overrideParentMessageId ?? crypto.randomUUID();
@@ -210,11 +216,12 @@ class BaseClient {
...opts,
user,
head,
saveOptions,
userMessageId,
requestConvoId,
conversationId,
parentMessageId,
userMessageId,
responseMessageId,
saveOptions,
};
}
@@ -233,11 +240,12 @@ class BaseClient {
const {
user,
head,
saveOptions,
userMessageId,
requestConvoId,
conversationId,
parentMessageId,
userMessageId,
responseMessageId,
saveOptions,
} = await this.setMessageOptions(opts);
const userMessage = opts.isEdited
@@ -259,7 +267,8 @@ class BaseClient {
}
if (typeof opts?.onStart === 'function') {
opts.onStart(userMessage, responseMessageId);
const isNewConvo = !requestConvoId && parentMessageId === Constants.NO_PARENT;
opts.onStart(userMessage, responseMessageId, isNewConvo);
}
return {
@@ -565,6 +574,7 @@ class BaseClient {
}
async sendMessage(message, opts = {}) {
const appConfig = this.options.req?.config;
/** @type {Promise<TMessage>} */
let userMessagePromise;
const { user, head, isEdited, conversationId, responseMessageId, saveOptions, userMessage } =
@@ -614,15 +624,19 @@ class BaseClient {
this.currentMessages.push(userMessage);
}
/**
* When the userMessage is pushed to currentMessages, the parentMessage is the userMessageId.
* this only matters when buildMessages is utilizing the parentMessageId, and may vary on implementation
*/
const parentMessageId = isEdited ? head : userMessage.messageId;
this.parentMessageId = parentMessageId;
let {
prompt: payload,
tokenCountMap,
promptTokens,
} = await this.buildMessages(
this.currentMessages,
// When the userMessage is pushed to currentMessages, the parentMessage is the userMessageId.
// this only matters when buildMessages is utilizing the parentMessageId, and may vary on implementation
isEdited ? head : userMessage.messageId,
parentMessageId,
this.getBuildMessagesOptions(opts),
opts,
);
@@ -647,9 +661,9 @@ class BaseClient {
}
}
const balance = this.options.req?.app?.locals?.balance;
const balanceConfig = getBalanceConfig(appConfig);
if (
balance?.enabled &&
balanceConfig?.enabled &&
supportsBalanceCheck[this.options.endpointType ?? this.options.endpoint]
) {
await checkBalance({
@@ -748,6 +762,7 @@ class BaseClient {
usage,
promptTokens,
completionTokens,
balance: balanceConfig,
model: responseMessage.model,
});
}
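Two signature changes ripple out of this file: onStart now receives an isNewConvo flag, and recordTokenUsage receives the resolved balance config instead of reading req.app.locals.balance. A rough sketch of a consumer, with handler names that are illustrative rather than the real call sites:

    // Hypothetical onStart handler: the third argument is new, and is only true when neither a
    // conversationId was supplied nor a parent message exists.
    const onStart = (userMessage, responseMessageId, isNewConvo) => {
      if (isNewConvo) {
        console.log('created conversation', userMessage.conversationId, responseMessageId);
      }
    };

    // Subclass override sketch: `balance` is the config resolved via getBalanceConfig(appConfig).
    async function recordTokenUsage({ model, balance, promptTokens, completionTokens }) {
      if (balance?.enabled) {
        // record prompt/completion spend against the user's balance here
      }
      return { model, promptTokens, completionTokens };
    }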

View File

@@ -36,11 +36,11 @@ const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const { addSpaceIfNeeded, sleep } = require('~/server/utils');
const { spendTokens } = require('~/models/spendTokens');
const { handleOpenAIErrors } = require('./tools/util');
const { createLLM, RunManager } = require('./llm');
const { summaryBuffer } = require('./memory');
const { runTitleChain } = require('./chains');
const { tokenSplit } = require('./document');
const BaseClient = require('./BaseClient');
const { createLLM } = require('./llm');
const { logger } = require('~/config');
class OpenAIClient extends BaseClient {
@@ -618,10 +618,6 @@ class OpenAIClient extends BaseClient {
temperature = 0.2,
max_tokens,
streaming,
context,
tokenBuffer,
initialMessageCount,
conversationId,
}) {
const modelOptions = {
modelName: modelName ?? model,
@@ -653,8 +649,10 @@ class OpenAIClient extends BaseClient {
if (headers && typeof headers === 'object' && !Array.isArray(headers)) {
configOptions.baseOptions = {
headers: resolveHeaders({
...headers,
...configOptions?.baseOptions?.headers,
headers: {
...headers,
...configOptions?.baseOptions?.headers,
},
}),
};
}
@@ -664,22 +662,12 @@ class OpenAIClient extends BaseClient {
configOptions.httpsAgent = new HttpsProxyAgent(this.options.proxy);
}
const { req, res, debug } = this.options;
const runManager = new RunManager({ req, res, debug, abortController: this.abortController });
this.runManager = runManager;
const llm = createLLM({
modelOptions,
configOptions,
openAIApiKey: this.apiKey,
azure: this.azure,
streaming,
callbacks: runManager.createCallbacks({
context,
tokenBuffer,
conversationId: this.conversationId ?? conversationId,
initialMessageCount,
}),
});
return llm;
@@ -700,6 +688,7 @@ class OpenAIClient extends BaseClient {
* In case of failure, it will return the default title, "New Chat".
*/
async titleConvo({ text, conversationId, responseText = '' }) {
const appConfig = this.options.req?.config;
this.conversationId = conversationId;
if (this.options.attachments) {
@@ -728,8 +717,7 @@ class OpenAIClient extends BaseClient {
max_tokens: 16,
};
/** @type {TAzureConfig | undefined} */
const azureConfig = this.options?.req?.app?.locals?.[EModelEndpoint.azureOpenAI];
const azureConfig = appConfig?.endpoints?.[EModelEndpoint.azureOpenAI];
const resetTitleOptions = !!(
(this.azure && azureConfig) ||
@@ -749,7 +737,7 @@ class OpenAIClient extends BaseClient {
groupMap,
});
this.options.headers = resolveHeaders(headers);
this.options.headers = resolveHeaders({ headers });
this.options.reverseProxyUrl = baseURL ?? null;
this.langchainProxy = extractBaseURL(this.options.reverseProxyUrl);
this.apiKey = azureOptions.azureOpenAIApiKey;
@@ -1118,6 +1106,7 @@ ${convo}
}
async chatCompletion({ payload, onProgress, abortController = null }) {
const appConfig = this.options.req?.config;
let error = null;
let intermediateReply = [];
const errorCallback = (err) => (error = err);
@@ -1163,8 +1152,7 @@ ${convo}
opts.fetchOptions.agent = new HttpsProxyAgent(this.options.proxy);
}
/** @type {TAzureConfig | undefined} */
const azureConfig = this.options?.req?.app?.locals?.[EModelEndpoint.azureOpenAI];
const azureConfig = appConfig?.endpoints?.[EModelEndpoint.azureOpenAI];
if (
(this.azure && this.isVisionModel && azureConfig) ||
@@ -1181,7 +1169,7 @@ ${convo}
modelGroupMap,
groupMap,
});
opts.defaultHeaders = resolveHeaders(headers);
opts.defaultHeaders = resolveHeaders({ headers });
this.langchainProxy = extractBaseURL(baseURL);
this.apiKey = azureOptions.azureOpenAIApiKey;
@@ -1222,7 +1210,9 @@ ${convo}
}
if (this.isOmni === true && modelOptions.max_tokens != null) {
modelOptions.max_completion_tokens = modelOptions.max_tokens;
const paramName =
modelOptions.useResponsesApi === true ? 'max_output_tokens' : 'max_completion_tokens';
modelOptions[paramName] = modelOptions.max_tokens;
delete modelOptions.max_tokens;
}
if (this.isOmni === true && modelOptions.temperature != null) {

View File

@@ -1,95 +0,0 @@
const { promptTokensEstimate } = require('openai-chat-tokens');
const { EModelEndpoint, supportsBalanceCheck } = require('librechat-data-provider');
const { formatFromLangChain } = require('~/app/clients/prompts');
const { getBalanceConfig } = require('~/server/services/Config');
const { checkBalance } = require('~/models/balanceMethods');
const { logger } = require('~/config');
const createStartHandler = ({
context,
conversationId,
tokenBuffer = 0,
initialMessageCount,
manager,
}) => {
return async (_llm, _messages, runId, parentRunId, extraParams) => {
const { invocation_params } = extraParams;
const { model, functions, function_call } = invocation_params;
const messages = _messages[0].map(formatFromLangChain);
logger.debug(`[createStartHandler] handleChatModelStart: ${context}`, {
model,
function_call,
});
if (context !== 'title') {
logger.debug(`[createStartHandler] handleChatModelStart: ${context}`, {
functions,
});
}
const payload = { messages };
let prelimPromptTokens = 1;
if (functions) {
payload.functions = functions;
prelimPromptTokens += 2;
}
if (function_call) {
payload.function_call = function_call;
prelimPromptTokens -= 5;
}
prelimPromptTokens += promptTokensEstimate(payload);
logger.debug('[createStartHandler]', {
prelimPromptTokens,
tokenBuffer,
});
prelimPromptTokens += tokenBuffer;
try {
const balance = await getBalanceConfig();
if (balance?.enabled && supportsBalanceCheck[EModelEndpoint.openAI]) {
const generations =
initialMessageCount && messages.length > initialMessageCount
? messages.slice(initialMessageCount)
: null;
await checkBalance({
req: manager.req,
res: manager.res,
txData: {
user: manager.user,
tokenType: 'prompt',
amount: prelimPromptTokens,
debug: manager.debug,
generations,
model,
endpoint: EModelEndpoint.openAI,
},
});
}
} catch (err) {
logger.error(`[createStartHandler][${context}] checkBalance error`, err);
manager.abortController.abort();
if (context === 'summary' || context === 'plugins') {
manager.addRun(runId, { conversationId, error: err.message });
throw new Error(err);
}
return;
}
manager.addRun(runId, {
model,
messages,
functions,
function_call,
runId,
parentRunId,
conversationId,
prelimPromptTokens,
});
};
};
module.exports = createStartHandler;

View File

@@ -1,5 +0,0 @@
const createStartHandler = require('./createStartHandler');
module.exports = {
createStartHandler,
};

View File

@@ -1,105 +0,0 @@
const { createStartHandler } = require('~/app/clients/callbacks');
const { spendTokens } = require('~/models/spendTokens');
const { logger } = require('~/config');
class RunManager {
constructor(fields) {
const { req, res, abortController, debug } = fields;
this.abortController = abortController;
this.user = req.user.id;
this.req = req;
this.res = res;
this.debug = debug;
this.runs = new Map();
this.convos = new Map();
}
addRun(runId, runData) {
if (!this.runs.has(runId)) {
this.runs.set(runId, runData);
if (runData.conversationId) {
this.convos.set(runData.conversationId, runId);
}
return runData;
} else {
const existingData = this.runs.get(runId);
const update = { ...existingData, ...runData };
this.runs.set(runId, update);
if (update.conversationId) {
this.convos.set(update.conversationId, runId);
}
return update;
}
}
removeRun(runId) {
if (this.runs.has(runId)) {
this.runs.delete(runId);
} else {
logger.error(`[api/app/clients/llm/RunManager] Run with ID ${runId} does not exist.`);
}
}
getAllRuns() {
return Array.from(this.runs.values());
}
getRunById(runId) {
return this.runs.get(runId);
}
getRunByConversationId(conversationId) {
const runId = this.convos.get(conversationId);
return { run: this.runs.get(runId), runId };
}
createCallbacks(metadata) {
return [
{
handleChatModelStart: createStartHandler({ ...metadata, manager: this }),
handleLLMEnd: async (output, runId, _parentRunId) => {
const { llmOutput, ..._output } = output;
logger.debug(`[RunManager] handleLLMEnd: ${JSON.stringify(metadata)}`, {
runId,
_parentRunId,
llmOutput,
});
if (metadata.context !== 'title') {
logger.debug('[RunManager] handleLLMEnd:', {
output: _output,
});
}
const { tokenUsage } = output.llmOutput;
const run = this.getRunById(runId);
this.removeRun(runId);
const txData = {
user: this.user,
model: run?.model ?? 'gpt-3.5-turbo',
...metadata,
};
await spendTokens(txData, tokenUsage);
},
handleLLMError: async (err) => {
logger.error(`[RunManager] handleLLMError: ${JSON.stringify(metadata)}`, err);
if (metadata.context === 'title') {
return;
} else if (metadata.context === 'plugins') {
throw new Error(err);
}
const { conversationId } = metadata;
const { run } = this.getRunByConversationId(conversationId);
if (run && run.error) {
const { error } = run;
throw new Error(error);
}
},
},
];
}
}
module.exports = RunManager;

View File

@@ -1,9 +1,7 @@
const createLLM = require('./createLLM');
const RunManager = require('./RunManager');
const createCoherePayload = require('./createCoherePayload');
module.exports = {
createLLM,
RunManager,
createCoherePayload,
};

View File

@@ -1,7 +1,6 @@
const axios = require('axios');
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { generateShortLivedToken } = require('~/server/services/AuthService');
const { isEnabled, generateShortLivedToken } = require('@librechat/api');
const footer = `Use the context as your learned knowledge to better answer the user.

View File

@@ -245,7 +245,7 @@ describe('AnthropicClient', () => {
});
describe('Claude 4 model headers', () => {
it('should add "prompt-caching" beta header for claude-sonnet-4 model', () => {
it('should add "prompt-caching" and "context-1m" beta headers for claude-sonnet-4 model', () => {
const client = new AnthropicClient('test-api-key');
const modelOptions = {
model: 'claude-sonnet-4-20250514',
@@ -255,10 +255,30 @@ describe('AnthropicClient', () => {
expect(anthropicClient._options.defaultHeaders).toBeDefined();
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
'prompt-caching-2024-07-31',
'prompt-caching-2024-07-31,context-1m-2025-08-07',
);
});
it('should add "prompt-caching" and "context-1m" beta headers for claude-sonnet-4 model formats', () => {
const client = new AnthropicClient('test-api-key');
const modelVariations = [
'claude-sonnet-4-20250514',
'claude-sonnet-4-latest',
'anthropic/claude-sonnet-4-20250514',
];
modelVariations.forEach((model) => {
const modelOptions = { model };
client.setOptions({ modelOptions, promptCache: true });
const anthropicClient = client.getClient(modelOptions);
expect(anthropicClient._options.defaultHeaders).toBeDefined();
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
'prompt-caching-2024-07-31,context-1m-2025-08-07',
);
});
});
it('should add "prompt-caching" beta header for claude-opus-4 model', () => {
const client = new AnthropicClient('test-api-key');
const modelOptions = {
@@ -273,20 +293,6 @@ describe('AnthropicClient', () => {
);
});
it('should add "prompt-caching" beta header for claude-4-sonnet model', () => {
const client = new AnthropicClient('test-api-key');
const modelOptions = {
model: 'claude-4-sonnet-20250514',
};
client.setOptions({ modelOptions, promptCache: true });
const anthropicClient = client.getClient(modelOptions);
expect(anthropicClient._options.defaultHeaders).toBeDefined();
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
'prompt-caching-2024-07-31',
);
});
it('should add "prompt-caching" beta header for claude-4-opus model', () => {
const client = new AnthropicClient('test-api-key');
const modelOptions = {

View File

@@ -2,6 +2,14 @@ const { Constants } = require('librechat-data-provider');
const { initializeFakeClient } = require('./FakeClient');
jest.mock('~/db/connect');
jest.mock('~/server/services/Config', () => ({
getAppConfig: jest.fn().mockResolvedValue({
// Default app config for tests
paths: { uploads: '/tmp' },
fileStrategy: 'local',
memory: { disabled: false },
}),
}));
jest.mock('~/models', () => ({
User: jest.fn(),
Key: jest.fn(),
@@ -579,6 +587,8 @@ describe('BaseClient', () => {
expect(onStart).toHaveBeenCalledWith(
expect.objectContaining({ text: 'Hello, world!' }),
expect.any(String),
/** `isNewConvo` */
true,
);
});

View File

@@ -1,4 +1,4 @@
const availableTools = require('./manifest.json');
const manifest = require('./manifest');
// Structured Tools
const DALLE3 = require('./structured/DALLE3');
@@ -13,23 +13,8 @@ const TraversaalSearch = require('./structured/TraversaalSearch');
const createOpenAIImageTools = require('./structured/OpenAIImageTools');
const TavilySearchResults = require('./structured/TavilySearchResults');
/** @type {Record<string, TPlugin | undefined>} */
const manifestToolMap = {};
/** @type {Array<TPlugin>} */
const toolkits = [];
availableTools.forEach((tool) => {
manifestToolMap[tool.pluginKey] = tool;
if (tool.toolkit === true) {
toolkits.push(tool);
}
});
module.exports = {
toolkits,
availableTools,
manifestToolMap,
...manifest,
// Structured Tools
DALLE3,
FluxAPI,

View File

@@ -0,0 +1,20 @@
const availableTools = require('./manifest.json');
/** @type {Record<string, TPlugin | undefined>} */
const manifestToolMap = {};
/** @type {Array<TPlugin>} */
const toolkits = [];
availableTools.forEach((tool) => {
manifestToolMap[tool.pluginKey] = tool;
if (tool.toolkit === true) {
toolkits.push(tool);
}
});
module.exports = {
toolkits,
availableTools,
manifestToolMap,
};

View File

@@ -49,7 +49,7 @@
"pluginKey": "image_gen_oai",
"toolkit": true,
"description": "Image Generation and Editing using OpenAI's latest state-of-the-art models",
"icon": "/assets/image_gen_oai.png",
"icon": "assets/image_gen_oai.png",
"authConfig": [
{
"authField": "IMAGE_GEN_OAI_API_KEY",
@@ -75,7 +75,7 @@
"name": "Browser",
"pluginKey": "web-browser",
"description": "Scrape and summarize webpage data",
"icon": "/assets/web-browser.svg",
"icon": "assets/web-browser.svg",
"authConfig": [
{
"authField": "OPENAI_API_KEY",
@@ -170,7 +170,7 @@
"name": "OpenWeather",
"pluginKey": "open_weather",
"description": "Get weather forecasts and historical data from the OpenWeather API",
"icon": "/assets/openweather.png",
"icon": "assets/openweather.png",
"authConfig": [
{
"authField": "OPENWEATHER_API_KEY",

View File

@@ -3,12 +3,12 @@ const path = require('path');
const OpenAI = require('openai');
const fetch = require('node-fetch');
const { v4: uuidv4 } = require('uuid');
const { ProxyAgent } = require('undici');
const { Tool } = require('@langchain/core/tools');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { logger } = require('@librechat/data-schemas');
const { getImageBasename } = require('@librechat/api');
const { FileContext, ContentTypes } = require('librechat-data-provider');
const { getImageBasename } = require('~/server/services/Files/images');
const extractBaseURL = require('~/utils/extractBaseURL');
const logger = require('~/config/winston');
const displayMessage =
"DALL-E displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";
@@ -46,7 +46,10 @@ class DALLE3 extends Tool {
}
if (process.env.PROXY) {
config.httpAgent = new HttpsProxyAgent(process.env.PROXY);
const proxyAgent = new ProxyAgent(process.env.PROXY);
config.fetchOptions = {
dispatcher: proxyAgent,
};
}
/** @type {OpenAI} */
@@ -163,7 +166,8 @@ Error Message: ${error.message}`);
if (this.isAgent) {
let fetchOptions = {};
if (process.env.PROXY) {
fetchOptions.agent = new HttpsProxyAgent(process.env.PROXY);
const proxyAgent = new ProxyAgent(process.env.PROXY);
fetchOptions.dispatcher = proxyAgent;
}
const imageResponse = await fetch(theImageUrl, fetchOptions);
const arrayBuffer = await imageResponse.arrayBuffer();

View File

@@ -1,69 +1,16 @@
const { z } = require('zod');
const axios = require('axios');
const { v4 } = require('uuid');
const OpenAI = require('openai');
const FormData = require('form-data');
const { ProxyAgent } = require('undici');
const { tool } = require('@langchain/core/tools');
const { logAxiosError } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { logAxiosError, oaiToolkit } = require('@librechat/api');
const { ContentTypes, EImageOutputType } = require('librechat-data-provider');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { extractBaseURL } = require('~/utils');
const extractBaseURL = require('~/utils/extractBaseURL');
const { getFiles } = require('~/models/File');
/** Default descriptions for image generation tool */
const DEFAULT_IMAGE_GEN_DESCRIPTION = `
Generates high-quality, original images based solely on text, not using any uploaded reference images.
When to use \`image_gen_oai\`:
- To create entirely new images from detailed text descriptions that do NOT reference any image files.
When NOT to use \`image_gen_oai\`:
- If the user has uploaded any images and requests modifications, enhancements, or remixing based on those uploads → use \`image_edit_oai\` instead.
Generated image IDs will be returned in the response, so you can refer to them in future requests made to \`image_edit_oai\`.
`.trim();
/** Default description for image editing tool */
const DEFAULT_IMAGE_EDIT_DESCRIPTION =
`Generates high-quality, original images based on text and one or more uploaded/referenced images.
When to use \`image_edit_oai\`:
- The user wants to modify, extend, or remix one **or more** uploaded images, either:
- Previously generated, or in the current request (both to be included in the \`image_ids\` array).
- Always when the user refers to uploaded images for editing, enhancement, remixing, style transfer, or combining elements.
- Any current or existing images are to be used as visual guides.
- If there are any files in the current request, they are more likely than not expected as references for image edit requests.
When NOT to use \`image_edit_oai\`:
- Brand-new generations that do not rely on an existing image → use \`image_gen_oai\` instead.
Both generated and referenced image IDs will be returned in the response, so you can refer to them in future requests made to \`image_edit_oai\`.
`.trim();
/** Default prompt descriptions */
const DEFAULT_IMAGE_GEN_PROMPT_DESCRIPTION = `Describe the image you want in detail.
Be highly specific—break your idea into layers:
(1) main concept and subject,
(2) composition and position,
(3) lighting and mood,
(4) style, medium, or camera details,
(5) important features (age, expression, clothing, etc.),
(6) background.
Use positive, descriptive language and specify what should be included, not what to avoid.
List number and characteristics of people/objects, and mention style/technical requirements (e.g., "DSLR photo, 85mm lens, golden hour").
Do not reference any uploaded images—use for new image creation from text only.`;
const DEFAULT_IMAGE_EDIT_PROMPT_DESCRIPTION = `Describe the changes, enhancements, or new ideas to apply to the uploaded image(s).
Be highly specific—break your request into layers:
(1) main concept or transformation,
(2) specific edits/replacements or composition guidance,
(3) desired style, mood, or technique,
(4) features/items to keep, change, or add (such as objects, people, clothing, lighting, etc.).
Use positive, descriptive language and clarify what should be included or changed, not what to avoid.
Always base this prompt on the most recently uploaded reference images.`;
const displayMessage =
"The tool displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";
@@ -91,22 +38,6 @@ function returnValue(value) {
return value;
}
const getImageGenDescription = () => {
return process.env.IMAGE_GEN_OAI_DESCRIPTION || DEFAULT_IMAGE_GEN_DESCRIPTION;
};
const getImageEditDescription = () => {
return process.env.IMAGE_EDIT_OAI_DESCRIPTION || DEFAULT_IMAGE_EDIT_DESCRIPTION;
};
const getImageGenPromptDescription = () => {
return process.env.IMAGE_GEN_OAI_PROMPT_DESCRIPTION || DEFAULT_IMAGE_GEN_PROMPT_DESCRIPTION;
};
const getImageEditPromptDescription = () => {
return process.env.IMAGE_EDIT_OAI_PROMPT_DESCRIPTION || DEFAULT_IMAGE_EDIT_PROMPT_DESCRIPTION;
};
function createAbortHandler() {
return function () {
logger.debug('[ImageGenOAI] Image generation aborted');
@@ -121,7 +52,9 @@ function createAbortHandler() {
* @param {string} fields.IMAGE_GEN_OAI_API_KEY - The OpenAI API key
* @param {boolean} [fields.override] - Whether to override the API key check, necessary for app initialization
* @param {MongoFile[]} [fields.imageFiles] - The images to be used for editing
* @returns {Array} - Array of image tools
* @param {string} [fields.imageOutputType] - The image output type configuration
* @param {string} [fields.fileStrategy] - The file storage strategy
* @returns {Array<ReturnType<tool>>} - Array of image tools
*/
function createOpenAIImageTools(fields = {}) {
/** @type {boolean} Used to initialize the Tool without necessary variables. */
@@ -131,8 +64,8 @@ function createOpenAIImageTools(fields = {}) {
throw new Error('This tool is only available for agents.');
}
const { req } = fields;
const imageOutputType = req?.app.locals.imageOutputType || EImageOutputType.PNG;
const appFileStrategy = req?.app.locals.fileStrategy;
const imageOutputType = fields.imageOutputType || EImageOutputType.PNG;
const appFileStrategy = fields.fileStrategy;
const getApiKey = () => {
const apiKey = process.env.IMAGE_GEN_OAI_API_KEY ?? '';
@@ -189,7 +122,10 @@ function createOpenAIImageTools(fields = {}) {
}
const clientConfig = { ...closureConfig };
if (process.env.PROXY) {
clientConfig.httpAgent = new HttpsProxyAgent(process.env.PROXY);
const proxyAgent = new ProxyAgent(process.env.PROXY);
clientConfig.fetchOptions = {
dispatcher: proxyAgent,
};
}
/** @type {OpenAI} */
@@ -282,46 +218,7 @@ Error Message: ${error.message}`);
];
return [response, { content, file_ids }];
},
{
name: 'image_gen_oai',
description: getImageGenDescription(),
schema: z.object({
prompt: z.string().max(32000).describe(getImageGenPromptDescription()),
background: z
.enum(['transparent', 'opaque', 'auto'])
.optional()
.describe(
'Sets transparency for the background. Must be one of transparent, opaque or auto (default). When transparent, the output format should be png or webp.',
),
/*
n: z
.number()
.int()
.min(1)
.max(10)
.optional()
.describe('The number of images to generate. Must be between 1 and 10.'),
output_compression: z
.number()
.int()
.min(0)
.max(100)
.optional()
.describe('The compression level (0-100%) for webp or jpeg formats. Defaults to 100.'),
*/
quality: z
.enum(['auto', 'high', 'medium', 'low'])
.optional()
.describe('The quality of the image. One of auto (default), high, medium, or low.'),
size: z
.enum(['auto', '1024x1024', '1536x1024', '1024x1536'])
.optional()
.describe(
'The size of the generated image. One of 1024x1024, 1536x1024 (landscape), 1024x1536 (portrait), or auto (default).',
),
}),
responseFormat: 'content_and_artifact',
},
oaiToolkit.image_gen_oai,
);
/**
@@ -335,7 +232,10 @@ Error Message: ${error.message}`);
const clientConfig = { ...closureConfig };
if (process.env.PROXY) {
clientConfig.httpAgent = new HttpsProxyAgent(process.env.PROXY);
const proxyAgent = new ProxyAgent(process.env.PROXY);
clientConfig.fetchOptions = {
dispatcher: proxyAgent,
};
}
const formData = new FormData();
@@ -447,6 +347,19 @@ Error Message: ${error.message}`);
baseURL,
};
if (process.env.PROXY) {
try {
const url = new URL(process.env.PROXY);
axiosConfig.proxy = {
host: url.hostname.replace(/^\[|\]$/g, ''),
port: url.port ? parseInt(url.port, 10) : undefined,
protocol: url.protocol.replace(':', ''),
};
} catch (error) {
logger.error('Error parsing proxy URL:', error);
}
}
if (process.env.IMAGE_GEN_OAI_AZURE_API_VERSION && process.env.IMAGE_GEN_OAI_BASEURL) {
axiosConfig.params = {
'api-version': process.env.IMAGE_GEN_OAI_AZURE_API_VERSION,
@@ -498,48 +411,7 @@ Error Message: ${error.message || 'Unknown error'}`);
}
}
},
{
name: 'image_edit_oai',
description: getImageEditDescription(),
schema: z.object({
image_ids: z
.array(z.string())
.min(1)
.describe(
`
IDs (image ID strings) of previously generated or uploaded images that should guide the edit.
Guidelines:
- If the user's request depends on any prior image(s), copy their image IDs into the \`image_ids\` array (in the same order the user refers to them).
- Never invent or hallucinate IDs; only use IDs that are still visible in the conversation context.
- If no earlier image is relevant, omit the field entirely.
`.trim(),
),
prompt: z.string().max(32000).describe(getImageEditPromptDescription()),
/*
n: z
.number()
.int()
.min(1)
.max(10)
.optional()
.describe('The number of images to generate. Must be between 1 and 10. Defaults to 1.'),
*/
quality: z
.enum(['auto', 'high', 'medium', 'low'])
.optional()
.describe(
'The quality of the image. One of auto (default), high, medium, or low. High/medium/low only supported for gpt-image-1.',
),
size: z
.enum(['auto', '1024x1024', '1536x1024', '1024x1536', '256x256', '512x512'])
.optional()
.describe(
'The size of the generated images. For gpt-image-1: auto (default), 1024x1024, 1536x1024, 1024x1536. For dall-e-2: 256x256, 512x512, 1024x1024.',
),
}),
responseFormat: 'content_and_artifact',
},
oaiToolkit.image_edit_oai,
);
return [imageGenTool, imageEditTool];
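With app.locals gone, a caller presumably passes the resolved configuration straight into the factory. A hypothetical invocation using only the fields visible in the updated JSDoc (values illustrative, `req` assumed to be the Express request in scope):

    // Hypothetical — in the app these values come from the resolved appConfig, not hard-coded strings.
    const tools = createOpenAIImageTools({
      req,                    // ServerRequest, still used for user/file context
      imageFiles: [],         // MongoFile[] used by the edit tool
      imageOutputType: 'png', // previously read from req.app.locals.imageOutputType
      fileStrategy: 'local',  // previously read from req.app.locals.fileStrategy
    });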

View File

@@ -11,14 +11,14 @@ const paths = require('~/config/paths');
const { logger } = require('~/config');
const displayMessage =
'Stable Diffusion displayed an image. All generated images are already plainly visible, so don\'t repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.';
"Stable Diffusion displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";
class StableDiffusionAPI extends Tool {
constructor(fields) {
super();
/** @type {string} User ID */
this.userId = fields.userId;
/** @type {Express.Request | undefined} Express Request object, only provided by ToolService */
/** @type {ServerRequest | undefined} Express Request object, only provided by ToolService */
this.req = fields.req;
/** @type {boolean} Used to initialize the Tool without necessary variables. */
this.override = fields.override ?? false;
@@ -44,7 +44,7 @@ class StableDiffusionAPI extends Tool {
// "negative_prompt":"semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, out of frame, low quality, ugly, mutation, deformed"
// - Generate images only once per human query unless explicitly requested by the user`;
this.description =
'You can generate images using text with \'stable-diffusion\'. This tool is exclusively for visual content.';
"You can generate images using text with 'stable-diffusion'. This tool is exclusively for visual content.";
this.schema = z.object({
prompt: z
.string()

View File

@@ -1,9 +1,9 @@
const { z } = require('zod');
const { ytToolkit } = require('@librechat/api');
const { tool } = require('@langchain/core/tools');
const { youtube } = require('@googleapis/youtube');
const { logger } = require('@librechat/data-schemas');
const { YoutubeTranscript } = require('youtube-transcript');
const { getApiKey } = require('./credentials');
const { logger } = require('~/config');
function extractVideoId(url) {
const rawIdRegex = /^[a-zA-Z0-9_-]{11}$/;
@@ -29,7 +29,7 @@ function parseTranscript(transcriptResponse) {
.map((entry) => entry.text.trim())
.filter((text) => text)
.join(' ')
.replaceAll('&amp;#39;', '\'');
.replaceAll('&amp;#39;', "'");
}
function createYouTubeTools(fields = {}) {
@@ -42,160 +42,94 @@ function createYouTubeTools(fields = {}) {
auth: apiKey,
});
const searchTool = tool(
async ({ query, maxResults = 5 }) => {
const response = await youtubeClient.search.list({
part: 'snippet',
q: query,
type: 'video',
maxResults: maxResults || 5,
});
const result = response.data.items.map((item) => ({
title: item.snippet.title,
description: item.snippet.description,
url: `https://www.youtube.com/watch?v=${item.id.videoId}`,
}));
return JSON.stringify(result, null, 2);
},
{
name: 'youtube_search',
description: `Search for YouTube videos by keyword or phrase.
- Required: query (search terms to find videos)
- Optional: maxResults (number of videos to return, 1-50, default: 5)
- Returns: List of videos with titles, descriptions, and URLs
- Use for: Finding specific videos, exploring content, research
Example: query="cooking pasta tutorials" maxResults=3`,
schema: z.object({
query: z.string().describe('Search query terms'),
maxResults: z.number().int().min(1).max(50).optional().describe('Number of results (1-50)'),
}),
},
);
const searchTool = tool(async ({ query, maxResults = 5 }) => {
const response = await youtubeClient.search.list({
part: 'snippet',
q: query,
type: 'video',
maxResults: maxResults || 5,
});
const result = response.data.items.map((item) => ({
title: item.snippet.title,
description: item.snippet.description,
url: `https://www.youtube.com/watch?v=${item.id.videoId}`,
}));
return JSON.stringify(result, null, 2);
}, ytToolkit.youtube_search);
const infoTool = tool(
async ({ url }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
const infoTool = tool(async ({ url }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
const response = await youtubeClient.videos.list({
part: 'snippet,statistics',
id: videoId,
});
const response = await youtubeClient.videos.list({
part: 'snippet,statistics',
id: videoId,
});
if (!response.data.items?.length) {
throw new Error('Video not found');
}
const video = response.data.items[0];
if (!response.data.items?.length) {
throw new Error('Video not found');
}
const video = response.data.items[0];
const result = {
title: video.snippet.title,
description: video.snippet.description,
views: video.statistics.viewCount,
likes: video.statistics.likeCount,
comments: video.statistics.commentCount,
};
return JSON.stringify(result, null, 2);
},
{
name: 'youtube_info',
description: `Get detailed metadata and statistics for a specific YouTube video.
- Required: url (full YouTube URL or video ID)
- Returns: Video title, description, view count, like count, comment count
- Use for: Getting video metrics and basic metadata
- DO NOT USE FOR VIDEO SUMMARIES, USE TRANSCRIPTS FOR COMPREHENSIVE ANALYSIS
- Accepts both full URLs and video IDs
Example: url="https://youtube.com/watch?v=abc123" or url="abc123"`,
schema: z.object({
url: z.string().describe('YouTube video URL or ID'),
}),
},
);
const result = {
title: video.snippet.title,
description: video.snippet.description,
views: video.statistics.viewCount,
likes: video.statistics.likeCount,
comments: video.statistics.commentCount,
};
return JSON.stringify(result, null, 2);
}, ytToolkit.youtube_info);
const commentsTool = tool(
async ({ url, maxResults = 10 }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
const commentsTool = tool(async ({ url, maxResults = 10 }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
const response = await youtubeClient.commentThreads.list({
part: 'snippet',
videoId,
maxResults: maxResults || 10,
});
const response = await youtubeClient.commentThreads.list({
part: 'snippet',
videoId,
maxResults: maxResults || 10,
});
const result = response.data.items.map((item) => ({
author: item.snippet.topLevelComment.snippet.authorDisplayName,
text: item.snippet.topLevelComment.snippet.textDisplay,
likes: item.snippet.topLevelComment.snippet.likeCount,
}));
return JSON.stringify(result, null, 2);
},
{
name: 'youtube_comments',
description: `Retrieve top-level comments from a YouTube video.
- Required: url (full YouTube URL or video ID)
- Optional: maxResults (number of comments, 1-50, default: 10)
- Returns: Comment text, author names, like counts
- Use for: Sentiment analysis, audience feedback, engagement review
Example: url="abc123" maxResults=20`,
schema: z.object({
url: z.string().describe('YouTube video URL or ID'),
maxResults: z
.number()
.int()
.min(1)
.max(50)
.optional()
.describe('Number of comments to retrieve'),
}),
},
);
const result = response.data.items.map((item) => ({
author: item.snippet.topLevelComment.snippet.authorDisplayName,
text: item.snippet.topLevelComment.snippet.textDisplay,
likes: item.snippet.topLevelComment.snippet.likeCount,
}));
return JSON.stringify(result, null, 2);
}, ytToolkit.youtube_comments);
const transcriptTool = tool(
async ({ url }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
const transcriptTool = tool(async ({ url }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
try {
try {
const transcript = await YoutubeTranscript.fetchTranscript(videoId, { lang: 'en' });
return parseTranscript(transcript);
} catch (e) {
logger.error(e);
}
try {
try {
const transcript = await YoutubeTranscript.fetchTranscript(videoId, { lang: 'en' });
return parseTranscript(transcript);
} catch (e) {
logger.error(e);
}
try {
const transcript = await YoutubeTranscript.fetchTranscript(videoId, { lang: 'de' });
return parseTranscript(transcript);
} catch (e) {
logger.error(e);
}
const transcript = await YoutubeTranscript.fetchTranscript(videoId);
const transcript = await YoutubeTranscript.fetchTranscript(videoId, { lang: 'de' });
return parseTranscript(transcript);
} catch (error) {
throw new Error(`Failed to fetch transcript: ${error.message}`);
} catch (e) {
logger.error(e);
}
},
{
name: 'youtube_transcript',
description: `Fetch and parse the transcript/captions of a YouTube video.
- Required: url (full YouTube URL or video ID)
- Returns: Full video transcript as plain text
- Use for: Content analysis, summarization, translation reference
- This is the "Go-to" tool for analyzing actual video content
- Attempts to fetch English first, then German, then any available language
Example: url="https://youtube.com/watch?v=abc123"`,
schema: z.object({
url: z.string().describe('YouTube video URL or ID'),
}),
},
);
const transcript = await YoutubeTranscript.fetchTranscript(videoId);
return parseTranscript(transcript);
} catch (error) {
throw new Error(`Failed to fetch transcript: ${error.message}`);
}
}, ytToolkit.youtube_transcript);
return [searchTool, infoTool, commentsTool, transcriptTool];
}
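The tool metadata removed above (name, description, zod schema) now comes from the ytToolkit export in @librechat/api. Its exact contents aren't shown in this diff, but reconstructed from the deleted inline definitions, one entry presumably looks roughly like:

    const { z } = require('zod');

    // Assumed shape of ytToolkit.youtube_search, pieced together from the removed inline metadata.
    const youtube_search = {
      name: 'youtube_search',
      description: 'Search for YouTube videos by keyword or phrase.',
      schema: z.object({
        query: z.string().describe('Search query terms'),
        maxResults: z.number().int().min(1).max(50).optional().describe('Number of results (1-50)'),
      }),
    };
    module.exports = { youtube_search };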

View File

@@ -0,0 +1,60 @@
const DALLE3 = require('../DALLE3');
const { ProxyAgent } = require('undici');
jest.mock('tiktoken');
const processFileURL = jest.fn();
describe('DALLE3 Proxy Configuration', () => {
let originalEnv;
beforeAll(() => {
originalEnv = { ...process.env };
});
beforeEach(() => {
jest.resetModules();
process.env = { ...originalEnv };
});
afterEach(() => {
process.env = originalEnv;
});
it('should configure ProxyAgent in fetchOptions.dispatcher when PROXY env is set', () => {
// Set proxy environment variable
process.env.PROXY = 'http://proxy.example.com:8080';
process.env.DALLE_API_KEY = 'test-api-key';
// Create instance
const dalleWithProxy = new DALLE3({ processFileURL });
// Check that the openai client exists
expect(dalleWithProxy.openai).toBeDefined();
// Check that _options exists and has fetchOptions with a dispatcher
expect(dalleWithProxy.openai._options).toBeDefined();
expect(dalleWithProxy.openai._options.fetchOptions).toBeDefined();
expect(dalleWithProxy.openai._options.fetchOptions.dispatcher).toBeDefined();
expect(dalleWithProxy.openai._options.fetchOptions.dispatcher).toBeInstanceOf(ProxyAgent);
});
it('should not configure ProxyAgent when PROXY env is not set', () => {
// Ensure PROXY is not set
delete process.env.PROXY;
process.env.DALLE_API_KEY = 'test-api-key';
// Create instance
const dalleWithoutProxy = new DALLE3({ processFileURL });
// Check that the openai client exists
expect(dalleWithoutProxy.openai).toBeDefined();
// Check that _options exists but fetchOptions either doesn't exist or doesn't have a dispatcher
expect(dalleWithoutProxy.openai._options).toBeDefined();
// fetchOptions should either not exist or not have a dispatcher
if (dalleWithoutProxy.openai._options.fetchOptions) {
expect(dalleWithoutProxy.openai._options.fetchOptions.dispatcher).toBeUndefined();
}
});
});

View File

@@ -1,9 +1,8 @@
const OpenAI = require('openai');
const { logger } = require('@librechat/data-schemas');
const DALLE3 = require('../DALLE3');
const logger = require('~/config/winston');
jest.mock('openai');
jest.mock('@librechat/data-schemas', () => {
return {
logger: {
@@ -26,25 +25,6 @@ jest.mock('tiktoken', () => {
const processFileURL = jest.fn();
jest.mock('~/server/services/Files/images', () => ({
getImageBasename: jest.fn().mockImplementation((url) => {
// Split the URL by '/'
const parts = url.split('/');
// Get the last part of the URL
const lastPart = parts.pop();
// Check if the last part of the URL matches the image extension regex
const imageExtensionRegex = /\.(jpg|jpeg|png|gif|bmp|tiff|svg)$/i;
if (imageExtensionRegex.test(lastPart)) {
return lastPart;
}
// If the regex test fails, return an empty string
return '';
}),
}));
const generate = jest.fn();
OpenAI.mockImplementation(() => ({
images: {

View File

@@ -2,8 +2,9 @@ const { z } = require('zod');
const axios = require('axios');
const { tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { generateShortLivedToken } = require('@librechat/api');
const { Tools, EToolResources } = require('librechat-data-provider');
const { generateShortLivedToken } = require('~/server/services/AuthService');
const { filterFilesByAgentAccess } = require('~/server/services/Files/permissions');
const { getFiles } = require('~/models/File');
/**
@@ -22,14 +23,24 @@ const primeFiles = async (options) => {
const file_ids = tool_resources?.[EToolResources.file_search]?.file_ids ?? [];
const agentResourceIds = new Set(file_ids);
const resourceFiles = tool_resources?.[EToolResources.file_search]?.files ?? [];
const dbFiles = (
(await getFiles(
{ file_id: { $in: file_ids } },
null,
{ text: 0 },
{ userId: req?.user?.id, agentId },
)) ?? []
).concat(resourceFiles);
// Get all files first
const allFiles = (await getFiles({ file_id: { $in: file_ids } }, null, { text: 0 })) ?? [];
// Filter by access if user and agent are provided
let dbFiles;
if (req?.user?.id && agentId) {
dbFiles = await filterFilesByAgentAccess({
files: allFiles,
userId: req.user.id,
role: req.user.role,
agentId,
});
} else {
dbFiles = allFiles;
}
dbFiles = dbFiles.concat(resourceFiles);
let toolContext = `- Note: Semantic search is available through the ${Tools.file_search} tool but no files are currently loaded. Request the user to upload documents to search through.`;
@@ -114,11 +125,13 @@ const createFileSearchTool = async ({ req, files, entity_id }) => {
}
const formattedResults = validResults
.flatMap((result) =>
.flatMap((result, fileIndex) =>
result.data.map(([docInfo, distance]) => ({
filename: docInfo.metadata.source.split('/').pop(),
content: docInfo.page_content,
distance,
file_id: files[fileIndex]?.file_id,
page: docInfo.metadata.page || null,
})),
)
// TODO: results should be sorted by relevance, not distance
@@ -128,18 +141,37 @@ const createFileSearchTool = async ({ req, files, entity_id }) => {
const formattedString = formattedResults
.map(
(result) =>
`File: ${result.filename}\nRelevance: ${1.0 - result.distance.toFixed(4)}\nContent: ${
(result, index) =>
`File: ${result.filename}\nAnchor: \\ue202turn0file${index} (${result.filename})\nRelevance: ${(1.0 - result.distance).toFixed(4)}\nContent: ${
result.content
}\n`,
)
.join('\n---\n');
return formattedString;
const sources = formattedResults.map((result) => ({
type: 'file',
fileId: result.file_id,
content: result.content,
fileName: result.filename,
relevance: 1.0 - result.distance,
pages: result.page ? [result.page] : [],
pageRelevance: result.page ? { [result.page]: 1.0 - result.distance } : {},
}));
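// Return a [content, artifact] tuple; the artifact carries citation sources for the 'content_and_artifact' response format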
return [formattedString, { [Tools.file_search]: { sources } }];
},
{
name: Tools.file_search,
description: `Performs semantic search across attached "${Tools.file_search}" documents using natural language queries. This tool analyzes the content of uploaded files to find relevant information, quotes, and passages that best match your query. Use this to extract specific information or find relevant sections within the available documents.`,
responseFormat: 'content_and_artifact',
description: `Performs semantic search across attached "${Tools.file_search}" documents using natural language queries. This tool analyzes the content of uploaded files to find relevant information, quotes, and passages that best match your query. Use this to extract specific information or find relevant sections within the available documents.
**CITE FILE SEARCH RESULTS:**
Use anchor markers immediately after statements derived from file content. Reference the filename in your text:
- File citation: "The document.pdf states that... \\ue202turn0file0"
- Page reference: "According to report.docx... \\ue202turn0file1"
- Multi-file: "Multiple sources confirm... \\ue200\\ue202turn0file0\\ue202turn0file1\\ue201"
**ALWAYS mention the filename in your text before the citation marker. NEVER use markdown links or footnotes.**`,
schema: z.object({
query: z
.string()

View File

@@ -3,7 +3,7 @@ const { SerpAPI } = require('@langchain/community/tools/serpapi');
const { Calculator } = require('@langchain/community/tools/calculator');
const { mcpToolPattern, loadWebSearchAuth } = require('@librechat/api');
const { EnvVar, createCodeExecutionTool, createSearchTool } = require('@librechat/agents');
const { Tools, EToolResources, replaceSpecialVars } = require('librechat-data-provider');
const { Tools, Constants, EToolResources, replaceSpecialVars } = require('librechat-data-provider');
const {
availableTools,
manifestToolMap,
@@ -24,9 +24,9 @@ const {
const { primeFiles: primeCodeFiles } = require('~/server/services/Files/Code/process');
const { createFileSearchTool, primeFiles: primeSearchFiles } = require('./fileSearch');
const { getUserPluginAuthValue } = require('~/server/services/PluginService');
const { createMCPTool, createMCPTools } = require('~/server/services/MCP');
const { loadAuthValues } = require('~/server/services/Tools/credentials');
const { getCachedTools } = require('~/server/services/Config');
const { createMCPTool } = require('~/server/services/MCP');
/**
* Validates the availability and authentication of tools for a user based on environment variables or user-specific plugin authentication values.
@@ -121,27 +121,37 @@ const getAuthFields = (toolKey) => {
/**
*
* @param {object} object
* @param {string} object.user
* @param {Pick<Agent, 'id' | 'provider' | 'model'>} [object.agent]
* @param {string} [object.model]
* @param {EModelEndpoint} [object.endpoint]
* @param {LoadToolOptions} [object.options]
* @param {boolean} [object.useSpecs]
* @param {Array<string>} object.tools
* @param {boolean} [object.functions]
* @param {boolean} [object.returnMap]
* @param {object} params
* @param {string} params.user
* @param {Record<string, Record<string, string>>} [params.userMCPAuthMap]
* @param {AbortSignal} [params.signal]
* @param {Pick<Agent, 'id' | 'provider' | 'model'>} [params.agent]
* @param {string} [params.model]
* @param {EModelEndpoint} [params.endpoint]
* @param {LoadToolOptions} [params.options]
* @param {boolean} [params.useSpecs]
* @param {Array<string>} params.tools
* @param {boolean} [params.functions]
* @param {boolean} [params.returnMap]
* @param {AppConfig['webSearch']} [params.webSearch]
* @param {AppConfig['fileStrategy']} [params.fileStrategy]
* @param {AppConfig['imageOutputType']} [params.imageOutputType]
* @returns {Promise<{ loadedTools: Tool[], toolContextMap: Object<string, any> } | Record<string,Tool>>}
*/
const loadTools = async ({
user,
agent,
model,
signal,
endpoint,
userMCPAuthMap,
tools = [],
options = {},
functions = true,
returnMap = false,
webSearch,
fileStrategy,
imageOutputType,
}) => {
const toolConstructors = {
flux: FluxAPI,
@@ -200,6 +210,8 @@ const loadTools = async ({
...authValues,
isAgent: !!agent,
req: options.req,
imageOutputType,
fileStrategy,
imageFiles,
});
},
@@ -215,7 +227,7 @@ const loadTools = async ({
const imageGenOptions = {
isAgent: !!agent,
req: options.req,
fileStrategy: options.fileStrategy,
fileStrategy,
processFileURL: options.processFileURL,
returnMetadata: options.returnMetadata,
uploadImageBuffer: options.uploadImageBuffer,
@@ -231,6 +243,7 @@ const loadTools = async ({
/** @type {Record<string, string>} */
const toolContextMap = {};
const cachedTools = (await getCachedTools({ userId: user, includeGlobal: true })) ?? {};
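/** MCP tool generators grouped by server name, so each server's tools can be initialized together */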
const requestedMCPTools = {};
for (const tool of tools) {
if (tool === Tools.execute_code) {
@@ -272,11 +285,10 @@ const loadTools = async ({
};
continue;
} else if (tool === Tools.web_search) {
const webSearchConfig = options?.req?.app?.locals?.webSearch;
const result = await loadWebSearchAuth({
userId: user,
loadAuthValues,
webSearchConfig,
webSearchConfig: webSearch,
});
const { onSearchResults, onGetHighlights } = options?.[Tools.web_search] ?? {};
requestedTools[tool] = async () => {
@@ -299,14 +311,35 @@ Current Date & Time: ${replaceSpecialVars({ text: '{{iso_datetime}}' })}
};
continue;
} else if (tool && cachedTools && mcpToolPattern.test(tool)) {
requestedTools[tool] = async () =>
const [toolName, serverName] = tool.split(Constants.mcp_delimiter);
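// The mcp_all placeholder as the tool name requests every tool from the given server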
if (toolName === Constants.mcp_all) {
const currentMCPGenerator = async (index) =>
createMCPTools({
req: options.req,
res: options.res,
index,
serverName,
userMCPAuthMap,
model: agent?.model ?? model,
provider: agent?.provider ?? endpoint,
signal,
});
requestedMCPTools[serverName] = [currentMCPGenerator];
continue;
}
const currentMCPGenerator = async (index) =>
createMCPTool({
index,
req: options.req,
res: options.res,
toolKey: tool,
userMCPAuthMap,
model: agent?.model ?? model,
provider: agent?.provider ?? endpoint,
signal,
});
requestedMCPTools[serverName] = requestedMCPTools[serverName] || [];
requestedMCPTools[serverName].push(currentMCPGenerator);
continue;
}
@@ -346,6 +379,34 @@ Current Date & Time: ${replaceSpecialVars({ text: '{{iso_datetime}}' })}
}
const loadedTools = (await Promise.all(toolPromises)).flatMap((plugin) => plugin || []);
const mcpToolPromises = [];
/** MCP server tools are initialized sequentially by server */
let index = -1;
for (const [serverName, generators] of Object.entries(requestedMCPTools)) {
index++;
for (const generator of generators) {
try {
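// Single-generator servers are loaded concurrently via promises; servers with multiple generators are awaited in order below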
if (generator && generators.length === 1) {
mcpToolPromises.push(
generator(index).catch((error) => {
logger.error(`Error loading ${serverName} tools:`, error);
return null;
}),
);
continue;
}
const mcpTool = await generator(index);
if (Array.isArray(mcpTool)) {
loadedTools.push(...mcpTool);
} else if (mcpTool) {
loadedTools.push(mcpTool);
}
} catch (error) {
logger.error(`Error loading MCP tool for server ${serverName}:`, error);
}
}
}
loadedTools.push(...(await Promise.all(mcpToolPromises)).flatMap((plugin) => plugin || []));
return { loadedTools, toolContextMap };
};

View File

@@ -9,6 +9,27 @@ const mockPluginService = {
jest.mock('~/server/services/PluginService', () => mockPluginService);
jest.mock('~/server/services/Config', () => ({
getAppConfig: jest.fn().mockResolvedValue({
// Default app config for tool tests
paths: { uploads: '/tmp' },
fileStrategy: 'local',
filteredTools: [],
includedTools: [],
}),
getCachedTools: jest.fn().mockResolvedValue({
// Default cached tools for tests
dalle: {
type: 'function',
function: {
name: 'dalle',
description: 'DALL-E image generation',
parameters: {},
},
},
}),
}));
const { BaseLLM } = require('@langchain/openai');
const { Calculator } = require('@langchain/community/tools/calculator');

View File

@@ -1,5 +1,6 @@
const fs = require('fs');
const { math, isEnabled } = require('@librechat/api');
const { CacheKeys } = require('librechat-data-provider');
// To ensure that different deployments do not interfere with each other's cache, we use a prefix for the Redis keys.
// This prefix is usually the deployment ID, which is often passed to the container or pod as an env var.
@@ -15,7 +16,26 @@ if (USE_REDIS && !process.env.REDIS_URI) {
throw new Error('USE_REDIS is enabled but REDIS_URI is not set.');
}
// Comma-separated list of cache namespaces that should be forced to use in-memory storage
// even when Redis is enabled. This allows selective performance optimization for specific caches.
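// Example: FORCED_IN_MEMORY_CACHE_NAMESPACES=ROLES,STATIC_CONFIG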
const FORCED_IN_MEMORY_CACHE_NAMESPACES = process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES
? process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES.split(',').map((key) => key.trim())
: [];
// Validate against CacheKeys enum
if (FORCED_IN_MEMORY_CACHE_NAMESPACES.length > 0) {
const validKeys = Object.values(CacheKeys);
const invalidKeys = FORCED_IN_MEMORY_CACHE_NAMESPACES.filter((key) => !validKeys.includes(key));
if (invalidKeys.length > 0) {
throw new Error(
`Invalid cache keys in FORCED_IN_MEMORY_CACHE_NAMESPACES: ${invalidKeys.join(', ')}. Valid keys: ${validKeys.join(', ')}`,
);
}
}
const cacheConfig = {
FORCED_IN_MEMORY_CACHE_NAMESPACES,
USE_REDIS,
REDIS_URI: process.env.REDIS_URI,
REDIS_USERNAME: process.env.REDIS_USERNAME,
@@ -23,7 +43,20 @@ const cacheConfig = {
REDIS_CA: process.env.REDIS_CA ? fs.readFileSync(process.env.REDIS_CA, 'utf8') : null,
REDIS_KEY_PREFIX: process.env[REDIS_KEY_PREFIX_VAR] || REDIS_KEY_PREFIX || '',
REDIS_MAX_LISTENERS: math(process.env.REDIS_MAX_LISTENERS, 40),
REDIS_PING_INTERVAL: math(process.env.REDIS_PING_INTERVAL, 0),
/** Max delay between reconnection attempts in ms */
REDIS_RETRY_MAX_DELAY: math(process.env.REDIS_RETRY_MAX_DELAY, 3000),
/** Max number of reconnection attempts (0 = infinite) */
REDIS_RETRY_MAX_ATTEMPTS: math(process.env.REDIS_RETRY_MAX_ATTEMPTS, 10),
/** Connection timeout in ms */
REDIS_CONNECT_TIMEOUT: math(process.env.REDIS_CONNECT_TIMEOUT, 10000),
/** Queue commands when disconnected */
REDIS_ENABLE_OFFLINE_QUEUE: isEnabled(process.env.REDIS_ENABLE_OFFLINE_QUEUE ?? 'true'),
/** Flag to modify the Redis connection by adding a custom dnsLookup; required when connecting to AWS ElastiCache clusters via ioredis.
 * See "Special Note: Aws Elasticache Clusters with TLS" at https://www.npmjs.com/package/ioredis */
REDIS_USE_ALTERNATIVE_DNS_LOOKUP: isEnabled(process.env.REDIS_USE_ALTERNATIVE_DNS_LOOKUP),
/** Enable redis cluster without the need of multiple URIs */
USE_REDIS_CLUSTER: isEnabled(process.env.USE_REDIS_CLUSTER ?? 'false'),
CI: isEnabled(process.env.CI),
DEBUG_MEMORY_CACHE: isEnabled(process.env.DEBUG_MEMORY_CACHE),

View File

@@ -14,6 +14,9 @@ describe('cacheConfig', () => {
delete process.env.REDIS_KEY_PREFIX_VAR;
delete process.env.REDIS_KEY_PREFIX;
delete process.env.USE_REDIS;
delete process.env.USE_REDIS_CLUSTER;
delete process.env.REDIS_PING_INTERVAL;
delete process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES;
// Clear require cache
jest.resetModules();
@@ -99,10 +102,89 @@ describe('cacheConfig', () => {
});
});
describe('USE_REDIS_CLUSTER configuration', () => {
test('should default to false when USE_REDIS_CLUSTER is not set', () => {
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.USE_REDIS_CLUSTER).toBe(false);
});
test('should be false when USE_REDIS_CLUSTER is set to false', () => {
process.env.USE_REDIS_CLUSTER = 'false';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.USE_REDIS_CLUSTER).toBe(false);
});
test('should be true when USE_REDIS_CLUSTER is set to true', () => {
process.env.USE_REDIS_CLUSTER = 'true';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.USE_REDIS_CLUSTER).toBe(true);
});
test('should work with USE_REDIS enabled and REDIS_URI set', () => {
process.env.USE_REDIS_CLUSTER = 'true';
process.env.USE_REDIS = 'true';
process.env.REDIS_URI = 'redis://localhost:6379';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.USE_REDIS_CLUSTER).toBe(true);
expect(cacheConfig.USE_REDIS).toBe(true);
expect(cacheConfig.REDIS_URI).toBe('redis://localhost:6379');
});
});
describe('REDIS_CA file reading', () => {
test('should be null when REDIS_CA is not set', () => {
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_CA).toBeNull();
});
});
describe('REDIS_PING_INTERVAL configuration', () => {
test('should default to 0 when REDIS_PING_INTERVAL is not set', () => {
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_PING_INTERVAL).toBe(0);
});
test('should use provided REDIS_PING_INTERVAL value', () => {
process.env.REDIS_PING_INTERVAL = '300';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_PING_INTERVAL).toBe(300);
});
});
describe('FORCED_IN_MEMORY_CACHE_NAMESPACES validation', () => {
test('should parse comma-separated cache keys correctly', () => {
process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES = ' ROLES, STATIC_CONFIG ,MESSAGES ';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES).toEqual([
'ROLES',
'STATIC_CONFIG',
'MESSAGES',
]);
});
test('should throw error for invalid cache keys', () => {
process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES = 'INVALID_KEY,ROLES';
expect(() => {
require('./cacheConfig');
}).toThrow('Invalid cache keys in FORCED_IN_MEMORY_CACHE_NAMESPACES: INVALID_KEY');
});
test('should handle empty string gracefully', () => {
process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES = '';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES).toEqual([]);
});
test('should handle undefined env var gracefully', () => {
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES).toEqual([]);
});
});
});

View File

@@ -1,12 +1,13 @@
const KeyvRedis = require('@keyv/redis').default;
const { Keyv } = require('keyv');
const { cacheConfig } = require('./cacheConfig');
const { keyvRedisClient, ioredisClient, GLOBAL_PREFIX_SEPARATOR } = require('./redisClients');
const { RedisStore } = require('rate-limit-redis');
const { Time } = require('librechat-data-provider');
const { logger } = require('@librechat/data-schemas');
const { RedisStore: ConnectRedis } = require('connect-redis');
const MemoryStore = require('memorystore')(require('express-session'));
const { keyvRedisClient, ioredisClient, GLOBAL_PREFIX_SEPARATOR } = require('./redisClients');
const { cacheConfig } = require('./cacheConfig');
const { violationFile } = require('./keyvFiles');
const { RedisStore } = require('rate-limit-redis');
/**
* Creates a cache instance using Redis or a fallback store. Suitable for general caching needs.
@@ -16,12 +17,25 @@ const { RedisStore } = require('rate-limit-redis');
* @returns {Keyv} Cache instance.
*/
const standardCache = (namespace, ttl = undefined, fallbackStore = undefined) => {
if (cacheConfig.USE_REDIS) {
const keyvRedis = new KeyvRedis(keyvRedisClient);
const cache = new Keyv(keyvRedis, { namespace, ttl });
keyvRedis.namespace = cacheConfig.REDIS_KEY_PREFIX;
keyvRedis.keyPrefixSeparator = GLOBAL_PREFIX_SEPARATOR;
return cache;
if (
cacheConfig.USE_REDIS &&
!cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES?.includes(namespace)
) {
try {
const keyvRedis = new KeyvRedis(keyvRedisClient);
const cache = new Keyv(keyvRedis, { namespace, ttl });
keyvRedis.namespace = cacheConfig.REDIS_KEY_PREFIX;
keyvRedis.keyPrefixSeparator = GLOBAL_PREFIX_SEPARATOR;
cache.on('error', (err) => {
logger.error(`Cache error in namespace ${namespace}:`, err);
});
return cache;
} catch (err) {
logger.error(`Failed to create Redis cache for namespace ${namespace}:`, err);
throw err;
}
}
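// Fall back to an in-memory Keyv store when Redis is disabled or the namespace is forced in-memory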
if (fallbackStore) return new Keyv({ store: fallbackStore, namespace, ttl });
return new Keyv({ namespace, ttl });
@@ -47,7 +61,13 @@ const violationCache = (namespace, ttl = undefined) => {
const sessionCache = (namespace, ttl = undefined) => {
namespace = namespace.endsWith(':') ? namespace : `${namespace}:`;
if (!cacheConfig.USE_REDIS) return new MemoryStore({ ttl, checkPeriod: Time.ONE_DAY });
return new ConnectRedis({ client: ioredisClient, ttl, prefix: namespace });
const store = new ConnectRedis({ client: ioredisClient, ttl, prefix: namespace });
if (ioredisClient) {
ioredisClient.on('error', (err) => {
logger.error(`Session store Redis error for namespace ${namespace}:`, err);
});
}
return store;
};
/**
@@ -59,8 +79,30 @@ const limiterCache = (prefix) => {
if (!prefix) throw new Error('prefix is required');
if (!cacheConfig.USE_REDIS) return undefined;
prefix = prefix.endsWith(':') ? prefix : `${prefix}:`;
return new RedisStore({ sendCommand, prefix });
try {
if (!ioredisClient) {
logger.warn(`Redis client not available for rate limiter with prefix ${prefix}`);
return undefined;
}
return new RedisStore({ sendCommand, prefix });
} catch (err) {
logger.error(`Failed to create Redis rate limiter for prefix ${prefix}:`, err);
return undefined;
}
};
const sendCommand = (...args) => {
if (!ioredisClient) {
logger.warn('Redis client not available for command execution');
return Promise.reject(new Error('Redis client not available'));
}
return ioredisClient.call(...args).catch((err) => {
logger.error('Redis command execution failed:', err);
throw err;
});
};
const sendCommand = (...args) => ioredisClient?.call(...args);
module.exports = { standardCache, sessionCache, violationCache, limiterCache };

View File

@@ -6,13 +6,17 @@ const mockKeyvRedis = {
keyPrefixSeparator: '',
};
const mockKeyv = jest.fn().mockReturnValue({ mock: 'keyv' });
const mockKeyv = jest.fn().mockReturnValue({
mock: 'keyv',
on: jest.fn(),
});
const mockConnectRedis = jest.fn().mockReturnValue({ mock: 'connectRedis' });
const mockMemoryStore = jest.fn().mockReturnValue({ mock: 'memoryStore' });
const mockRedisStore = jest.fn().mockReturnValue({ mock: 'redisStore' });
const mockIoredisClient = {
call: jest.fn(),
on: jest.fn(),
};
const mockKeyvRedisClient = {};
@@ -31,6 +35,7 @@ jest.mock('./cacheConfig', () => ({
cacheConfig: {
USE_REDIS: false,
REDIS_KEY_PREFIX: 'test',
FORCED_IN_MEMORY_CACHE_NAMESPACES: [],
},
}));
@@ -52,6 +57,14 @@ jest.mock('rate-limit-redis', () => ({
RedisStore: mockRedisStore,
}));
jest.mock('@librechat/data-schemas', () => ({
logger: {
error: jest.fn(),
warn: jest.fn(),
info: jest.fn(),
},
}));
// Import after mocking
const { standardCache, sessionCache, violationCache, limiterCache } = require('./cacheFactory');
const { cacheConfig } = require('./cacheConfig');
@@ -63,6 +76,7 @@ describe('cacheFactory', () => {
// Reset cache config mock
cacheConfig.USE_REDIS = false;
cacheConfig.REDIS_KEY_PREFIX = 'test';
cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES = [];
});
describe('redisCache', () => {
@@ -116,6 +130,52 @@ describe('cacheFactory', () => {
expect(mockKeyv).toHaveBeenCalledWith({ namespace: undefined, ttl: undefined });
});
it('should use fallback when namespace is in FORCED_IN_MEMORY_CACHE_NAMESPACES', () => {
cacheConfig.USE_REDIS = true;
cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES = ['forced-memory'];
const namespace = 'forced-memory';
const ttl = 3600;
standardCache(namespace, ttl);
expect(require('@keyv/redis').default).not.toHaveBeenCalled();
expect(mockKeyv).toHaveBeenCalledWith({ namespace, ttl });
});
it('should use Redis when namespace is not in FORCED_IN_MEMORY_CACHE_NAMESPACES', () => {
cacheConfig.USE_REDIS = true;
cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES = ['other-namespace'];
const namespace = 'test-namespace';
const ttl = 3600;
standardCache(namespace, ttl);
expect(require('@keyv/redis').default).toHaveBeenCalledWith(mockKeyvRedisClient);
expect(mockKeyv).toHaveBeenCalledWith(mockKeyvRedis, { namespace, ttl });
});
it('should throw error when Redis cache creation fails', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'test-namespace';
const ttl = 3600;
const testError = new Error('Redis connection failed');
const KeyvRedis = require('@keyv/redis').default;
KeyvRedis.mockImplementationOnce(() => {
throw testError;
});
expect(() => standardCache(namespace, ttl)).toThrow('Redis connection failed');
const { logger } = require('@librechat/data-schemas');
expect(logger.error).toHaveBeenCalledWith(
`Failed to create Redis cache for namespace ${namespace}:`,
testError,
);
expect(mockKeyv).not.toHaveBeenCalled();
});
});
describe('violationCache', () => {
@@ -207,6 +267,86 @@ describe('cacheFactory', () => {
checkPeriod: Time.ONE_DAY,
});
});
it('should throw error when ConnectRedis constructor fails', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
const ttl = 86400;
// Mock ConnectRedis to throw an error during construction
const redisError = new Error('Redis connection failed');
mockConnectRedis.mockImplementationOnce(() => {
throw redisError;
});
// The error should propagate up, not be caught
expect(() => sessionCache(namespace, ttl)).toThrow('Redis connection failed');
// Verify that MemoryStore was NOT used as fallback
expect(mockMemoryStore).not.toHaveBeenCalled();
});
it('should register error handler but let errors propagate to Express', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
// Create a mock session store with middleware methods
const mockSessionStore = {
get: jest.fn(),
set: jest.fn(),
destroy: jest.fn(),
};
mockConnectRedis.mockReturnValue(mockSessionStore);
const store = sessionCache(namespace);
// Verify error handler was registered
expect(mockIoredisClient.on).toHaveBeenCalledWith('error', expect.any(Function));
// Get the error handler
const errorHandler = mockIoredisClient.on.mock.calls.find((call) => call[0] === 'error')[1];
// Simulate an error from Redis during a session operation
const redisError = new Error('Socket closed unexpectedly');
// The error handler should log but not swallow the error
const { logger } = require('@librechat/data-schemas');
errorHandler(redisError);
expect(logger.error).toHaveBeenCalledWith(
`Session store Redis error for namespace ${namespace}::`,
redisError,
);
// Now simulate what happens when session middleware tries to use the store
const callback = jest.fn();
mockSessionStore.get.mockImplementation((sid, cb) => {
cb(new Error('Redis connection lost'));
});
// Call the store's get method (as Express session would)
store.get('test-session-id', callback);
// The error should be passed to the callback, not swallowed
expect(callback).toHaveBeenCalledWith(new Error('Redis connection lost'));
});
it('should handle null ioredisClient gracefully', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
// Temporarily set ioredisClient to null (simulating connection not established)
const originalClient = require('./redisClients').ioredisClient;
require('./redisClients').ioredisClient = null;
// ConnectRedis might accept null client but would fail on first use
// The important thing is it doesn't throw uncaught exceptions during construction
const store = sessionCache(namespace);
expect(store).toBeDefined();
// Restore original client
require('./redisClients').ioredisClient = originalClient;
});
});
describe('limiterCache', () => {
@@ -248,8 +388,10 @@ describe('cacheFactory', () => {
});
});
it('should pass sendCommand function that calls ioredisClient.call', () => {
it('should pass sendCommand function that calls ioredisClient.call', async () => {
cacheConfig.USE_REDIS = true;
mockIoredisClient.call.mockResolvedValue('test-value');
limiterCache('rate-limit');
const sendCommandCall = mockRedisStore.mock.calls[0][0];
@@ -257,9 +399,29 @@ describe('cacheFactory', () => {
// Test that sendCommand properly delegates to ioredisClient.call
const args = ['GET', 'test-key'];
sendCommand(...args);
const result = await sendCommand(...args);
expect(mockIoredisClient.call).toHaveBeenCalledWith(...args);
expect(result).toBe('test-value');
});
it('should handle sendCommand errors properly', async () => {
cacheConfig.USE_REDIS = true;
// Mock the call method to reject with an error
const testError = new Error('Redis error');
mockIoredisClient.call.mockRejectedValue(testError);
limiterCache('rate-limit');
const sendCommandCall = mockRedisStore.mock.calls[0][0];
const sendCommand = sendCommandCall.sendCommand;
// Test that sendCommand properly handles errors
const args = ['GET', 'test-key'];
await expect(sendCommand(...args)).rejects.toThrow('Redis error');
expect(mockIoredisClient.call).toHaveBeenCalledWith(...args);
});
it('should handle undefined prefix', () => {

View File

@@ -31,8 +31,8 @@ const namespaces = {
[CacheKeys.SAML_SESSION]: sessionCache(CacheKeys.SAML_SESSION),
[CacheKeys.ROLES]: standardCache(CacheKeys.ROLES),
[CacheKeys.MCP_TOOLS]: standardCache(CacheKeys.MCP_TOOLS),
[CacheKeys.CONFIG_STORE]: standardCache(CacheKeys.CONFIG_STORE),
[CacheKeys.STATIC_CONFIG]: standardCache(CacheKeys.STATIC_CONFIG),
[CacheKeys.PENDING_REQ]: standardCache(CacheKeys.PENDING_REQ),
[CacheKeys.ENCODED_DOMAINS]: new Keyv({ store: keyvMongo, namespace: CacheKeys.ENCODED_DOMAINS }),
[CacheKeys.ABORT_KEYS]: standardCache(CacheKeys.ABORT_KEYS, Time.TEN_MINUTES),

View File

@@ -1,6 +1,7 @@
const IoRedis = require('ioredis');
const { cacheConfig } = require('./cacheConfig');
const { logger } = require('@librechat/data-schemas');
const { createClient, createCluster } = require('@keyv/redis');
const { cacheConfig } = require('./cacheConfig');
const GLOBAL_PREFIX_SEPARATOR = '::';
@@ -12,46 +13,198 @@ const ca = cacheConfig.REDIS_CA;
/** @type {import('ioredis').Redis | import('ioredis').Cluster | null} */
let ioredisClient = null;
if (cacheConfig.USE_REDIS) {
/** @type {import('ioredis').RedisOptions | import('ioredis').ClusterOptions} */
const redisOptions = {
username: username,
password: password,
tls: ca ? { ca } : undefined,
keyPrefix: `${cacheConfig.REDIS_KEY_PREFIX}${GLOBAL_PREFIX_SEPARATOR}`,
maxListeners: cacheConfig.REDIS_MAX_LISTENERS,
retryStrategy: (times) => {
if (
cacheConfig.REDIS_RETRY_MAX_ATTEMPTS > 0 &&
times > cacheConfig.REDIS_RETRY_MAX_ATTEMPTS
) {
logger.error(
`ioredis giving up after ${cacheConfig.REDIS_RETRY_MAX_ATTEMPTS} reconnection attempts`,
);
return null;
}
const delay = Math.min(times * 50, cacheConfig.REDIS_RETRY_MAX_DELAY);
logger.info(`ioredis reconnecting... attempt ${times}, delay ${delay}ms`);
return delay;
},
reconnectOnError: (err) => {
const targetError = 'READONLY';
if (err.message.includes(targetError)) {
logger.warn('ioredis reconnecting due to READONLY error');
return 2; // 2 = reconnect and resend the failed command (ioredis reconnectOnError convention)
}
return false;
},
enableOfflineQueue: cacheConfig.REDIS_ENABLE_OFFLINE_QUEUE,
connectTimeout: cacheConfig.REDIS_CONNECT_TIMEOUT,
maxRetriesPerRequest: 3,
};
ioredisClient =
urls.length === 1
urls.length === 1 && !cacheConfig.USE_REDIS_CLUSTER
? new IoRedis(cacheConfig.REDIS_URI, redisOptions)
: new IoRedis.Cluster(cacheConfig.REDIS_URI, { redisOptions });
: new IoRedis.Cluster(
urls.map((url) => ({ host: url.hostname, port: parseInt(url.port, 10) || 6379 })),
{
...(cacheConfig.REDIS_USE_ALTERNATIVE_DNS_LOOKUP
? { dnsLookup: (address, callback) => callback(null, address) }
: {}),
redisOptions,
clusterRetryStrategy: (times) => {
if (
cacheConfig.REDIS_RETRY_MAX_ATTEMPTS > 0 &&
times > cacheConfig.REDIS_RETRY_MAX_ATTEMPTS
) {
logger.error(
`ioredis cluster giving up after ${cacheConfig.REDIS_RETRY_MAX_ATTEMPTS} reconnection attempts`,
);
return null;
}
const delay = Math.min(times * 100, cacheConfig.REDIS_RETRY_MAX_DELAY);
logger.info(`ioredis cluster reconnecting... attempt ${times}, delay ${delay}ms`);
return delay;
},
enableOfflineQueue: cacheConfig.REDIS_ENABLE_OFFLINE_QUEUE,
},
);
// Pinging the Redis server every 5 minutes to keep the connection alive
const pingInterval = setInterval(() => ioredisClient.ping(), 5 * 60 * 1000);
ioredisClient.on('close', () => clearInterval(pingInterval));
ioredisClient.on('end', () => clearInterval(pingInterval));
ioredisClient.on('error', (err) => {
logger.error('ioredis client error:', err);
});
ioredisClient.on('connect', () => {
logger.info('ioredis client connected');
});
ioredisClient.on('ready', () => {
logger.info('ioredis client ready');
});
ioredisClient.on('reconnecting', (delay) => {
logger.info(`ioredis client reconnecting in ${delay}ms`);
});
ioredisClient.on('close', () => {
logger.warn('ioredis client connection closed');
});
/** Ping Interval to keep the Redis server connection alive (if enabled) */
let pingInterval = null;
const clearPingInterval = () => {
if (pingInterval) {
clearInterval(pingInterval);
pingInterval = null;
}
};
if (cacheConfig.REDIS_PING_INTERVAL > 0) {
pingInterval = setInterval(() => {
if (ioredisClient && ioredisClient.status === 'ready') {
ioredisClient.ping().catch((err) => {
logger.error('ioredis ping failed:', err);
});
}
}, cacheConfig.REDIS_PING_INTERVAL * 1000);
ioredisClient.on('close', clearPingInterval);
ioredisClient.on('end', clearPingInterval);
}
}
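// A second client (@keyv/redis) backs Keyv-based caches; the ioredis client above backs sessions and rate limiters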
/** @type {import('@keyv/redis').RedisClient | import('@keyv/redis').RedisCluster | null} */
let keyvRedisClient = null;
if (cacheConfig.USE_REDIS) {
// ** WARNING ** Keyv Redis client does not support Prefix like ioredis above.
// The prefix feature will be handled by the Keyv-Redis store in cacheFactory.js
const redisOptions = { username, password, socket: { tls: ca != null, ca } };
/**
* ** WARNING ** Keyv Redis client does not support Prefix like ioredis above.
* The prefix feature will be handled by the Keyv-Redis store in cacheFactory.js
* @type {import('@keyv/redis').RedisClientOptions | import('@keyv/redis').RedisClusterOptions}
*/
const redisOptions = {
username,
password,
socket: {
tls: ca != null,
ca,
connectTimeout: cacheConfig.REDIS_CONNECT_TIMEOUT,
reconnectStrategy: (retries) => {
if (
cacheConfig.REDIS_RETRY_MAX_ATTEMPTS > 0 &&
retries > cacheConfig.REDIS_RETRY_MAX_ATTEMPTS
) {
logger.error(
`@keyv/redis client giving up after ${cacheConfig.REDIS_RETRY_MAX_ATTEMPTS} reconnection attempts`,
);
return new Error('Max reconnection attempts reached');
}
const delay = Math.min(retries * 100, cacheConfig.REDIS_RETRY_MAX_DELAY);
logger.info(`@keyv/redis reconnecting... attempt ${retries}, delay ${delay}ms`);
return delay;
},
},
disableOfflineQueue: !cacheConfig.REDIS_ENABLE_OFFLINE_QUEUE,
};
keyvRedisClient =
urls.length === 1
urls.length === 1 && !cacheConfig.USE_REDIS_CLUSTER
? createClient({ url: cacheConfig.REDIS_URI, ...redisOptions })
: createCluster({
rootNodes: cacheConfig.REDIS_URI.split(',').map((url) => ({ url })),
rootNodes: urls.map((url) => ({ url: url.href })),
defaults: redisOptions,
});
keyvRedisClient.setMaxListeners(cacheConfig.REDIS_MAX_LISTENERS);
// Pinging the Redis server every 5 minutes to keep the connection alive
const keyvPingInterval = setInterval(() => keyvRedisClient.ping(), 5 * 60 * 1000);
keyvRedisClient.on('disconnect', () => clearInterval(keyvPingInterval));
keyvRedisClient.on('end', () => clearInterval(keyvPingInterval));
keyvRedisClient.on('error', (err) => {
logger.error('@keyv/redis client error:', err);
});
keyvRedisClient.on('connect', () => {
logger.info('@keyv/redis client connected');
});
keyvRedisClient.on('ready', () => {
logger.info('@keyv/redis client ready');
});
keyvRedisClient.on('reconnecting', () => {
logger.info('@keyv/redis client reconnecting...');
});
keyvRedisClient.on('disconnect', () => {
logger.warn('@keyv/redis client disconnected');
});
keyvRedisClient.connect().catch((err) => {
logger.error('@keyv/redis initial connection failed:', err);
throw err;
});
/** Ping Interval to keep the Redis server connection alive (if enabled) */
let pingInterval = null;
const clearPingInterval = () => {
if (pingInterval) {
clearInterval(pingInterval);
pingInterval = null;
}
};
if (cacheConfig.REDIS_PING_INTERVAL > 0) {
pingInterval = setInterval(() => {
if (keyvRedisClient && keyvRedisClient.isReady) {
keyvRedisClient.ping().catch((err) => {
logger.error('@keyv/redis ping failed:', err);
});
}
}, cacheConfig.REDIS_PING_INTERVAL * 1000);
keyvRedisClient.on('disconnect', clearPingInterval);
keyvRedisClient.on('end', clearPingInterval);
}
}
module.exports = { ioredisClient, keyvRedisClient, GLOBAL_PREFIX_SEPARATOR };

View File

@@ -1,27 +1,13 @@
const { MCPManager, FlowStateManager } = require('@librechat/api');
const { EventSource } = require('eventsource');
const { Time } = require('librechat-data-provider');
const { MCPManager, FlowStateManager } = require('@librechat/api');
const logger = require('./winston');
global.EventSource = EventSource;
/** @type {MCPManager} */
let mcpManager = null;
let flowManager = null;
/**
* @param {string} [userId] - Optional user ID, to avoid disconnecting the current user.
* @returns {MCPManager}
*/
function getMCPManager(userId) {
if (!mcpManager) {
mcpManager = MCPManager.getInstance();
} else {
mcpManager.checkIdleConnections(userId);
}
return mcpManager;
}
/**
* @param {Keyv} flowsCache
* @returns {FlowStateManager}
@@ -37,6 +23,7 @@ function getFlowStateManager(flowsCache) {
module.exports = {
logger,
getMCPManager,
createMCPManager: MCPManager.createInstance,
getMCPManager: MCPManager.getInstance,
getFlowStateManager,
};

View File

@@ -1,11 +1,34 @@
require('dotenv').config();
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const mongoose = require('mongoose');
const MONGO_URI = process.env.MONGO_URI;
if (!MONGO_URI) {
throw new Error('Please define the MONGO_URI environment variable');
}
/** The maximum number of connections in the connection pool. */
const maxPoolSize = parseInt(process.env.MONGO_MAX_POOL_SIZE) || undefined;
/** The minimum number of connections in the connection pool. */
const minPoolSize = parseInt(process.env.MONGO_MIN_POOL_SIZE) || undefined;
/** The maximum number of connections that may be in the process of being established concurrently by the connection pool. */
const maxConnecting = parseInt(process.env.MONGO_MAX_CONNECTING) || undefined;
/** The maximum number of milliseconds that a connection can remain idle in the pool before being removed and closed. */
const maxIdleTimeMS = parseInt(process.env.MONGO_MAX_IDLE_TIME_MS) || undefined;
/** The maximum time in milliseconds that a thread can wait for a connection to become available. */
const waitQueueTimeoutMS = parseInt(process.env.MONGO_WAIT_QUEUE_TIMEOUT_MS) || undefined;
/** Set to false to disable automatic index creation for all models associated with this connection. */
const autoIndex =
process.env.MONGO_AUTO_INDEX != undefined
? isEnabled(process.env.MONGO_AUTO_INDEX) || false
: undefined;
/** Set to `false` to disable Mongoose automatically calling `createCollection()` on every model created on this connection. */
const autoCreate =
process.env.MONGO_AUTO_CREATE != undefined
? isEnabled(process.env.MONGO_AUTO_CREATE) || false
: undefined;
/**
* Global is used here to maintain a cached connection across hot reloads
* in development. This prevents connections growing exponentially
@@ -26,13 +49,21 @@ async function connectDb() {
if (!cached.promise || disconnected) {
const opts = {
bufferCommands: false,
...(maxPoolSize ? { maxPoolSize } : {}),
...(minPoolSize ? { minPoolSize } : {}),
...(maxConnecting ? { maxConnecting } : {}),
...(maxIdleTimeMS ? { maxIdleTimeMS } : {}),
...(waitQueueTimeoutMS ? { waitQueueTimeoutMS } : {}),
...(autoIndex != undefined ? { autoIndex } : {}),
...(autoCreate != undefined ? { autoCreate } : {}),
// useNewUrlParser: true,
// useUnifiedTopology: true,
// bufferMaxEntries: 0,
// useFindAndModify: true,
// useCreateIndex: true
};
logger.info('Mongo Connection options');
logger.info(JSON.stringify(opts, null, 2));
mongoose.set('strictQuery', true);
cached.promise = mongoose.connect(MONGO_URI, opts).then((mongoose) => {
return mongoose;

View File

@@ -3,6 +3,7 @@ module.exports = {
clearMocks: true,
roots: ['<rootDir>'],
coverageDirectory: 'coverage',
testTimeout: 30000, // 30 seconds timeout for all tests
setupFiles: [
'./test/jestSetup.js',
'./test/__mocks__/logger.js',

View File

@@ -1,18 +1,17 @@
const mongoose = require('mongoose');
const crypto = require('node:crypto');
const { logger } = require('@librechat/data-schemas');
const { SystemRoles, Tools, actionDelimiter } = require('librechat-data-provider');
const { GLOBAL_PROJECT_NAME, EPHEMERAL_AGENT_ID, mcp_delimiter } =
const { ResourceType, SystemRoles, Tools, actionDelimiter } = require('librechat-data-provider');
const { GLOBAL_PROJECT_NAME, EPHEMERAL_AGENT_ID, mcp_all, mcp_delimiter } =
require('librechat-data-provider').Constants;
const { CONFIG_STORE, STARTUP_CONFIG } = require('librechat-data-provider').CacheKeys;
const {
getProjectByName,
addAgentIdsToProject,
removeAgentIdsFromProject,
removeAgentFromAllProjects,
removeAgentIdsFromProject,
addAgentIdsToProject,
getProjectByName,
} = require('./Project');
const { removeAllPermissions } = require('~/server/services/PermissionService');
const { getCachedTools } = require('~/server/services/Config');
const getLogStores = require('~/cache/getLogStores');
const { getActions } = require('./Action');
const { Agent } = require('~/db/models');
@@ -23,7 +22,7 @@ const { Agent } = require('~/db/models');
* @throws {Error} If the agent creation fails.
*/
const createAgent = async (agentData) => {
const { author, ...versionData } = agentData;
const { author: _author, ...versionData } = agentData;
const timestamp = new Date();
const initialAgentData = {
...agentData,
@@ -34,7 +33,9 @@ const createAgent = async (agentData) => {
updatedAt: timestamp,
},
],
category: agentData.category || 'general',
};
return (await Agent.create(initialAgentData)).toObject();
};
@@ -77,6 +78,7 @@ const loadEphemeralAgent = async ({ req, agent_id, endpoint, model_parameters: _
tools.push(Tools.web_search);
}
const addedServers = new Set();
if (mcpServers.size > 0) {
for (const toolName of Object.keys(availableTools)) {
if (!toolName.includes(mcp_delimiter)) {
@@ -84,9 +86,17 @@ const loadEphemeralAgent = async ({ req, agent_id, endpoint, model_parameters: _
}
const mcpServer = toolName.split(mcp_delimiter)?.[1];
if (mcpServer && mcpServers.has(mcpServer)) {
addedServers.add(mcpServer);
tools.push(toolName);
}
}
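// Request all tools via the mcp_all placeholder for selected servers that had no matching cached tools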
for (const mcpServer of mcpServers) {
if (addedServers.has(mcpServer)) {
continue;
}
tools.push(`${mcp_all}${mcp_delimiter}${mcpServer}`);
}
}
const instructions = req.body.promptPrefix;
@@ -131,29 +141,7 @@ const loadAgent = async ({ req, agent_id, endpoint, model_parameters }) => {
}
agent.version = agent.versions ? agent.versions.length : 0;
if (agent.author.toString() === req.user.id) {
return agent;
}
if (!agent.projectIds) {
return null;
}
const cache = getLogStores(CONFIG_STORE);
/** @type {TStartupConfig} */
const cachedStartupConfig = await cache.get(STARTUP_CONFIG);
let { instanceProjectId } = cachedStartupConfig ?? {};
if (!instanceProjectId) {
instanceProjectId = (await getProjectByName(GLOBAL_PROJECT_NAME, '_id'))._id.toString();
}
for (const projectObjectId of agent.projectIds) {
const projectId = projectObjectId.toString();
if (projectId === instanceProjectId) {
return agent;
}
}
return agent;
};
/**
@@ -183,7 +171,7 @@ const isDuplicateVersion = (updateData, currentData, versions, actionsHash = nul
'actionsHash', // Exclude actionsHash from direct comparison
];
const { $push, $pull, $addToSet, ...directUpdates } = updateData;
const { $push: _$push, $pull: _$pull, $addToSet: _$addToSet, ...directUpdates } = updateData;
if (Object.keys(directUpdates).length === 0 && !actionsHash) {
return null;
@@ -202,54 +190,116 @@ const isDuplicateVersion = (updateData, currentData, versions, actionsHash = nul
let isMatch = true;
for (const field of importantFields) {
if (!wouldBeVersion[field] && !lastVersion[field]) {
const wouldBeValue = wouldBeVersion[field];
const lastVersionValue = lastVersion[field];
// Skip if both are undefined/null
if (!wouldBeValue && !lastVersionValue) {
continue;
}
if (Array.isArray(wouldBeVersion[field]) && Array.isArray(lastVersion[field])) {
if (wouldBeVersion[field].length !== lastVersion[field].length) {
// Handle arrays
if (Array.isArray(wouldBeValue) || Array.isArray(lastVersionValue)) {
// Normalize: treat undefined/null as empty array for comparison
let wouldBeArr;
if (Array.isArray(wouldBeValue)) {
wouldBeArr = wouldBeValue;
} else if (wouldBeValue == null) {
wouldBeArr = [];
} else {
wouldBeArr = [wouldBeValue];
}
let lastVersionArr;
if (Array.isArray(lastVersionValue)) {
lastVersionArr = lastVersionValue;
} else if (lastVersionValue == null) {
lastVersionArr = [];
} else {
lastVersionArr = [lastVersionValue];
}
if (wouldBeArr.length !== lastVersionArr.length) {
isMatch = false;
break;
}
// Special handling for projectIds (MongoDB ObjectIds)
if (field === 'projectIds') {
const wouldBeIds = wouldBeVersion[field].map((id) => id.toString()).sort();
const versionIds = lastVersion[field].map((id) => id.toString()).sort();
const wouldBeIds = wouldBeArr.map((id) => id.toString()).sort();
const versionIds = lastVersionArr.map((id) => id.toString()).sort();
if (!wouldBeIds.every((id, i) => id === versionIds[i])) {
isMatch = false;
break;
}
}
// Handle arrays of objects like tool_kwargs
else if (typeof wouldBeVersion[field][0] === 'object' && wouldBeVersion[field][0] !== null) {
const sortedWouldBe = [...wouldBeVersion[field]].map((item) => JSON.stringify(item)).sort();
const sortedVersion = [...lastVersion[field]].map((item) => JSON.stringify(item)).sort();
// Handle arrays of objects
else if (
wouldBeArr.length > 0 &&
typeof wouldBeArr[0] === 'object' &&
wouldBeArr[0] !== null
) {
const sortedWouldBe = [...wouldBeArr].map((item) => JSON.stringify(item)).sort();
const sortedVersion = [...lastVersionArr].map((item) => JSON.stringify(item)).sort();
if (!sortedWouldBe.every((item, i) => item === sortedVersion[i])) {
isMatch = false;
break;
}
} else {
const sortedWouldBe = [...wouldBeVersion[field]].sort();
const sortedVersion = [...lastVersion[field]].sort();
const sortedWouldBe = [...wouldBeArr].sort();
const sortedVersion = [...lastVersionArr].sort();
if (!sortedWouldBe.every((item, i) => item === sortedVersion[i])) {
isMatch = false;
break;
}
}
} else if (field === 'model_parameters') {
const wouldBeParams = wouldBeVersion[field] || {};
const lastVersionParams = lastVersion[field] || {};
if (JSON.stringify(wouldBeParams) !== JSON.stringify(lastVersionParams)) {
}
// Handle objects
else if (typeof wouldBeValue === 'object' && wouldBeValue !== null) {
const lastVersionObj =
typeof lastVersionValue === 'object' && lastVersionValue !== null ? lastVersionValue : {};
// For empty objects, normalize the comparison
const wouldBeKeys = Object.keys(wouldBeValue);
const lastVersionKeys = Object.keys(lastVersionObj);
// If both are empty objects, they're equal
if (wouldBeKeys.length === 0 && lastVersionKeys.length === 0) {
continue;
}
// Otherwise do a deep comparison
if (JSON.stringify(wouldBeValue) !== JSON.stringify(lastVersionObj)) {
isMatch = false;
break;
}
}
// Handle primitive values
else {
// For primitives, handle the case where one is undefined and the other is a default value
if (wouldBeValue !== lastVersionValue) {
// Special handling for boolean false vs undefined
if (
typeof wouldBeValue === 'boolean' &&
wouldBeValue === false &&
lastVersionValue === undefined
) {
continue;
}
// Special handling for empty string vs undefined
if (
typeof wouldBeValue === 'string' &&
wouldBeValue === '' &&
lastVersionValue === undefined
) {
continue;
}
isMatch = false;
break;
}
} else if (wouldBeVersion[field] !== lastVersion[field]) {
isMatch = false;
break;
}
}
@@ -278,7 +328,14 @@ const updateAgent = async (searchParameter, updateData, options = {}) => {
const currentAgent = await Agent.findOne(searchParameter);
if (currentAgent) {
const { __v, _id, id, versions, author, ...versionData } = currentAgent.toObject();
const {
__v,
_id,
id: __id,
versions,
author: _author,
...versionData
} = currentAgent.toObject();
const { $push, $pull, $addToSet, ...directUpdates } = updateData;
let actionsHash = null;
@@ -316,17 +373,10 @@ const updateAgent = async (searchParameter, updateData, options = {}) => {
if (shouldCreateVersion) {
const duplicateVersion = isDuplicateVersion(updateData, versionData, versions, actionsHash);
if (duplicateVersion && !forceVersion) {
const error = new Error(
'Duplicate version: This would create a version identical to an existing one',
);
error.statusCode = 409;
error.details = {
duplicateVersion,
versionIndex: versions.findIndex(
(v) => JSON.stringify(duplicateVersion) === JSON.stringify(v),
),
};
throw error;
// No changes detected, return the current agent without creating a new version
const agentObj = currentAgent.toObject();
agentObj.version = versions.length;
return agentObj;
}
}
@@ -465,12 +515,117 @@ const deleteAgent = async (searchParameter) => {
const agent = await Agent.findOneAndDelete(searchParameter);
if (agent) {
await removeAgentFromAllProjects(agent.id);
await removeAllPermissions({
resourceType: ResourceType.AGENT,
resourceId: agent._id,
});
}
return agent;
};
/**
* Get agents by accessible IDs with optional cursor-based pagination.
* @param {Object} params - The parameters for getting accessible agents.
* @param {Array} [params.accessibleIds] - Array of agent ObjectIds the user has ACL access to.
* @param {Object} [params.otherParams] - Additional query parameters (including author filter).
* @param {number} [params.limit] - Number of agents to return (max 100). If not provided, returns all agents.
* @param {string} [params.after] - Cursor for pagination; a base64-encoded JSON string containing updatedAt and _id. Returns agents after this cursor.
* @returns {Promise<Object>} A promise that resolves to an object containing the agents data and pagination info.
*/
const getListAgentsByAccess = async ({
accessibleIds = [],
otherParams = {},
limit = null,
after = null,
}) => {
const isPaginated = limit !== null && limit !== undefined;
const normalizedLimit = isPaginated ? Math.min(Math.max(1, parseInt(limit) || 20), 100) : null;
// Build base query combining ACL accessible agents with other filters
const baseQuery = { ...otherParams, _id: { $in: accessibleIds } };
// Add cursor condition
if (after) {
try {
const cursor = JSON.parse(Buffer.from(after, 'base64').toString('utf8'));
const { updatedAt, _id } = cursor;
const cursorCondition = {
$or: [
{ updatedAt: { $lt: new Date(updatedAt) } },
{ updatedAt: new Date(updatedAt), _id: { $gt: new mongoose.Types.ObjectId(_id) } },
],
};
// Merge cursor condition with base query
if (Object.keys(baseQuery).length > 0) {
baseQuery.$and = [{ ...baseQuery }, cursorCondition];
// Remove the original conditions from baseQuery to avoid duplication
Object.keys(baseQuery).forEach((key) => {
if (key !== '$and') delete baseQuery[key];
});
} else {
Object.assign(baseQuery, cursorCondition);
}
} catch (error) {
logger.warn('Invalid cursor:', error.message);
}
}
let query = Agent.find(baseQuery, {
id: 1,
_id: 1,
name: 1,
avatar: 1,
author: 1,
projectIds: 1,
description: 1,
updatedAt: 1,
category: 1,
support_contact: 1,
is_promoted: 1,
}).sort({ updatedAt: -1, _id: 1 });
// Only apply limit if pagination is requested
if (isPaginated) {
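// Fetch one extra record to determine whether another page exists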
query = query.limit(normalizedLimit + 1);
}
const agents = await query.lean();
const hasMore = isPaginated ? agents.length > normalizedLimit : false;
const data = (isPaginated ? agents.slice(0, normalizedLimit) : agents).map((agent) => {
if (agent.author) {
agent.author = agent.author.toString();
}
return agent;
});
// Generate next cursor only if paginated
let nextCursor = null;
if (isPaginated && hasMore && data.length > 0) {
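// Encode the last item's updatedAt and _id as an opaque base64 cursor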
const lastAgent = agents[normalizedLimit - 1];
nextCursor = Buffer.from(
JSON.stringify({
updatedAt: lastAgent.updatedAt.toISOString(),
_id: lastAgent._id.toString(),
}),
).toString('base64');
}
return {
object: 'list',
data,
first_id: data.length > 0 ? data[0].id : null,
last_id: data.length > 0 ? data[data.length - 1].id : null,
has_more: hasMore,
after: nextCursor,
};
};
/**
* Get all agents.
* @deprecated Use getListAgentsByAccess for ACL-aware agent listing
* @param {Object} searchParameter - The search parameters to find matching agents.
* @param {string} searchParameter.author - The user ID of the agent's author.
* @returns {Promise<Object>} A promise that resolves to an object containing the agents data and pagination info.
@@ -489,13 +644,15 @@ const getListAgents = async (searchParameter) => {
const agents = (
await Agent.find(query, {
id: 1,
_id: 0,
_id: 1,
name: 1,
avatar: 1,
author: 1,
projectIds: 1,
description: 1,
// @deprecated - isCollaborative replaced by ACL permissions
isCollaborative: 1,
category: 1,
}).lean()
).map((agent) => {
if (agent.author?.toString() !== author) {
@@ -524,7 +681,7 @@ const getListAgents = async (searchParameter) => {
* This function also updates the corresponding projects to include or exclude the agent ID.
*
* @param {Object} params - Parameters for updating the agent's projects.
* @param {MongoUser} params.user - Parameters for updating the agent's projects.
* @param {IUser} params.user - The user performing the update.
* @param {string} params.agentId - The ID of the agent to update.
* @param {string[]} [params.projectIds] - Array of project IDs to add to the agent.
* @param {string[]} [params.removeProjectIds] - Array of project IDs to remove from the agent.
@@ -661,6 +818,14 @@ const generateActionMetadataHash = async (actionIds, actions) => {
return hashHex;
};
/**
* Counts the number of promoted agents.
* @returns {Promise<number>} - The count of promoted agents
*/
const countPromotedAgents = async () => {
const count = await Agent.countDocuments({ is_promoted: true });
return count;
};
/**
* Load a default agent based on the endpoint
@@ -678,6 +843,8 @@ module.exports = {
revertAgentVersion,
updateAgentProjects,
addAgentResourceFile,
getListAgentsByAccess,
removeAgentResourceFiles,
generateActionMetadataHash,
countPromotedAgents,
};

View File

@@ -14,6 +14,7 @@ const mongoose = require('mongoose');
const { v4: uuidv4 } = require('uuid');
const { agentSchema } = require('@librechat/data-schemas');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { AccessRoleIds, ResourceType, PrincipalType } = require('librechat-data-provider');
const {
getAgent,
loadAgent,
@@ -21,13 +22,16 @@ const {
updateAgent,
deleteAgent,
getListAgents,
getListAgentsByAccess,
revertAgentVersion,
updateAgentProjects,
addAgentResourceFile,
removeAgentResourceFiles,
generateActionMetadataHash,
revertAgentVersion,
} = require('./Agent');
const permissionService = require('~/server/services/PermissionService');
const { getCachedTools } = require('~/server/services/Config');
const { AclEntry } = require('~/db/models');
/**
* @type {import('mongoose').Model<import('@librechat/data-schemas').IAgent>}
@@ -407,12 +411,26 @@ describe('models/Agent', () => {
describe('Agent CRUD Operations', () => {
let mongoServer;
let AccessRole;
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
Agent = mongoose.models.Agent || mongoose.model('Agent', agentSchema);
await mongoose.connect(mongoUri);
// Initialize models
const dbModels = require('~/db/models');
AccessRole = dbModels.AccessRole;
// Create necessary access roles for agents
await AccessRole.create({
accessRoleId: AccessRoleIds.AGENT_OWNER,
name: 'Owner',
description: 'Full control over agents',
resourceType: ResourceType.AGENT,
permBits: 15, // VIEW | EDIT | DELETE | SHARE
});
}, 20000);
afterAll(async () => {
@@ -468,6 +486,51 @@ describe('models/Agent', () => {
expect(agentAfterDelete).toBeNull();
});
test('should remove ACL entries when deleting an agent', async () => {
const agentId = `agent_${uuidv4()}`;
const authorId = new mongoose.Types.ObjectId();
// Create agent
const agent = await createAgent({
id: agentId,
name: 'Agent With Permissions',
provider: 'test',
model: 'test-model',
author: authorId,
});
// Grant permissions (simulating sharing)
await permissionService.grantPermission({
principalType: PrincipalType.USER,
principalId: authorId,
resourceType: ResourceType.AGENT,
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_OWNER,
grantedBy: authorId,
});
// Verify ACL entry exists
const aclEntriesBefore = await AclEntry.find({
resourceType: ResourceType.AGENT,
resourceId: agent._id,
});
expect(aclEntriesBefore).toHaveLength(1);
// Delete the agent
await deleteAgent({ id: agentId });
// Verify agent is deleted
const agentAfterDelete = await getAgent({ id: agentId });
expect(agentAfterDelete).toBeNull();
// Verify ACL entries are removed
const aclEntriesAfter = await AclEntry.find({
resourceType: ResourceType.AGENT,
resourceId: agent._id,
});
expect(aclEntriesAfter).toHaveLength(0);
});
test('should list agents by author', async () => {
const authorId = new mongoose.Types.ObjectId();
const otherAuthorId = new mongoose.Types.ObjectId();
@@ -879,45 +942,31 @@ describe('models/Agent', () => {
expect(emptyParamsAgent.model_parameters).toEqual({});
});
test('should detect duplicate versions and reject updates', async () => {
const originalConsoleError = console.error;
console.error = jest.fn();
test('should not create new version for duplicate updates', async () => {
const authorId = new mongoose.Types.ObjectId();
const testCases = generateVersionTestCases();
try {
const authorId = new mongoose.Types.ObjectId();
const testCases = generateVersionTestCases();
for (const testCase of testCases) {
const testAgentId = `agent_${uuidv4()}`;
for (const testCase of testCases) {
const testAgentId = `agent_${uuidv4()}`;
await createAgent({
id: testAgentId,
provider: 'test',
model: 'test-model',
author: authorId,
...testCase.initial,
});
await createAgent({
id: testAgentId,
provider: 'test',
model: 'test-model',
author: authorId,
...testCase.initial,
});
const updatedAgent = await updateAgent({ id: testAgentId }, testCase.update);
expect(updatedAgent.versions).toHaveLength(2); // No new version created
await updateAgent({ id: testAgentId }, testCase.update);
// Update with duplicate data should succeed but not create a new version
const duplicateUpdate = await updateAgent({ id: testAgentId }, testCase.duplicate);
let error;
try {
await updateAgent({ id: testAgentId }, testCase.duplicate);
} catch (e) {
error = e;
}
expect(duplicateUpdate.versions).toHaveLength(2); // No new version created
expect(error).toBeDefined();
expect(error.message).toContain('Duplicate version');
expect(error.statusCode).toBe(409);
expect(error.details).toBeDefined();
expect(error.details.duplicateVersion).toBeDefined();
const agent = await getAgent({ id: testAgentId });
expect(agent.versions).toHaveLength(2);
}
} finally {
console.error = originalConsoleError;
const agent = await getAgent({ id: testAgentId });
expect(agent.versions).toHaveLength(2);
}
});
@@ -1093,20 +1142,13 @@ describe('models/Agent', () => {
expect(secondUpdate.versions).toHaveLength(3);
// Update without forceVersion and no changes should not create a version
let error;
try {
await updateAgent(
{ id: agentId },
{ tools: ['listEvents_action_test.com', 'createEvent_action_test.com'] },
{ updatingUserId: authorId.toString(), forceVersion: false },
);
} catch (e) {
error = e;
}
const duplicateUpdate = await updateAgent(
{ id: agentId },
{ tools: ['listEvents_action_test.com', 'createEvent_action_test.com'] },
{ updatingUserId: authorId.toString(), forceVersion: false },
);
expect(error).toBeDefined();
expect(error.message).toContain('Duplicate version');
expect(error.statusCode).toBe(409);
expect(duplicateUpdate.versions).toHaveLength(3); // No new version created
});
test('should handle isDuplicateVersion with arrays containing null/undefined values', async () => {
@@ -1258,6 +1300,335 @@ describe('models/Agent', () => {
expect(secondUpdate.versions).toHaveLength(3);
});
test('should detect changes in support_contact fields', async () => {
const agentId = `agent_${uuidv4()}`;
const authorId = new mongoose.Types.ObjectId();
// Create agent with initial support_contact
await createAgent({
id: agentId,
name: 'Agent with Support Contact',
provider: 'test',
model: 'test-model',
author: authorId,
support_contact: {
name: 'Initial Support',
email: 'initial@support.com',
},
});
// Update support_contact name only
const firstUpdate = await updateAgent(
{ id: agentId },
{
support_contact: {
name: 'Updated Support',
email: 'initial@support.com',
},
},
);
expect(firstUpdate.versions).toHaveLength(2);
expect(firstUpdate.support_contact.name).toBe('Updated Support');
expect(firstUpdate.support_contact.email).toBe('initial@support.com');
// Update support_contact email only
const secondUpdate = await updateAgent(
{ id: agentId },
{
support_contact: {
name: 'Updated Support',
email: 'updated@support.com',
},
},
);
expect(secondUpdate.versions).toHaveLength(3);
expect(secondUpdate.support_contact.email).toBe('updated@support.com');
// Try to update with same support_contact - should be detected as duplicate but return successfully
const duplicateUpdate = await updateAgent(
{ id: agentId },
{
support_contact: {
name: 'Updated Support',
email: 'updated@support.com',
},
},
);
// Should not create a new version
expect(duplicateUpdate.versions).toHaveLength(3);
expect(duplicateUpdate.version).toBe(3);
expect(duplicateUpdate.support_contact.email).toBe('updated@support.com');
});
test('should handle support_contact from empty to populated', async () => {
const agentId = `agent_${uuidv4()}`;
const authorId = new mongoose.Types.ObjectId();
// Create agent without support_contact
const agent = await createAgent({
id: agentId,
name: 'Agent without Support',
provider: 'test',
model: 'test-model',
author: authorId,
});
// Verify support_contact is undefined since it wasn't provided
expect(agent.support_contact).toBeUndefined();
// Update to add support_contact
const updated = await updateAgent(
{ id: agentId },
{
support_contact: {
name: 'New Support Team',
email: 'support@example.com',
},
},
);
expect(updated.versions).toHaveLength(2);
expect(updated.support_contact.name).toBe('New Support Team');
expect(updated.support_contact.email).toBe('support@example.com');
});
test('should handle support_contact edge cases in isDuplicateVersion', async () => {
const agentId = `agent_${uuidv4()}`;
const authorId = new mongoose.Types.ObjectId();
// Create agent with support_contact
await createAgent({
id: agentId,
name: 'Edge Case Agent',
provider: 'test',
model: 'test-model',
author: authorId,
support_contact: {
name: 'Support',
email: 'support@test.com',
},
});
// Update to empty support_contact
const emptyUpdate = await updateAgent(
{ id: agentId },
{
support_contact: {},
},
);
expect(emptyUpdate.versions).toHaveLength(2);
expect(emptyUpdate.support_contact).toEqual({});
// Update back to populated support_contact
const repopulated = await updateAgent(
{ id: agentId },
{
support_contact: {
name: 'Support',
email: 'support@test.com',
},
},
);
expect(repopulated.versions).toHaveLength(3);
// Verify all versions have correct support_contact
const finalAgent = await getAgent({ id: agentId });
expect(finalAgent.versions[0].support_contact).toEqual({
name: 'Support',
email: 'support@test.com',
});
expect(finalAgent.versions[1].support_contact).toEqual({});
expect(finalAgent.versions[2].support_contact).toEqual({
name: 'Support',
email: 'support@test.com',
});
});
test('should preserve support_contact in version history', async () => {
const agentId = `agent_${uuidv4()}`;
const authorId = new mongoose.Types.ObjectId();
// Create agent
await createAgent({
id: agentId,
name: 'Version History Test',
provider: 'test',
model: 'test-model',
author: authorId,
support_contact: {
name: 'Initial Contact',
email: 'initial@test.com',
},
});
// Multiple updates with different support_contact values
await updateAgent(
{ id: agentId },
{
support_contact: {
name: 'Second Contact',
email: 'second@test.com',
},
},
);
await updateAgent(
{ id: agentId },
{
support_contact: {
name: 'Third Contact',
email: 'third@test.com',
},
},
);
const finalAgent = await getAgent({ id: agentId });
// Verify version history
expect(finalAgent.versions).toHaveLength(3);
expect(finalAgent.versions[0].support_contact).toEqual({
name: 'Initial Contact',
email: 'initial@test.com',
});
expect(finalAgent.versions[1].support_contact).toEqual({
name: 'Second Contact',
email: 'second@test.com',
});
expect(finalAgent.versions[2].support_contact).toEqual({
name: 'Third Contact',
email: 'third@test.com',
});
// Current state should match last version
expect(finalAgent.support_contact).toEqual({
name: 'Third Contact',
email: 'third@test.com',
});
});
test('should handle partial support_contact updates', async () => {
const agentId = `agent_${uuidv4()}`;
const authorId = new mongoose.Types.ObjectId();
// Create agent with full support_contact
await createAgent({
id: agentId,
name: 'Partial Update Test',
provider: 'test',
model: 'test-model',
author: authorId,
support_contact: {
name: 'Original Name',
email: 'original@email.com',
},
});
// MongoDB's findOneAndUpdate will replace the entire support_contact object
// So we need to verify that partial updates still work correctly
const updated = await updateAgent(
{ id: agentId },
{
support_contact: {
name: 'New Name',
email: '', // Empty email
},
},
);
expect(updated.versions).toHaveLength(2);
expect(updated.support_contact.name).toBe('New Name');
expect(updated.support_contact.email).toBe('');
// Verify isDuplicateVersion works with partial changes - should return successfully without creating new version
const duplicateUpdate = await updateAgent(
{ id: agentId },
{
support_contact: {
name: 'New Name',
email: '',
},
},
);
// Should not create a new version since content is the same
expect(duplicateUpdate.versions).toHaveLength(2);
expect(duplicateUpdate.version).toBe(2);
expect(duplicateUpdate.support_contact.name).toBe('New Name');
expect(duplicateUpdate.support_contact.email).toBe('');
});
// Edge Cases
describe.each([
{
operation: 'add',
name: 'empty file_id',
needsAgent: true,
params: { tool_resource: 'file_search', file_id: '' },
shouldResolve: true,
},
{
operation: 'add',
name: 'non-existent agent',
needsAgent: false,
params: { tool_resource: 'file_search', file_id: 'file123' },
shouldResolve: false,
error: 'Agent not found for adding resource file',
},
])('addAgentResourceFile with $name', ({ needsAgent, params, shouldResolve, error }) => {
test(`should ${shouldResolve ? 'resolve' : 'reject'}`, async () => {
const agent = needsAgent ? await createBasicAgent() : null;
const agent_id = needsAgent ? agent.id : `agent_${uuidv4()}`;
if (shouldResolve) {
await expect(addAgentResourceFile({ agent_id, ...params })).resolves.toBeDefined();
} else {
await expect(addAgentResourceFile({ agent_id, ...params })).rejects.toThrow(error);
}
});
});
describe.each([
{
name: 'empty files array',
files: [],
needsAgent: true,
shouldResolve: true,
},
{
name: 'non-existent tool_resource',
files: [{ tool_resource: 'non_existent_tool', file_id: 'file123' }],
needsAgent: true,
shouldResolve: true,
},
{
name: 'non-existent agent',
files: [{ tool_resource: 'file_search', file_id: 'file123' }],
needsAgent: false,
shouldResolve: false,
error: 'Agent not found for removing resource files',
},
])('removeAgentResourceFiles with $name', ({ files, needsAgent, shouldResolve, error }) => {
test(`should ${shouldResolve ? 'resolve' : 'reject'}`, async () => {
const agent = needsAgent ? await createBasicAgent() : null;
const agent_id = needsAgent ? agent.id : `agent_${uuidv4()}`;
if (shouldResolve) {
const result = await removeAgentResourceFiles({ agent_id, files });
expect(result).toBeDefined();
if (agent) {
expect(result.id).toBe(agent.id);
}
} else {
await expect(removeAgentResourceFiles({ agent_id, files })).rejects.toThrow(error);
}
});
});
describe('Edge Cases', () => {
test('should handle extremely large version history', async () => {
const agentId = `agent_${uuidv4()}`;
@@ -1633,7 +2004,7 @@ describe('models/Agent', () => {
expect(result.version).toBe(1);
});
test('should return null when user is not author and agent has no projectIds', async () => {
test('should return agent even when user is not author (permissions checked at route level)', async () => {
const authorId = new mongoose.Types.ObjectId();
const userId = new mongoose.Types.ObjectId();
const agentId = `agent_${uuidv4()}`;
@@ -1654,7 +2025,11 @@ describe('models/Agent', () => {
model_parameters: { model: 'gpt-4' },
});
expect(result).toBeFalsy();
// With the new permission system, loadAgent returns the agent regardless of permissions
// Permission checks are handled at the route level via middleware
expect(result).toBeTruthy();
expect(result.id).toBe(agentId);
expect(result.name).toBe('Test Agent');
});
test('should handle ephemeral agent with no MCP servers', async () => {
@@ -1762,7 +2137,7 @@ describe('models/Agent', () => {
}
});
test('should handle loadAgent with agent from different project', async () => {
test('should return agent from different project (permissions checked at route level)', async () => {
const authorId = new mongoose.Types.ObjectId();
const userId = new mongoose.Types.ObjectId();
const agentId = `agent_${uuidv4()}`;
@@ -1785,7 +2160,11 @@ describe('models/Agent', () => {
model_parameters: { model: 'gpt-4' },
});
expect(result).toBeFalsy();
// With the new permission system, loadAgent returns the agent regardless of permissions
// Permission checks are handled at the route level via middleware
expect(result).toBeTruthy();
expect(result.id).toBe(agentId);
expect(result.name).toBe('Project Agent');
});
});
});
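The two rewritten `loadAgent` tests above encode the same architectural shift: the model layer returns the agent unconditionally, and authorization moves to the route layer. As an illustration only — `requireAgentAccess` and its inline checker are hypothetical names, not identifiers from this repository — route-level enforcement might be wired roughly like this:

```js
// Illustrative sketch only: `requireAgentAccess` and its permission checker are
// hypothetical names; only getAgent comes from the model code shown above.
const express = require('express');
const { getAgent } = require('./Agent');

/** Hypothetical factory wrapping an async permission check into Express middleware. */
function requireAgentAccess(hasPermission) {
  return async (req, res, next) => {
    const allowed = await hasPermission(req.user, req.params.agent_id);
    if (!allowed) {
      return res.status(403).json({ error: 'Forbidden' });
    }
    next();
  };
}

const router = express.Router();
router.get(
  '/agents/:agent_id',
  // In the real app this check would consult ACL entries (compare the
  // grantPermission calls in the file access tests later in this comparison).
  requireAgentAccess(async (_user, _agentId) => true),
  async (req, res) => {
    // Authorization already happened above, so the handler just loads the agent.
    res.json(await getAgent({ id: req.params.agent_id }));
  },
);

module.exports = router;
```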
@@ -2400,11 +2779,18 @@ describe('models/Agent', () => {
agent_ids: ['agent1', 'agent2'],
});
await updateAgent({ id: agentId }, { agent_ids: ['agent1', 'agent2', 'agent3'] });
const updatedAgent = await updateAgent(
{ id: agentId },
{ agent_ids: ['agent1', 'agent2', 'agent3'] },
);
expect(updatedAgent.versions).toHaveLength(2);
await expect(
updateAgent({ id: agentId }, { agent_ids: ['agent1', 'agent2', 'agent3'] }),
).rejects.toThrow('Duplicate version');
// Update with same agent_ids should succeed but not create a new version
const duplicateUpdate = await updateAgent(
{ id: agentId },
{ agent_ids: ['agent1', 'agent2', 'agent3'] },
);
expect(duplicateUpdate.versions).toHaveLength(2); // No new version created
});
test('should handle agent_ids field alongside other fields', async () => {
@@ -2543,9 +2929,10 @@ describe('models/Agent', () => {
expect(updated.versions).toHaveLength(2);
expect(updated.agent_ids).toEqual([]);
await expect(updateAgent({ id: agentId }, { agent_ids: [] })).rejects.toThrow(
'Duplicate version',
);
// Update with same empty agent_ids should succeed but not create a new version
const duplicateUpdate = await updateAgent({ id: agentId }, { agent_ids: [] });
expect(duplicateUpdate.versions).toHaveLength(2); // No new version created
expect(duplicateUpdate.agent_ids).toEqual([]);
});
test('should handle agent without agent_ids field', async () => {
@@ -2570,6 +2957,299 @@ describe('models/Agent', () => {
});
});
describe('Support Contact Field', () => {
let mongoServer;
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
Agent = mongoose.models.Agent || mongoose.model('Agent', agentSchema);
await mongoose.connect(mongoUri);
}, 20000);
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
await Agent.deleteMany({});
});
it('should not create subdocument with ObjectId for support_contact', async () => {
const userId = new mongoose.Types.ObjectId();
const agentData = {
id: 'agent_test_support',
name: 'Test Agent',
provider: 'openai',
model: 'gpt-4',
author: userId,
support_contact: {
name: 'Support Team',
email: 'support@example.com',
},
};
// Create agent
const agent = await createAgent(agentData);
// Verify support_contact is stored correctly
expect(agent.support_contact).toBeDefined();
expect(agent.support_contact.name).toBe('Support Team');
expect(agent.support_contact.email).toBe('support@example.com');
// Verify no _id field is created in support_contact
expect(agent.support_contact._id).toBeUndefined();
// Fetch from database to double-check
const dbAgent = await Agent.findOne({ id: agentData.id });
expect(dbAgent.support_contact).toBeDefined();
expect(dbAgent.support_contact.name).toBe('Support Team');
expect(dbAgent.support_contact.email).toBe('support@example.com');
expect(dbAgent.support_contact._id).toBeUndefined();
});
it('should handle empty support_contact correctly', async () => {
const userId = new mongoose.Types.ObjectId();
const agentData = {
id: 'agent_test_empty_support',
name: 'Test Agent',
provider: 'openai',
model: 'gpt-4',
author: userId,
support_contact: {},
};
const agent = await createAgent(agentData);
// Verify empty support_contact is stored as empty object
expect(agent.support_contact).toEqual({});
expect(agent.support_contact._id).toBeUndefined();
});
it('should handle missing support_contact correctly', async () => {
const userId = new mongoose.Types.ObjectId();
const agentData = {
id: 'agent_test_no_support',
name: 'Test Agent',
provider: 'openai',
model: 'gpt-4',
author: userId,
};
const agent = await createAgent(agentData);
// Verify support_contact is undefined when not provided
expect(agent.support_contact).toBeUndefined();
});
describe('getListAgentsByAccess - Security Tests', () => {
let userA, userB;
let agentA1, agentA2, agentA3;
beforeEach(async () => {
Agent = mongoose.models.Agent || mongoose.model('Agent', agentSchema);
await Agent.deleteMany({});
await AclEntry.deleteMany({});
// Create two users
userA = new mongoose.Types.ObjectId();
userB = new mongoose.Types.ObjectId();
// Create agents for user A
agentA1 = await createAgent({
id: `agent_${uuidv4().slice(0, 12)}`,
name: 'Agent A1',
description: 'User A agent 1',
provider: 'openai',
model: 'gpt-4',
author: userA,
});
agentA2 = await createAgent({
id: `agent_${uuidv4().slice(0, 12)}`,
name: 'Agent A2',
description: 'User A agent 2',
provider: 'openai',
model: 'gpt-4',
author: userA,
});
agentA3 = await createAgent({
id: `agent_${uuidv4().slice(0, 12)}`,
name: 'Agent A3',
description: 'User A agent 3',
provider: 'openai',
model: 'gpt-4',
author: userA,
});
});
test('should return empty list when user has no accessible agents (empty accessibleIds)', async () => {
// User B has no agents and no shared agents
const result = await getListAgentsByAccess({
accessibleIds: [],
otherParams: {},
});
expect(result.data).toHaveLength(0);
expect(result.has_more).toBe(false);
expect(result.first_id).toBeNull();
expect(result.last_id).toBeNull();
});
test('should not return other users\' agents when accessibleIds is empty', async () => {
// User B trying to list agents with empty accessibleIds should not see User A's agents
const result = await getListAgentsByAccess({
accessibleIds: [],
otherParams: { author: userB },
});
expect(result.data).toHaveLength(0);
expect(result.has_more).toBe(false);
});
test('should only return agents in accessibleIds list', async () => {
// Give User B access to only one of User A's agents
const accessibleIds = [agentA1._id];
const result = await getListAgentsByAccess({
accessibleIds,
otherParams: {},
});
expect(result.data).toHaveLength(1);
expect(result.data[0].id).toBe(agentA1.id);
expect(result.data[0].name).toBe('Agent A1');
});
test('should return multiple accessible agents when provided', async () => {
// Give User B access to two of User A's agents
const accessibleIds = [agentA1._id, agentA3._id];
const result = await getListAgentsByAccess({
accessibleIds,
otherParams: {},
});
expect(result.data).toHaveLength(2);
const returnedIds = result.data.map((agent) => agent.id);
expect(returnedIds).toContain(agentA1.id);
expect(returnedIds).toContain(agentA3.id);
expect(returnedIds).not.toContain(agentA2.id);
});
test('should respect other query parameters while enforcing accessibleIds', async () => {
// Give access to all agents but filter by name
const accessibleIds = [agentA1._id, agentA2._id, agentA3._id];
const result = await getListAgentsByAccess({
accessibleIds,
otherParams: { name: 'Agent A2' },
});
expect(result.data).toHaveLength(1);
expect(result.data[0].id).toBe(agentA2.id);
});
test('should handle pagination correctly with accessibleIds filter', async () => {
// Create more agents
const moreAgents = [];
for (let i = 4; i <= 10; i++) {
const agent = await createAgent({
id: `agent_${uuidv4().slice(0, 12)}`,
name: `Agent A${i}`,
description: `User A agent ${i}`,
provider: 'openai',
model: 'gpt-4',
author: userA,
});
moreAgents.push(agent);
}
// Give access to all agents
const allAgentIds = [agentA1, agentA2, agentA3, ...moreAgents].map((a) => a._id);
// First page
const page1 = await getListAgentsByAccess({
accessibleIds: allAgentIds,
otherParams: {},
limit: 5,
});
expect(page1.data).toHaveLength(5);
expect(page1.has_more).toBe(true);
expect(page1.after).toBeTruthy();
// Second page
const page2 = await getListAgentsByAccess({
accessibleIds: allAgentIds,
otherParams: {},
limit: 5,
after: page1.after,
});
expect(page2.data).toHaveLength(5);
expect(page2.has_more).toBe(false);
// Verify no overlap between pages
const page1Ids = page1.data.map((a) => a.id);
const page2Ids = page2.data.map((a) => a.id);
const intersection = page1Ids.filter((id) => page2Ids.includes(id));
expect(intersection).toHaveLength(0);
});
test('should return empty list when accessibleIds contains non-existent IDs', async () => {
// Try with non-existent agent IDs
const fakeIds = [new mongoose.Types.ObjectId(), new mongoose.Types.ObjectId()];
const result = await getListAgentsByAccess({
accessibleIds: fakeIds,
otherParams: {},
});
expect(result.data).toHaveLength(0);
expect(result.has_more).toBe(false);
});
test('should handle undefined accessibleIds as empty array', async () => {
// When accessibleIds is undefined, it should be treated as empty array
const result = await getListAgentsByAccess({
accessibleIds: undefined,
otherParams: {},
});
expect(result.data).toHaveLength(0);
expect(result.has_more).toBe(false);
});
test('should combine accessibleIds with author filter correctly', async () => {
// Create an agent for User B
const agentB1 = await createAgent({
id: `agent_${uuidv4().slice(0, 12)}`,
name: 'Agent B1',
description: 'User B agent 1',
provider: 'openai',
model: 'gpt-4',
author: userB,
});
// Give User B access to one of User A's agents
const accessibleIds = [agentA1._id, agentB1._id];
// Filter by author should further restrict the results
const result = await getListAgentsByAccess({
accessibleIds,
otherParams: { author: userB },
});
expect(result.data).toHaveLength(1);
expect(result.data[0].id).toBe(agentB1.id);
expect(result.data[0].author).toBe(userB.toString());
});
});
});
function createBasicAgent(overrides = {}) {
const defaults = {
id: `agent_${uuidv4()}`,

View File

@@ -1,6 +1,5 @@
const { logger } = require('@librechat/data-schemas');
const { createTempChatExpirationDate } = require('@librechat/api');
const getCustomConfig = require('~/server/services/Config/getCustomConfig');
const { getMessages, deleteMessages } = require('./Message');
const { Conversation } = require('~/db/models');
@@ -102,8 +101,8 @@ module.exports = {
if (req?.body?.isTemporary) {
try {
const customConfig = await getCustomConfig();
update.expiredAt = createTempChatExpirationDate(customConfig);
const appConfig = req.config;
update.expiredAt = createTempChatExpirationDate(appConfig?.interfaceConfig);
} catch (err) {
logger.error('Error creating temporary chat expiration date:', err);
logger.info(`---\`saveConvo\` context: ${metadata?.context}`);


View File

@@ -0,0 +1,570 @@
const mongoose = require('mongoose');
const { v4: uuidv4 } = require('uuid');
const { EModelEndpoint } = require('librechat-data-provider');
const { MongoMemoryServer } = require('mongodb-memory-server');
const {
deleteNullOrEmptyConversations,
searchConversation,
getConvosByCursor,
getConvosQueried,
getConvoFiles,
getConvoTitle,
deleteConvos,
saveConvo,
getConvo,
} = require('./Conversation');
jest.mock('~/server/services/Config/app');
jest.mock('./Message');
const { getMessages, deleteMessages } = require('./Message');
const { Conversation } = require('~/db/models');
describe('Conversation Operations', () => {
let mongoServer;
let mockReq;
let mockConversationData;
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
await mongoose.connect(mongoUri);
});
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
// Clear database
await Conversation.deleteMany({});
// Reset mocks
jest.clearAllMocks();
// Default mock implementations
getMessages.mockResolvedValue([]);
deleteMessages.mockResolvedValue({ deletedCount: 0 });
mockReq = {
user: { id: 'user123' },
body: {},
config: {
interfaceConfig: {
temporaryChatRetention: 24, // Default 24 hours
},
},
};
mockConversationData = {
conversationId: uuidv4(),
title: 'Test Conversation',
endpoint: EModelEndpoint.openAI,
};
});
describe('saveConvo', () => {
it('should save a conversation for an authenticated user', async () => {
const result = await saveConvo(mockReq, mockConversationData);
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.user).toBe('user123');
expect(result.title).toBe('Test Conversation');
expect(result.endpoint).toBe(EModelEndpoint.openAI);
// Verify the conversation was actually saved to the database
const savedConvo = await Conversation.findOne({
conversationId: mockConversationData.conversationId,
user: 'user123',
});
expect(savedConvo).toBeTruthy();
expect(savedConvo.title).toBe('Test Conversation');
});
it('should query messages when saving a conversation', async () => {
// Mock messages as ObjectIds
const mongoose = require('mongoose');
const mockMessages = [new mongoose.Types.ObjectId(), new mongoose.Types.ObjectId()];
getMessages.mockResolvedValue(mockMessages);
await saveConvo(mockReq, mockConversationData);
// Verify that getMessages was called with correct parameters
expect(getMessages).toHaveBeenCalledWith(
{ conversationId: mockConversationData.conversationId },
'_id',
);
});
it('should handle newConversationId when provided', async () => {
const newConversationId = uuidv4();
const result = await saveConvo(mockReq, {
...mockConversationData,
newConversationId,
});
expect(result.conversationId).toBe(newConversationId);
});
it('should handle unsetFields metadata', async () => {
const metadata = {
unsetFields: { someField: 1 },
};
await saveConvo(mockReq, mockConversationData, metadata);
const savedConvo = await Conversation.findOne({
conversationId: mockConversationData.conversationId,
});
expect(savedConvo.someField).toBeUndefined();
});
});
describe('isTemporary conversation handling', () => {
it('should save a conversation with expiredAt when isTemporary is true', async () => {
// Mock app config with 24 hour retention
mockReq.config.interfaceConfig.temporaryChatRetention = 24;
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveConvo(mockReq, mockConversationData);
const afterSave = new Date();
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.expiredAt).toBeDefined();
expect(result.expiredAt).toBeInstanceOf(Date);
// Verify expiredAt is approximately 24 hours in the future
const expectedExpirationTime = new Date(beforeSave.getTime() + 24 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
new Date(afterSave.getTime() + 24 * 60 * 60 * 1000 + 1000).getTime(),
);
});
it('should save a conversation without expiredAt when isTemporary is false', async () => {
mockReq.body = { isTemporary: false };
const result = await saveConvo(mockReq, mockConversationData);
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.expiredAt).toBeNull();
});
it('should save a conversation without expiredAt when isTemporary is not provided', async () => {
// No isTemporary in body
mockReq.body = {};
const result = await saveConvo(mockReq, mockConversationData);
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.expiredAt).toBeNull();
});
it('should use custom retention period from config', async () => {
// Mock app config with 48 hour retention
mockReq.config.interfaceConfig.temporaryChatRetention = 48;
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveConvo(mockReq, mockConversationData);
expect(result.expiredAt).toBeDefined();
// Verify expiredAt is approximately 48 hours in the future
const expectedExpirationTime = new Date(beforeSave.getTime() + 48 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should handle minimum retention period (1 hour)', async () => {
// Mock app config with less than minimum retention
mockReq.config.interfaceConfig.temporaryChatRetention = 0.5; // Half hour - should be clamped to 1 hour
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveConvo(mockReq, mockConversationData);
expect(result.expiredAt).toBeDefined();
// Verify expiredAt is approximately 1 hour in the future (minimum)
const expectedExpirationTime = new Date(beforeSave.getTime() + 1 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should handle maximum retention period (8760 hours)', async () => {
// Mock app config with more than maximum retention
mockReq.config.interfaceConfig.temporaryChatRetention = 10000; // Should be clamped to 8760 hours
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveConvo(mockReq, mockConversationData);
expect(result.expiredAt).toBeDefined();
// Verify expiredAt is approximately 8760 hours (1 year) in the future
const expectedExpirationTime = new Date(beforeSave.getTime() + 8760 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should handle missing config gracefully', async () => {
// Simulate missing config - should use default retention period
delete mockReq.config;
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveConvo(mockReq, mockConversationData);
const afterSave = new Date();
// Should still save the conversation with default retention period (30 days)
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.expiredAt).toBeDefined();
expect(result.expiredAt).toBeInstanceOf(Date);
// Verify expiredAt is approximately 30 days in the future (720 hours)
const expectedExpirationTime = new Date(beforeSave.getTime() + 720 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
new Date(afterSave.getTime() + 720 * 60 * 60 * 1000 + 1000).getTime(),
);
});
it('should use default retention when config is not provided', async () => {
// Mock getAppConfig to return empty config
mockReq.config = {}; // Empty config
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveConvo(mockReq, mockConversationData);
expect(result.expiredAt).toBeDefined();
// Default retention is 30 days (720 hours)
const expectedExpirationTime = new Date(beforeSave.getTime() + 30 * 24 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
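The retention tests above pin down the clamping rules: values below 1 hour are raised to 1, values above 8760 hours are capped at 8760, and a missing or empty config falls back to 720 hours (30 days). The sketch below is only an approximation inferred from those expectations; the real `createTempChatExpirationDate` lives in `@librechat/api` and is not shown in this comparison:

```js
// Approximation of the expiration behavior the tests above describe;
// not the actual @librechat/api implementation.
const MIN_RETENTION_HOURS = 1;
const MAX_RETENTION_HOURS = 8760; // one year
const DEFAULT_RETENTION_HOURS = 720; // 30 days

function computeTempChatExpiration(interfaceConfig, now = new Date()) {
  const configured = interfaceConfig?.temporaryChatRetention;
  const hours =
    typeof configured === 'number'
      ? Math.min(Math.max(configured, MIN_RETENTION_HOURS), MAX_RETENTION_HOURS)
      : DEFAULT_RETENTION_HOURS;
  return new Date(now.getTime() + hours * 60 * 60 * 1000);
}

// e.g. computeTempChatExpiration({ temporaryChatRetention: 0.5 }) -> roughly 1 hour out
```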
it('should update expiredAt when saving existing temporary conversation', async () => {
// First save a temporary conversation
mockReq.config.interfaceConfig.temporaryChatRetention = 24;
mockReq.body = { isTemporary: true };
const firstSave = await saveConvo(mockReq, mockConversationData);
const originalExpiredAt = firstSave.expiredAt;
// Wait a bit to ensure time difference
await new Promise((resolve) => setTimeout(resolve, 100));
// Save again with same conversationId but different title
const updatedData = { ...mockConversationData, title: 'Updated Title' };
const secondSave = await saveConvo(mockReq, updatedData);
// Should update title and create new expiredAt
expect(secondSave.title).toBe('Updated Title');
expect(secondSave.expiredAt).toBeDefined();
expect(new Date(secondSave.expiredAt).getTime()).toBeGreaterThan(
new Date(originalExpiredAt).getTime(),
);
});
it('should not set expiredAt when updating non-temporary conversation', async () => {
// First save a non-temporary conversation
mockReq.body = { isTemporary: false };
const firstSave = await saveConvo(mockReq, mockConversationData);
expect(firstSave.expiredAt).toBeNull();
// Update without isTemporary flag
mockReq.body = {};
const updatedData = { ...mockConversationData, title: 'Updated Title' };
const secondSave = await saveConvo(mockReq, updatedData);
expect(secondSave.title).toBe('Updated Title');
expect(secondSave.expiredAt).toBeNull();
});
it('should filter out expired conversations in getConvosByCursor', async () => {
// Create some test conversations
const nonExpiredConvo = await Conversation.create({
conversationId: uuidv4(),
user: 'user123',
title: 'Non-expired',
endpoint: EModelEndpoint.openAI,
expiredAt: null,
updatedAt: new Date(),
});
await Conversation.create({
conversationId: uuidv4(),
user: 'user123',
title: 'Future expired',
endpoint: EModelEndpoint.openAI,
expiredAt: new Date(Date.now() + 24 * 60 * 60 * 1000), // 24 hours from now
updatedAt: new Date(),
});
// Mock Meili search
Conversation.meiliSearch = jest.fn().mockResolvedValue({ hits: [] });
const result = await getConvosByCursor('user123');
// Should only return conversations with null or non-existent expiredAt
expect(result.conversations).toHaveLength(1);
expect(result.conversations[0].conversationId).toBe(nonExpiredConvo.conversationId);
});
it('should filter out expired conversations in getConvosQueried', async () => {
// Create test conversations
const nonExpiredConvo = await Conversation.create({
conversationId: uuidv4(),
user: 'user123',
title: 'Non-expired',
endpoint: EModelEndpoint.openAI,
expiredAt: null,
});
const expiredConvo = await Conversation.create({
conversationId: uuidv4(),
user: 'user123',
title: 'Expired',
endpoint: EModelEndpoint.openAI,
expiredAt: new Date(Date.now() + 24 * 60 * 60 * 1000),
});
const convoIds = [
{ conversationId: nonExpiredConvo.conversationId },
{ conversationId: expiredConvo.conversationId },
];
const result = await getConvosQueried('user123', convoIds);
// Should only return the non-expired conversation
expect(result.conversations).toHaveLength(1);
expect(result.conversations[0].conversationId).toBe(nonExpiredConvo.conversationId);
expect(result.convoMap[nonExpiredConvo.conversationId]).toBeDefined();
expect(result.convoMap[expiredConvo.conversationId]).toBeUndefined();
});
});
describe('searchConversation', () => {
it('should find a conversation by conversationId', async () => {
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
title: 'Test',
endpoint: EModelEndpoint.openAI,
});
const result = await searchConversation(mockConversationData.conversationId);
expect(result).toBeTruthy();
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.user).toBe('user123');
expect(result.title).toBeUndefined(); // Only returns conversationId and user
});
it('should return null if conversation not found', async () => {
const result = await searchConversation('non-existent-id');
expect(result).toBeNull();
});
});
describe('getConvo', () => {
it('should retrieve a conversation for a user', async () => {
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
title: 'Test Conversation',
endpoint: EModelEndpoint.openAI,
});
const result = await getConvo('user123', mockConversationData.conversationId);
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.user).toBe('user123');
expect(result.title).toBe('Test Conversation');
});
it('should return null if conversation not found', async () => {
const result = await getConvo('user123', 'non-existent-id');
expect(result).toBeNull();
});
});
describe('getConvoTitle', () => {
it('should return the conversation title', async () => {
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
title: 'Test Title',
endpoint: EModelEndpoint.openAI,
});
const result = await getConvoTitle('user123', mockConversationData.conversationId);
expect(result).toBe('Test Title');
});
it('should return null if conversation has no title', async () => {
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
title: null,
endpoint: EModelEndpoint.openAI,
});
const result = await getConvoTitle('user123', mockConversationData.conversationId);
expect(result).toBeNull();
});
it('should return "New Chat" if conversation not found', async () => {
const result = await getConvoTitle('user123', 'non-existent-id');
expect(result).toBe('New Chat');
});
});
describe('getConvoFiles', () => {
it('should return conversation files', async () => {
const files = ['file1', 'file2'];
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
endpoint: EModelEndpoint.openAI,
files,
});
const result = await getConvoFiles(mockConversationData.conversationId);
expect(result).toEqual(files);
});
it('should return empty array if no files', async () => {
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
endpoint: EModelEndpoint.openAI,
});
const result = await getConvoFiles(mockConversationData.conversationId);
expect(result).toEqual([]);
});
it('should return empty array if conversation not found', async () => {
const result = await getConvoFiles('non-existent-id');
expect(result).toEqual([]);
});
});
describe('deleteConvos', () => {
it('should delete conversations and associated messages', async () => {
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
title: 'To Delete',
endpoint: EModelEndpoint.openAI,
});
deleteMessages.mockResolvedValue({ deletedCount: 5 });
const result = await deleteConvos('user123', {
conversationId: mockConversationData.conversationId,
});
expect(result.deletedCount).toBe(1);
expect(result.messages.deletedCount).toBe(5);
expect(deleteMessages).toHaveBeenCalledWith({
conversationId: { $in: [mockConversationData.conversationId] },
});
// Verify conversation was deleted
const deletedConvo = await Conversation.findOne({
conversationId: mockConversationData.conversationId,
});
expect(deletedConvo).toBeNull();
});
it('should throw error if no conversations found', async () => {
await expect(deleteConvos('user123', { conversationId: 'non-existent' })).rejects.toThrow(
'Conversation not found or already deleted.',
);
});
});
describe('deleteNullOrEmptyConversations', () => {
it('should delete conversations with null, empty, or missing conversationIds', async () => {
// Since conversationId is required by the schema, we can't create documents with null/missing IDs
// This test should verify the function works when such documents exist (e.g., from data corruption)
// For this test, let's create a valid conversation and verify the function doesn't delete it
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user4',
endpoint: EModelEndpoint.openAI,
});
deleteMessages.mockResolvedValue({ deletedCount: 0 });
const result = await deleteNullOrEmptyConversations();
expect(result.conversations.deletedCount).toBe(0); // No invalid conversations to delete
expect(result.messages.deletedCount).toBe(0);
// Verify valid conversation remains
const remainingConvos = await Conversation.find({});
expect(remainingConvos).toHaveLength(1);
expect(remainingConvos[0].conversationId).toBe(mockConversationData.conversationId);
});
});
describe('Error Handling', () => {
it('should handle database errors in saveConvo', async () => {
// Force a database error by disconnecting
await mongoose.disconnect();
const result = await saveConvo(mockReq, mockConversationData);
expect(result).toEqual({ message: 'Error saving conversation' });
// Reconnect for other tests
await mongoose.connect(mongoServer.getUri());
});
});
});

View File

@@ -1,7 +1,5 @@
const { logger } = require('@librechat/data-schemas');
const { EToolResources, FileContext, Constants } = require('librechat-data-provider');
const { getProjectByName } = require('./Project');
const { getAgent } = require('./Agent');
const { EToolResources, FileContext } = require('librechat-data-provider');
const { File } = require('~/db/models');
/**
@@ -14,124 +12,17 @@ const findFileById = async (file_id, options = {}) => {
return await File.findOne({ file_id, ...options }).lean();
};
/**
* Checks if a user has access to multiple files through a shared agent (batch operation)
* @param {string} userId - The user ID to check access for
* @param {string[]} fileIds - Array of file IDs to check
* @param {string} agentId - The agent ID that might grant access
* @returns {Promise<Map<string, boolean>>} Map of fileId to access status
*/
const hasAccessToFilesViaAgent = async (userId, fileIds, agentId, checkCollaborative = true) => {
const accessMap = new Map();
// Initialize all files as no access
fileIds.forEach((fileId) => accessMap.set(fileId, false));
try {
const agent = await getAgent({ id: agentId });
if (!agent) {
return accessMap;
}
// Check if user is the author - if so, grant access to all files
if (agent.author.toString() === userId) {
fileIds.forEach((fileId) => accessMap.set(fileId, true));
return accessMap;
}
// Check if agent is shared with the user via projects
if (!agent.projectIds || agent.projectIds.length === 0) {
return accessMap;
}
// Check if agent is in global project
const globalProject = await getProjectByName(Constants.GLOBAL_PROJECT_NAME, '_id');
if (
!globalProject ||
!agent.projectIds.some((pid) => pid.toString() === globalProject._id.toString())
) {
return accessMap;
}
// Agent is globally shared - check if it's collaborative
if (checkCollaborative && !agent.isCollaborative) {
return accessMap;
}
// Check which files are actually attached
const attachedFileIds = new Set();
if (agent.tool_resources) {
for (const [_resourceType, resource] of Object.entries(agent.tool_resources)) {
if (resource?.file_ids && Array.isArray(resource.file_ids)) {
resource.file_ids.forEach((fileId) => attachedFileIds.add(fileId));
}
}
}
// Grant access only to files that are attached to this agent
fileIds.forEach((fileId) => {
if (attachedFileIds.has(fileId)) {
accessMap.set(fileId, true);
}
});
return accessMap;
} catch (error) {
logger.error('[hasAccessToFilesViaAgent] Error checking file access:', error);
return accessMap;
}
};
/**
* Retrieves files matching a given filter, sorted by the most recently updated.
* @param {Object} filter - The filter criteria to apply.
* @param {Object} [_sortOptions] - Optional sort parameters.
* @param {Object|String} [selectFields={ text: 0 }] - Fields to include/exclude in the query results.
* Default excludes the 'text' field.
* @param {Object} [options] - Additional options
* @param {string} [options.userId] - User ID for access control
* @param {string} [options.agentId] - Agent ID that might grant access to files
* @returns {Promise<Array<MongoFile>>} A promise that resolves to an array of file documents.
*/
const getFiles = async (filter, _sortOptions, selectFields = { text: 0 }, options = {}) => {
const getFiles = async (filter, _sortOptions, selectFields = { text: 0 }) => {
const sortOptions = { updatedAt: -1, ..._sortOptions };
const files = await File.find(filter).select(selectFields).sort(sortOptions).lean();
// If userId and agentId are provided, filter files based on access
if (options.userId && options.agentId) {
// Collect file IDs that need access check
const filesToCheck = [];
const ownedFiles = [];
for (const file of files) {
if (file.user && file.user.toString() === options.userId) {
ownedFiles.push(file);
} else {
filesToCheck.push(file);
}
}
if (filesToCheck.length === 0) {
return ownedFiles;
}
// Batch check access for all non-owned files
const fileIds = filesToCheck.map((f) => f.file_id);
const accessMap = await hasAccessToFilesViaAgent(
options.userId,
fileIds,
options.agentId,
false,
);
// Filter files based on access
const accessibleFiles = filesToCheck.filter((file) => accessMap.get(file.file_id));
return [...ownedFiles, ...accessibleFiles];
}
return files;
return await File.find(filter).select(selectFields).sort(sortOptions).lean();
};
/**
@@ -285,5 +176,4 @@ module.exports = {
deleteFiles,
deleteFileByFilter,
batchUpdateFiles,
hasAccessToFilesViaAgent,
};
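With the access-control branch stripped out of `getFiles`, callers fetch first and filter second via `filterFilesByAgentAccess`. A short sketch of that two-step pattern as the updated tests use it (the require path for the model is assumed for illustration):

```js
// Sketch of the caller-side pattern; the model require path is assumed.
const { getFiles } = require('~/models/File');
const { filterFilesByAgentAccess } = require('~/server/services/Files/permissions');
const { SystemRoles } = require('librechat-data-provider');

async function getAccessibleFiles({ fileIds, userId, role = SystemRoles.USER, agentId }) {
  // 1) Plain query: getFiles no longer accepts access-control options.
  const allFiles = await getFiles({ file_id: { $in: fileIds } }, null, { text: 0 });
  // 2) Access filtering is now a separate, permission-aware step.
  return filterFilesByAgentAccess({ files: allFiles, userId, role, agentId });
}
```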

View File

@@ -1,17 +1,23 @@
const mongoose = require('mongoose');
const { v4: uuidv4 } = require('uuid');
const { fileSchema } = require('@librechat/data-schemas');
const { agentSchema } = require('@librechat/data-schemas');
const { projectSchema } = require('@librechat/data-schemas');
const { createModels } = require('@librechat/data-schemas');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { GLOBAL_PROJECT_NAME } = require('librechat-data-provider').Constants;
const {
SystemRoles,
ResourceType,
AccessRoleIds,
PrincipalType,
} = require('librechat-data-provider');
const { grantPermission } = require('~/server/services/PermissionService');
const { getFiles, createFile } = require('./File');
const { getProjectByName } = require('./Project');
const { seedDefaultRoles } = require('~/models');
const { createAgent } = require('./Agent');
let File;
let Agent;
let Project;
let AclEntry;
let User;
let modelsToCleanup = [];
describe('File Access Control', () => {
let mongoServer;
@@ -19,13 +25,41 @@ describe('File Access Control', () => {
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
File = mongoose.models.File || mongoose.model('File', fileSchema);
Agent = mongoose.models.Agent || mongoose.model('Agent', agentSchema);
Project = mongoose.models.Project || mongoose.model('Project', projectSchema);
await mongoose.connect(mongoUri);
// Initialize all models
const models = createModels(mongoose);
// Track which models we're adding
modelsToCleanup = Object.keys(models);
// Register models on mongoose.models so methods can access them
const dbModels = require('~/db/models');
Object.assign(mongoose.models, dbModels);
File = dbModels.File;
Agent = dbModels.Agent;
AclEntry = dbModels.AclEntry;
User = dbModels.User;
// Seed default roles
await seedDefaultRoles();
});
afterAll(async () => {
// Clean up all collections before disconnecting
const collections = mongoose.connection.collections;
for (const key in collections) {
await collections[key].deleteMany({});
}
// Clear only the models we added
for (const modelName of modelsToCleanup) {
if (mongoose.models[modelName]) {
delete mongoose.models[modelName];
}
}
await mongoose.disconnect();
await mongoServer.stop();
});
@@ -33,16 +67,33 @@ describe('File Access Control', () => {
beforeEach(async () => {
await File.deleteMany({});
await Agent.deleteMany({});
await Project.deleteMany({});
await AclEntry.deleteMany({});
await User.deleteMany({});
// Don't delete AccessRole as they are seeded defaults needed for tests
});
describe('hasAccessToFilesViaAgent', () => {
it('should efficiently check access for multiple files at once', async () => {
const userId = new mongoose.Types.ObjectId().toString();
const authorId = new mongoose.Types.ObjectId().toString();
const userId = new mongoose.Types.ObjectId();
const authorId = new mongoose.Types.ObjectId();
const agentId = uuidv4();
const fileIds = [uuidv4(), uuidv4(), uuidv4(), uuidv4()];
// Create users
await User.create({
_id: userId,
email: 'user@example.com',
emailVerified: true,
provider: 'local',
});
await User.create({
_id: authorId,
email: 'author@example.com',
emailVerified: true,
provider: 'local',
});
// Create files
for (const fileId of fileIds) {
await createFile({
@@ -54,13 +105,12 @@ describe('File Access Control', () => {
}
// Create agent with only first two files attached
await createAgent({
const agent = await createAgent({
id: agentId,
name: 'Test Agent',
author: authorId,
model: 'gpt-4',
provider: 'openai',
isCollaborative: true,
tool_resources: {
file_search: {
file_ids: [fileIds[0], fileIds[1]],
@@ -68,15 +118,24 @@ describe('File Access Control', () => {
},
});
// Get or create global project
const globalProject = await getProjectByName(GLOBAL_PROJECT_NAME, '_id');
// Share agent globally
await Agent.updateOne({ id: agentId }, { $push: { projectIds: globalProject._id } });
// Grant EDIT permission to user on the agent
await grantPermission({
principalType: PrincipalType.USER,
principalId: userId,
resourceType: ResourceType.AGENT,
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_EDITOR,
grantedBy: authorId,
});
// Check access for all files
const { hasAccessToFilesViaAgent } = require('./File');
const accessMap = await hasAccessToFilesViaAgent(userId, fileIds, agentId);
const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
const accessMap = await hasAccessToFilesViaAgent({
userId: userId,
role: SystemRoles.USER,
fileIds,
agentId: agent.id, // Use agent.id which is the custom UUID
});
// Should have access only to the first two files
expect(accessMap.get(fileIds[0])).toBe(true);
@@ -86,10 +145,18 @@ describe('File Access Control', () => {
});
it('should grant access to all files when user is the agent author', async () => {
const authorId = new mongoose.Types.ObjectId().toString();
const authorId = new mongoose.Types.ObjectId();
const agentId = uuidv4();
const fileIds = [uuidv4(), uuidv4(), uuidv4()];
// Create author user
await User.create({
_id: authorId,
email: 'author@example.com',
emailVerified: true,
provider: 'local',
});
// Create agent
await createAgent({
id: agentId,
@@ -105,8 +172,13 @@ describe('File Access Control', () => {
});
// Check access as the author
const { hasAccessToFilesViaAgent } = require('./File');
const accessMap = await hasAccessToFilesViaAgent(authorId, fileIds, agentId);
const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
const accessMap = await hasAccessToFilesViaAgent({
userId: authorId,
role: SystemRoles.USER,
fileIds,
agentId,
});
// Author should have access to all files
expect(accessMap.get(fileIds[0])).toBe(true);
@@ -115,31 +187,58 @@ describe('File Access Control', () => {
});
it('should handle non-existent agent gracefully', async () => {
const userId = new mongoose.Types.ObjectId().toString();
const userId = new mongoose.Types.ObjectId();
const fileIds = [uuidv4(), uuidv4()];
const { hasAccessToFilesViaAgent } = require('./File');
const accessMap = await hasAccessToFilesViaAgent(userId, fileIds, 'non-existent-agent');
// Create user
await User.create({
_id: userId,
email: 'user@example.com',
emailVerified: true,
provider: 'local',
});
const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
const accessMap = await hasAccessToFilesViaAgent({
userId: userId,
role: SystemRoles.USER,
fileIds,
agentId: 'non-existent-agent',
});
// Should have no access to any files
expect(accessMap.get(fileIds[0])).toBe(false);
expect(accessMap.get(fileIds[1])).toBe(false);
});
it('should deny access when agent is not collaborative', async () => {
const userId = new mongoose.Types.ObjectId().toString();
const authorId = new mongoose.Types.ObjectId().toString();
it('should deny access when user only has VIEW permission and needs access for deletion', async () => {
const userId = new mongoose.Types.ObjectId();
const authorId = new mongoose.Types.ObjectId();
const agentId = uuidv4();
const fileIds = [uuidv4(), uuidv4()];
// Create agent with files but isCollaborative: false
await createAgent({
// Create users
await User.create({
_id: userId,
email: 'user@example.com',
emailVerified: true,
provider: 'local',
});
await User.create({
_id: authorId,
email: 'author@example.com',
emailVerified: true,
provider: 'local',
});
// Create agent with files
const agent = await createAgent({
id: agentId,
name: 'Non-Collaborative Agent',
name: 'View-Only Agent',
author: authorId,
model: 'gpt-4',
provider: 'openai',
isCollaborative: false,
tool_resources: {
file_search: {
file_ids: fileIds,
@@ -147,20 +246,88 @@ describe('File Access Control', () => {
},
});
// Get or create global project
const globalProject = await getProjectByName(GLOBAL_PROJECT_NAME, '_id');
// Share agent globally
await Agent.updateOne({ id: agentId }, { $push: { projectIds: globalProject._id } });
// Grant only VIEW permission to user on the agent
await grantPermission({
principalType: PrincipalType.USER,
principalId: userId,
resourceType: ResourceType.AGENT,
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_VIEWER,
grantedBy: authorId,
});
// Check access for files
const { hasAccessToFilesViaAgent } = require('./File');
const accessMap = await hasAccessToFilesViaAgent(userId, fileIds, agentId);
const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
const accessMap = await hasAccessToFilesViaAgent({
userId: userId,
role: SystemRoles.USER,
fileIds,
agentId,
isDelete: true,
});
// Should have no access to any files when isCollaborative is false
// Should have no access to any files when only VIEW permission
expect(accessMap.get(fileIds[0])).toBe(false);
expect(accessMap.get(fileIds[1])).toBe(false);
});
it('should grant access when user has VIEW permission', async () => {
const userId = new mongoose.Types.ObjectId();
const authorId = new mongoose.Types.ObjectId();
const agentId = uuidv4();
const fileIds = [uuidv4(), uuidv4()];
// Create users
await User.create({
_id: userId,
email: 'user@example.com',
emailVerified: true,
provider: 'local',
});
await User.create({
_id: authorId,
email: 'author@example.com',
emailVerified: true,
provider: 'local',
});
// Create agent with files
const agent = await createAgent({
id: agentId,
name: 'View-Only Agent',
author: authorId,
model: 'gpt-4',
provider: 'openai',
tool_resources: {
file_search: {
file_ids: fileIds,
},
},
});
// Grant only VIEW permission to user on the agent
await grantPermission({
principalType: PrincipalType.USER,
principalId: userId,
resourceType: ResourceType.AGENT,
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_VIEWER,
grantedBy: authorId,
});
// Check access for files
const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
const accessMap = await hasAccessToFilesViaAgent({
userId: userId,
role: SystemRoles.USER,
fileIds,
agentId,
});
expect(accessMap.get(fileIds[0])).toBe(true);
expect(accessMap.get(fileIds[1])).toBe(true);
});
});
describe('getFiles with agent access control', () => {
@@ -172,18 +339,28 @@ describe('File Access Control', () => {
const sharedFileId = `file_${uuidv4()}`;
const inaccessibleFileId = `file_${uuidv4()}`;
// Create/get global project using getProjectByName which will upsert
const globalProject = await getProjectByName(GLOBAL_PROJECT_NAME);
// Create users
await User.create({
_id: userId,
email: 'user@example.com',
emailVerified: true,
provider: 'local',
});
await User.create({
_id: authorId,
email: 'author@example.com',
emailVerified: true,
provider: 'local',
});
// Create agent with shared file
await createAgent({
const agent = await createAgent({
id: agentId,
name: 'Shared Agent',
provider: 'test',
model: 'test-model',
author: authorId,
projectIds: [globalProject._id],
isCollaborative: true,
tool_resources: {
file_search: {
file_ids: [sharedFileId],
@@ -191,6 +368,16 @@ describe('File Access Control', () => {
},
});
// Grant EDIT permission to user on the agent
await grantPermission({
principalType: PrincipalType.USER,
principalId: userId,
resourceType: ResourceType.AGENT,
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_EDITOR,
grantedBy: authorId,
});
// Create files
await createFile({
file_id: ownedFileId,
@@ -220,14 +407,22 @@ describe('File Access Control', () => {
bytes: 300,
});
// Get files with access control
const files = await getFiles(
// Get all files first
const allFiles = await getFiles(
{ file_id: { $in: [ownedFileId, sharedFileId, inaccessibleFileId] } },
null,
{ text: 0 },
{ userId: userId.toString(), agentId },
);
// Then filter by access control
const { filterFilesByAgentAccess } = require('~/server/services/Files/permissions');
const files = await filterFilesByAgentAccess({
files: allFiles,
userId: userId,
role: SystemRoles.USER,
agentId,
});
expect(files).toHaveLength(2);
expect(files.map((f) => f.file_id)).toContain(ownedFileId);
expect(files.map((f) => f.file_id)).toContain(sharedFileId);
@@ -261,4 +456,166 @@ describe('File Access Control', () => {
expect(files).toHaveLength(2);
});
});
describe('Role-based file permissions', () => {
it('should optimize permission checks when role is provided', async () => {
const userId = new mongoose.Types.ObjectId();
const authorId = new mongoose.Types.ObjectId();
const agentId = uuidv4();
const fileIds = [uuidv4(), uuidv4()];
// Create users
await User.create({
_id: userId,
email: 'user@example.com',
emailVerified: true,
provider: 'local',
role: 'ADMIN', // User has ADMIN role
});
await User.create({
_id: authorId,
email: 'author@example.com',
emailVerified: true,
provider: 'local',
});
// Create files
for (const fileId of fileIds) {
await createFile({
file_id: fileId,
user: authorId,
filename: `${fileId}.txt`,
filepath: `/uploads/${fileId}.txt`,
type: 'text/plain',
bytes: 100,
});
}
// Create agent with files
const agent = await createAgent({
id: agentId,
name: 'Test Agent',
author: authorId,
model: 'gpt-4',
provider: 'openai',
tool_resources: {
file_search: {
file_ids: fileIds,
},
},
});
// Grant permission to ADMIN role
await grantPermission({
principalType: PrincipalType.ROLE,
principalId: 'ADMIN',
resourceType: ResourceType.AGENT,
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_EDITOR,
grantedBy: authorId,
});
// Check access with role provided (should avoid DB query)
const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
const accessMapWithRole = await hasAccessToFilesViaAgent({
userId: userId,
role: 'ADMIN',
fileIds,
agentId: agent.id,
});
// User should have access through their ADMIN role
expect(accessMapWithRole.get(fileIds[0])).toBe(true);
expect(accessMapWithRole.get(fileIds[1])).toBe(true);
// Check access without role (will query DB to get user's role)
const accessMapWithoutRole = await hasAccessToFilesViaAgent({
userId: userId,
fileIds,
agentId: agent.id,
});
// Should have same result
expect(accessMapWithoutRole.get(fileIds[0])).toBe(true);
expect(accessMapWithoutRole.get(fileIds[1])).toBe(true);
});
it('should deny access when user role changes', async () => {
const userId = new mongoose.Types.ObjectId();
const authorId = new mongoose.Types.ObjectId();
const agentId = uuidv4();
const fileId = uuidv4();
// Create users
await User.create({
_id: userId,
email: 'user@example.com',
emailVerified: true,
provider: 'local',
role: 'EDITOR',
});
await User.create({
_id: authorId,
email: 'author@example.com',
emailVerified: true,
provider: 'local',
});
// Create file
await createFile({
file_id: fileId,
user: authorId,
filename: 'test.txt',
filepath: '/uploads/test.txt',
type: 'text/plain',
bytes: 100,
});
// Create agent
const agent = await createAgent({
id: agentId,
name: 'Test Agent',
author: authorId,
model: 'gpt-4',
provider: 'openai',
tool_resources: {
file_search: {
file_ids: [fileId],
},
},
});
// Grant permission to EDITOR role only
await grantPermission({
principalType: PrincipalType.ROLE,
principalId: 'EDITOR',
resourceType: ResourceType.AGENT,
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_EDITOR,
grantedBy: authorId,
});
const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
// Check with EDITOR role - should have access
const accessAsEditor = await hasAccessToFilesViaAgent({
userId: userId,
role: 'EDITOR',
fileIds: [fileId],
agentId: agent.id,
});
expect(accessAsEditor.get(fileId)).toBe(true);
// Simulate role change to USER - should lose access
const accessAsUser = await hasAccessToFilesViaAgent({
userId: userId,
role: SystemRoles.USER,
fileIds: [fileId],
agentId: agent.id,
});
expect(accessAsUser.get(fileId)).toBe(false);
});
});
});
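The role-based tests above also show that `hasAccessToFilesViaAgent` accepts an optional `role`, and the inline comments note that supplying it avoids an extra user lookup while omitting it falls back to a query. A small caller-side sketch of that trade-off, assuming a standard Express request shape:

```js
// Sketch: pass the role when the request already carries it to skip an extra lookup.
const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');

async function filterDownloadableFileIds(req, fileIds, agentId) {
  const accessMap = await hasAccessToFilesViaAgent({
    userId: req.user.id,
    role: req.user.role, // when undefined, the service looks the role up itself
    fileIds,
    agentId,
  });
  return fileIds.filter((fileId) => accessMap.get(fileId) === true);
}
```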

View File

@@ -1,7 +1,6 @@
const { z } = require('zod');
const { logger } = require('@librechat/data-schemas');
const { createTempChatExpirationDate } = require('@librechat/api');
const getCustomConfig = require('~/server/services/Config/getCustomConfig');
const { Message } = require('~/db/models');
const idSchema = z.string().uuid();
@@ -11,7 +10,7 @@ const idSchema = z.string().uuid();
*
* @async
* @function saveMessage
* @param {Express.Request} req - The request object containing user information.
* @param {ServerRequest} req - The request object containing user information.
* @param {Object} params - The message data object.
* @param {string} params.endpoint - The endpoint where the message originated.
* @param {string} params.iconURL - The URL of the sender's icon.
@@ -57,8 +56,8 @@ async function saveMessage(req, params, metadata) {
if (req?.body?.isTemporary) {
try {
const customConfig = await getCustomConfig();
update.expiredAt = createTempChatExpirationDate(customConfig);
const appConfig = req.config;
update.expiredAt = createTempChatExpirationDate(appConfig?.interfaceConfig);
} catch (err) {
logger.error('Error creating temporary chat expiration date:', err);
logger.info(`---\`saveMessage\` context: ${metadata?.context}`);
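The hunk above swaps the getCustomConfig lookup for the per-request appConfig when computing a temporary chat's expiration. createTempChatExpirationDate itself comes from @librechat/api and is not shown here; the sketch below only illustrates the clamping behavior the isTemporary tests further down assert (retention expressed in hours, clamped to a range of 1 to 8760 hours, defaulting to 720 hours when no config is provided):

// Illustrative only; mirrors the expectations in the isTemporary tests below.
function sketchTempChatExpirationDate(interfaceConfig) {
  const DEFAULT_RETENTION_HOURS = 720; // 30 days, per the "missing config" tests
  const MIN_HOURS = 1;
  const MAX_HOURS = 8760; // 1 year
  const configured = interfaceConfig?.temporaryChatRetention ?? DEFAULT_RETENTION_HOURS;
  const hours = Math.min(Math.max(configured, MIN_HOURS), MAX_HOURS);
  return new Date(Date.now() + hours * 60 * 60 * 1000);
}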

View File

@@ -1,17 +1,20 @@
const mongoose = require('mongoose');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { v4: uuidv4 } = require('uuid');
const { messageSchema } = require('@librechat/data-schemas');
const { MongoMemoryServer } = require('mongodb-memory-server');
const {
saveMessage,
getMessages,
updateMessage,
deleteMessages,
bulkSaveMessages,
updateMessageText,
deleteMessagesSince,
} = require('./Message');
jest.mock('~/server/services/Config/app');
/**
* @type {import('mongoose').Model<import('@librechat/data-schemas').IMessage>}
*/
@@ -40,6 +43,11 @@ describe('Message Operations', () => {
mockReq = {
user: { id: 'user123' },
config: {
interfaceConfig: {
temporaryChatRetention: 24, // Default 24 hours
},
},
};
mockMessageData = {
@@ -117,21 +125,21 @@ describe('Message Operations', () => {
const conversationId = uuidv4();
// Create multiple messages in the same conversation
const message1 = await saveMessage(mockReq, {
await saveMessage(mockReq, {
messageId: 'msg1',
conversationId,
text: 'First message',
user: 'user123',
});
const message2 = await saveMessage(mockReq, {
await saveMessage(mockReq, {
messageId: 'msg2',
conversationId,
text: 'Second message',
user: 'user123',
});
const message3 = await saveMessage(mockReq, {
await saveMessage(mockReq, {
messageId: 'msg3',
conversationId,
text: 'Third message',
@@ -314,4 +322,255 @@ describe('Message Operations', () => {
expect(messages[0].text).toBe('Victim message');
});
});
describe('isTemporary message handling', () => {
beforeEach(() => {
// Reset mocks before each test
jest.clearAllMocks();
});
it('should save a message with expiredAt when isTemporary is true', async () => {
// Mock app config with 24 hour retention
mockReq.config.interfaceConfig.temporaryChatRetention = 24;
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveMessage(mockReq, mockMessageData);
const afterSave = new Date();
expect(result.messageId).toBe('msg123');
expect(result.expiredAt).toBeDefined();
expect(result.expiredAt).toBeInstanceOf(Date);
// Verify expiredAt is approximately 24 hours in the future
const expectedExpirationTime = new Date(beforeSave.getTime() + 24 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
new Date(afterSave.getTime() + 24 * 60 * 60 * 1000 + 1000).getTime(),
);
});
it('should save a message without expiredAt when isTemporary is false', async () => {
mockReq.body = { isTemporary: false };
const result = await saveMessage(mockReq, mockMessageData);
expect(result.messageId).toBe('msg123');
expect(result.expiredAt).toBeNull();
});
it('should save a message without expiredAt when isTemporary is not provided', async () => {
// No isTemporary in body
mockReq.body = {};
const result = await saveMessage(mockReq, mockMessageData);
expect(result.messageId).toBe('msg123');
expect(result.expiredAt).toBeNull();
});
it('should use custom retention period from config', async () => {
// Mock app config with 48 hour retention
mockReq.config.interfaceConfig.temporaryChatRetention = 48;
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveMessage(mockReq, mockMessageData);
expect(result.expiredAt).toBeDefined();
// Verify expiredAt is approximately 48 hours in the future
const expectedExpirationTime = new Date(beforeSave.getTime() + 48 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should handle minimum retention period (1 hour)', async () => {
// Mock app config with less than minimum retention
mockReq.config.interfaceConfig.temporaryChatRetention = 0.5; // Half hour - should be clamped to 1 hour
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveMessage(mockReq, mockMessageData);
expect(result.expiredAt).toBeDefined();
// Verify expiredAt is approximately 1 hour in the future (minimum)
const expectedExpirationTime = new Date(beforeSave.getTime() + 1 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should handle maximum retention period (8760 hours)', async () => {
// Mock app config with more than maximum retention
mockReq.config.interfaceConfig.temporaryChatRetention = 10000; // Should be clamped to 8760 hours
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveMessage(mockReq, mockMessageData);
expect(result.expiredAt).toBeDefined();
// Verify expiredAt is approximately 8760 hours (1 year) in the future
const expectedExpirationTime = new Date(beforeSave.getTime() + 8760 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should handle missing config gracefully', async () => {
// Simulate missing config - should use default retention period
delete mockReq.config;
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveMessage(mockReq, mockMessageData);
const afterSave = new Date();
// Should still save the message with default retention period (30 days)
expect(result.messageId).toBe('msg123');
expect(result.expiredAt).toBeDefined();
expect(result.expiredAt).toBeInstanceOf(Date);
// Verify expiredAt is approximately 30 days in the future (720 hours)
const expectedExpirationTime = new Date(beforeSave.getTime() + 720 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
new Date(afterSave.getTime() + 720 * 60 * 60 * 1000 + 1000).getTime(),
);
});
it('should use default retention when config is not provided', async () => {
// Mock getAppConfig to return empty config
mockReq.config = {}; // Empty config
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveMessage(mockReq, mockMessageData);
expect(result.expiredAt).toBeDefined();
// Default retention is 30 days (720 hours)
const expectedExpirationTime = new Date(beforeSave.getTime() + 30 * 24 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should not update expiredAt on message update', async () => {
// First save a temporary message
mockReq.config.interfaceConfig.temporaryChatRetention = 24;
mockReq.body = { isTemporary: true };
const savedMessage = await saveMessage(mockReq, mockMessageData);
const originalExpiredAt = savedMessage.expiredAt;
// Now update the message without isTemporary flag
mockReq.body = {};
const updatedMessage = await updateMessage(mockReq, {
messageId: 'msg123',
text: 'Updated text',
});
// expiredAt should not be in the returned updated message object
expect(updatedMessage.expiredAt).toBeUndefined();
// Verify in database that expiredAt wasn't changed
const dbMessage = await Message.findOne({ messageId: 'msg123', user: 'user123' });
expect(dbMessage.expiredAt).toEqual(originalExpiredAt);
});
it('should preserve expiredAt when saving existing temporary message', async () => {
// First save a temporary message
mockReq.config.interfaceConfig.temporaryChatRetention = 24;
mockReq.body = { isTemporary: true };
const firstSave = await saveMessage(mockReq, mockMessageData);
const originalExpiredAt = firstSave.expiredAt;
// Wait a bit to ensure time difference
await new Promise((resolve) => setTimeout(resolve, 100));
// Save again with same messageId but different text
const updatedData = { ...mockMessageData, text: 'Updated text' };
const secondSave = await saveMessage(mockReq, updatedData);
// Should update text but create new expiredAt
expect(secondSave.text).toBe('Updated text');
expect(secondSave.expiredAt).toBeDefined();
expect(new Date(secondSave.expiredAt).getTime()).toBeGreaterThan(
new Date(originalExpiredAt).getTime(),
);
});
it('should handle bulk operations with temporary messages', async () => {
// This test verifies bulkSaveMessages doesn't interfere with expiredAt
const messages = [
{
messageId: 'bulk1',
conversationId: uuidv4(),
text: 'Bulk message 1',
user: 'user123',
expiredAt: new Date(Date.now() + 24 * 60 * 60 * 1000),
},
{
messageId: 'bulk2',
conversationId: uuidv4(),
text: 'Bulk message 2',
user: 'user123',
expiredAt: null,
},
];
await bulkSaveMessages(messages);
const savedMessages = await Message.find({
messageId: { $in: ['bulk1', 'bulk2'] },
}).lean();
expect(savedMessages).toHaveLength(2);
const bulk1 = savedMessages.find((m) => m.messageId === 'bulk1');
const bulk2 = savedMessages.find((m) => m.messageId === 'bulk2');
expect(bulk1.expiredAt).toBeDefined();
expect(bulk2.expiredAt).toBeNull();
});
});
});

View File

@@ -1,12 +1,18 @@
const { ObjectId } = require('mongodb');
const { logger } = require('@librechat/data-schemas');
const { SystemRoles, SystemCategories, Constants } = require('librechat-data-provider');
const {
getProjectByName,
addGroupIdsToProject,
removeGroupIdsFromProject,
Constants,
SystemRoles,
ResourceType,
SystemCategories,
} = require('librechat-data-provider');
const {
removeGroupFromAllProjects,
removeGroupIdsFromProject,
addGroupIdsToProject,
getProjectByName,
} = require('./Project');
const { removeAllPermissions } = require('~/server/services/PermissionService');
const { PromptGroup, Prompt } = require('~/db/models');
const { escapeRegExp } = require('~/server/utils');
@@ -100,10 +106,6 @@ const getAllPromptGroups = async (req, filter) => {
try {
const { name, ...query } = filter;
if (!query.author) {
throw new Error('Author is required');
}
let searchShared = true;
let searchSharedOnly = false;
if (name) {
@@ -153,10 +155,6 @@ const getPromptGroups = async (req, filter) => {
const validatedPageNumber = Math.max(parseInt(pageNumber, 10), 1);
const validatedPageSize = Math.max(parseInt(pageSize, 10), 1);
if (!query.author) {
throw new Error('Author is required');
}
let searchShared = true;
let searchSharedOnly = false;
if (name) {
@@ -221,12 +219,16 @@ const getPromptGroups = async (req, filter) => {
* @returns {Promise<TDeletePromptGroupResponse>}
*/
const deletePromptGroup = async ({ _id, author, role }) => {
const query = { _id, author };
const groupQuery = { groupId: new ObjectId(_id), author };
if (role === SystemRoles.ADMIN) {
delete query.author;
delete groupQuery.author;
// Build query - with ACL, author is optional
const query = { _id };
const groupQuery = { groupId: new ObjectId(_id) };
// Legacy: Add author filter if provided (backward compatibility)
if (author && role !== SystemRoles.ADMIN) {
query.author = author;
groupQuery.author = author;
}
const response = await PromptGroup.deleteOne(query);
if (!response || response.deletedCount === 0) {
@@ -235,13 +237,140 @@ const deletePromptGroup = async ({ _id, author, role }) => {
await Prompt.deleteMany(groupQuery);
await removeGroupFromAllProjects(_id);
try {
await removeAllPermissions({ resourceType: ResourceType.PROMPTGROUP, resourceId: _id });
} catch (error) {
logger.error('Error removing promptGroup permissions:', error);
}
return { message: 'Prompt group deleted successfully' };
};
/**
* Get prompt groups by accessible IDs with optional cursor-based pagination.
* @param {Object} params - The parameters for getting accessible prompt groups.
* @param {Array} [params.accessibleIds] - Array of prompt group ObjectIds the user has ACL access to.
* @param {Object} [params.otherParams] - Additional query parameters (including author filter).
* @param {number} [params.limit] - Number of prompt groups to return (max 100). If not provided, returns all prompt groups.
* @param {string} [params.after] - Pagination cursor: a base64-encoded JSON string with updatedAt and _id; returns prompt groups after this cursor.
* @returns {Promise<Object>} A promise that resolves to an object containing the prompt groups data and pagination info.
*/
async function getListPromptGroupsByAccess({
accessibleIds = [],
otherParams = {},
limit = null,
after = null,
}) {
const isPaginated = limit !== null && limit !== undefined;
const normalizedLimit = isPaginated ? Math.min(Math.max(1, parseInt(limit) || 20), 100) : null;
// Build base query combining ACL accessible prompt groups with other filters
const baseQuery = { ...otherParams, _id: { $in: accessibleIds } };
// Add cursor condition
if (after) {
try {
const cursor = JSON.parse(Buffer.from(after, 'base64').toString('utf8'));
const { updatedAt, _id } = cursor;
const cursorCondition = {
$or: [
{ updatedAt: { $lt: new Date(updatedAt) } },
{ updatedAt: new Date(updatedAt), _id: { $gt: new ObjectId(_id) } },
],
};
// Merge cursor condition with base query
if (Object.keys(baseQuery).length > 0) {
baseQuery.$and = [{ ...baseQuery }, cursorCondition];
// Remove the original conditions from baseQuery to avoid duplication
Object.keys(baseQuery).forEach((key) => {
if (key !== '$and') delete baseQuery[key];
});
} else {
Object.assign(baseQuery, cursorCondition);
}
} catch (error) {
logger.warn('Invalid cursor:', error.message);
}
}
// Build aggregation pipeline
const pipeline = [{ $match: baseQuery }, { $sort: { updatedAt: -1, _id: 1 } }];
// Only apply limit if pagination is requested
if (isPaginated) {
pipeline.push({ $limit: normalizedLimit + 1 });
}
// Add lookup for production prompt
pipeline.push(
{
$lookup: {
from: 'prompts',
localField: 'productionId',
foreignField: '_id',
as: 'productionPrompt',
},
},
{ $unwind: { path: '$productionPrompt', preserveNullAndEmptyArrays: true } },
{
$project: {
name: 1,
numberOfGenerations: 1,
oneliner: 1,
category: 1,
projectIds: 1,
productionId: 1,
author: 1,
authorName: 1,
createdAt: 1,
updatedAt: 1,
'productionPrompt.prompt': 1,
},
},
);
const promptGroups = await PromptGroup.aggregate(pipeline).exec();
const hasMore = isPaginated ? promptGroups.length > normalizedLimit : false;
const data = (isPaginated ? promptGroups.slice(0, normalizedLimit) : promptGroups).map(
(group) => {
if (group.author) {
group.author = group.author.toString();
}
return group;
},
);
// Generate next cursor only if paginated
let nextCursor = null;
if (isPaginated && hasMore && data.length > 0) {
const lastGroup = promptGroups[normalizedLimit - 1];
nextCursor = Buffer.from(
JSON.stringify({
updatedAt: lastGroup.updatedAt.toISOString(),
_id: lastGroup._id.toString(),
}),
).toString('base64');
}
return {
object: 'list',
data,
first_id: data.length > 0 ? data[0]._id.toString() : null,
last_id: data.length > 0 ? data[data.length - 1]._id.toString() : null,
has_more: hasMore,
after: nextCursor,
};
}
module.exports = {
getPromptGroups,
deletePromptGroup,
getAllPromptGroups,
getListPromptGroupsByAccess,
/**
* Create a prompt and its respective group
* @param {TCreatePromptRecord} saveData
@@ -430,6 +559,16 @@ module.exports = {
.lean();
if (remainingPrompts.length === 0) {
// Remove all ACL entries for the promptGroup when deleting the last prompt
try {
await removeAllPermissions({
resourceType: ResourceType.PROMPTGROUP,
resourceId: groupId,
});
} catch (error) {
logger.error('Error removing promptGroup permissions:', error);
}
await PromptGroup.deleteOne({ _id: groupId });
await removeGroupFromAllProjects(groupId);
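getListPromptGroupsByAccess pages with an opaque cursor: a base64-encoded JSON object carrying the last item's updatedAt and _id, which the next call decodes into the $or keyset condition shown above. A small sketch of the round trip, using the same encoding as the function (the helper names are illustrative):

// Encode a cursor from the last prompt group on the current page.
function encodePromptGroupCursor(lastGroup) {
  return Buffer.from(
    JSON.stringify({
      updatedAt: lastGroup.updatedAt.toISOString(),
      _id: lastGroup._id.toString(),
    }),
  ).toString('base64');
}

// Decode it back into the { updatedAt, _id } pair used to build the keyset condition.
function decodePromptGroupCursor(after) {
  return JSON.parse(Buffer.from(after, 'base64').toString('utf8'));
}

Fetching all pages is then a loop: start with after = null and keep passing the returned after cursor until has_more is false.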

564
api/models/Prompt.spec.js Normal file
View File

@@ -0,0 +1,564 @@
const mongoose = require('mongoose');
const { ObjectId } = require('mongodb');
const { logger } = require('@librechat/data-schemas');
const { MongoMemoryServer } = require('mongodb-memory-server');
const {
SystemRoles,
ResourceType,
AccessRoleIds,
PrincipalType,
PermissionBits,
} = require('librechat-data-provider');
// Mock the config/connect module to prevent connection attempts during tests
jest.mock('../../config/connect', () => jest.fn().mockResolvedValue(true));
const dbModels = require('~/db/models');
// Disable console for tests
logger.silent = true;
let mongoServer;
let Prompt, PromptGroup, AclEntry, AccessRole, User, Group, Project;
let promptFns, permissionService;
let testUsers, testGroups, testRoles;
beforeAll(async () => {
// Set up MongoDB memory server
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
await mongoose.connect(mongoUri);
// Initialize models
Prompt = dbModels.Prompt;
PromptGroup = dbModels.PromptGroup;
AclEntry = dbModels.AclEntry;
AccessRole = dbModels.AccessRole;
User = dbModels.User;
Group = dbModels.Group;
Project = dbModels.Project;
promptFns = require('~/models/Prompt');
permissionService = require('~/server/services/PermissionService');
// Create test data
await setupTestData();
});
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
jest.clearAllMocks();
});
async function setupTestData() {
// Create access roles for promptGroups
testRoles = {
viewer: await AccessRole.create({
accessRoleId: AccessRoleIds.PROMPTGROUP_VIEWER,
name: 'Viewer',
description: 'Can view promptGroups',
resourceType: ResourceType.PROMPTGROUP,
permBits: PermissionBits.VIEW,
}),
editor: await AccessRole.create({
accessRoleId: AccessRoleIds.PROMPTGROUP_EDITOR,
name: 'Editor',
description: 'Can view and edit promptGroups',
resourceType: ResourceType.PROMPTGROUP,
permBits: PermissionBits.VIEW | PermissionBits.EDIT,
}),
owner: await AccessRole.create({
accessRoleId: AccessRoleIds.PROMPTGROUP_OWNER,
name: 'Owner',
description: 'Full control over promptGroups',
resourceType: ResourceType.PROMPTGROUP,
permBits:
PermissionBits.VIEW | PermissionBits.EDIT | PermissionBits.DELETE | PermissionBits.SHARE,
}),
};
// Create test users
testUsers = {
owner: await User.create({
name: 'Prompt Owner',
email: 'owner@example.com',
role: SystemRoles.USER,
}),
editor: await User.create({
name: 'Prompt Editor',
email: 'editor@example.com',
role: SystemRoles.USER,
}),
viewer: await User.create({
name: 'Prompt Viewer',
email: 'viewer@example.com',
role: SystemRoles.USER,
}),
admin: await User.create({
name: 'Admin User',
email: 'admin@example.com',
role: SystemRoles.ADMIN,
}),
noAccess: await User.create({
name: 'No Access User',
email: 'noaccess@example.com',
role: SystemRoles.USER,
}),
};
// Create test groups
testGroups = {
editors: await Group.create({
name: 'Prompt Editors',
description: 'Group with editor access',
}),
viewers: await Group.create({
name: 'Prompt Viewers',
description: 'Group with viewer access',
}),
};
await Project.create({
name: 'Global',
description: 'Global project',
promptGroupIds: [],
});
}
describe('Prompt ACL Permissions', () => {
describe('Creating Prompts with Permissions', () => {
it('should grant owner permissions when creating a prompt', async () => {
// First create a group
const testGroup = await PromptGroup.create({
name: 'Test Group',
category: 'testing',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new mongoose.Types.ObjectId(),
});
const promptData = {
prompt: {
prompt: 'Test prompt content',
name: 'Test Prompt',
type: 'text',
groupId: testGroup._id,
},
author: testUsers.owner._id,
};
await promptFns.savePrompt(promptData);
// Manually grant permissions as would happen in the route
await permissionService.grantPermission({
principalType: PrincipalType.USER,
principalId: testUsers.owner._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testGroup._id,
accessRoleId: AccessRoleIds.PROMPTGROUP_OWNER,
grantedBy: testUsers.owner._id,
});
// Check ACL entry
const aclEntry = await AclEntry.findOne({
resourceType: ResourceType.PROMPTGROUP,
resourceId: testGroup._id,
principalType: PrincipalType.USER,
principalId: testUsers.owner._id,
});
expect(aclEntry).toBeTruthy();
expect(aclEntry.permBits).toBe(testRoles.owner.permBits);
});
});
describe('Accessing Prompts', () => {
let testPromptGroup;
beforeEach(async () => {
// Create a prompt group
testPromptGroup = await PromptGroup.create({
name: 'Test Group',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new ObjectId(),
});
// Create a prompt
await Prompt.create({
prompt: 'Test prompt for access control',
name: 'Access Test Prompt',
author: testUsers.owner._id,
groupId: testPromptGroup._id,
type: 'text',
});
// Grant owner permissions
await permissionService.grantPermission({
principalType: PrincipalType.USER,
principalId: testUsers.owner._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
accessRoleId: AccessRoleIds.PROMPTGROUP_OWNER,
grantedBy: testUsers.owner._id,
});
});
afterEach(async () => {
await Prompt.deleteMany({});
await PromptGroup.deleteMany({});
await AclEntry.deleteMany({});
});
it('owner should have full access to their prompt', async () => {
const hasAccess = await permissionService.checkPermission({
userId: testUsers.owner._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
requiredPermission: PermissionBits.VIEW,
});
expect(hasAccess).toBe(true);
const canEdit = await permissionService.checkPermission({
userId: testUsers.owner._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
requiredPermission: PermissionBits.EDIT,
});
expect(canEdit).toBe(true);
});
it('user with viewer role should only have view access', async () => {
// Grant viewer permissions
await permissionService.grantPermission({
principalType: PrincipalType.USER,
principalId: testUsers.viewer._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
accessRoleId: AccessRoleIds.PROMPTGROUP_VIEWER,
grantedBy: testUsers.owner._id,
});
const canView = await permissionService.checkPermission({
userId: testUsers.viewer._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
requiredPermission: PermissionBits.VIEW,
});
const canEdit = await permissionService.checkPermission({
userId: testUsers.viewer._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
requiredPermission: PermissionBits.EDIT,
});
expect(canView).toBe(true);
expect(canEdit).toBe(false);
});
it('user without permissions should have no access', async () => {
const hasAccess = await permissionService.checkPermission({
userId: testUsers.noAccess._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
requiredPermission: PermissionBits.VIEW,
});
expect(hasAccess).toBe(false);
});
it('admin should have access regardless of permissions', async () => {
// Admin users should work through normal permission system
// The middleware layer handles admin bypass, not the permission service
const hasAccess = await permissionService.checkPermission({
userId: testUsers.admin._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
requiredPermission: PermissionBits.VIEW,
});
// Without explicit permissions, even admin won't have access at this layer
expect(hasAccess).toBe(false);
// The actual admin bypass happens in the middleware layer (`canAccessPromptViaGroup`/`canAccessPromptGroupResource`)
// which checks req.user.role === SystemRoles.ADMIN
});
});
describe('Group-based Access', () => {
let testPromptGroup;
beforeEach(async () => {
// Create a prompt group first
testPromptGroup = await PromptGroup.create({
name: 'Group Access Test Group',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new ObjectId(),
});
await Prompt.create({
prompt: 'Group access test prompt',
name: 'Group Test',
author: testUsers.owner._id,
groupId: testPromptGroup._id,
type: 'text',
});
// Add users to groups
await User.findByIdAndUpdate(testUsers.editor._id, {
$push: { groups: testGroups.editors._id },
});
await User.findByIdAndUpdate(testUsers.viewer._id, {
$push: { groups: testGroups.viewers._id },
});
});
afterEach(async () => {
await Prompt.deleteMany({});
await AclEntry.deleteMany({});
await User.updateMany({}, { $set: { groups: [] } });
});
it('group members should inherit group permissions', async () => {
// Create a prompt group
const testPromptGroup = await PromptGroup.create({
name: 'Group Test Group',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new ObjectId(),
});
const { addUserToGroup } = require('~/models');
await addUserToGroup(testUsers.editor._id, testGroups.editors._id);
const prompt = await promptFns.savePrompt({
author: testUsers.owner._id,
prompt: {
prompt: 'Group test prompt',
name: 'Group Test',
groupId: testPromptGroup._id,
type: 'text',
},
});
// Check if savePrompt returned an error
if (!prompt || !prompt.prompt) {
throw new Error(`Failed to save prompt: ${prompt?.message || 'Unknown error'}`);
}
// Grant edit permissions to the group
await permissionService.grantPermission({
principalType: PrincipalType.GROUP,
principalId: testGroups.editors._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
accessRoleId: AccessRoleIds.PROMPTGROUP_EDITOR,
grantedBy: testUsers.owner._id,
});
// Check if group member has access
const hasAccess = await permissionService.checkPermission({
userId: testUsers.editor._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
requiredPermission: PermissionBits.EDIT,
});
expect(hasAccess).toBe(true);
// Check that non-member doesn't have access
const nonMemberAccess = await permissionService.checkPermission({
userId: testUsers.viewer._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
requiredPermission: PermissionBits.EDIT,
});
expect(nonMemberAccess).toBe(false);
});
});
describe('Public Access', () => {
let publicPromptGroup, privatePromptGroup;
beforeEach(async () => {
// Create separate prompt groups for public and private access
publicPromptGroup = await PromptGroup.create({
name: 'Public Access Test Group',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new ObjectId(),
});
privatePromptGroup = await PromptGroup.create({
name: 'Private Access Test Group',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new ObjectId(),
});
// Create prompts in their respective groups
await Prompt.create({
prompt: 'Public prompt',
name: 'Public',
author: testUsers.owner._id,
groupId: publicPromptGroup._id,
type: 'text',
});
await Prompt.create({
prompt: 'Private prompt',
name: 'Private',
author: testUsers.owner._id,
groupId: privatePromptGroup._id,
type: 'text',
});
// Grant public view access to publicPromptGroup
await permissionService.grantPermission({
principalType: PrincipalType.PUBLIC,
principalId: null,
resourceType: ResourceType.PROMPTGROUP,
resourceId: publicPromptGroup._id,
accessRoleId: AccessRoleIds.PROMPTGROUP_VIEWER,
grantedBy: testUsers.owner._id,
});
// Grant only owner access to privatePromptGroup
await permissionService.grantPermission({
principalType: PrincipalType.USER,
principalId: testUsers.owner._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: privatePromptGroup._id,
accessRoleId: AccessRoleIds.PROMPTGROUP_OWNER,
grantedBy: testUsers.owner._id,
});
});
afterEach(async () => {
await Prompt.deleteMany({});
await PromptGroup.deleteMany({});
await AclEntry.deleteMany({});
});
it('public prompt should be accessible to any user', async () => {
const hasAccess = await permissionService.checkPermission({
userId: testUsers.noAccess._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: publicPromptGroup._id,
requiredPermission: PermissionBits.VIEW,
includePublic: true,
});
expect(hasAccess).toBe(true);
});
it('private prompt should not be accessible to unauthorized users', async () => {
const hasAccess = await permissionService.checkPermission({
userId: testUsers.noAccess._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: privatePromptGroup._id,
requiredPermission: PermissionBits.VIEW,
includePublic: true,
});
expect(hasAccess).toBe(false);
});
});
describe('Prompt Deletion', () => {
let testPromptGroup;
it('should remove ACL entries when prompt is deleted', async () => {
testPromptGroup = await PromptGroup.create({
name: 'Deletion Test Group',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new ObjectId(),
});
const prompt = await promptFns.savePrompt({
author: testUsers.owner._id,
prompt: {
prompt: 'To be deleted',
name: 'Delete Test',
groupId: testPromptGroup._id,
type: 'text',
},
});
// Check if savePrompt returned an error
if (!prompt || !prompt.prompt) {
throw new Error(`Failed to save prompt: ${prompt?.message || 'Unknown error'}`);
}
const testPromptId = prompt.prompt._id;
const promptGroupId = testPromptGroup._id;
// Grant permission
await permissionService.grantPermission({
principalType: PrincipalType.USER,
principalId: testUsers.owner._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
accessRoleId: AccessRoleIds.PROMPTGROUP_OWNER,
grantedBy: testUsers.owner._id,
});
// Verify ACL entry exists
const beforeDelete = await AclEntry.find({
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
});
expect(beforeDelete).toHaveLength(1);
// Delete the prompt
await promptFns.deletePrompt({
promptId: testPromptId,
groupId: promptGroupId,
author: testUsers.owner._id,
role: SystemRoles.USER,
});
// Verify ACL entries are removed
const aclEntries = await AclEntry.find({
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
});
expect(aclEntries).toHaveLength(0);
});
});
describe('Backwards Compatibility', () => {
it('should handle prompts without ACL entries gracefully', async () => {
// Create a prompt group first
const promptGroup = await PromptGroup.create({
name: 'Legacy Test Group',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new ObjectId(),
});
// Create a prompt without ACL entries (legacy prompt)
const legacyPrompt = await Prompt.create({
prompt: 'Legacy prompt without ACL',
name: 'Legacy',
author: testUsers.owner._id,
groupId: promptGroup._id,
type: 'text',
});
// The system should handle this gracefully
const prompt = await promptFns.getPrompt({ _id: legacyPrompt._id });
expect(prompt).toBeTruthy();
expect(prompt._id.toString()).toBe(legacyPrompt._id.toString());
});
});
});
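The roles created in setupTestData are plain bitmasks: VIEWER carries VIEW, EDITOR carries VIEW | EDIT, and OWNER carries VIEW | EDIT | DELETE | SHARE. Checking a required permission against a granted role then reduces to a bitwise AND; a minimal sketch of that composition (the real check lives in PermissionService, which also resolves group and public principals):

const { PermissionBits } = require('librechat-data-provider');

// Compose a role bitmask the same way setupTestData does.
const EDITOR_BITS = PermissionBits.VIEW | PermissionBits.EDIT;

// Illustrative bit check: does a granted bitmask satisfy the required permission?
function hasPermission(grantedBits, requiredPermission) {
  return (grantedBits & requiredPermission) === requiredPermission;
}

// hasPermission(EDITOR_BITS, PermissionBits.VIEW)   -> true
// hasPermission(EDITOR_BITS, PermissionBits.DELETE) -> false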

View File

@@ -0,0 +1,280 @@
const mongoose = require('mongoose');
const { ObjectId } = require('mongodb');
const { logger } = require('@librechat/data-schemas');
const { MongoMemoryServer } = require('mongodb-memory-server');
const {
Constants,
ResourceType,
AccessRoleIds,
PrincipalType,
PrincipalModel,
PermissionBits,
} = require('librechat-data-provider');
// Mock the config/connect module to prevent connection attempts during tests
jest.mock('../../config/connect', () => jest.fn().mockResolvedValue(true));
// Disable console for tests
logger.silent = true;
describe('PromptGroup Migration Script', () => {
let mongoServer;
let Prompt, PromptGroup, AclEntry, AccessRole, User, Project;
let migrateToPromptGroupPermissions;
let testOwner, testProject;
let ownerRole, viewerRole;
beforeAll(async () => {
// Set up MongoDB memory server
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
await mongoose.connect(mongoUri);
// Initialize models
const dbModels = require('~/db/models');
Prompt = dbModels.Prompt;
PromptGroup = dbModels.PromptGroup;
AclEntry = dbModels.AclEntry;
AccessRole = dbModels.AccessRole;
User = dbModels.User;
Project = dbModels.Project;
// Create test user
testOwner = await User.create({
name: 'Test Owner',
email: 'owner@test.com',
role: 'USER',
});
// Create test project with the proper name
const projectName = Constants.GLOBAL_PROJECT_NAME || 'instance';
testProject = await Project.create({
name: projectName,
description: 'Global project',
promptGroupIds: [],
});
// Create promptGroup access roles
ownerRole = await AccessRole.create({
accessRoleId: AccessRoleIds.PROMPTGROUP_OWNER,
name: 'Owner',
description: 'Full control over promptGroups',
resourceType: ResourceType.PROMPTGROUP,
permBits:
PermissionBits.VIEW | PermissionBits.EDIT | PermissionBits.DELETE | PermissionBits.SHARE,
});
viewerRole = await AccessRole.create({
accessRoleId: AccessRoleIds.PROMPTGROUP_VIEWER,
name: 'Viewer',
description: 'Can view promptGroups',
resourceType: ResourceType.PROMPTGROUP,
permBits: PermissionBits.VIEW,
});
await AccessRole.create({
accessRoleId: AccessRoleIds.PROMPTGROUP_EDITOR,
name: 'Editor',
description: 'Can view and edit promptGroups',
resourceType: ResourceType.PROMPTGROUP,
permBits: PermissionBits.VIEW | PermissionBits.EDIT,
});
// Import migration function
const migration = require('../../config/migrate-prompt-permissions');
migrateToPromptGroupPermissions = migration.migrateToPromptGroupPermissions;
});
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
// Clean up before each test
await Prompt.deleteMany({});
await PromptGroup.deleteMany({});
await AclEntry.deleteMany({});
// Reset the project's promptGroupIds array
testProject.promptGroupIds = [];
await testProject.save();
});
it('should categorize promptGroups correctly in dry run', async () => {
// Create global prompt group (in Global project)
const globalPromptGroup = await PromptGroup.create({
name: 'Global Group',
author: testOwner._id,
authorName: testOwner.name,
productionId: new ObjectId(),
});
// Create private prompt group (not in any project)
await PromptGroup.create({
name: 'Private Group',
author: testOwner._id,
authorName: testOwner.name,
productionId: new ObjectId(),
});
// Add global group to project's promptGroupIds array
testProject.promptGroupIds = [globalPromptGroup._id];
await testProject.save();
const result = await migrateToPromptGroupPermissions({ dryRun: true });
expect(result.dryRun).toBe(true);
expect(result.summary.total).toBe(2);
expect(result.summary.globalViewAccess).toBe(1);
expect(result.summary.privateGroups).toBe(1);
});
it('should grant appropriate permissions during migration', async () => {
// Create prompt groups
const globalPromptGroup = await PromptGroup.create({
name: 'Global Group',
author: testOwner._id,
authorName: testOwner.name,
productionId: new ObjectId(),
});
const privatePromptGroup = await PromptGroup.create({
name: 'Private Group',
author: testOwner._id,
authorName: testOwner.name,
productionId: new ObjectId(),
});
// Add global group to project's promptGroupIds array
testProject.promptGroupIds = [globalPromptGroup._id];
await testProject.save();
const result = await migrateToPromptGroupPermissions({ dryRun: false });
expect(result.migrated).toBe(2);
expect(result.errors).toBe(0);
expect(result.ownerGrants).toBe(2);
expect(result.publicViewGrants).toBe(1);
// Check global promptGroup permissions
const globalOwnerEntry = await AclEntry.findOne({
resourceType: ResourceType.PROMPTGROUP,
resourceId: globalPromptGroup._id,
principalType: PrincipalType.USER,
principalId: testOwner._id,
});
expect(globalOwnerEntry).toBeTruthy();
expect(globalOwnerEntry.permBits).toBe(ownerRole.permBits);
const globalPublicEntry = await AclEntry.findOne({
resourceType: ResourceType.PROMPTGROUP,
resourceId: globalPromptGroup._id,
principalType: PrincipalType.PUBLIC,
});
expect(globalPublicEntry).toBeTruthy();
expect(globalPublicEntry.permBits).toBe(viewerRole.permBits);
// Check private promptGroup permissions
const privateOwnerEntry = await AclEntry.findOne({
resourceType: ResourceType.PROMPTGROUP,
resourceId: privatePromptGroup._id,
principalType: PrincipalType.USER,
principalId: testOwner._id,
});
expect(privateOwnerEntry).toBeTruthy();
expect(privateOwnerEntry.permBits).toBe(ownerRole.permBits);
const privatePublicEntry = await AclEntry.findOne({
resourceType: ResourceType.PROMPTGROUP,
resourceId: privatePromptGroup._id,
principalType: PrincipalType.PUBLIC,
});
expect(privatePublicEntry).toBeNull();
});
it('should skip promptGroups that already have ACL entries', async () => {
// Create prompt groups
const promptGroup1 = await PromptGroup.create({
name: 'Group 1',
author: testOwner._id,
authorName: testOwner.name,
productionId: new ObjectId(),
});
const promptGroup2 = await PromptGroup.create({
name: 'Group 2',
author: testOwner._id,
authorName: testOwner.name,
productionId: new ObjectId(),
});
// Grant permission to one promptGroup manually (simulating it already has ACL)
await AclEntry.create({
principalType: PrincipalType.USER,
principalId: testOwner._id,
principalModel: PrincipalModel.USER,
resourceType: ResourceType.PROMPTGROUP,
resourceId: promptGroup1._id,
permBits: ownerRole.permBits,
roleId: ownerRole._id,
grantedBy: testOwner._id,
grantedAt: new Date(),
});
const result = await migrateToPromptGroupPermissions({ dryRun: false });
// Should only migrate promptGroup2, skip promptGroup1
expect(result.migrated).toBe(1);
expect(result.errors).toBe(0);
// Verify promptGroup2 now has permissions
const group2Entry = await AclEntry.findOne({
resourceType: ResourceType.PROMPTGROUP,
resourceId: promptGroup2._id,
});
expect(group2Entry).toBeTruthy();
});
it('should handle promptGroups with prompts correctly', async () => {
// Create a promptGroup with some prompts
const promptGroup = await PromptGroup.create({
name: 'Group with Prompts',
author: testOwner._id,
authorName: testOwner.name,
productionId: new ObjectId(),
});
// Create some prompts in this group
await Prompt.create({
prompt: 'First prompt',
author: testOwner._id,
groupId: promptGroup._id,
type: 'text',
});
await Prompt.create({
prompt: 'Second prompt',
author: testOwner._id,
groupId: promptGroup._id,
type: 'text',
});
const result = await migrateToPromptGroupPermissions({ dryRun: false });
expect(result.migrated).toBe(1);
expect(result.errors).toBe(0);
// Verify the promptGroup has permissions
const groupEntry = await AclEntry.findOne({
resourceType: ResourceType.PROMPTGROUP,
resourceId: promptGroup._id,
});
expect(groupEntry).toBeTruthy();
// Verify no prompt-level permissions were created
const promptEntries = await AclEntry.find({
resourceType: 'prompt',
});
expect(promptEntries).toHaveLength(0);
});
});

View File

@@ -2,7 +2,6 @@ const {
CacheKeys,
SystemRoles,
roleDefaults,
PermissionTypes,
permissionsSchema,
removeNullishValues,
} = require('librechat-data-provider');
@@ -17,7 +16,7 @@ const { Role } = require('~/db/models');
*
* @param {string} roleName - The name of the role to find or create.
* @param {string|string[]} [fieldsToSelect] - The fields to include or exclude in the returned document.
* @returns {Promise<Object>} A plain object representing the role document.
* @returns {Promise<IRole>} Role document.
*/
const getRoleByName = async function (roleName, fieldsToSelect = null) {
const cache = getLogStores(CacheKeys.ROLES);
@@ -73,8 +72,9 @@ const updateRoleByName = async function (roleName, updates) {
* Updates access permissions for a specific role and multiple permission types.
* @param {string} roleName - The role to update.
* @param {Object.<PermissionTypes, Object.<Permissions, boolean>>} permissionsUpdate - Permissions to update and their values.
* @param {IRole} [roleData] - Optional role data to use instead of fetching from the database.
*/
async function updateAccessPermissions(roleName, permissionsUpdate) {
async function updateAccessPermissions(roleName, permissionsUpdate, roleData) {
// Filter and clean the permission updates based on our schema definition.
const updates = {};
for (const [permissionType, permissions] of Object.entries(permissionsUpdate)) {
@@ -87,7 +87,7 @@ async function updateAccessPermissions(roleName, permissionsUpdate) {
}
try {
const role = await getRoleByName(roleName);
const role = roleData ?? (await getRoleByName(roleName));
if (!role) {
return;
}
@@ -114,7 +114,6 @@ async function updateAccessPermissions(roleName, permissionsUpdate) {
}
}
// Process the current updates
for (const [permissionType, permissions] of Object.entries(updates)) {
const currentTypePermissions = currentPermissions[permissionType] || {};
updatedPermissions[permissionType] = { ...currentTypePermissions };

View File

@@ -1,5 +1,4 @@
const { logger } = require('@librechat/data-schemas');
const { getBalanceConfig } = require('~/server/services/Config');
const { getMultiplier, getCacheMultiplier } = require('./tx');
const { Transaction, Balance } = require('~/db/models');
@@ -187,9 +186,10 @@ async function createAutoRefillTransaction(txData) {
/**
* Static method to create a transaction and update the balance
* @param {txData} txData - Transaction data.
* @param {txData} _txData - Transaction data.
*/
async function createTransaction(txData) {
async function createTransaction(_txData) {
const { balance, ...txData } = _txData;
if (txData.rawAmount != null && isNaN(txData.rawAmount)) {
return;
}
@@ -199,8 +199,6 @@ async function createTransaction(txData) {
calculateTokenValue(transaction);
await transaction.save();
const balance = await getBalanceConfig();
if (!balance?.enabled) {
return;
}
@@ -221,9 +219,10 @@ async function createTransaction(txData) {
/**
* Static method to create a structured transaction and update the balance
* @param {txData} txData - Transaction data.
* @param {txData} _txData - Transaction data.
*/
async function createStructuredTransaction(txData) {
async function createStructuredTransaction(_txData) {
const { balance, ...txData } = _txData;
const transaction = new Transaction({
...txData,
endpointTokenConfig: txData.endpointTokenConfig,
@@ -233,7 +232,6 @@ async function createStructuredTransaction(txData) {
await transaction.save();
const balance = await getBalanceConfig();
if (!balance?.enabled) {
return;
}
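With this refactor, createTransaction and createStructuredTransaction no longer call getBalanceConfig themselves; they destructure a balance object out of txData and only update the balance when balance.enabled is true. A sketch of the new calling convention, assuming the caller has already resolved the balance settings (the wrapper name here is hypothetical; the field names mirror the spec files below):

const { createTransaction } = require('./Transaction');

// Hypothetical caller: pass the resolved balance config alongside the transaction data.
async function recordCompletionSpend({ userId, conversationId, model, completionTokens, balanceConfig }) {
  await createTransaction({
    user: userId,
    conversationId,
    model,
    context: 'message',
    tokenType: 'completion',
    rawAmount: -completionTokens,
    balance: balanceConfig ?? { enabled: false }, // previously fetched via getBalanceConfig()
  });
}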

View File

@@ -1,14 +1,11 @@
const mongoose = require('mongoose');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { spendTokens, spendStructuredTokens } = require('./spendTokens');
const { getBalanceConfig } = require('~/server/services/Config');
const { getMultiplier, getCacheMultiplier } = require('./tx');
const { createTransaction } = require('./Transaction');
const { Balance } = require('~/db/models');
// Mock the custom config module so we can control the balance flag.
jest.mock('~/server/services/Config');
let mongoServer;
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
@@ -23,8 +20,6 @@ afterAll(async () => {
beforeEach(async () => {
await mongoose.connection.dropDatabase();
// Default: enable balance updates in tests.
getBalanceConfig.mockResolvedValue({ enabled: true });
});
describe('Regular Token Spending Tests', () => {
@@ -41,6 +36,7 @@ describe('Regular Token Spending Tests', () => {
model,
context: 'test',
endpointTokenConfig: null,
balance: { enabled: true },
};
const tokenUsage = {
@@ -74,6 +70,7 @@ describe('Regular Token Spending Tests', () => {
model,
context: 'test',
endpointTokenConfig: null,
balance: { enabled: true },
};
const tokenUsage = {
@@ -104,6 +101,7 @@ describe('Regular Token Spending Tests', () => {
model,
context: 'test',
endpointTokenConfig: null,
balance: { enabled: true },
};
const tokenUsage = {};
@@ -128,6 +126,7 @@ describe('Regular Token Spending Tests', () => {
model,
context: 'test',
endpointTokenConfig: null,
balance: { enabled: true },
};
const tokenUsage = { promptTokens: 100 };
@@ -143,8 +142,7 @@ describe('Regular Token Spending Tests', () => {
});
test('spendTokens should not update balance when balance feature is disabled', async () => {
// Arrange: Override the config to disable balance updates.
getBalanceConfig.mockResolvedValue({ balance: { enabled: false } });
// Arrange: Balance config is now passed directly in txData
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000;
await Balance.create({ user: userId, tokenCredits: initialBalance });
@@ -156,6 +154,7 @@ describe('Regular Token Spending Tests', () => {
model,
context: 'test',
endpointTokenConfig: null,
balance: { enabled: false },
};
const tokenUsage = {
@@ -186,6 +185,7 @@ describe('Structured Token Spending Tests', () => {
model,
context: 'message',
endpointTokenConfig: null,
balance: { enabled: true },
};
const tokenUsage = {
@@ -239,6 +239,7 @@ describe('Structured Token Spending Tests', () => {
conversationId: 'test-convo',
model,
context: 'message',
balance: { enabled: true },
};
const tokenUsage = {
@@ -271,6 +272,7 @@ describe('Structured Token Spending Tests', () => {
conversationId: 'test-convo',
model,
context: 'message',
balance: { enabled: true },
};
const tokenUsage = {
@@ -302,6 +304,7 @@ describe('Structured Token Spending Tests', () => {
conversationId: 'test-convo',
model,
context: 'message',
balance: { enabled: true },
};
const tokenUsage = {};
@@ -328,6 +331,7 @@ describe('Structured Token Spending Tests', () => {
conversationId: 'test-convo',
model,
context: 'incomplete',
balance: { enabled: true },
};
const tokenUsage = {
@@ -364,6 +368,7 @@ describe('NaN Handling Tests', () => {
endpointTokenConfig: null,
rawAmount: NaN,
tokenType: 'prompt',
balance: { enabled: true },
};
// Act

View File

@@ -118,7 +118,7 @@ const addIntervalToDate = (date, value, unit) => {
* @async
* @function
* @param {Object} params - The function parameters.
* @param {Express.Request} params.req - The Express request object.
* @param {ServerRequest} params.req - The Express request object.
* @param {Express.Response} params.res - The Express response object.
* @param {Object} params.txData - The transaction data.
* @param {string} params.txData.user - The user ID or identifier.

View File

@@ -1,47 +1,9 @@
const mongoose = require('mongoose');
const { buildTree } = require('librechat-data-provider');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { getMessages, bulkSaveMessages } = require('./Message');
const { Message } = require('~/db/models');
// Original version of buildTree function
function buildTree({ messages, fileMap }) {
if (messages === null) {
return null;
}
const messageMap = {};
const rootMessages = [];
const childrenCount = {};
messages.forEach((message) => {
const parentId = message.parentMessageId ?? '';
childrenCount[parentId] = (childrenCount[parentId] || 0) + 1;
const extendedMessage = {
...message,
children: [],
depth: 0,
siblingIndex: childrenCount[parentId] - 1,
};
if (message.files && fileMap) {
extendedMessage.files = message.files.map((file) => fileMap[file.file_id ?? ''] ?? file);
}
messageMap[message.messageId] = extendedMessage;
const parentMessage = messageMap[parentId];
if (parentMessage) {
parentMessage.children.push(extendedMessage);
extendedMessage.depth = parentMessage.depth + 1;
} else {
rootMessages.push(extendedMessage);
}
});
return rootMessages;
}
let mongod;
beforeAll(async () => {
mongod = await MongoMemoryServer.create();

View File

@@ -22,9 +22,17 @@ const {
} = require('./Message');
const { getConvoTitle, getConvo, saveConvo, deleteConvos } = require('./Conversation');
const { getPreset, getPresets, savePreset, deletePresets } = require('./Preset');
const { File } = require('~/db/models');
const seedDatabase = async () => {
await methods.initializeRoles();
await methods.seedDefaultRoles();
await methods.ensureDefaultCategories();
};
module.exports = {
...methods,
seedDatabase,
comparePassword,
findFileById,
createFile,
@@ -51,4 +59,6 @@ module.exports = {
getPresets,
savePreset,
deletePresets,
Files: File,
};

24
api/models/interface.js Normal file
View File

@@ -0,0 +1,24 @@
const { logger } = require('@librechat/data-schemas');
const { updateInterfacePermissions: updateInterfacePerms } = require('@librechat/api');
const { getRoleByName, updateAccessPermissions } = require('./Role');
/**
* Update interface permissions based on app configuration.
* Must be done independently from loading the app config.
* @param {AppConfig} appConfig
*/
async function updateInterfacePermissions(appConfig) {
try {
await updateInterfacePerms({
appConfig,
getRoleByName,
updateAccessPermissions,
});
} catch (error) {
logger.error('Error updating interface permissions:', error);
}
}
module.exports = {
updateInterfacePermissions,
};

View File

@@ -5,13 +5,7 @@ const { createTransaction, createStructuredTransaction } = require('./Transactio
*
* @function
* @async
* @param {Object} txData - Transaction data.
* @param {mongoose.Schema.Types.ObjectId} txData.user - The user ID.
* @param {String} txData.conversationId - The ID of the conversation.
* @param {String} txData.model - The model name.
* @param {String} txData.context - The context in which the transaction is made.
* @param {EndpointTokenConfig} [txData.endpointTokenConfig] - The current endpoint token config.
* @param {String} [txData.valueKey] - The value key (optional).
* @param {txData} txData - Transaction data.
* @param {Object} tokenUsage - The number of tokens used.
* @param {Number} tokenUsage.promptTokens - The number of prompt tokens used.
* @param {Number} tokenUsage.completionTokens - The number of completion tokens used.
@@ -69,13 +63,7 @@ const spendTokens = async (txData, tokenUsage) => {
*
* @function
* @async
* @param {Object} txData - Transaction data.
* @param {mongoose.Schema.Types.ObjectId} txData.user - The user ID.
* @param {String} txData.conversationId - The ID of the conversation.
* @param {String} txData.model - The model name.
* @param {String} txData.context - The context in which the transaction is made.
* @param {EndpointTokenConfig} [txData.endpointTokenConfig] - The current endpoint token config.
* @param {String} [txData.valueKey] - The value key (optional).
* @param {txData} txData - Transaction data.
* @param {Object} tokenUsage - The number of tokens used.
* @param {Object} tokenUsage.promptTokens - The number of prompt tokens used.
* @param {Number} tokenUsage.promptTokens.input - The number of input tokens.

View File

@@ -5,7 +5,6 @@ const { createTransaction, createAutoRefillTransaction } = require('./Transactio
require('~/db/models');
// Mock the logger to prevent console output during tests
jest.mock('~/config', () => ({
logger: {
debug: jest.fn(),
@@ -13,10 +12,6 @@ jest.mock('~/config', () => ({
},
}));
// Mock the Config service
const { getBalanceConfig } = require('~/server/services/Config');
jest.mock('~/server/services/Config');
describe('spendTokens', () => {
let mongoServer;
let userId;
@@ -44,8 +39,7 @@ describe('spendTokens', () => {
// Create a new user ID for each test
userId = new mongoose.Types.ObjectId();
// Mock the balance config to be enabled by default
getBalanceConfig.mockResolvedValue({ enabled: true });
// Balance config is now passed directly in txData
});
it('should create transactions for both prompt and completion tokens', async () => {
@@ -60,6 +54,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'gpt-3.5-turbo',
context: 'test',
balance: { enabled: true },
};
const tokenUsage = {
promptTokens: 100,
@@ -98,6 +93,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'gpt-3.5-turbo',
context: 'test',
balance: { enabled: true },
};
const tokenUsage = {
promptTokens: 100,
@@ -127,6 +123,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'gpt-3.5-turbo',
context: 'test',
balance: { enabled: true },
};
const tokenUsage = {};
@@ -138,8 +135,7 @@ describe('spendTokens', () => {
});
it('should not update balance when the balance feature is disabled', async () => {
// Override configuration: disable balance updates
getBalanceConfig.mockResolvedValue({ enabled: false });
// Balance is now passed directly in txData
// Create a balance for the user
await Balance.create({
user: userId,
@@ -151,6 +147,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'gpt-3.5-turbo',
context: 'test',
balance: { enabled: false },
};
const tokenUsage = {
promptTokens: 100,
@@ -180,6 +177,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'gpt-4', // Using a more expensive model
context: 'test',
balance: { enabled: true },
};
// Spending more tokens than the user has balance for
@@ -233,6 +231,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo-1',
model: 'gpt-4',
context: 'test',
balance: { enabled: true },
};
const tokenUsage1 = {
@@ -252,6 +251,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo-2',
model: 'gpt-4',
context: 'test',
balance: { enabled: true },
};
const tokenUsage2 = {
@@ -292,6 +292,7 @@ describe('spendTokens', () => {
tokenType: 'completion',
rawAmount: -100,
context: 'test',
balance: { enabled: true },
});
console.log('Direct Transaction.create result:', directResult);
@@ -316,6 +317,7 @@ describe('spendTokens', () => {
conversationId: `test-convo-${model}`,
model,
context: 'test',
balance: { enabled: true },
};
const tokenUsage = {
@@ -352,6 +354,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo-1',
model: 'claude-3-5-sonnet',
context: 'test',
balance: { enabled: true },
};
const tokenUsage1 = {
@@ -375,6 +378,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo-2',
model: 'claude-3-5-sonnet',
context: 'test',
balance: { enabled: true },
};
const tokenUsage2 = {
@@ -426,6 +430,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'claude-3-5-sonnet', // Using a model that supports structured tokens
context: 'test',
balance: { enabled: true },
};
// Spending more tokens than the user has balance for
@@ -505,6 +510,7 @@ describe('spendTokens', () => {
conversationId,
user: userId,
model: usage.model,
balance: { enabled: true },
};
// Calculate expected spend for this transaction
@@ -617,6 +623,7 @@ describe('spendTokens', () => {
tokenType: 'credits',
context: 'concurrent-refill-test',
rawAmount: refillAmount,
balance: { enabled: true },
}),
);
}
@@ -683,6 +690,7 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'claude-3-5-sonnet',
context: 'test',
balance: { enabled: true },
};
const tokenUsage = {
promptTokens: {

View File

@@ -1,4 +1,4 @@
const { matchModelName } = require('../utils');
const { matchModelName } = require('../utils/tokens');
const defaultRate = 6;
/**
@@ -87,6 +87,9 @@ const tokenValues = Object.assign(
'gpt-4.1': { prompt: 2, completion: 8 },
'gpt-4.5': { prompt: 75, completion: 150 },
'gpt-4o-mini': { prompt: 0.15, completion: 0.6 },
'gpt-5': { prompt: 1.25, completion: 10 },
'gpt-5-mini': { prompt: 0.25, completion: 2 },
'gpt-5-nano': { prompt: 0.05, completion: 0.4 },
'gpt-4o': { prompt: 2.5, completion: 10 },
'gpt-4o-2024-05-13': { prompt: 5, completion: 15 },
'gpt-4-1106': { prompt: 10, completion: 30 },
@@ -147,6 +150,9 @@ const tokenValues = Object.assign(
codestral: { prompt: 0.3, completion: 0.9 },
'ministral-8b': { prompt: 0.1, completion: 0.1 },
'ministral-3b': { prompt: 0.04, completion: 0.04 },
// GPT-OSS models
'gpt-oss-20b': { prompt: 0.05, completion: 0.2 },
'gpt-oss-120b': { prompt: 0.15, completion: 0.6 },
},
bedrockValues,
);
@@ -214,6 +220,12 @@ const getValueKey = (model, endpoint) => {
return 'gpt-4.1';
} else if (modelName.includes('gpt-4o-2024-05-13')) {
return 'gpt-4o-2024-05-13';
} else if (modelName.includes('gpt-5-nano')) {
return 'gpt-5-nano';
} else if (modelName.includes('gpt-5-mini')) {
return 'gpt-5-mini';
} else if (modelName.includes('gpt-5')) {
return 'gpt-5';
} else if (modelName.includes('gpt-4o-mini')) {
return 'gpt-4o-mini';
} else if (modelName.includes('gpt-4o')) {
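The new gpt-5 entries follow the existing pattern in tokenValues: getValueKey matches the most specific substring first (gpt-5-nano, then gpt-5-mini, then gpt-5), and getMultiplier returns the prompt or completion rate for the resolved key. Assuming the rates are expressed per million tokens, as the neighboring OpenAI entries appear to be, a rough cost estimate looks like this (illustrative only):

const { getValueKey, getMultiplier } = require('./tx');

// Rough cost estimate for a request, assuming multipliers are USD per 1M tokens.
function estimateCostUSD({ model, promptTokens, completionTokens }) {
  const valueKey = getValueKey(model); // e.g. 'gpt-5-mini' for 'openai/gpt-5-mini-2025-01-30'
  const promptRate = getMultiplier({ valueKey, tokenType: 'prompt' });
  const completionRate = getMultiplier({ valueKey, tokenType: 'completion' });
  return (promptTokens * promptRate + completionTokens * completionRate) / 1_000_000;
}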

View File

@@ -25,8 +25,14 @@ describe('getValueKey', () => {
expect(getValueKey('gpt-4-some-other-info')).toBe('8k');
});
it('should return undefined for model names that do not match any known patterns', () => {
expect(getValueKey('gpt-5-some-other-info')).toBeUndefined();
it('should return "gpt-5" for model name containing "gpt-5"', () => {
expect(getValueKey('gpt-5-some-other-info')).toBe('gpt-5');
expect(getValueKey('gpt-5-2025-01-30')).toBe('gpt-5');
expect(getValueKey('gpt-5-2025-01-30-0130')).toBe('gpt-5');
expect(getValueKey('openai/gpt-5')).toBe('gpt-5');
expect(getValueKey('openai/gpt-5-2025-01-30')).toBe('gpt-5');
expect(getValueKey('gpt-5-turbo')).toBe('gpt-5');
expect(getValueKey('gpt-5-0130')).toBe('gpt-5');
});
it('should return "gpt-3.5-turbo-1106" for model name containing "gpt-3.5-turbo-1106"', () => {
@@ -84,6 +90,29 @@ describe('getValueKey', () => {
expect(getValueKey('gpt-4.1-nano-0125')).toBe('gpt-4.1-nano');
});
it('should return "gpt-5" for model type of "gpt-5"', () => {
expect(getValueKey('gpt-5-2025-01-30')).toBe('gpt-5');
expect(getValueKey('gpt-5-2025-01-30-0130')).toBe('gpt-5');
expect(getValueKey('openai/gpt-5')).toBe('gpt-5');
expect(getValueKey('openai/gpt-5-2025-01-30')).toBe('gpt-5');
expect(getValueKey('gpt-5-turbo')).toBe('gpt-5');
expect(getValueKey('gpt-5-0130')).toBe('gpt-5');
});
it('should return "gpt-5-mini" for model type of "gpt-5-mini"', () => {
expect(getValueKey('gpt-5-mini-2025-01-30')).toBe('gpt-5-mini');
expect(getValueKey('openai/gpt-5-mini')).toBe('gpt-5-mini');
expect(getValueKey('gpt-5-mini-0130')).toBe('gpt-5-mini');
expect(getValueKey('gpt-5-mini-2025-01-30-0130')).toBe('gpt-5-mini');
});
it('should return "gpt-5-nano" for model type of "gpt-5-nano"', () => {
expect(getValueKey('gpt-5-nano-2025-01-30')).toBe('gpt-5-nano');
expect(getValueKey('openai/gpt-5-nano')).toBe('gpt-5-nano');
expect(getValueKey('gpt-5-nano-0130')).toBe('gpt-5-nano');
expect(getValueKey('gpt-5-nano-2025-01-30-0130')).toBe('gpt-5-nano');
});
it('should return "gpt-4o" for model type of "gpt-4o"', () => {
expect(getValueKey('gpt-4o-2024-08-06')).toBe('gpt-4o');
expect(getValueKey('gpt-4o-2024-08-06-0718')).toBe('gpt-4o');
@@ -207,6 +236,48 @@ describe('getMultiplier', () => {
);
});
it('should return the correct multiplier for gpt-5', () => {
const valueKey = getValueKey('gpt-5-2025-01-30');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-5'].prompt);
expect(getMultiplier({ valueKey, tokenType: 'completion' })).toBe(
tokenValues['gpt-5'].completion,
);
expect(getMultiplier({ model: 'gpt-5-preview', tokenType: 'prompt' })).toBe(
tokenValues['gpt-5'].prompt,
);
expect(getMultiplier({ model: 'openai/gpt-5', tokenType: 'completion' })).toBe(
tokenValues['gpt-5'].completion,
);
});
it('should return the correct multiplier for gpt-5-mini', () => {
const valueKey = getValueKey('gpt-5-mini-2025-01-30');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-5-mini'].prompt);
expect(getMultiplier({ valueKey, tokenType: 'completion' })).toBe(
tokenValues['gpt-5-mini'].completion,
);
expect(getMultiplier({ model: 'gpt-5-mini-preview', tokenType: 'prompt' })).toBe(
tokenValues['gpt-5-mini'].prompt,
);
expect(getMultiplier({ model: 'openai/gpt-5-mini', tokenType: 'completion' })).toBe(
tokenValues['gpt-5-mini'].completion,
);
});
it('should return the correct multiplier for gpt-5-nano', () => {
const valueKey = getValueKey('gpt-5-nano-2025-01-30');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-5-nano'].prompt);
expect(getMultiplier({ valueKey, tokenType: 'completion' })).toBe(
tokenValues['gpt-5-nano'].completion,
);
expect(getMultiplier({ model: 'gpt-5-nano-preview', tokenType: 'prompt' })).toBe(
tokenValues['gpt-5-nano'].prompt,
);
expect(getMultiplier({ model: 'openai/gpt-5-nano', tokenType: 'completion' })).toBe(
tokenValues['gpt-5-nano'].completion,
);
});
it('should return the correct multiplier for gpt-4o', () => {
const valueKey = getValueKey('gpt-4o-2024-08-06');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-4o'].prompt);
@@ -307,10 +378,22 @@ describe('getMultiplier', () => {
});
it('should return defaultRate if derived valueKey does not match any known patterns', () => {
expect(getMultiplier({ tokenType: 'prompt', model: 'gpt-5-some-other-info' })).toBe(
expect(getMultiplier({ tokenType: 'prompt', model: 'gpt-10-some-other-info' })).toBe(
defaultRate,
);
});
it('should return correct multipliers for GPT-OSS models', () => {
const models = ['gpt-oss-20b', 'gpt-oss-120b'];
models.forEach((key) => {
const expectedPrompt = tokenValues[key].prompt;
const expectedCompletion = tokenValues[key].completion;
expect(getMultiplier({ valueKey: key, tokenType: 'prompt' })).toBe(expectedPrompt);
expect(getMultiplier({ valueKey: key, tokenType: 'completion' })).toBe(expectedCompletion);
expect(getMultiplier({ model: key, tokenType: 'prompt' })).toBe(expectedPrompt);
expect(getMultiplier({ model: key, tokenType: 'completion' })).toBe(expectedCompletion);
});
});
});
describe('AWS Bedrock Model Tests', () => {

View File

@@ -3,7 +3,7 @@ const bcrypt = require('bcryptjs');
/**
* Compares the provided password with the user's password.
*
* @param {MongoUser} user - The user to compare the password for.
* @param {IUser} user - The user to compare the password for.
* @param {string} candidatePassword - The password to test against the user's password.
* @returns {Promise<boolean>} A promise that resolves to a boolean indicating if the password matches.
*/

View File

@@ -1,6 +1,6 @@
{
"name": "@librechat/backend",
"version": "v0.7.9",
"version": "v0.8.0-rc3",
"description": "",
"scripts": {
"start": "echo 'please run this from the root directory'",
@@ -49,10 +49,12 @@
"@langchain/google-vertexai": "^0.2.13",
"@langchain/openai": "^0.5.18",
"@langchain/textsplitters": "^0.1.0",
"@librechat/agents": "^2.4.67",
"@librechat/agents": "^2.4.76",
"@librechat/api": "*",
"@librechat/data-schemas": "*",
"@node-saml/passport-saml": "^5.0.0",
"@microsoft/microsoft-graph-client": "^3.0.7",
"@modelcontextprotocol/sdk": "^1.17.1",
"@node-saml/passport-saml": "^5.1.0",
"@waylaidwanderer/fetch-event-source": "^3.0.1",
"axios": "^1.8.2",
"bcryptjs": "^2.4.3",
@@ -95,7 +97,6 @@
"nodemailer": "^6.9.15",
"ollama": "^0.5.0",
"openai": "^5.10.1",
"openai-chat-tokens": "^0.2.8",
"openid-client": "^6.5.0",
"passport": "^0.6.0",
"passport-apple": "^2.0.2",
@@ -119,7 +120,7 @@
},
"devDependencies": {
"jest": "^29.7.0",
"mongodb-memory-server": "^10.1.3",
"mongodb-memory-server": "^10.1.4",
"nodemon": "^3.0.3",
"supertest": "^7.1.0"
}

View File

@@ -1,6 +1,6 @@
const { logger } = require('~/config');
const { logger } = require('@librechat/data-schemas');
// WeakMap to hold temporary data associated with requests
/** WeakMap to hold temporary data associated with requests */
const requestDataMap = new WeakMap();
const FinalizationRegistry = global.FinalizationRegistry || null;
@@ -23,7 +23,7 @@ const clientRegistry = FinalizationRegistry
} else {
logger.debug('[FinalizationRegistry] Cleaning up client');
}
} catch (e) {
} catch {
// Ignore errors
}
})
@@ -55,6 +55,9 @@ function disposeClient(client) {
if (client.responseMessageId) {
client.responseMessageId = null;
}
if (client.parentMessageId) {
client.parentMessageId = null;
}
if (client.message_file_map) {
client.message_file_map = null;
}
@@ -334,7 +337,7 @@ function disposeClient(client) {
}
}
client.options = null;
} catch (e) {
} catch {
// Ignore errors during disposal
}
}

View File

@@ -12,6 +12,7 @@ const {
} = require('~/server/services/AuthService');
const { findUser, getUserById, deleteAllUserSessions, findSession } = require('~/models');
const { getOpenIdConfig } = require('~/strategies');
const { getGraphApiToken } = require('~/server/services/GraphTokenService');
const registrationController = async (req, res) => {
try {
@@ -83,7 +84,7 @@ const refreshController = async (req, res) => {
}
try {
const payload = jwt.verify(refreshToken, process.env.JWT_REFRESH_SECRET);
const user = await getUserById(payload.id, '-password -__v -totpSecret');
const user = await getUserById(payload.id, '-password -__v -totpSecret -backupCodes');
if (!user) {
return res.status(401).redirect('/login');
}
@@ -118,9 +119,54 @@ const refreshController = async (req, res) => {
}
};
const graphTokenController = async (req, res) => {
try {
// Validate user is authenticated via Entra ID
if (!req.user.openidId || req.user.provider !== 'openid') {
return res.status(403).json({
message: 'Microsoft Graph access requires Entra ID authentication',
});
}
// Check if OpenID token reuse is active (required for on-behalf-of flow)
if (!isEnabled(process.env.OPENID_REUSE_TOKENS)) {
return res.status(403).json({
message: 'SharePoint integration requires OpenID token reuse to be enabled',
});
}
// Extract access token from Authorization header
const authHeader = req.headers.authorization;
if (!authHeader || !authHeader.startsWith('Bearer ')) {
return res.status(401).json({
message: 'Valid authorization token required',
});
}
// Get scopes from query parameters
const scopes = req.query.scopes;
if (!scopes) {
return res.status(400).json({
message: 'Graph API scopes are required as a query parameter',
});
}
const accessToken = authHeader.substring(7); // Remove 'Bearer ' prefix
const tokenResponse = await getGraphApiToken(req.user, accessToken, scopes);
res.json(tokenResponse);
} catch (error) {
logger.error('[graphTokenController] Failed to obtain Graph API token:', error);
res.status(500).json({
message: 'Failed to obtain Microsoft Graph token',
});
}
};
module.exports = {
refreshController,
registrationController,
resetPasswordController,
resetPasswordRequestController,
graphTokenController,
};
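The new graphTokenController exchanges the caller's OpenID access token for a Microsoft Graph token (on-behalf-of flow) and rejects the request unless the user authenticated via Entra ID and OPENID_REUSE_TOKENS is enabled. A hedged client-side sketch follows; the mount path /api/auth/graph-token and the scope value are assumptions, while the Bearer header and the scopes query parameter come from the controller above:

// Illustrative front-end helper; the route path and scope are placeholders.
async function fetchGraphToken(openidAccessToken) {
  const scopes = encodeURIComponent('https://graph.microsoft.com/.default');
  const res = await fetch(`/api/auth/graph-token?scopes=${scopes}`, {
    headers: { Authorization: `Bearer ${openidAccessToken}` },
  });
  if (!res.ok) {
    // 400: missing scopes, 401: missing/invalid token, 403: non-Entra session or token reuse disabled
    throw new Error(`Graph token request failed: ${res.status}`);
  }
  return res.json(); // token response produced by getGraphApiToken
}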

View File

@@ -1,46 +0,0 @@
const { logger } = require('~/config');
//handle duplicates
const handleDuplicateKeyError = (err, res) => {
logger.error('Duplicate key error:', err.keyValue);
const field = `${JSON.stringify(Object.keys(err.keyValue))}`;
const code = 409;
res
.status(code)
.send({ messages: `An document with that ${field} already exists.`, fields: field });
};
//handle validation errors
const handleValidationError = (err, res) => {
logger.error('Validation error:', err.errors);
let errors = Object.values(err.errors).map((el) => el.message);
let fields = `${JSON.stringify(Object.values(err.errors).map((el) => el.path))}`;
let code = 400;
if (errors.length > 1) {
errors = errors.join(' ');
res.status(code).send({ messages: `${JSON.stringify(errors)}`, fields: fields });
} else {
res.status(code).send({ messages: `${JSON.stringify(errors)}`, fields: fields });
}
};
module.exports = (err, _req, res, _next) => {
try {
if (err.name === 'ValidationError') {
return handleValidationError(err, res);
}
if (err.code && err.code == 11000) {
return handleDuplicateKeyError(err, res);
}
// Special handling for errors like SyntaxError
if (err.statusCode && err.body) {
return res.status(err.statusCode).send(err.body);
}
logger.error('ErrorController => error', err);
return res.status(500).send('An unknown error occurred.');
} catch (err) {
logger.error('ErrorController => processing error', err);
return res.status(500).send('Processing error in ErrorController.');
}
};

View File

@@ -5,6 +5,7 @@ const { logger } = require('~/config');
/**
* @param {ServerRequest} req
* @returns {Promise<TModelsConfig>} The models config.
*/
const getModelsConfig = async (req) => {
const cache = getLogStores(CacheKeys.CONFIG_STORE);

View File

@@ -1,27 +0,0 @@
const { CacheKeys } = require('librechat-data-provider');
const { loadOverrideConfig } = require('~/server/services/Config');
const { getLogStores } = require('~/cache');
async function overrideController(req, res) {
const cache = getLogStores(CacheKeys.CONFIG_STORE);
let overrideConfig = await cache.get(CacheKeys.OVERRIDE_CONFIG);
if (overrideConfig) {
res.send(overrideConfig);
return;
} else if (overrideConfig === false) {
res.send(false);
return;
}
overrideConfig = await loadOverrideConfig();
const { endpointsConfig, modelsConfig } = overrideConfig;
if (endpointsConfig) {
await cache.set(CacheKeys.ENDPOINT_CONFIG, endpointsConfig);
}
if (modelsConfig) {
await cache.set(CacheKeys.MODELS_CONFIG, modelsConfig);
}
await cache.set(CacheKeys.OVERRIDE_CONFIG, overrideConfig);
res.send(JSON.stringify(overrideConfig));
}
module.exports = overrideController;

View File

@@ -0,0 +1,484 @@
/**
* @import { TUpdateResourcePermissionsRequest, TUpdateResourcePermissionsResponse } from 'librechat-data-provider'
*/
const mongoose = require('mongoose');
const { logger } = require('@librechat/data-schemas');
const { ResourceType, PrincipalType } = require('librechat-data-provider');
const {
bulkUpdateResourcePermissions,
ensureGroupPrincipalExists,
getEffectivePermissions,
ensurePrincipalExists,
getAvailableRoles,
} = require('~/server/services/PermissionService');
const { AclEntry } = require('~/db/models');
const {
searchPrincipals: searchLocalPrincipals,
sortPrincipalsByRelevance,
calculateRelevanceScore,
} = require('~/models');
const {
entraIdPrincipalFeatureEnabled,
searchEntraIdPrincipals,
} = require('~/server/services/GraphApiService');
/**
* Generic controller for resource permission endpoints
* Delegates validation and logic to PermissionService
*/
/**
* Validates that the resourceType is one of the supported enum values
* @param {string} resourceType - The resource type to validate
* @throws {Error} If resourceType is not valid
*/
const validateResourceType = (resourceType) => {
const validTypes = Object.values(ResourceType);
if (!validTypes.includes(resourceType)) {
throw new Error(`Invalid resourceType: ${resourceType}. Valid types: ${validTypes.join(', ')}`);
}
};
/**
* Bulk update permissions for a resource (grant, update, remove)
* @route PUT /api/{resourceType}/{resourceId}/permissions
* @param {Object} req - Express request object
* @param {Object} req.params - Route parameters
* @param {string} req.params.resourceType - Resource type (e.g., 'agent')
* @param {string} req.params.resourceId - Resource ID
* @param {TUpdateResourcePermissionsRequest} req.body - Request body
* @param {Object} res - Express response object
* @returns {Promise<TUpdateResourcePermissionsResponse>} Updated permissions response
*/
const updateResourcePermissions = async (req, res) => {
try {
const { resourceType, resourceId } = req.params;
validateResourceType(resourceType);
/** @type {TUpdateResourcePermissionsRequest} */
const { updated, removed, public: isPublic, publicAccessRoleId } = req.body;
const { id: userId } = req.user;
// Prepare principals for the service call
const updatedPrincipals = [];
const revokedPrincipals = [];
// Add updated principals
if (updated && Array.isArray(updated)) {
updatedPrincipals.push(...updated);
}
// Add public permission if enabled
if (isPublic && publicAccessRoleId) {
updatedPrincipals.push({
type: PrincipalType.PUBLIC,
id: null,
accessRoleId: publicAccessRoleId,
});
}
// Prepare authentication context for enhanced group member fetching
const useEntraId = entraIdPrincipalFeatureEnabled(req.user);
const authHeader = req.headers.authorization;
const accessToken =
authHeader && authHeader.startsWith('Bearer ') ? authHeader.substring(7) : null;
const authContext =
useEntraId && accessToken
? {
accessToken,
sub: req.user.openidId,
}
: null;
// Ensure updated principals exist in the database before processing permissions
const validatedPrincipals = [];
for (const principal of updatedPrincipals) {
try {
let principalId;
if (principal.type === PrincipalType.PUBLIC) {
principalId = null; // Public principals don't need database records
} else if (principal.type === PrincipalType.ROLE) {
principalId = principal.id; // Role principals use role name as ID
} else if (principal.type === PrincipalType.USER) {
principalId = await ensurePrincipalExists(principal);
} else if (principal.type === PrincipalType.GROUP) {
// Pass authContext to enable member fetching for Entra ID groups when available
principalId = await ensureGroupPrincipalExists(principal, authContext);
} else {
logger.error(`Unsupported principal type: ${principal.type}`);
continue; // Skip invalid principal types
}
// Update the principal with the validated ID for ACL operations
validatedPrincipals.push({
...principal,
id: principalId,
});
} catch (error) {
logger.error('Error ensuring principal exists:', {
principal: {
type: principal.type,
id: principal.id,
name: principal.name,
source: principal.source,
},
error: error.message,
});
// Continue with other principals instead of failing the entire operation
continue;
}
}
// Add removed principals
if (removed && Array.isArray(removed)) {
revokedPrincipals.push(...removed);
}
// If public is disabled, add public to revoked list
if (!isPublic) {
revokedPrincipals.push({
type: PrincipalType.PUBLIC,
id: null,
});
}
const results = await bulkUpdateResourcePermissions({
resourceType,
resourceId,
updatedPrincipals: validatedPrincipals,
revokedPrincipals,
grantedBy: userId,
});
/** @type {TUpdateResourcePermissionsResponse} */
const response = {
message: 'Permissions updated successfully',
results: {
principals: results.granted,
public: isPublic || false,
publicAccessRoleId: isPublic ? publicAccessRoleId : undefined,
},
};
res.status(200).json(response);
} catch (error) {
logger.error('Error updating resource permissions:', error);
res.status(400).json({
error: 'Failed to update permissions',
details: error.message,
});
}
};
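For reference, a sketch of the PUT body updateResourcePermissions expects, based on the destructuring above; the object ids and accessRoleId strings are placeholders, and the lowercase 'user'/'group' type strings assume the PrincipalType values serialize that way:

// Illustrative payload for PUT /api/{resourceType}/{resourceId}/permissions (route per the JSDoc above).
const resourceType = 'agent'; // must be one of ResourceType's values
const resourceId = 'resource-object-id';
const body = {
  updated: [
    { type: 'user', id: '64f0c0ffee0000000000abcd', accessRoleId: 'agent_editor' },
    { type: 'group', id: '64f0c0ffee0000000000cafe', accessRoleId: 'agent_viewer' },
  ],
  removed: [{ type: 'user', id: '64f0c0ffee0000000000beef' }],
  public: true,
  publicAccessRoleId: 'agent_viewer',
};
console.log(`PUT /api/${resourceType}/${resourceId}/permissions`, JSON.stringify(body, null, 2));

When public is false, the handler appends a public revoke on its own, so callers never need to list a public principal in removed.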
/**
* Get principals with their permission roles for a resource (UI-friendly format)
* Uses efficient aggregation pipeline to join User/Group data in single query
* @route GET /api/permissions/{resourceType}/{resourceId}
*/
const getResourcePermissions = async (req, res) => {
try {
const { resourceType, resourceId } = req.params;
validateResourceType(resourceType);
// Use aggregation pipeline for efficient single-query data retrieval
const results = await AclEntry.aggregate([
// Match ACL entries for this resource
{
$match: {
resourceType,
resourceId: mongoose.Types.ObjectId.isValid(resourceId)
? mongoose.Types.ObjectId.createFromHexString(resourceId)
: resourceId,
},
},
// Lookup AccessRole information
{
$lookup: {
from: 'accessroles',
localField: 'roleId',
foreignField: '_id',
as: 'role',
},
},
// Lookup User information (for user principals)
{
$lookup: {
from: 'users',
localField: 'principalId',
foreignField: '_id',
as: 'userInfo',
},
},
// Lookup Group information (for group principals)
{
$lookup: {
from: 'groups',
localField: 'principalId',
foreignField: '_id',
as: 'groupInfo',
},
},
// Project final structure
{
$project: {
principalType: 1,
principalId: 1,
accessRoleId: { $arrayElemAt: ['$role.accessRoleId', 0] },
userInfo: { $arrayElemAt: ['$userInfo', 0] },
groupInfo: { $arrayElemAt: ['$groupInfo', 0] },
},
},
]);
const principals = [];
let publicPermission = null;
// Process aggregation results
for (const result of results) {
if (result.principalType === PrincipalType.PUBLIC) {
publicPermission = {
public: true,
publicAccessRoleId: result.accessRoleId,
};
} else if (result.principalType === PrincipalType.USER && result.userInfo) {
principals.push({
type: PrincipalType.USER,
id: result.userInfo._id.toString(),
name: result.userInfo.name || result.userInfo.username,
email: result.userInfo.email,
avatar: result.userInfo.avatar,
source: !result.userInfo._id ? 'entra' : 'local',
idOnTheSource: result.userInfo.idOnTheSource || result.userInfo._id.toString(),
accessRoleId: result.accessRoleId,
});
} else if (result.principalType === PrincipalType.GROUP && result.groupInfo) {
principals.push({
type: PrincipalType.GROUP,
id: result.groupInfo._id.toString(),
name: result.groupInfo.name,
email: result.groupInfo.email,
description: result.groupInfo.description,
avatar: result.groupInfo.avatar,
source: result.groupInfo.source || 'local',
idOnTheSource: result.groupInfo.idOnTheSource || result.groupInfo._id.toString(),
accessRoleId: result.accessRoleId,
});
} else if (result.principalType === PrincipalType.ROLE) {
principals.push({
type: PrincipalType.ROLE,
/** Role name as ID */
id: result.principalId,
/** Display the role name */
name: result.principalId,
description: `System role: ${result.principalId}`,
accessRoleId: result.accessRoleId,
});
}
}
// Return response in format expected by frontend
const response = {
resourceType,
resourceId,
principals,
public: publicPermission?.public || false,
...(publicPermission?.publicAccessRoleId && {
publicAccessRoleId: publicPermission.publicAccessRoleId,
}),
};
res.status(200).json(response);
} catch (error) {
logger.error('Error getting resource permissions principals:', error);
res.status(500).json({
error: 'Failed to get permissions principals',
details: error.message,
});
}
};
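The aggregation above joins role, user, and group data in a single query and maps it to a UI-friendly listing. A sketch of the response shape it returns, with placeholder values; the field names are taken from the projection and mapping above:

// Illustrative response body for GET /api/permissions/{resourceType}/{resourceId}.
const exampleResponse = {
  resourceType: 'agent',
  resourceId: 'resource-object-id',
  principals: [
    {
      type: 'user',
      id: '64f0c0ffee0000000000abcd',
      name: 'Jane Doe',
      email: 'jane@example.com',
      avatar: null,
      source: 'local',
      idOnTheSource: '64f0c0ffee0000000000abcd',
      accessRoleId: 'agent_editor',
    },
    {
      type: 'role',
      id: 'ADMIN', // role name doubles as the id
      name: 'ADMIN',
      description: 'System role: ADMIN',
      accessRoleId: 'agent_owner',
    },
  ],
  public: true,
  publicAccessRoleId: 'agent_viewer',
};
console.log(exampleResponse);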
/**
* Get available roles for a resource type
* @route GET /api/{resourceType}/roles
*/
const getResourceRoles = async (req, res) => {
try {
const { resourceType } = req.params;
validateResourceType(resourceType);
const roles = await getAvailableRoles({ resourceType });
res.status(200).json(
roles.map((role) => ({
accessRoleId: role.accessRoleId,
name: role.name,
description: role.description,
permBits: role.permBits,
})),
);
} catch (error) {
logger.error('Error getting resource roles:', error);
res.status(500).json({
error: 'Failed to get roles',
details: error.message,
});
}
};
/**
* Get user's effective permission bitmask for a resource
* @route GET /api/{resourceType}/{resourceId}/effective
*/
const getUserEffectivePermissions = async (req, res) => {
try {
const { resourceType, resourceId } = req.params;
validateResourceType(resourceType);
const { id: userId } = req.user;
const permissionBits = await getEffectivePermissions({
userId,
role: req.user.role,
resourceType,
resourceId,
});
res.status(200).json({
permissionBits,
});
} catch (error) {
logger.error('Error getting user effective permissions:', error);
res.status(500).json({
error: 'Failed to get effective permissions',
details: error.message,
});
}
};
/**
* Search for users and groups to grant permissions
* Supports hybrid local database + Entra ID search when configured
* @route GET /api/permissions/search-principals
*/
const searchPrincipals = async (req, res) => {
try {
const { q: query, limit = 20, types } = req.query;
if (!query || query.trim().length === 0) {
return res.status(400).json({
error: 'Query parameter "q" is required and must not be empty',
});
}
if (query.trim().length < 2) {
return res.status(400).json({
error: 'Query must be at least 2 characters long',
});
}
const searchLimit = Math.min(Math.max(1, parseInt(limit) || 10), 50);
let typeFilters = null;
if (types) {
const typesArray = Array.isArray(types) ? types : types.split(',');
const validTypes = typesArray.filter((t) =>
[PrincipalType.USER, PrincipalType.GROUP, PrincipalType.ROLE].includes(t),
);
typeFilters = validTypes.length > 0 ? validTypes : null;
}
const localResults = await searchLocalPrincipals(query.trim(), searchLimit, typeFilters);
let allPrincipals = [...localResults];
const useEntraId = entraIdPrincipalFeatureEnabled(req.user);
if (useEntraId && localResults.length < searchLimit) {
try {
let graphType = 'all';
if (typeFilters && typeFilters.length === 1) {
const graphTypeMap = {
[PrincipalType.USER]: 'users',
[PrincipalType.GROUP]: 'groups',
};
const mappedType = graphTypeMap[typeFilters[0]];
if (mappedType) {
graphType = mappedType;
}
}
const authHeader = req.headers.authorization;
const accessToken =
authHeader && authHeader.startsWith('Bearer ') ? authHeader.substring(7) : null;
if (accessToken) {
const graphResults = await searchEntraIdPrincipals(
accessToken,
req.user.openidId,
query.trim(),
graphType,
searchLimit - localResults.length,
);
const localEmails = new Set(
localResults.map((p) => p.email?.toLowerCase()).filter(Boolean),
);
const localGroupSourceIds = new Set(
localResults.map((p) => p.idOnTheSource).filter(Boolean),
);
for (const principal of graphResults) {
const isDuplicateByEmail =
principal.email && localEmails.has(principal.email.toLowerCase());
const isDuplicateBySourceId =
principal.idOnTheSource && localGroupSourceIds.has(principal.idOnTheSource);
if (!isDuplicateByEmail && !isDuplicateBySourceId) {
allPrincipals.push(principal);
}
}
}
} catch (graphError) {
logger.warn('Graph API search failed, falling back to local results:', graphError.message);
}
}
const scoredResults = allPrincipals.map((item) => ({
...item,
_searchScore: calculateRelevanceScore(item, query.trim()),
}));
const finalResults = sortPrincipalsByRelevance(scoredResults)
.slice(0, searchLimit)
.map((result) => {
const { _searchScore, ...resultWithoutScore } = result;
return resultWithoutScore;
});
res.status(200).json({
query: query.trim(),
limit: searchLimit,
types: typeFilters,
results: finalResults,
count: finalResults.length,
sources: {
local: finalResults.filter((r) => r.source === 'local').length,
entra: finalResults.filter((r) => r.source === 'entra').length,
},
});
} catch (error) {
logger.error('Error searching principals:', error);
res.status(500).json({
error: 'Failed to search principals',
details: error.message,
});
}
};
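A hedged sketch of calling the principal search endpoint from a client: the path comes from the @route annotation above, the Bearer token enables the optional Entra ID fallback when configured, and the parameter handling mirrors the validation in the handler (q required, at least 2 characters; limit clamped to 1-50; types as a comma-separated list).

// Illustrative client helper; the function name is not part of the codebase.
async function searchPrincipalsClient(accessToken, query) {
  const params = new URLSearchParams({ q: query, limit: '10', types: 'user,group' });
  const res = await fetch(`/api/permissions/search-principals?${params}`, {
    headers: { Authorization: `Bearer ${accessToken}` }, // allows the Entra ID fallback search
  });
  if (!res.ok) {
    throw new Error(`Principal search failed: ${res.status}`); // 400 for empty or 1-character queries
  }
  const { results, sources, count } = await res.json();
  return { results, sources, count }; // sources = { local, entra } counts from the handler above
}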
module.exports = {
updateResourcePermissions,
getResourcePermissions,
getResourceRoles,
getUserEffectivePermissions,
searchPrincipals,
};

View File

@@ -1,54 +1,18 @@
const { logger } = require('@librechat/data-schemas');
const { CacheKeys, AuthType, Constants } = require('librechat-data-provider');
const { getCustomConfig, getCachedTools } = require('~/server/services/Config');
const { getToolkitKey } = require('~/server/services/ToolService');
const { getMCPManager, getFlowStateManager } = require('~/config');
const { availableTools } = require('~/app/clients/tools');
const { CacheKeys, Constants } = require('librechat-data-provider');
const {
getToolkitKey,
checkPluginAuth,
filterUniquePlugins,
convertMCPToolToPlugin,
convertMCPToolsToPlugins,
} = require('@librechat/api');
const { getCachedTools, setCachedTools, mergeUserTools } = require('~/server/services/Config');
const { availableTools, toolkits } = require('~/app/clients/tools');
const { getAppConfig } = require('~/server/services/Config');
const { getMCPManager } = require('~/config');
const { getLogStores } = require('~/cache');
/**
* Filters out duplicate plugins from the list of plugins.
*
* @param {TPlugin[]} plugins The list of plugins to filter.
* @returns {TPlugin[]} The list of plugins with duplicates removed.
*/
const filterUniquePlugins = (plugins) => {
const seen = new Set();
return plugins.filter((plugin) => {
const duplicate = seen.has(plugin.pluginKey);
seen.add(plugin.pluginKey);
return !duplicate;
});
};
/**
* Determines if a plugin is authenticated by checking if all required authentication fields have non-empty values.
* Supports alternate authentication fields, allowing validation against multiple possible environment variables.
*
* @param {TPlugin} plugin The plugin object containing the authentication configuration.
* @returns {boolean} True if the plugin is authenticated for all required fields, false otherwise.
*/
const checkPluginAuth = (plugin) => {
if (!plugin.authConfig || plugin.authConfig.length === 0) {
return false;
}
return plugin.authConfig.every((authFieldObj) => {
const authFieldOptions = authFieldObj.authField.split('||');
let isFieldAuthenticated = false;
for (const fieldOption of authFieldOptions) {
const envValue = process.env[fieldOption];
if (envValue && envValue.trim() !== '' && envValue !== AuthType.USER_PROVIDED) {
isFieldAuthenticated = true;
break;
}
}
return isFieldAuthenticated;
});
};
const getAvailablePluginsController = async (req, res) => {
try {
const cache = getLogStores(CacheKeys.CONFIG_STORE);
@@ -58,8 +22,10 @@ const getAvailablePluginsController = async (req, res) => {
return;
}
const appConfig = await getAppConfig({ role: req.user?.role });
/** @type {{ filteredTools: string[], includedTools: string[] }} */
const { filteredTools = [], includedTools = [] } = req.app.locals;
const { filteredTools = [], includedTools = [] } = appConfig;
/** @type {import('@librechat/api').LCManifestTool[]} */
const pluginManifest = availableTools;
const uniquePlugins = filterUniquePlugins(pluginManifest);
@@ -85,45 +51,6 @@ const getAvailablePluginsController = async (req, res) => {
}
};
function createServerToolsCallback() {
/**
* @param {string} serverName
* @param {TPlugin[] | null} serverTools
*/
return async function (serverName, serverTools) {
try {
const mcpToolsCache = getLogStores(CacheKeys.MCP_TOOLS);
if (!serverName || !mcpToolsCache) {
return;
}
await mcpToolsCache.set(serverName, serverTools);
logger.debug(`MCP tools for ${serverName} added to cache.`);
} catch (error) {
logger.error('Error retrieving MCP tools from cache:', error);
}
};
}
function createGetServerTools() {
/**
* Retrieves cached server tools
* @param {string} serverName
* @returns {Promise<TPlugin[] | null>}
*/
return async function (serverName) {
try {
const mcpToolsCache = getLogStores(CacheKeys.MCP_TOOLS);
if (!mcpToolsCache) {
return null;
}
return await mcpToolsCache.get(serverName);
} catch (error) {
logger.error('Error retrieving MCP tools from cache:', error);
return null;
}
};
}
/**
* Retrieves and returns a list of available tools, either from a cache or by reading a plugin manifest file.
*
@@ -139,37 +66,62 @@ function createGetServerTools() {
const getAvailableTools = async (req, res) => {
try {
const userId = req.user?.id;
const customConfig = await getCustomConfig();
if (!userId) {
logger.warn('[getAvailableTools] User ID not found in request');
return res.status(401).json({ message: 'Unauthorized' });
}
const cache = getLogStores(CacheKeys.CONFIG_STORE);
const cachedToolsArray = await cache.get(CacheKeys.TOOLS);
const cachedUserTools = await getCachedTools({ userId });
const userPlugins = convertMCPToolsToPlugins(cachedUserTools, customConfig);
if (cachedToolsArray && userPlugins) {
const mcpManager = getMCPManager();
const userPlugins =
cachedUserTools != null
? convertMCPToolsToPlugins({ functionTools: cachedUserTools, mcpManager })
: undefined;
if (cachedToolsArray != null && userPlugins != null) {
const dedupedTools = filterUniquePlugins([...userPlugins, ...cachedToolsArray]);
res.status(200).json(dedupedTools);
return;
}
// If not in cache, build from manifest
/** @type {Record<string, FunctionTool> | null} Get tool definitions to filter which tools are actually available */
let toolDefinitions = await getCachedTools({ includeGlobal: true });
let prelimCachedTools;
/** @type {import('@librechat/api').LCManifestTool[]} */
let pluginManifest = availableTools;
if (customConfig?.mcpServers != null) {
const mcpManager = getMCPManager();
const flowsCache = getLogStores(CacheKeys.FLOWS);
const flowManager = flowsCache ? getFlowStateManager(flowsCache) : null;
const serverToolsCallback = createServerToolsCallback();
const getServerTools = createGetServerTools();
const mcpTools = await mcpManager.loadManifestTools({
flowManager,
serverToolsCallback,
getServerTools,
});
pluginManifest = [...mcpTools, ...pluginManifest];
const appConfig = req.config ?? (await getAppConfig({ role: req.user?.role }));
if (appConfig?.mcpConfig != null) {
try {
const mcpTools = await mcpManager.getAllToolFunctions(userId);
prelimCachedTools = prelimCachedTools ?? {};
for (const [toolKey, toolData] of Object.entries(mcpTools)) {
const plugin = convertMCPToolToPlugin({
toolKey,
toolData,
mcpManager,
});
if (plugin) {
pluginManifest.push(plugin);
}
prelimCachedTools[toolKey] = toolData;
}
await mergeUserTools({ userId, cachedUserTools, userTools: prelimCachedTools });
} catch (error) {
logger.error(
'[getAvailableTools] Error loading MCP Tools, servers may still be initializing:',
error,
);
}
} else if (prelimCachedTools != null) {
await setCachedTools(prelimCachedTools, { isGlobal: true });
}
/** @type {TPlugin[]} */
/** @type {TPlugin[]} Deduplicate and authenticate plugins */
const uniquePlugins = filterUniquePlugins(pluginManifest);
const authenticatedPlugins = uniquePlugins.map((plugin) => {
if (checkPluginAuth(plugin)) {
return { ...plugin, authenticated: true };
@@ -178,14 +130,15 @@ const getAvailableTools = async (req, res) => {
}
});
const toolDefinitions = (await getCachedTools({ includeGlobal: true })) || {};
/** Filter plugins based on availability and add MCP-specific auth config */
const toolsOutput = [];
for (const plugin of authenticatedPlugins) {
const isToolDefined = toolDefinitions[plugin.pluginKey] !== undefined;
const isToolkit =
plugin.toolkit === true &&
Object.keys(toolDefinitions).some((key) => getToolkitKey(key) === plugin.pluginKey);
Object.keys(toolDefinitions).some(
(key) => getToolkitKey({ toolkits, toolName: key }) === plugin.pluginKey,
);
if (!isToolDefined && !isToolkit) {
continue;
@@ -193,41 +146,36 @@ const getAvailableTools = async (req, res) => {
const toolToAdd = { ...plugin };
if (!plugin.pluginKey.includes(Constants.mcp_delimiter)) {
toolsOutput.push(toolToAdd);
continue;
}
if (plugin.pluginKey.includes(Constants.mcp_delimiter)) {
const parts = plugin.pluginKey.split(Constants.mcp_delimiter);
const serverName = parts[parts.length - 1];
const serverConfig = appConfig?.mcpConfig?.[serverName];
const parts = plugin.pluginKey.split(Constants.mcp_delimiter);
const serverName = parts[parts.length - 1];
const serverConfig = customConfig?.mcpServers?.[serverName];
if (!serverConfig?.customUserVars) {
toolsOutput.push(toolToAdd);
continue;
}
const customVarKeys = Object.keys(serverConfig.customUserVars);
if (customVarKeys.length === 0) {
toolToAdd.authConfig = [];
toolToAdd.authenticated = true;
} else {
toolToAdd.authConfig = Object.entries(serverConfig.customUserVars).map(([key, value]) => ({
authField: key,
label: value.title || key,
description: value.description || '',
}));
toolToAdd.authenticated = false;
if (serverConfig?.customUserVars) {
const customVarKeys = Object.keys(serverConfig.customUserVars);
if (customVarKeys.length === 0) {
toolToAdd.authConfig = [];
toolToAdd.authenticated = true;
} else {
toolToAdd.authConfig = Object.entries(serverConfig.customUserVars).map(
([key, value]) => ({
authField: key,
label: value.title || key,
description: value.description || '',
}),
);
toolToAdd.authenticated = false;
}
}
}
toolsOutput.push(toolToAdd);
}
const finalTools = filterUniquePlugins(toolsOutput);
await cache.set(CacheKeys.TOOLS, finalTools);
const dedupedTools = filterUniquePlugins([...userPlugins, ...finalTools]);
const dedupedTools = filterUniquePlugins([...(userPlugins ?? []), ...finalTools]);
res.status(200).json(dedupedTools);
} catch (error) {
logger.error('[getAvailableTools]', error);
@@ -235,58 +183,6 @@ const getAvailableTools = async (req, res) => {
}
};
/**
* Converts MCP function format tools to plugin format
* @param {Object} functionTools - Object with function format tools
* @param {Object} customConfig - Custom configuration for MCP servers
* @returns {Array} Array of plugin objects
*/
function convertMCPToolsToPlugins(functionTools, customConfig) {
const plugins = [];
for (const [toolKey, toolData] of Object.entries(functionTools)) {
if (!toolData.function || !toolKey.includes(Constants.mcp_delimiter)) {
continue;
}
const functionData = toolData.function;
const parts = toolKey.split(Constants.mcp_delimiter);
const serverName = parts[parts.length - 1];
const serverConfig = customConfig?.mcpServers?.[serverName];
const plugin = {
name: parts[0], // Use the tool name without server suffix
pluginKey: toolKey,
description: functionData.description || '',
authenticated: true,
icon: serverConfig?.iconPath,
};
// Build authConfig for MCP tools
if (!serverConfig?.customUserVars) {
plugin.authConfig = [];
plugins.push(plugin);
continue;
}
const customVarKeys = Object.keys(serverConfig.customUserVars);
if (customVarKeys.length === 0) {
plugin.authConfig = [];
} else {
plugin.authConfig = Object.entries(serverConfig.customUserVars).map(([key, value]) => ({
authField: key,
label: value.title || key,
description: value.description || '',
}));
}
plugins.push(plugin);
}
return plugins;
}
module.exports = {
getAvailableTools,
getAvailablePluginsController,

View File
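In the refactored getAvailableTools above, an MCP server's customUserVars now determine a tool's auth fields: each entry becomes an authConfig item and the tool is reported as unauthenticated until values are supplied, an empty customUserVars object yields authConfig: [] with authenticated: true, and a server without customUserVars leaves the tool untouched. A standalone sketch of that mapping (the helper name is illustrative, not part of the codebase):

// Mirrors the customUserVars branch above; returns null when the tool should pass through unchanged.
function describeMcpAuth(serverConfig) {
  const customUserVars = serverConfig?.customUserVars;
  if (!customUserVars) {
    return null; // no customUserVars: the tool is pushed through as-is
  }
  const entries = Object.entries(customUserVars);
  if (entries.length === 0) {
    return { authConfig: [], authenticated: true };
  }
  return {
    authConfig: entries.map(([key, value]) => ({
      authField: key,
      label: value.title || key,
      description: value.description || '',
    })),
    authenticated: false,
  };
}

// Example: a server declaring an API_KEY variable yields one auth field and authenticated: false.
console.log(describeMcpAuth({ customUserVars: { API_KEY: { title: 'API Key' } } }));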

@@ -1,89 +1,668 @@
const { Constants } = require('librechat-data-provider');
const { getCustomConfig, getCachedTools } = require('~/server/services/Config');
const { getCachedTools, getAppConfig } = require('~/server/services/Config');
const { getLogStores } = require('~/cache');
// Mock the dependencies
jest.mock('@librechat/data-schemas', () => ({
logger: {
debug: jest.fn(),
error: jest.fn(),
warn: jest.fn(),
},
}));
jest.mock('~/server/services/Config', () => ({
getCustomConfig: jest.fn(),
getCachedTools: jest.fn(),
getAppConfig: jest.fn().mockResolvedValue({
filteredTools: [],
includedTools: [],
}),
setCachedTools: jest.fn(),
mergeUserTools: jest.fn(),
}));
jest.mock('~/server/services/ToolService', () => ({
getToolkitKey: jest.fn(),
}));
// loadAndFormatTools mock removed - no longer used in PluginController
jest.mock('~/config', () => ({
getMCPManager: jest.fn(() => ({
loadManifestTools: jest.fn().mockResolvedValue([]),
getAllToolFunctions: jest.fn().mockResolvedValue({}),
getRawConfig: jest.fn().mockReturnValue({}),
})),
getFlowStateManager: jest.fn(),
}));
jest.mock('~/app/clients/tools', () => ({
availableTools: [],
toolkits: [],
}));
jest.mock('~/cache', () => ({
getLogStores: jest.fn(),
}));
// Import the actual module with the function we want to test
const { getAvailableTools } = require('./PluginController');
const { getAvailableTools, getAvailablePluginsController } = require('./PluginController');
describe('PluginController', () => {
describe('plugin.icon behavior', () => {
let mockReq, mockRes, mockCache;
let mockReq, mockRes, mockCache;
beforeEach(() => {
jest.clearAllMocks();
mockReq = {
user: { id: 'test-user-id' },
config: {
filteredTools: [],
includedTools: [],
},
};
mockRes = { status: jest.fn().mockReturnThis(), json: jest.fn() };
mockCache = { get: jest.fn(), set: jest.fn() };
getLogStores.mockReturnValue(mockCache);
// Clear availableTools and toolkits arrays before each test
require('~/app/clients/tools').availableTools.length = 0;
require('~/app/clients/tools').toolkits.length = 0;
// Reset getCachedTools mock to ensure clean state
getCachedTools.mockReset();
// Reset getAppConfig mock to ensure clean state with default values
getAppConfig.mockReset();
getAppConfig.mockResolvedValue({
filteredTools: [],
includedTools: [],
});
});
describe('getAvailablePluginsController', () => {
it('should use filterUniquePlugins to remove duplicate plugins', async () => {
// Add plugins with duplicates to availableTools
const mockPlugins = [
{ name: 'Plugin1', pluginKey: 'key1', description: 'First' },
{ name: 'Plugin1', pluginKey: 'key1', description: 'First duplicate' },
{ name: 'Plugin2', pluginKey: 'key2', description: 'Second' },
];
require('~/app/clients/tools').availableTools.push(...mockPlugins);
const callGetAvailableToolsWithMCPServer = async (mcpServers) => {
mockCache.get.mockResolvedValue(null);
getCustomConfig.mockResolvedValue({ mcpServers });
const functionTools = {
[`test-tool${Constants.mcp_delimiter}test-server`]: {
function: { name: 'test-tool', description: 'A test tool' },
// Configure getAppConfig to return the expected config
getAppConfig.mockResolvedValueOnce({
filteredTools: [],
includedTools: [],
});
await getAvailablePluginsController(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(200);
const responseData = mockRes.json.mock.calls[0][0];
// The real filterUniquePlugins should have removed the duplicate
expect(responseData).toHaveLength(2);
expect(responseData[0].pluginKey).toBe('key1');
expect(responseData[1].pluginKey).toBe('key2');
});
it('should use checkPluginAuth to verify plugin authentication', async () => {
// checkPluginAuth returns false for plugins without authConfig
// so authenticated property won't be added
const mockPlugin = { name: 'Plugin1', pluginKey: 'key1', description: 'First' };
require('~/app/clients/tools').availableTools.push(mockPlugin);
mockCache.get.mockResolvedValue(null);
// Configure getAppConfig to return the expected config
getAppConfig.mockResolvedValueOnce({
filteredTools: [],
includedTools: [],
});
await getAvailablePluginsController(mockReq, mockRes);
const responseData = mockRes.json.mock.calls[0][0];
// The real checkPluginAuth returns false for plugins without authConfig, so authenticated property is not added
expect(responseData[0].authenticated).toBeUndefined();
});
it('should return cached plugins when available', async () => {
const cachedPlugins = [
{ name: 'CachedPlugin', pluginKey: 'cached', description: 'Cached plugin' },
];
mockCache.get.mockResolvedValue(cachedPlugins);
await getAvailablePluginsController(mockReq, mockRes);
// When cache is hit, we return immediately without processing
expect(mockRes.json).toHaveBeenCalledWith(cachedPlugins);
});
it('should filter plugins based on includedTools', async () => {
const mockPlugins = [
{ name: 'Plugin1', pluginKey: 'key1', description: 'First' },
{ name: 'Plugin2', pluginKey: 'key2', description: 'Second' },
];
require('~/app/clients/tools').availableTools.push(...mockPlugins);
mockCache.get.mockResolvedValue(null);
// Configure getAppConfig to return config with includedTools
getAppConfig.mockResolvedValueOnce({
filteredTools: [],
includedTools: ['key1'],
});
await getAvailablePluginsController(mockReq, mockRes);
const responseData = mockRes.json.mock.calls[0][0];
expect(responseData).toHaveLength(1);
expect(responseData[0].pluginKey).toBe('key1');
});
});
describe('getAvailableTools', () => {
it('should use convertMCPToolsToPlugins for user-specific MCP tools', async () => {
const mockUserTools = {
[`tool1${Constants.mcp_delimiter}server1`]: {
type: 'function',
function: {
name: `tool1${Constants.mcp_delimiter}server1`,
description: 'Tool 1',
parameters: { type: 'object', properties: {} },
},
},
};
getCachedTools.mockResolvedValueOnce(functionTools);
mockCache.get.mockResolvedValue(null);
getCachedTools.mockResolvedValueOnce(mockUserTools);
mockReq.config = {
mcpConfig: null,
paths: { structuredTools: '/mock/path' },
};
// Mock second call to return tool definitions (includeGlobal: true)
getCachedTools.mockResolvedValueOnce(mockUserTools);
await getAvailableTools(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(200);
const responseData = mockRes.json.mock.calls[0][0];
expect(responseData).toBeDefined();
expect(Array.isArray(responseData)).toBe(true);
expect(responseData.length).toBeGreaterThan(0);
const convertedTool = responseData.find(
(tool) => tool.pluginKey === `tool1${Constants.mcp_delimiter}server1`,
);
expect(convertedTool).toBeDefined();
// The real convertMCPToolsToPlugins extracts the name from the delimiter
expect(convertedTool.name).toBe('tool1');
});
it('should use filterUniquePlugins to deduplicate combined tools', async () => {
const mockUserTools = {
'user-tool': {
type: 'function',
function: {
name: 'user-tool',
description: 'User tool',
parameters: { type: 'object', properties: {} },
},
},
};
const mockCachedPlugins = [
{ name: 'user-tool', pluginKey: 'user-tool', description: 'Duplicate user tool' },
{ name: 'ManifestTool', pluginKey: 'manifest-tool', description: 'Manifest tool' },
];
mockCache.get.mockResolvedValue(mockCachedPlugins);
getCachedTools.mockResolvedValueOnce(mockUserTools);
mockReq.config = {
mcpConfig: null,
paths: { structuredTools: '/mock/path' },
};
// Mock second call to return tool definitions
getCachedTools.mockResolvedValueOnce(mockUserTools);
await getAvailableTools(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(200);
const responseData = mockRes.json.mock.calls[0][0];
expect(Array.isArray(responseData)).toBe(true);
// The real filterUniquePlugins should have deduplicated tools with same pluginKey
const userToolCount = responseData.filter((tool) => tool.pluginKey === 'user-tool').length;
expect(userToolCount).toBe(1);
});
it('should use checkPluginAuth to verify authentication status', async () => {
// Add a plugin to availableTools that will be checked
const mockPlugin = {
name: 'Tool1',
pluginKey: 'tool1',
description: 'Tool 1',
// No authConfig means checkPluginAuth returns false
};
require('~/app/clients/tools').availableTools.push(mockPlugin);
mockCache.get.mockResolvedValue(null);
// First call returns null for user tools
getCachedTools.mockResolvedValueOnce(null);
mockReq.config = {
mcpConfig: null,
paths: { structuredTools: '/mock/path' },
};
// Second call (with includeGlobal: true) returns the tool definitions
getCachedTools.mockResolvedValueOnce({
[`test-tool${Constants.mcp_delimiter}test-server`]: true,
tool1: {
type: 'function',
function: {
name: 'tool1',
description: 'Tool 1',
parameters: {},
},
},
});
await getAvailableTools(mockReq, mockRes);
const responseData = mockRes.json.mock.calls[0][0];
return responseData.find((tool) => tool.name === 'test-tool');
};
beforeEach(() => {
jest.clearAllMocks();
mockReq = { user: { id: 'test-user-id' } };
mockRes = { status: jest.fn().mockReturnThis(), json: jest.fn() };
mockCache = { get: jest.fn(), set: jest.fn() };
getLogStores.mockReturnValue(mockCache);
expect(mockRes.status).toHaveBeenCalledWith(200);
const responseData = mockRes.json.mock.calls[0][0];
expect(Array.isArray(responseData)).toBe(true);
const tool = responseData.find((t) => t.pluginKey === 'tool1');
expect(tool).toBeDefined();
// The real checkPluginAuth returns false for plugins without authConfig, so authenticated property is not added
expect(tool.authenticated).toBeUndefined();
});
it('should set plugin.icon when iconPath is defined', async () => {
const mcpServers = {
'test-server': {
iconPath: '/path/to/icon.png',
it('should use getToolkitKey for toolkit validation', async () => {
const mockToolkit = {
name: 'Toolkit1',
pluginKey: 'toolkit1',
description: 'Toolkit 1',
toolkit: true,
};
require('~/app/clients/tools').availableTools.push(mockToolkit);
// Mock toolkits to have a mapping
require('~/app/clients/tools').toolkits.push({
name: 'Toolkit1',
pluginKey: 'toolkit1',
tools: ['toolkit1_function'],
});
mockCache.get.mockResolvedValue(null);
// First call returns null for user tools
getCachedTools.mockResolvedValueOnce(null);
mockReq.config = {
mcpConfig: null,
paths: { structuredTools: '/mock/path' },
};
// Second call (with includeGlobal: true) returns the tool definitions
getCachedTools.mockResolvedValueOnce({
toolkit1_function: {
type: 'function',
function: {
name: 'toolkit1_function',
description: 'Toolkit function',
parameters: {},
},
},
});
await getAvailableTools(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(200);
const responseData = mockRes.json.mock.calls[0][0];
expect(Array.isArray(responseData)).toBe(true);
const toolkit = responseData.find((t) => t.pluginKey === 'toolkit1');
expect(toolkit).toBeDefined();
});
});
describe('plugin.icon behavior', () => {
const callGetAvailableToolsWithMCPServer = async (serverConfig) => {
mockCache.get.mockResolvedValue(null);
const functionTools = {
[`test-tool${Constants.mcp_delimiter}test-server`]: {
type: 'function',
function: {
name: `test-tool${Constants.mcp_delimiter}test-server`,
description: 'A test tool',
parameters: { type: 'object', properties: {} },
},
},
};
const testTool = await callGetAvailableToolsWithMCPServer(mcpServers);
// Mock the MCP manager to return tools and server config
const mockMCPManager = {
getAllToolFunctions: jest.fn().mockResolvedValue(functionTools),
getRawConfig: jest.fn().mockReturnValue(serverConfig),
};
require('~/config').getMCPManager.mockReturnValue(mockMCPManager);
// First call returns empty user tools
getCachedTools.mockResolvedValueOnce({});
// Mock getAppConfig to return the mcpConfig
mockReq.config = {
mcpConfig: {
'test-server': serverConfig,
},
};
// Second call (with includeGlobal: true) returns the tool definitions
getCachedTools.mockResolvedValueOnce(functionTools);
await getAvailableTools(mockReq, mockRes);
const responseData = mockRes.json.mock.calls[0][0];
return responseData.find(
(tool) => tool.pluginKey === `test-tool${Constants.mcp_delimiter}test-server`,
);
};
it('should set plugin.icon when iconPath is defined', async () => {
const serverConfig = {
iconPath: '/path/to/icon.png',
};
const testTool = await callGetAvailableToolsWithMCPServer(serverConfig);
expect(testTool.icon).toBe('/path/to/icon.png');
});
it('should set plugin.icon to undefined when iconPath is not defined', async () => {
const mcpServers = {
'test-server': {},
};
const testTool = await callGetAvailableToolsWithMCPServer(mcpServers);
const serverConfig = {};
const testTool = await callGetAvailableToolsWithMCPServer(serverConfig);
expect(testTool.icon).toBeUndefined();
});
});
describe('helper function integration', () => {
it('should properly handle MCP tools with custom user variables', async () => {
const appConfig = {
mcpConfig: {
'test-server': {
customUserVars: {
API_KEY: { title: 'API Key', description: 'Your API key' },
},
},
},
};
// Mock MCP tools returned by getAllToolFunctions
const mcpToolFunctions = {
[`tool1${Constants.mcp_delimiter}test-server`]: {
type: 'function',
function: {
name: `tool1${Constants.mcp_delimiter}test-server`,
description: 'Tool 1',
parameters: {},
},
},
};
// Mock the MCP manager to return tools
const mockMCPManager = {
getAllToolFunctions: jest.fn().mockResolvedValue(mcpToolFunctions),
getRawConfig: jest.fn().mockReturnValue({
customUserVars: {
API_KEY: { title: 'API Key', description: 'Your API key' },
},
}),
};
require('~/config').getMCPManager.mockReturnValue(mockMCPManager);
mockCache.get.mockResolvedValue(null);
mockReq.config = appConfig;
// First call returns user tools (empty in this case)
getCachedTools.mockResolvedValueOnce({});
// Second call (with includeGlobal: true) returns tool definitions including our MCP tool
getCachedTools.mockResolvedValueOnce(mcpToolFunctions);
await getAvailableTools(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(200);
const responseData = mockRes.json.mock.calls[0][0];
expect(Array.isArray(responseData)).toBe(true);
// Find the MCP tool in the response
const mcpTool = responseData.find(
(tool) => tool.pluginKey === `tool1${Constants.mcp_delimiter}test-server`,
);
// The actual implementation adds authConfig and sets authenticated to false when customUserVars exist
expect(mcpTool).toBeDefined();
expect(mcpTool.authConfig).toEqual([
{ authField: 'API_KEY', label: 'API Key', description: 'Your API key' },
]);
expect(mcpTool.authenticated).toBe(false);
});
it('should handle error cases gracefully', async () => {
mockCache.get.mockRejectedValue(new Error('Cache error'));
await getAvailableTools(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(500);
expect(mockRes.json).toHaveBeenCalledWith({ message: 'Cache error' });
});
});
describe('edge cases with undefined/null values', () => {
it('should handle undefined cache gracefully', async () => {
getLogStores.mockReturnValue(undefined);
await getAvailableTools(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(500);
});
it('should handle null cachedTools and cachedUserTools', async () => {
mockCache.get.mockResolvedValue(null);
// First call returns null for user tools
getCachedTools.mockResolvedValueOnce(null);
mockReq.config = {
mcpConfig: null,
paths: { structuredTools: '/mock/path' },
};
// Mock MCP manager to return no tools
const mockMCPManager = {
getAllToolFunctions: jest.fn().mockResolvedValue({}),
getRawConfig: jest.fn().mockReturnValue({}),
};
require('~/config').getMCPManager.mockReturnValue(mockMCPManager);
// Second call (with includeGlobal: true) returns empty object instead of null
getCachedTools.mockResolvedValueOnce({});
await getAvailableTools(mockReq, mockRes);
// Should handle null values gracefully
expect(mockRes.status).toHaveBeenCalledWith(200);
expect(mockRes.json).toHaveBeenCalledWith([]);
});
it('should handle when getCachedTools returns undefined', async () => {
mockCache.get.mockResolvedValue(null);
mockReq.config = {
mcpConfig: null,
paths: { structuredTools: '/mock/path' },
};
// Mock getCachedTools to return undefined for both calls
getCachedTools.mockReset();
getCachedTools.mockResolvedValueOnce(undefined).mockResolvedValueOnce(undefined);
await getAvailableTools(mockReq, mockRes);
// Should handle undefined values gracefully
expect(mockRes.status).toHaveBeenCalledWith(200);
expect(mockRes.json).toHaveBeenCalledWith([]);
});
it('should handle cachedToolsArray and userPlugins both being defined', async () => {
const cachedTools = [{ name: 'CachedTool', pluginKey: 'cached-tool', description: 'Cached' }];
// Use MCP delimiter for the user tool so convertMCPToolsToPlugins works
const userTools = {
[`user-tool${Constants.mcp_delimiter}server1`]: {
type: 'function',
function: {
name: `user-tool${Constants.mcp_delimiter}server1`,
description: 'User tool',
parameters: {},
},
},
};
mockCache.get.mockResolvedValue(cachedTools);
getCachedTools.mockResolvedValueOnce(userTools);
mockReq.config = {
mcpConfig: null,
paths: { structuredTools: '/mock/path' },
};
// The controller expects a second call to getCachedTools
getCachedTools.mockResolvedValueOnce({
'cached-tool': { type: 'function', function: { name: 'cached-tool' } },
[`user-tool${Constants.mcp_delimiter}server1`]:
userTools[`user-tool${Constants.mcp_delimiter}server1`],
});
await getAvailableTools(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(200);
const responseData = mockRes.json.mock.calls[0][0];
// Should have both cached and user tools
expect(responseData.length).toBeGreaterThanOrEqual(2);
});
it('should handle empty toolDefinitions object', async () => {
mockCache.get.mockResolvedValue(null);
// Reset getCachedTools to ensure clean state
getCachedTools.mockReset();
getCachedTools.mockResolvedValue({});
mockReq.config = {}; // No mcpConfig at all
// Ensure no plugins are available
require('~/app/clients/tools').availableTools.length = 0;
// Reset MCP manager to default state
const mockMCPManager = {
getAllToolFunctions: jest.fn().mockResolvedValue({}),
getRawConfig: jest.fn().mockReturnValue({}),
};
require('~/config').getMCPManager.mockReturnValue(mockMCPManager);
await getAvailableTools(mockReq, mockRes);
// With empty tool definitions, no tools should be in the final output
expect(mockRes.json).toHaveBeenCalledWith([]);
});
it('should handle MCP tools without customUserVars', async () => {
const appConfig = {
mcpConfig: {
'test-server': {
// No customUserVars defined
},
},
};
const mockUserTools = {
[`tool1${Constants.mcp_delimiter}test-server`]: {
type: 'function',
function: {
name: `tool1${Constants.mcp_delimiter}test-server`,
description: 'Tool 1',
parameters: { type: 'object', properties: {} },
},
},
};
// Mock the MCP manager to return the tools
const mockMCPManager = {
getAllToolFunctions: jest.fn().mockResolvedValue(mockUserTools),
getRawConfig: jest.fn().mockReturnValue({
// No customUserVars defined
}),
};
require('~/config').getMCPManager.mockReturnValue(mockMCPManager);
mockCache.get.mockResolvedValue(null);
mockReq.config = appConfig;
// First call returns empty user tools
getCachedTools.mockResolvedValueOnce({});
// Second call (with includeGlobal: true) returns the tool definitions
getCachedTools.mockResolvedValueOnce(mockUserTools);
// Ensure no plugins in availableTools for clean test
require('~/app/clients/tools').availableTools.length = 0;
await getAvailableTools(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(200);
const responseData = mockRes.json.mock.calls[0][0];
expect(Array.isArray(responseData)).toBe(true);
expect(responseData.length).toBeGreaterThan(0);
const mcpTool = responseData.find(
(tool) => tool.pluginKey === `tool1${Constants.mcp_delimiter}test-server`,
);
expect(mcpTool).toBeDefined();
expect(mcpTool.authenticated).toBe(true);
// The actual implementation sets authConfig to empty array when no customUserVars
expect(mcpTool.authConfig).toEqual([]);
});
it('should handle undefined filteredTools and includedTools', async () => {
mockReq.config = {};
mockCache.get.mockResolvedValue(null);
// Configure getAppConfig to return config with undefined properties
// The controller will use default values [] for filteredTools and includedTools
getAppConfig.mockResolvedValueOnce({});
await getAvailablePluginsController(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(200);
expect(mockRes.json).toHaveBeenCalledWith([]);
});
it('should handle toolkit with undefined toolDefinitions keys', async () => {
const mockToolkit = {
name: 'Toolkit1',
pluginKey: 'toolkit1',
description: 'Toolkit 1',
toolkit: true,
};
// No need to mock app.locals anymore as it's not used
// Add the toolkit to availableTools
require('~/app/clients/tools').availableTools.push(mockToolkit);
mockCache.get.mockResolvedValue(null);
// First call returns empty object
getCachedTools.mockResolvedValueOnce({});
mockReq.config = {
mcpConfig: null,
paths: { structuredTools: '/mock/path' },
};
// Second call (with includeGlobal: true) returns empty object to avoid null reference error
getCachedTools.mockResolvedValueOnce({});
await getAvailableTools(mockReq, mockRes);
// Should handle null toolDefinitions gracefully
expect(mockRes.status).toHaveBeenCalledWith(200);
});
});
});

View File

@@ -47,7 +47,7 @@ const verify2FA = async (req, res) => {
try {
const userId = req.user.id;
const { token, backupCode } = req.body;
const user = await getUserById(userId);
const user = await getUserById(userId, '_id totpSecret backupCodes');
if (!user || !user.totpSecret) {
return res.status(400).json({ message: '2FA not initiated' });
@@ -79,7 +79,7 @@ const confirm2FA = async (req, res) => {
try {
const userId = req.user.id;
const { token } = req.body;
const user = await getUserById(userId);
const user = await getUserById(userId, '_id totpSecret');
if (!user || !user.totpSecret) {
return res.status(400).json({ message: '2FA not initiated' });
@@ -99,10 +99,36 @@ const confirm2FA = async (req, res) => {
/**
* Disable 2FA by clearing the stored secret and backup codes.
* Requires verification with either TOTP token or backup code if 2FA is fully enabled.
*/
const disable2FA = async (req, res) => {
try {
const userId = req.user.id;
const { token, backupCode } = req.body;
const user = await getUserById(userId, '_id totpSecret backupCodes');
if (!user || !user.totpSecret) {
return res.status(400).json({ message: '2FA is not setup for this user' });
}
if (user.twoFactorEnabled) {
const secret = await getTOTPSecret(user.totpSecret);
let isVerified = false;
if (token) {
isVerified = await verifyTOTP(secret, token);
} else if (backupCode) {
isVerified = await verifyBackupCode({ user, backupCode });
} else {
return res
.status(400)
.json({ message: 'Either token or backup code is required to disable 2FA' });
}
if (!isVerified) {
return res.status(401).json({ message: 'Invalid token or backup code' });
}
}
await updateUser(userId, { totpSecret: null, backupCodes: [], twoFactorEnabled: false });
return res.status(200).json();
} catch (err) {

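The hunk above narrows each 2FA controller's user query to only the fields it needs and requires a valid TOTP token or backup code before 2FA can be disabled. The following is a minimal sketch of that verify-then-disable flow; the helper names (getTOTPSecret, verifyTOTP, verifyBackupCode, updateUser) are taken from the diff, while the function shape and return values here are illustrative only.

// Illustrative condensation of the disable2FA flow shown above, not the exact controller.
async function disableTwoFactor({ user, token, backupCode }, helpers) {
  const { getTOTPSecret, verifyTOTP, verifyBackupCode, updateUser } = helpers;
  if (!user?.totpSecret) {
    return { status: 400, message: '2FA is not setup for this user' };
  }
  if (user.twoFactorEnabled) {
    if (!token && !backupCode) {
      return { status: 400, message: 'Either token or backup code is required to disable 2FA' };
    }
    const secret = await getTOTPSecret(user.totpSecret);
    const isVerified = token
      ? await verifyTOTP(secret, token)
      : await verifyBackupCode({ user, backupCode });
    if (!isVerified) {
      return { status: 401, message: 'Invalid token or backup code' };
    }
  }
  // Clearing the secret, backup codes, and flag fully disables 2FA for the account.
  await updateUser(user._id, { totpSecret: null, backupCodes: [], twoFactorEnabled: false });
  return { status: 200 };
}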
View File

@@ -1,5 +1,5 @@
const { logger } = require('@librechat/data-schemas');
const { webSearchKeys, extractWebSearchEnvVars } = require('@librechat/api');
const { webSearchKeys, extractWebSearchEnvVars, normalizeHttpError } = require('@librechat/api');
const {
getFiles,
updateUser,
@@ -17,15 +17,23 @@ const { needsRefresh, getNewS3URL } = require('~/server/services/Files/S3/crud')
const { Tools, Constants, FileSources } = require('librechat-data-provider');
const { processDeleteRequest } = require('~/server/services/Files/process');
const { Transaction, Balance, User } = require('~/db/models');
const { getAppConfig } = require('~/server/services/Config');
const { deleteToolCalls } = require('~/models/ToolCall');
const { deleteAllSharedLinks } = require('~/models');
const { getMCPManager } = require('~/config');
const getUserController = async (req, res) => {
/** @type {MongoUser} */
const appConfig = await getAppConfig({ role: req.user?.role });
/** @type {IUser} */
const userData = req.user.toObject != null ? req.user.toObject() : { ...req.user };
/**
* These fields should not exist due to secure field selection, but deletion
* is done in case of alternate database incompatibility with Mongo API
* */
delete userData.password;
delete userData.totpSecret;
if (req.app.locals.fileStrategy === FileSources.s3 && userData.avatar) {
delete userData.backupCodes;
if (appConfig.fileStrategy === FileSources.s3 && userData.avatar) {
const avatarNeedsRefresh = needsRefresh(userData.avatar, 3600);
if (!avatarNeedsRefresh) {
return res.status(200).send(userData);
@@ -81,6 +89,7 @@ const deleteUserFiles = async (req) => {
};
const updateUserPluginsController = async (req, res) => {
const appConfig = await getAppConfig({ role: req.user?.role });
const { user } = req;
const { pluginKey, action, auth, isEntityTool } = req.body;
try {
@@ -89,8 +98,8 @@ const updateUserPluginsController = async (req, res) => {
if (userPluginsService instanceof Error) {
logger.error('[userPluginsService]', userPluginsService);
const { status, message } = userPluginsService;
res.status(status).send({ message });
const { status, message } = normalizeHttpError(userPluginsService);
return res.status(status).send({ message });
}
}
@@ -125,7 +134,7 @@ const updateUserPluginsController = async (req, res) => {
if (pluginKey === Tools.web_search) {
/** @type {TCustomConfig['webSearch']} */
const webSearchConfig = req.app.locals?.webSearch;
const webSearchConfig = appConfig?.webSearch;
keys = extractWebSearchEnvVars({
keys: action === 'install' ? keys : webSearchKeys,
config: webSearchConfig,
@@ -137,7 +146,7 @@ const updateUserPluginsController = async (req, res) => {
authService = await updateUserPluginAuth(user.id, keys[i], pluginKey, values[i]);
if (authService instanceof Error) {
logger.error('[authService]', authService);
({ status, message } = authService);
({ status, message } = normalizeHttpError(authService));
}
}
} else if (action === 'uninstall') {
@@ -151,7 +160,7 @@ const updateUserPluginsController = async (req, res) => {
`[authService] Error deleting all auth for MCP tool ${pluginKey}:`,
authService,
);
({ status, message } = authService);
({ status, message } = normalizeHttpError(authService));
}
} else {
// This handles:
@@ -163,7 +172,7 @@ const updateUserPluginsController = async (req, res) => {
authService = await deleteUserPluginAuth(user.id, keys[i]); // Deletes by authField name
if (authService instanceof Error) {
logger.error('[authService] Error deleting specific auth key:', authService);
({ status, message } = authService);
({ status, message } = normalizeHttpError(authService));
}
}
}
@@ -193,7 +202,8 @@ const updateUserPluginsController = async (req, res) => {
return res.status(status).send();
}
res.status(status).send({ message });
const normalized = normalizeHttpError({ status, message });
return res.status(normalized.status).send({ message: normalized.message });
} catch (err) {
logger.error('[updateUserPluginsController]', err);
return res.status(500).json({ message: 'Something went wrong.' });

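The user controller changes above route every service error through normalizeHttpError before the status is handed to res.status(). The diff does not show that helper's implementation, so the sketch below is a hypothetical stand-in that only illustrates the guard such a normalizer typically provides; it is not the @librechat/api function.

// Hypothetical sketch of an HTTP-error normalizer; the real normalizeHttpError may differ.
// The point is that service errors without a valid numeric status must not reach res.status() as-is.
function normalizeHttpErrorSketch(error, fallbackStatus = 500) {
  const status = Number(error?.status);
  const validStatus =
    Number.isInteger(status) && status >= 400 && status <= 599 ? status : fallbackStatus;
  const message =
    typeof error?.message === 'string' && error.message.length > 0
      ? error.message
      : 'Something went wrong.';
  return { status: validStatus, message };
}

// Usage mirroring the controller above (illustrative):
// const { status, message } = normalizeHttpErrorSketch(authService);
// return res.status(status).send({ message });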
View File

@@ -11,6 +11,7 @@ const {
handleToolCalls,
ChatModelStreamHandler,
} = require('@librechat/agents');
const { processFileCitations } = require('~/server/services/Files/Citations');
const { processCodeOutput } = require('~/server/services/Files/Code/process');
const { loadAuthValues } = require('~/server/services/Tools/credentials');
const { saveBase64Image } = require('~/server/services/Files/process');
@@ -238,6 +239,32 @@ function createToolEndCallback({ req, res, artifactPromises }) {
return;
}
if (output.artifact[Tools.file_search]) {
artifactPromises.push(
(async () => {
const user = req.user;
const attachment = await processFileCitations({
user,
metadata,
appConfig: req.config,
toolArtifact: output.artifact,
toolCallId: output.tool_call_id,
});
if (!attachment) {
return null;
}
if (!res.headersSent) {
return attachment;
}
res.write(`event: attachment\ndata: ${JSON.stringify(attachment)}\n\n`);
return attachment;
})().catch((error) => {
logger.error('Error processing file citations:', error);
return null;
}),
);
}
if (output.artifact[Tools.web_search]) {
artifactPromises.push(
(async () => {

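The new file_search branch mirrors the existing web_search handling: it pushes a self-contained async task onto artifactPromises, streams an attachment event when the SSE response is already open, and converts failures into a null result so the batch never rejects. A condensed sketch of that pattern follows; only processFileCitations and the attachment event format come from the diff, and the wrapper itself is assumed for illustration.

// Illustrative wrapper for the file_search branch above; the real code lives inline
// in createToolEndCallback. processFileCitations is passed in here to keep the sketch self-contained.
function pushFileCitationArtifact({ req, res, output, metadata, artifactPromises, processFileCitations, logger }) {
  artifactPromises.push(
    (async () => {
      const attachment = await processFileCitations({
        user: req.user,
        metadata,
        appConfig: req.config,
        toolArtifact: output.artifact,
        toolCallId: output.tool_call_id,
      });
      if (!attachment) {
        return null;
      }
      if (!res.headersSent) {
        // Streaming has not started; the caller delivers the attachment with the final response.
        return attachment;
      }
      res.write(`event: attachment\ndata: ${JSON.stringify(attachment)}\n\n`);
      return attachment;
    })().catch((error) => {
      // Resolve to null on failure so awaiting artifactPromises never rejects the whole batch.
      logger.error('Error processing file citations:', error);
      return null;
    }),
  );
}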
View File

@@ -7,6 +7,8 @@ const {
createRun,
Tokenizer,
checkAccess,
resolveHeaders,
getBalanceConfig,
memoryInstructions,
formatContentStrings,
createMemoryProcessor,
@@ -33,18 +35,13 @@ const {
bedrockInputSchema,
removeNullishValues,
} = require('librechat-data-provider');
const {
findPluginAuthsByKeys,
getFormattedMemories,
deleteMemory,
setMemory,
} = require('~/models');
const { getMCPAuthMap, checkCapability, hasCustomUserVars } = require('~/server/services/Config');
const { addCacheControl, createContextHandlers } = require('~/app/clients/prompts');
const { initializeAgent } = require('~/server/services/Endpoints/agents/agent');
const { spendTokens, spendStructuredTokens } = require('~/models/spendTokens');
const { getFormattedMemories, deleteMemory, setMemory } = require('~/models');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const { getProviderConfig } = require('~/server/services/Endpoints');
const { checkCapability } = require('~/server/services/Config');
const BaseClient = require('~/app/clients/BaseClient');
const { getRoleByName } = require('~/models/Role');
const { loadAgent } = require('~/models/Agent');
@@ -402,6 +399,34 @@ class AgentClient extends BaseClient {
return result;
}
/**
* Creates a promise that resolves with the memory promise result or undefined after a timeout
* @param {Promise<(TAttachment | null)[] | undefined>} memoryPromise - The memory promise to await
* @param {number} timeoutMs - Timeout in milliseconds (default: 3000)
* @returns {Promise<(TAttachment | null)[] | undefined>}
*/
async awaitMemoryWithTimeout(memoryPromise, timeoutMs = 3000) {
if (!memoryPromise) {
return;
}
try {
const timeoutPromise = new Promise((_, reject) =>
setTimeout(() => reject(new Error('Memory processing timeout')), timeoutMs),
);
const attachments = await Promise.race([memoryPromise, timeoutPromise]);
return attachments;
} catch (error) {
if (error.message === 'Memory processing timeout') {
logger.warn('[AgentClient] Memory processing timed out after 3 seconds');
} else {
logger.error('[AgentClient] Error processing memory:', error);
}
return;
}
}
/**
* @returns {Promise<string | undefined>}
*/
@@ -423,8 +448,8 @@ class AgentClient extends BaseClient {
);
return;
}
/** @type {TCustomConfig['memory']} */
const memoryConfig = this.options.req?.app?.locals?.memory;
const appConfig = this.options.req.config;
const memoryConfig = appConfig.memory;
if (!memoryConfig || memoryConfig.disabled === true) {
return;
}
@@ -432,7 +457,7 @@ class AgentClient extends BaseClient {
/** @type {Agent} */
let prelimAgent;
const allowedProviders = new Set(
this.options.req?.app?.locals?.[EModelEndpoint.agents]?.allowedProviders,
appConfig?.endpoints?.[EModelEndpoint.agents]?.allowedProviders,
);
try {
if (memoryConfig.agent?.id != null && memoryConfig.agent.id !== this.options.agent.id) {
@@ -512,6 +537,39 @@ class AgentClient extends BaseClient {
return withoutKeys;
}
/**
* Filters out image URLs from message content
* @param {BaseMessage} message - The message to filter
* @returns {BaseMessage} - A new message with image URLs removed
*/
filterImageUrls(message) {
if (!message.content || typeof message.content === 'string') {
return message;
}
if (Array.isArray(message.content)) {
const filteredContent = message.content.filter(
(part) => part.type !== ContentTypes.IMAGE_URL,
);
if (filteredContent.length === 1 && filteredContent[0].type === ContentTypes.TEXT) {
const MessageClass = message.constructor;
return new MessageClass({
content: filteredContent[0].text,
additional_kwargs: message.additional_kwargs,
});
}
const MessageClass = message.constructor;
return new MessageClass({
content: filteredContent,
additional_kwargs: message.additional_kwargs,
});
}
return message;
}
/**
* @param {BaseMessage[]} messages
* @returns {Promise<void | (TAttachment | null)[]>}
@@ -521,8 +579,8 @@ class AgentClient extends BaseClient {
if (this.processMemory == null) {
return;
}
/** @type {TCustomConfig['memory']} */
const memoryConfig = this.options.req?.app?.locals?.memory;
const appConfig = this.options.req.config;
const memoryConfig = appConfig.memory;
const messageWindowSize = memoryConfig?.messageWindowSize ?? 5;
let messagesToProcess = [...messages];
@@ -540,7 +598,8 @@ class AgentClient extends BaseClient {
}
}
const bufferString = getBufferString(messagesToProcess);
const filteredMessages = messagesToProcess.map((msg) => this.filterImageUrls(msg));
const bufferString = getBufferString(filteredMessages);
const bufferMessage = new HumanMessage(`# Current Chat:\n\n${bufferString}`);
return await this.processMemory([bufferMessage]);
} catch (error) {
@@ -553,6 +612,7 @@ class AgentClient extends BaseClient {
await this.chatCompletion({
payload,
onProgress: opts.onProgress,
userMCPAuthMap: opts.userMCPAuthMap,
abortController: opts.abortController,
});
return this.contentParts;
@@ -562,9 +622,15 @@ class AgentClient extends BaseClient {
* @param {Object} params
* @param {string} [params.model]
* @param {string} [params.context='message']
* @param {AppConfig['balance']} [params.balance]
* @param {UsageMetadata[]} [params.collectedUsage=this.collectedUsage]
*/
async recordCollectedUsage({ model, context = 'message', collectedUsage = this.collectedUsage }) {
async recordCollectedUsage({
model,
balance,
context = 'message',
collectedUsage = this.collectedUsage,
}) {
if (!collectedUsage || !collectedUsage.length) {
return;
}
@@ -586,6 +652,7 @@ class AgentClient extends BaseClient {
const txMetadata = {
context,
balance,
conversationId: this.conversationId,
user: this.user ?? this.options.req.user?.id,
endpointTokenConfig: this.options.endpointTokenConfig,
@@ -685,7 +752,13 @@ class AgentClient extends BaseClient {
return currentMessageTokens > 0 ? currentMessageTokens : originalEstimate;
}
async chatCompletion({ payload, abortController = null }) {
/**
* @param {object} params
* @param {string | ChatCompletionMessageParam[]} params.payload
* @param {Record<string, Record<string, string>>} [params.userMCPAuthMap]
* @param {AbortController} [params.abortController]
*/
async chatCompletion({ payload, userMCPAuthMap, abortController = null }) {
/** @type {Partial<GraphRunnableConfig>} */
let config;
/** @type {ReturnType<createRun>} */
@@ -697,8 +770,9 @@ class AgentClient extends BaseClient {
abortController = new AbortController();
}
/** @type {TCustomConfig['endpoints']['agents']} */
const agentsEConfig = this.options.req.app.locals[EModelEndpoint.agents];
const appConfig = this.options.req.config;
/** @type {AppConfig['endpoints']['agents']} */
const agentsEConfig = appConfig.endpoints?.[EModelEndpoint.agents];
config = {
configurable: {
@@ -706,6 +780,11 @@ class AgentClient extends BaseClient {
last_agent_index: this.agentConfigs?.size ?? 0,
user_id: this.user ?? this.options.req.user?.id,
hide_sequential_outputs: this.options.agent.hide_sequential_outputs,
requestBody: {
messageId: this.responseMessageId,
conversationId: this.conversationId,
parentMessageId: this.parentMessageId,
},
user: this.options.req.user,
},
recursionLimit: agentsEConfig?.recursionLimit ?? 25,
@@ -776,7 +855,7 @@ class AgentClient extends BaseClient {
if (noSystemMessages === true && systemContent?.length) {
const latestMessageContent = _messages.pop().content;
if (typeof latestMessage !== 'string') {
if (typeof latestMessageContent !== 'string') {
latestMessageContent[0].text = [systemContent, latestMessageContent[0].text].join('\n');
_messages.push(new HumanMessage({ content: latestMessageContent }));
} else {
@@ -801,6 +880,16 @@ class AgentClient extends BaseClient {
memoryPromise = this.runMemory(messages);
}
/** Resolve request-based headers for Custom Endpoints. Note: if this is added to
* non-custom endpoints, needs consideration of varying provider header configs.
*/
if (agent.model_parameters?.configuration?.defaultHeaders != null) {
agent.model_parameters.configuration.defaultHeaders = resolveHeaders({
headers: agent.model_parameters.configuration.defaultHeaders,
body: config.configurable.requestBody,
});
}
run = await createRun({
agent,
req: this.options.req,
@@ -836,21 +925,9 @@ class AgentClient extends BaseClient {
run.Graph.contentData = contentData;
}
try {
if (await hasCustomUserVars()) {
config.configurable.userMCPAuthMap = await getMCPAuthMap({
tools: agent.tools,
userId: this.options.req.user.id,
findPluginAuthsByKeys,
});
}
} catch (err) {
logger.error(
`[api/server/controllers/agents/client.js #chatCompletion] Error getting custom user vars for agent ${agent.id}`,
err,
);
if (userMCPAuthMap != null) {
config.configurable.userMCPAuthMap = userMCPAuthMap;
}
await run.processStream({ messages }, config, {
keepContent: i !== 0,
tokenCounter: createTokenCounter(this.getEncoding()),
@@ -968,13 +1045,13 @@ class AgentClient extends BaseClient {
});
try {
if (memoryPromise) {
const attachments = await memoryPromise;
if (attachments && attachments.length > 0) {
this.artifactPromises.push(...attachments);
}
const attachments = await this.awaitMemoryWithTimeout(memoryPromise);
if (attachments && attachments.length > 0) {
this.artifactPromises.push(...attachments);
}
await this.recordCollectedUsage({ context: 'message' });
const balanceConfig = getBalanceConfig(appConfig);
await this.recordCollectedUsage({ context: 'message', balance: balanceConfig });
} catch (err) {
logger.error(
'[api/server/controllers/agents/client.js #chatCompletion] Error recording collected usage',
@@ -982,11 +1059,9 @@ class AgentClient extends BaseClient {
);
}
} catch (err) {
if (memoryPromise) {
const attachments = await memoryPromise;
if (attachments && attachments.length > 0) {
this.artifactPromises.push(...attachments);
}
const attachments = await this.awaitMemoryWithTimeout(memoryPromise);
if (attachments && attachments.length > 0) {
this.artifactPromises.push(...attachments);
}
logger.error(
'[api/server/controllers/agents/client.js #sendCompletion] Operation aborted',
@@ -1017,19 +1092,21 @@ class AgentClient extends BaseClient {
}
const { handleLLMEnd, collected: collectedMetadata } = createMetadataAggregator();
const { req, res, agent } = this.options;
const appConfig = req.config;
let endpoint = agent.endpoint;
/** @type {import('@librechat/agents').ClientOptions} */
let clientOptions = {
maxTokens: 75,
model: agent.model_parameters.model,
model: agent.model || agent.model_parameters.model,
};
let titleProviderConfig = await getProviderConfig(endpoint);
let titleProviderConfig = getProviderConfig({ provider: endpoint, appConfig });
/** @type {TEndpoint | undefined} */
const endpointConfig =
req.app.locals.all ?? req.app.locals[endpoint] ?? titleProviderConfig.customEndpointConfig;
appConfig.endpoints?.all ??
appConfig.endpoints?.[endpoint] ??
titleProviderConfig.customEndpointConfig;
if (!endpointConfig) {
logger.warn(
'[api/server/controllers/agents/client.js #titleConvo] Error getting endpoint config',
@@ -1038,7 +1115,10 @@ class AgentClient extends BaseClient {
if (endpointConfig?.titleEndpoint && endpointConfig.titleEndpoint !== endpoint) {
try {
titleProviderConfig = await getProviderConfig(endpointConfig.titleEndpoint);
titleProviderConfig = getProviderConfig({
provider: endpointConfig.titleEndpoint,
appConfig,
});
endpoint = endpointConfig.titleEndpoint;
} catch (error) {
logger.warn(
@@ -1047,7 +1127,7 @@ class AgentClient extends BaseClient {
);
// Fall back to original provider config
endpoint = agent.endpoint;
titleProviderConfig = await getProviderConfig(endpoint);
titleProviderConfig = getProviderConfig({ provider: endpoint, appConfig });
}
}
@@ -1088,12 +1168,15 @@ class AgentClient extends BaseClient {
clientOptions.configuration = options.configOptions;
}
// Ensure maxTokens is set for non-o1 models
if (!/\b(o\d)\b/i.test(clientOptions.model) && !clientOptions.maxTokens) {
clientOptions.maxTokens = 75;
} else if (/\b(o\d)\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
if (clientOptions.maxTokens != null) {
delete clientOptions.maxTokens;
}
if (clientOptions?.modelKwargs?.max_completion_tokens != null) {
delete clientOptions.modelKwargs.max_completion_tokens;
}
if (clientOptions?.modelKwargs?.max_output_tokens != null) {
delete clientOptions.modelKwargs.max_output_tokens;
}
clientOptions = Object.assign(
Object.fromEntries(
@@ -1109,6 +1192,20 @@ class AgentClient extends BaseClient {
clientOptions.json = true;
}
/** Resolve request-based headers for Custom Endpoints. Note: if this is added to
* non-custom endpoints, needs consideration of varying provider header configs.
*/
if (clientOptions?.configuration?.defaultHeaders != null) {
clientOptions.configuration.defaultHeaders = resolveHeaders({
headers: clientOptions.configuration.defaultHeaders,
body: {
messageId: this.responseMessageId,
conversationId: this.conversationId,
parentMessageId: this.parentMessageId,
},
});
}
try {
const titleResult = await this.run.generateTitle({
provider,
@@ -1147,10 +1244,12 @@ class AgentClient extends BaseClient {
};
});
const balanceConfig = getBalanceConfig(appConfig);
await this.recordCollectedUsage({
model: clientOptions.model,
context: 'title',
collectedUsage,
context: 'title',
model: clientOptions.model,
balance: balanceConfig,
}).catch((err) => {
logger.error(
'[api/server/controllers/agents/client.js #titleConvo] Error recording collected usage',
@@ -1169,17 +1268,26 @@ class AgentClient extends BaseClient {
* @param {object} params
* @param {number} params.promptTokens
* @param {number} params.completionTokens
* @param {OpenAIUsageMetadata} [params.usage]
* @param {string} [params.model]
* @param {OpenAIUsageMetadata} [params.usage]
* @param {AppConfig['balance']} [params.balance]
* @param {string} [params.context='message']
* @returns {Promise<void>}
*/
async recordTokenUsage({ model, promptTokens, completionTokens, usage, context = 'message' }) {
async recordTokenUsage({
model,
usage,
balance,
promptTokens,
completionTokens,
context = 'message',
}) {
try {
await spendTokens(
{
model,
context,
balance,
conversationId: this.conversationId,
user: this.user ?? this.options.req.user?.id,
endpointTokenConfig: this.options.endpointTokenConfig,
@@ -1196,6 +1304,7 @@ class AgentClient extends BaseClient {
await spendTokens(
{
model,
balance,
context: 'reasoning',
conversationId: this.conversationId,
user: this.user ?? this.options.req.user?.id,

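Among the client.js changes above, awaitMemoryWithTimeout bounds memory processing with a Promise.race so a slow memory run can no longer stall usage recording or the abort path. Below is a generic sketch of that timeout-race pattern, not the class method itself, assuming a console-like logger.

// Generic sketch of the Promise.race timeout used by awaitMemoryWithTimeout above.
// Resolves with the task result, or undefined if the task exceeds timeoutMs.
async function withTimeout(promise, timeoutMs = 3000, logger = console) {
  if (!promise) {
    return undefined;
  }
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('Memory processing timeout')), timeoutMs);
  });
  try {
    return await Promise.race([promise, timeout]);
  } catch (error) {
    if (error.message === 'Memory processing timeout') {
      logger.warn(`Task timed out after ${timeoutMs} ms`);
    } else {
      logger.error('Task failed:', error);
    }
    return undefined;
  } finally {
    clearTimeout(timer); // prevent the pending timer from keeping the event loop alive
  }
}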
View File

@@ -41,8 +41,16 @@ describe('AgentClient - titleConvo', () => {
// Mock request and response
mockReq = {
app: {
locals: {
user: {
id: 'user-123',
},
body: {
model: 'gpt-4',
endpoint: EModelEndpoint.openAI,
key: null,
},
config: {
endpoints: {
[EModelEndpoint.openAI]: {
// Match the agent endpoint
titleModel: 'gpt-3.5-turbo',
@@ -52,14 +60,6 @@ describe('AgentClient - titleConvo', () => {
},
},
},
user: {
id: 'user-123',
},
body: {
model: 'gpt-4',
endpoint: EModelEndpoint.openAI,
key: null,
},
};
mockRes = {};
@@ -143,7 +143,7 @@ describe('AgentClient - titleConvo', () => {
it('should handle missing endpoint config gracefully', async () => {
// Remove endpoint config
mockReq.app.locals[EModelEndpoint.openAI] = undefined;
mockReq.config = { endpoints: {} };
const text = 'Test conversation text';
const abortController = new AbortController();
@@ -161,7 +161,16 @@ describe('AgentClient - titleConvo', () => {
it('should use agent model when titleModel is not provided', async () => {
// Remove titleModel from config
delete mockReq.app.locals[EModelEndpoint.openAI].titleModel;
mockReq.config = {
endpoints: {
[EModelEndpoint.openAI]: {
titlePrompt: 'Custom title prompt',
titleMethod: 'structured',
titlePromptTemplate: 'Template: {{content}}',
// titleModel is omitted
},
},
};
const text = 'Test conversation text';
const abortController = new AbortController();
@@ -173,7 +182,16 @@ describe('AgentClient - titleConvo', () => {
});
it('should not use titleModel when it equals CURRENT_MODEL constant', async () => {
mockReq.app.locals[EModelEndpoint.openAI].titleModel = Constants.CURRENT_MODEL;
mockReq.config = {
endpoints: {
[EModelEndpoint.openAI]: {
titleModel: Constants.CURRENT_MODEL,
titlePrompt: 'Custom title prompt',
titleMethod: 'structured',
titlePromptTemplate: 'Template: {{content}}',
},
},
};
const text = 'Test conversation text';
const abortController = new AbortController();
@@ -216,6 +234,9 @@ describe('AgentClient - titleConvo', () => {
model: 'gpt-3.5-turbo',
context: 'title',
collectedUsage: expect.any(Array),
balance: {
enabled: false,
},
});
});
@@ -245,10 +266,17 @@ describe('AgentClient - titleConvo', () => {
process.env.ANTHROPIC_API_KEY = 'test-api-key';
// Add titleEndpoint to the config
mockReq.app.locals[EModelEndpoint.openAI].titleEndpoint = EModelEndpoint.anthropic;
mockReq.app.locals[EModelEndpoint.openAI].titleMethod = 'structured';
mockReq.app.locals[EModelEndpoint.openAI].titlePrompt = 'Custom title prompt';
mockReq.app.locals[EModelEndpoint.openAI].titlePromptTemplate = 'Custom template';
mockReq.config = {
endpoints: {
[EModelEndpoint.openAI]: {
titleModel: 'gpt-3.5-turbo',
titleEndpoint: EModelEndpoint.anthropic,
titleMethod: 'structured',
titlePrompt: 'Custom title prompt',
titlePromptTemplate: 'Custom template',
},
},
};
const text = 'Test conversation text';
const abortController = new AbortController();
@@ -274,18 +302,16 @@ describe('AgentClient - titleConvo', () => {
});
it('should use all config when endpoint config is missing', async () => {
// Remove endpoint-specific config
delete mockReq.app.locals[EModelEndpoint.openAI].titleModel;
delete mockReq.app.locals[EModelEndpoint.openAI].titlePrompt;
delete mockReq.app.locals[EModelEndpoint.openAI].titleMethod;
delete mockReq.app.locals[EModelEndpoint.openAI].titlePromptTemplate;
// Set 'all' config
mockReq.app.locals.all = {
titleModel: 'gpt-4o-mini',
titlePrompt: 'All config title prompt',
titleMethod: 'completion',
titlePromptTemplate: 'All config template: {{content}}',
// Set 'all' config without endpoint-specific config
mockReq.config = {
endpoints: {
all: {
titleModel: 'gpt-4o-mini',
titlePrompt: 'All config title prompt',
titleMethod: 'completion',
titlePromptTemplate: 'All config template: {{content}}',
},
},
};
const text = 'Test conversation text';
@@ -309,17 +335,21 @@ describe('AgentClient - titleConvo', () => {
it('should prioritize all config over endpoint config for title settings', async () => {
// Set both endpoint and 'all' config
mockReq.app.locals[EModelEndpoint.openAI].titleModel = 'gpt-3.5-turbo';
mockReq.app.locals[EModelEndpoint.openAI].titlePrompt = 'Endpoint title prompt';
mockReq.app.locals[EModelEndpoint.openAI].titleMethod = 'structured';
// Remove titlePromptTemplate from endpoint config to test fallback
delete mockReq.app.locals[EModelEndpoint.openAI].titlePromptTemplate;
mockReq.app.locals.all = {
titleModel: 'gpt-4o-mini',
titlePrompt: 'All config title prompt',
titleMethod: 'completion',
titlePromptTemplate: 'All config template',
mockReq.config = {
endpoints: {
[EModelEndpoint.openAI]: {
titleModel: 'gpt-3.5-turbo',
titlePrompt: 'Endpoint title prompt',
titleMethod: 'structured',
// titlePromptTemplate is omitted to test fallback
},
all: {
titleModel: 'gpt-4o-mini',
titlePrompt: 'All config title prompt',
titleMethod: 'completion',
titlePromptTemplate: 'All config template',
},
},
};
const text = 'Test conversation text';
@@ -346,17 +376,18 @@ describe('AgentClient - titleConvo', () => {
const originalApiKey = process.env.ANTHROPIC_API_KEY;
process.env.ANTHROPIC_API_KEY = 'test-anthropic-key';
// Remove endpoint-specific config to test 'all' config
delete mockReq.app.locals[EModelEndpoint.openAI];
// Set comprehensive 'all' config with all new title options
mockReq.app.locals.all = {
titleConvo: true,
titleModel: 'claude-3-haiku-20240307',
titleMethod: 'completion', // Testing the new default method
titlePrompt: 'Generate a concise, descriptive title for this conversation',
titlePromptTemplate: 'Conversation summary: {{content}}',
titleEndpoint: EModelEndpoint.anthropic, // Should switch provider to Anthropic
mockReq.config = {
endpoints: {
all: {
titleConvo: true,
titleModel: 'claude-3-haiku-20240307',
titleMethod: 'completion', // Testing the new default method
titlePrompt: 'Generate a concise, descriptive title for this conversation',
titlePromptTemplate: 'Conversation summary: {{content}}',
titleEndpoint: EModelEndpoint.anthropic, // Should switch provider to Anthropic
},
},
};
const text = 'Test conversation about AI and machine learning';
@@ -402,15 +433,16 @@ describe('AgentClient - titleConvo', () => {
// Clear previous calls
mockRun.generateTitle.mockClear();
// Remove endpoint config
delete mockReq.app.locals[EModelEndpoint.openAI];
// Set 'all' config with specific titleMethod
mockReq.app.locals.all = {
titleModel: 'gpt-4o-mini',
titleMethod: method,
titlePrompt: `Testing ${method} method`,
titlePromptTemplate: `Template for ${method}: {{content}}`,
mockReq.config = {
endpoints: {
all: {
titleModel: 'gpt-4o-mini',
titleMethod: method,
titlePrompt: `Testing ${method} method`,
titlePromptTemplate: `Template for ${method}: {{content}}`,
},
},
};
const text = `Test conversation for ${method} method`;
@@ -455,29 +487,33 @@ describe('AgentClient - titleConvo', () => {
// Set up Azure endpoint with serverless config
mockAgent.endpoint = EModelEndpoint.azureOpenAI;
mockAgent.provider = EModelEndpoint.azureOpenAI;
mockReq.app.locals[EModelEndpoint.azureOpenAI] = {
titleConvo: true,
titleModel: 'grok-3',
titleMethod: 'completion',
titlePrompt: 'Azure serverless title prompt',
streamRate: 35,
modelGroupMap: {
'grok-3': {
group: 'Azure AI Foundry',
deploymentName: 'grok-3',
},
},
groupMap: {
'Azure AI Foundry': {
apiKey: '${AZURE_API_KEY}',
baseURL: 'https://test.services.ai.azure.com/models',
version: '2024-05-01-preview',
serverless: true,
models: {
mockReq.config = {
endpoints: {
[EModelEndpoint.azureOpenAI]: {
titleConvo: true,
titleModel: 'grok-3',
titleMethod: 'completion',
titlePrompt: 'Azure serverless title prompt',
streamRate: 35,
modelGroupMap: {
'grok-3': {
group: 'Azure AI Foundry',
deploymentName: 'grok-3',
},
},
groupMap: {
'Azure AI Foundry': {
apiKey: '${AZURE_API_KEY}',
baseURL: 'https://test.services.ai.azure.com/models',
version: '2024-05-01-preview',
serverless: true,
models: {
'grok-3': {
deploymentName: 'grok-3',
},
},
},
},
},
},
};
@@ -503,28 +539,32 @@ describe('AgentClient - titleConvo', () => {
// Set up Azure endpoint
mockAgent.endpoint = EModelEndpoint.azureOpenAI;
mockAgent.provider = EModelEndpoint.azureOpenAI;
mockReq.app.locals[EModelEndpoint.azureOpenAI] = {
titleConvo: true,
titleModel: 'gpt-4o',
titleMethod: 'structured',
titlePrompt: 'Azure instance title prompt',
streamRate: 35,
modelGroupMap: {
'gpt-4o': {
group: 'eastus',
deploymentName: 'gpt-4o',
},
},
groupMap: {
eastus: {
apiKey: '${EASTUS_API_KEY}',
instanceName: 'region-instance',
version: '2024-02-15-preview',
models: {
mockReq.config = {
endpoints: {
[EModelEndpoint.azureOpenAI]: {
titleConvo: true,
titleModel: 'gpt-4o',
titleMethod: 'structured',
titlePrompt: 'Azure instance title prompt',
streamRate: 35,
modelGroupMap: {
'gpt-4o': {
group: 'eastus',
deploymentName: 'gpt-4o',
},
},
groupMap: {
eastus: {
apiKey: '${EASTUS_API_KEY}',
instanceName: 'region-instance',
version: '2024-02-15-preview',
models: {
'gpt-4o': {
deploymentName: 'gpt-4o',
},
},
},
},
},
},
};
@@ -551,29 +591,33 @@ describe('AgentClient - titleConvo', () => {
mockAgent.endpoint = EModelEndpoint.azureOpenAI;
mockAgent.provider = EModelEndpoint.azureOpenAI;
mockAgent.model_parameters.model = 'gpt-4o-latest';
mockReq.app.locals[EModelEndpoint.azureOpenAI] = {
titleConvo: true,
titleModel: Constants.CURRENT_MODEL,
titleMethod: 'functions',
streamRate: 35,
modelGroupMap: {
'gpt-4o-latest': {
group: 'region-eastus',
deploymentName: 'gpt-4o-mini',
version: '2024-02-15-preview',
},
},
groupMap: {
'region-eastus': {
apiKey: '${EASTUS2_API_KEY}',
instanceName: 'test-instance',
version: '2024-12-01-preview',
models: {
mockReq.config = {
endpoints: {
[EModelEndpoint.azureOpenAI]: {
titleConvo: true,
titleModel: Constants.CURRENT_MODEL,
titleMethod: 'functions',
streamRate: 35,
modelGroupMap: {
'gpt-4o-latest': {
group: 'region-eastus',
deploymentName: 'gpt-4o-mini',
version: '2024-02-15-preview',
},
},
groupMap: {
'region-eastus': {
apiKey: '${EASTUS2_API_KEY}',
instanceName: 'test-instance',
version: '2024-12-01-preview',
models: {
'gpt-4o-latest': {
deploymentName: 'gpt-4o-mini',
version: '2024-02-15-preview',
},
},
},
},
},
},
};
@@ -598,56 +642,60 @@ describe('AgentClient - titleConvo', () => {
// Set up Azure endpoint
mockAgent.endpoint = EModelEndpoint.azureOpenAI;
mockAgent.provider = EModelEndpoint.azureOpenAI;
mockReq.app.locals[EModelEndpoint.azureOpenAI] = {
titleConvo: true,
titleModel: 'o1-mini',
titleMethod: 'completion',
streamRate: 35,
modelGroupMap: {
'gpt-4o': {
group: 'eastus',
deploymentName: 'gpt-4o',
},
'o1-mini': {
group: 'region-eastus',
deploymentName: 'o1-mini',
},
'codex-mini': {
group: 'codex-mini',
deploymentName: 'codex-mini',
},
},
groupMap: {
eastus: {
apiKey: '${EASTUS_API_KEY}',
instanceName: 'region-eastus',
version: '2024-02-15-preview',
models: {
mockReq.config = {
endpoints: {
[EModelEndpoint.azureOpenAI]: {
titleConvo: true,
titleModel: 'o1-mini',
titleMethod: 'completion',
streamRate: 35,
modelGroupMap: {
'gpt-4o': {
group: 'eastus',
deploymentName: 'gpt-4o',
},
},
},
'region-eastus': {
apiKey: '${EASTUS2_API_KEY}',
instanceName: 'region-eastus2',
version: '2024-12-01-preview',
models: {
'o1-mini': {
group: 'region-eastus',
deploymentName: 'o1-mini',
},
},
},
'codex-mini': {
apiKey: '${AZURE_API_KEY}',
baseURL: 'https://example.cognitiveservices.azure.com/openai/',
version: '2025-04-01-preview',
serverless: true,
models: {
'codex-mini': {
group: 'codex-mini',
deploymentName: 'codex-mini',
},
},
groupMap: {
eastus: {
apiKey: '${EASTUS_API_KEY}',
instanceName: 'region-eastus',
version: '2024-02-15-preview',
models: {
'gpt-4o': {
deploymentName: 'gpt-4o',
},
},
},
'region-eastus': {
apiKey: '${EASTUS2_API_KEY}',
instanceName: 'region-eastus2',
version: '2024-12-01-preview',
models: {
'o1-mini': {
deploymentName: 'o1-mini',
},
},
},
'codex-mini': {
apiKey: '${AZURE_API_KEY}',
baseURL: 'https://example.cognitiveservices.azure.com/openai/',
version: '2025-04-01-preview',
serverless: true,
models: {
'codex-mini': {
deploymentName: 'codex-mini',
},
},
},
},
},
},
};
@@ -679,33 +727,34 @@ describe('AgentClient - titleConvo', () => {
mockReq.body.endpoint = EModelEndpoint.azureOpenAI;
mockReq.body.model = 'gpt-4';
// Remove Azure-specific config
delete mockReq.app.locals[EModelEndpoint.azureOpenAI];
// Set 'all' config as fallback with a serverless Azure config
mockReq.app.locals.all = {
titleConvo: true,
titleModel: 'gpt-4',
titleMethod: 'structured',
titlePrompt: 'Fallback title prompt from all config',
titlePromptTemplate: 'Template: {{content}}',
modelGroupMap: {
'gpt-4': {
group: 'default-group',
deploymentName: 'gpt-4',
},
},
groupMap: {
'default-group': {
apiKey: '${AZURE_API_KEY}',
baseURL: 'https://default.openai.azure.com/',
version: '2024-02-15-preview',
serverless: true,
models: {
mockReq.config = {
endpoints: {
all: {
titleConvo: true,
titleModel: 'gpt-4',
titleMethod: 'structured',
titlePrompt: 'Fallback title prompt from all config',
titlePromptTemplate: 'Template: {{content}}',
modelGroupMap: {
'gpt-4': {
group: 'default-group',
deploymentName: 'gpt-4',
},
},
groupMap: {
'default-group': {
apiKey: '${AZURE_API_KEY}',
baseURL: 'https://default.openai.azure.com/',
version: '2024-02-15-preview',
serverless: true,
models: {
'gpt-4': {
deploymentName: 'gpt-4',
},
},
},
},
},
},
};
@@ -727,4 +776,464 @@ describe('AgentClient - titleConvo', () => {
});
});
});
describe('getOptions method - GPT-5+ model handling', () => {
let mockReq;
let mockRes;
let mockAgent;
let mockOptions;
beforeEach(() => {
jest.clearAllMocks();
mockAgent = {
id: 'agent-123',
endpoint: EModelEndpoint.openAI,
provider: EModelEndpoint.openAI,
model_parameters: {
model: 'gpt-5',
},
};
mockReq = {
app: {
locals: {},
},
user: {
id: 'user-123',
},
};
mockRes = {};
mockOptions = {
req: mockReq,
res: mockRes,
agent: mockAgent,
};
client = new AgentClient(mockOptions);
});
it('should move maxTokens to modelKwargs.max_completion_tokens for GPT-5 models', () => {
const clientOptions = {
model: 'gpt-5',
maxTokens: 2048,
temperature: 0.7,
};
// Simulate the getOptions logic that handles GPT-5+ models
if (/\bgpt-[5-9]\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
clientOptions.modelKwargs = clientOptions.modelKwargs ?? {};
clientOptions.modelKwargs.max_completion_tokens = clientOptions.maxTokens;
delete clientOptions.maxTokens;
}
expect(clientOptions.maxTokens).toBeUndefined();
expect(clientOptions.modelKwargs).toBeDefined();
expect(clientOptions.modelKwargs.max_completion_tokens).toBe(2048);
expect(clientOptions.temperature).toBe(0.7); // Other options should remain
});
it('should move maxTokens to modelKwargs.max_output_tokens for GPT-5 models with useResponsesApi', () => {
const clientOptions = {
model: 'gpt-5',
maxTokens: 2048,
temperature: 0.7,
useResponsesApi: true,
};
if (/\bgpt-[5-9]\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
clientOptions.modelKwargs = clientOptions.modelKwargs ?? {};
const paramName =
clientOptions.useResponsesApi === true ? 'max_output_tokens' : 'max_completion_tokens';
clientOptions.modelKwargs[paramName] = clientOptions.maxTokens;
delete clientOptions.maxTokens;
}
expect(clientOptions.maxTokens).toBeUndefined();
expect(clientOptions.modelKwargs).toBeDefined();
expect(clientOptions.modelKwargs.max_output_tokens).toBe(2048);
expect(clientOptions.temperature).toBe(0.7); // Other options should remain
});
it('should handle GPT-5+ models with existing modelKwargs', () => {
const clientOptions = {
model: 'gpt-6',
maxTokens: 1500,
temperature: 0.8,
modelKwargs: {
customParam: 'value',
},
};
// Simulate the getOptions logic
if (/\bgpt-[5-9]\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
clientOptions.modelKwargs = clientOptions.modelKwargs ?? {};
clientOptions.modelKwargs.max_completion_tokens = clientOptions.maxTokens;
delete clientOptions.maxTokens;
}
expect(clientOptions.maxTokens).toBeUndefined();
expect(clientOptions.modelKwargs).toEqual({
customParam: 'value',
max_completion_tokens: 1500,
});
});
it('should not modify maxTokens for non-GPT-5+ models', () => {
const clientOptions = {
model: 'gpt-4',
maxTokens: 2048,
temperature: 0.7,
};
// Simulate the getOptions logic
if (/\bgpt-[5-9]\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
clientOptions.modelKwargs = clientOptions.modelKwargs ?? {};
clientOptions.modelKwargs.max_completion_tokens = clientOptions.maxTokens;
delete clientOptions.maxTokens;
}
// Should not be modified since it's GPT-4
expect(clientOptions.maxTokens).toBe(2048);
expect(clientOptions.modelKwargs).toBeUndefined();
});
it('should handle various GPT-5+ model formats', () => {
const testCases = [
{ model: 'gpt-5', shouldTransform: true },
{ model: 'gpt-5-turbo', shouldTransform: true },
{ model: 'gpt-6', shouldTransform: true },
{ model: 'gpt-7-preview', shouldTransform: true },
{ model: 'gpt-8', shouldTransform: true },
{ model: 'gpt-9-mini', shouldTransform: true },
{ model: 'gpt-4', shouldTransform: false },
{ model: 'gpt-4o', shouldTransform: false },
{ model: 'gpt-3.5-turbo', shouldTransform: false },
{ model: 'claude-3', shouldTransform: false },
];
testCases.forEach(({ model, shouldTransform }) => {
const clientOptions = {
model,
maxTokens: 1000,
};
// Simulate the getOptions logic
if (/\bgpt-[5-9]\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
clientOptions.modelKwargs = clientOptions.modelKwargs ?? {};
clientOptions.modelKwargs.max_completion_tokens = clientOptions.maxTokens;
delete clientOptions.maxTokens;
}
if (shouldTransform) {
expect(clientOptions.maxTokens).toBeUndefined();
expect(clientOptions.modelKwargs?.max_completion_tokens).toBe(1000);
} else {
expect(clientOptions.maxTokens).toBe(1000);
expect(clientOptions.modelKwargs).toBeUndefined();
}
});
});
it('should not swap max token param for older models when using useResponsesApi', () => {
const testCases = [
{ model: 'gpt-5', shouldTransform: true },
{ model: 'gpt-5-turbo', shouldTransform: true },
{ model: 'gpt-6', shouldTransform: true },
{ model: 'gpt-7-preview', shouldTransform: true },
{ model: 'gpt-8', shouldTransform: true },
{ model: 'gpt-9-mini', shouldTransform: true },
{ model: 'gpt-4', shouldTransform: false },
{ model: 'gpt-4o', shouldTransform: false },
{ model: 'gpt-3.5-turbo', shouldTransform: false },
{ model: 'claude-3', shouldTransform: false },
];
testCases.forEach(({ model, shouldTransform }) => {
const clientOptions = {
model,
maxTokens: 1000,
useResponsesApi: true,
};
if (/\bgpt-[5-9]\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
clientOptions.modelKwargs = clientOptions.modelKwargs ?? {};
const paramName =
clientOptions.useResponsesApi === true ? 'max_output_tokens' : 'max_completion_tokens';
clientOptions.modelKwargs[paramName] = clientOptions.maxTokens;
delete clientOptions.maxTokens;
}
if (shouldTransform) {
expect(clientOptions.maxTokens).toBeUndefined();
expect(clientOptions.modelKwargs?.max_output_tokens).toBe(1000);
} else {
expect(clientOptions.maxTokens).toBe(1000);
expect(clientOptions.modelKwargs).toBeUndefined();
}
});
});
it('should not transform if maxTokens is null or undefined', () => {
const testCases = [
{ model: 'gpt-5', maxTokens: null },
{ model: 'gpt-5', maxTokens: undefined },
{ model: 'gpt-6', maxTokens: 0 }, // Should transform even if 0
];
testCases.forEach(({ model, maxTokens }, index) => {
const clientOptions = {
model,
maxTokens,
temperature: 0.7,
};
// Simulate the getOptions logic
if (/\bgpt-[5-9]\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
clientOptions.modelKwargs = clientOptions.modelKwargs ?? {};
clientOptions.modelKwargs.max_completion_tokens = clientOptions.maxTokens;
delete clientOptions.maxTokens;
}
if (index < 2) {
// null or undefined cases
expect(clientOptions.maxTokens).toBe(maxTokens);
expect(clientOptions.modelKwargs).toBeUndefined();
} else {
// 0 case - should transform
expect(clientOptions.maxTokens).toBeUndefined();
expect(clientOptions.modelKwargs?.max_completion_tokens).toBe(0);
}
});
});
});
describe('runMemory method', () => {
let client;
let mockReq;
let mockRes;
let mockAgent;
let mockOptions;
let mockProcessMemory;
beforeEach(() => {
jest.clearAllMocks();
mockAgent = {
id: 'agent-123',
endpoint: EModelEndpoint.openAI,
provider: EModelEndpoint.openAI,
model_parameters: {
model: 'gpt-4',
},
};
mockReq = {
user: {
id: 'user-123',
personalization: {
memories: true,
},
},
};
// Mock getAppConfig for memory tests
mockReq.config = {
memory: {
messageWindowSize: 3,
},
};
mockRes = {};
mockOptions = {
req: mockReq,
res: mockRes,
agent: mockAgent,
};
mockProcessMemory = jest.fn().mockResolvedValue([]);
client = new AgentClient(mockOptions);
client.processMemory = mockProcessMemory;
client.conversationId = 'convo-123';
client.responseMessageId = 'response-123';
});
it('should filter out image URLs from message content', async () => {
const { HumanMessage, AIMessage } = require('@langchain/core/messages');
const messages = [
new HumanMessage({
content: [
{
type: 'text',
text: 'What is in this image?',
},
{
type: 'image_url',
image_url: {
url: 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNkYPhfDwAChwGA60e6kgAAAABJRU5ErkJggg==',
detail: 'auto',
},
},
],
}),
new AIMessage('I can see a small red pixel in the image.'),
new HumanMessage({
content: [
{
type: 'text',
text: 'What about this one?',
},
{
type: 'image_url',
image_url: {
url: 'data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEAYABgAAD/',
detail: 'high',
},
},
],
}),
];
await client.runMemory(messages);
expect(mockProcessMemory).toHaveBeenCalledTimes(1);
const processedMessage = mockProcessMemory.mock.calls[0][0][0];
// Verify the buffer message was created
expect(processedMessage.constructor.name).toBe('HumanMessage');
expect(processedMessage.content).toContain('# Current Chat:');
// Verify that image URLs are not in the buffer string
expect(processedMessage.content).not.toContain('image_url');
expect(processedMessage.content).not.toContain('data:image');
expect(processedMessage.content).not.toContain('base64');
// Verify text content is preserved
expect(processedMessage.content).toContain('What is in this image?');
expect(processedMessage.content).toContain('I can see a small red pixel in the image.');
expect(processedMessage.content).toContain('What about this one?');
});
it('should handle messages with only text content', async () => {
const { HumanMessage, AIMessage } = require('@langchain/core/messages');
const messages = [
new HumanMessage('Hello, how are you?'),
new AIMessage('I am doing well, thank you!'),
new HumanMessage('That is great to hear.'),
];
await client.runMemory(messages);
expect(mockProcessMemory).toHaveBeenCalledTimes(1);
const processedMessage = mockProcessMemory.mock.calls[0][0][0];
expect(processedMessage.content).toContain('Hello, how are you?');
expect(processedMessage.content).toContain('I am doing well, thank you!');
expect(processedMessage.content).toContain('That is great to hear.');
});
it('should handle mixed content types correctly', async () => {
const { HumanMessage } = require('@langchain/core/messages');
const { ContentTypes } = require('librechat-data-provider');
const messages = [
new HumanMessage({
content: [
{
type: 'text',
text: 'Here is some text',
},
{
type: ContentTypes.IMAGE_URL,
image_url: {
url: 'https://example.com/image.png',
},
},
{
type: 'text',
text: ' and more text',
},
],
}),
];
await client.runMemory(messages);
expect(mockProcessMemory).toHaveBeenCalledTimes(1);
const processedMessage = mockProcessMemory.mock.calls[0][0][0];
// Should contain text parts but not image URLs
expect(processedMessage.content).toContain('Here is some text');
expect(processedMessage.content).toContain('and more text');
expect(processedMessage.content).not.toContain('example.com/image.png');
expect(processedMessage.content).not.toContain('IMAGE_URL');
});
it('should preserve original messages without mutation', async () => {
const { HumanMessage } = require('@langchain/core/messages');
const originalContent = [
{
type: 'text',
text: 'Original text',
},
{
type: 'image_url',
image_url: {
url: 'data:image/png;base64,ABC123',
},
},
];
const messages = [
new HumanMessage({
content: [...originalContent],
}),
];
await client.runMemory(messages);
// Verify original message wasn't mutated
expect(messages[0].content).toHaveLength(2);
expect(messages[0].content[1].type).toBe('image_url');
expect(messages[0].content[1].image_url.url).toBe('data:image/png;base64,ABC123');
});
it('should handle message window size correctly', async () => {
const { HumanMessage, AIMessage } = require('@langchain/core/messages');
const messages = [
new HumanMessage('Message 1'),
new AIMessage('Response 1'),
new HumanMessage('Message 2'),
new AIMessage('Response 2'),
new HumanMessage('Message 3'),
new AIMessage('Response 3'),
];
// Window size is set to 3 in mockReq
await client.runMemory(messages);
expect(mockProcessMemory).toHaveBeenCalledTimes(1);
const processedMessage = mockProcessMemory.mock.calls[0][0][0];
// Should only include last 3 messages due to window size
expect(processedMessage.content).toContain('Message 3');
expect(processedMessage.content).toContain('Response 3');
expect(processedMessage.content).not.toContain('Message 1');
expect(processedMessage.content).not.toContain('Response 1');
});
it('should return early if processMemory is not set', async () => {
const { HumanMessage } = require('@langchain/core/messages');
client.processMemory = null;
const result = await client.runMemory([new HumanMessage('Test')]);
expect(result).toBeUndefined();
expect(mockProcessMemory).not.toHaveBeenCalled();
});
});
});

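The getOptions tests above repeatedly simulate the same rule: for GPT-5 and later models, maxTokens is moved into modelKwargs as max_completion_tokens, or max_output_tokens when the Responses API is enabled. The helper below restates that rule in one place purely for illustration; the production code applies it inline rather than through a function like this.

// Sketch of the GPT-5+ token-parameter rule exercised by the tests above.
// Not a function from the codebase; the regex and parameter names match the tests.
function normalizeMaxTokens(clientOptions) {
  const isGpt5Plus = /\bgpt-[5-9]\b/i.test(clientOptions.model);
  if (!isGpt5Plus || clientOptions.maxTokens == null) {
    return clientOptions;
  }
  const paramName =
    clientOptions.useResponsesApi === true ? 'max_output_tokens' : 'max_completion_tokens';
  clientOptions.modelKwargs = clientOptions.modelKwargs ?? {};
  clientOptions.modelKwargs[paramName] = clientOptions.maxTokens;
  delete clientOptions.maxTokens;
  return clientOptions;
}

// Example: normalizeMaxTokens({ model: 'gpt-5', maxTokens: 2048 })
// -> { model: 'gpt-5', modelKwargs: { max_completion_tokens: 2048 } }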
View File

@@ -21,7 +21,7 @@ const getLogStores = require('~/cache/getLogStores');
/**
* @typedef {Object} ErrorHandlerDependencies
* @property {Express.Request} req - The Express request object
* @property {ServerRequest} req - The Express request object
* @property {Express.Response} res - The Express response object
* @property {() => ErrorHandlerContext} getContext - Function to get the current context
* @property {string} [originPath] - The origin path for the error handler
@@ -105,8 +105,6 @@ const createErrorHandler = ({ req, res, getContext, originPath = '/assistants/ch
return res.end();
}
await cache.delete(cacheKey);
// const cancelledRun = await openai.beta.threads.runs.cancel(thread_id, run_id);
// logger.debug(`[${originPath}] Cancelled run:`, cancelledRun);
} catch (error) {
logger.error(`[${originPath}] Error cancelling run`, error);
}
@@ -115,7 +113,6 @@ const createErrorHandler = ({ req, res, getContext, originPath = '/assistants/ch
let run;
try {
// run = await openai.beta.threads.runs.retrieve(thread_id, run_id);
await recordUsage({
...run.usage,
model: run.model,
@@ -128,18 +125,9 @@ const createErrorHandler = ({ req, res, getContext, originPath = '/assistants/ch
let finalEvent;
try {
// const errorContentPart = {
// text: {
// value:
// error?.message ?? 'There was an error processing your request. Please try again later.',
// },
// type: ContentTypes.ERROR,
// };
finalEvent = {
final: true,
conversation: await getConvo(req.user.id, conversationId),
// runMessages,
};
} catch (error) {
logger.error(`[${originPath}] Error finalizing error process`, error);

View File

@@ -9,6 +9,24 @@ const {
const { disposeClient, clientRegistry, requestDataMap } = require('~/server/cleanup');
const { saveMessage } = require('~/models');
function createCloseHandler(abortController) {
return function (manual) {
if (!manual) {
logger.debug('[AgentController] Request closed');
}
if (!abortController) {
return;
} else if (abortController.signal.aborted) {
return;
} else if (abortController.requestCompleted) {
return;
}
abortController.abort();
logger.debug('[AgentController] Request aborted on close');
};
}
const AgentController = async (req, res, next, initializeClient, addTitle) => {
let {
text,
@@ -31,7 +49,6 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
let userMessagePromise;
let getAbortData;
let client = null;
// Initialize as an array
let cleanupHandlers = [];
const newConvo = !conversationId;
@@ -62,9 +79,7 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
// Create a function to handle final cleanup
const performCleanup = () => {
logger.debug('[AgentController] Performing cleanup');
// Make sure cleanupHandlers is an array before iterating
if (Array.isArray(cleanupHandlers)) {
// Execute all cleanup handlers
for (const handler of cleanupHandlers) {
try {
if (typeof handler === 'function') {
@@ -105,8 +120,33 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
};
try {
/** @type {{ client: TAgentClient }} */
const result = await initializeClient({ req, res, endpointOption });
let prelimAbortController = new AbortController();
const prelimCloseHandler = createCloseHandler(prelimAbortController);
res.on('close', prelimCloseHandler);
const removePrelimHandler = (manual) => {
try {
prelimCloseHandler(manual);
res.removeListener('close', prelimCloseHandler);
} catch (e) {
logger.error('[AgentController] Error removing close listener', e);
}
};
cleanupHandlers.push(removePrelimHandler);
/** @type {{ client: TAgentClient; userMCPAuthMap?: Record<string, Record<string, string>> }} */
const result = await initializeClient({
req,
res,
endpointOption,
signal: prelimAbortController.signal,
});
if (prelimAbortController.signal?.aborted) {
prelimAbortController = null;
throw new Error('Request was aborted before initialization could complete');
} else {
prelimAbortController = null;
removePrelimHandler(true);
cleanupHandlers.pop();
}
client = result.client;
// Register client with finalization registry if available
@@ -138,22 +178,7 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
};
const { abortController, onStart } = createAbortController(req, res, getAbortData, getReqData);
// Simple handler to avoid capturing scope
const closeHandler = () => {
logger.debug('[AgentController] Request closed');
if (!abortController) {
return;
} else if (abortController.signal.aborted) {
return;
} else if (abortController.requestCompleted) {
return;
}
abortController.abort();
logger.debug('[AgentController] Request aborted on close');
};
const closeHandler = createCloseHandler(abortController);
res.on('close', closeHandler);
cleanupHandlers.push(() => {
try {
@@ -175,6 +200,7 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
abortController,
overrideParentMessageId,
isEdited: !!editedContent,
userMCPAuthMap: result.userMCPAuthMap,
responseMessageId: editedResponseMessageId,
progressOptions: {
res,
@@ -233,6 +259,26 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
);
}
}
// Edge case: sendMessage completed but abort happened during sendCompletion
// We need to ensure a final event is sent
else if (!res.headersSent && !res.finished) {
logger.debug(
'[AgentController] Handling edge case: `sendMessage` completed but aborted during `sendCompletion`',
);
const finalResponse = { ...response };
finalResponse.error = true;
sendEvent(res, {
final: true,
conversation,
title: conversation.title,
requestMessage: userMessage,
responseMessage: finalResponse,
error: { message: 'Request was aborted during completion' },
});
res.end();
}
// Save user message if needed
if (!client.skipSaveUserMessage) {

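The AgentController changes above extract createCloseHandler into a factory and add a preliminary AbortController that covers client initialization, so a client disconnect before initialization completes now aborts the work instead of leaking it. A condensed sketch of that handoff follows, assuming the same createCloseHandler factory and an arbitrary initializeClient; the wrapper shape here is illustrative only.

// Illustrative sketch of the preliminary-abort pattern added above: a temporary
// AbortController guards initialization, then is discarded once the real
// request-scoped AbortController takes over.
async function initializeWithPrelimAbort({
  req,
  res,
  endpointOption,
  initializeClient,
  createCloseHandler,
  cleanupHandlers,
}) {
  const prelimAbortController = new AbortController();
  const prelimCloseHandler = createCloseHandler(prelimAbortController);
  res.on('close', prelimCloseHandler);
  const removePrelimHandler = (manual) => {
    prelimCloseHandler(manual);
    res.removeListener('close', prelimCloseHandler);
  };
  cleanupHandlers.push(removePrelimHandler);
  const result = await initializeClient({
    req,
    res,
    endpointOption,
    signal: prelimAbortController.signal,
  });
  if (prelimAbortController.signal.aborted) {
    throw new Error('Request was aborted before initialization could complete');
  }
  // Initialization succeeded: detach the preliminary handler before the main abort controller is created.
  removePrelimHandler(true);
  cleanupHandlers.pop();
  return result;
}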
View File

@@ -5,30 +5,40 @@ const { logger } = require('@librechat/data-schemas');
const { agentCreateSchema, agentUpdateSchema } = require('@librechat/api');
const {
Tools,
Constants,
FileSources,
SystemRoles,
FileSources,
ResourceType,
AccessRoleIds,
PrincipalType,
EToolResources,
PermissionBits,
actionDelimiter,
removeNullishValues,
} = require('librechat-data-provider');
const {
getAgent,
getListAgentsByAccess,
countPromotedAgents,
revertAgentVersion,
createAgent,
updateAgent,
deleteAgent,
getListAgents,
getAgent,
} = require('~/models/Agent');
const {
findPubliclyAccessibleResources,
findAccessibleResources,
hasPublicPermission,
grantPermission,
} = require('~/server/services/PermissionService');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { resizeAvatar } = require('~/server/services/Files/images/avatar');
const { getFileStrategy } = require('~/server/utils/getFileStrategy');
const { refreshS3Url } = require('~/server/services/Files/S3/crud');
const { filterFile } = require('~/server/services/Files/process');
const { updateAction, getActions } = require('~/models/Action');
const { getCachedTools } = require('~/server/services/Config');
const { updateAgentProjects } = require('~/models/Agent');
const { getProjectByName } = require('~/models/Project');
const { revertAgentVersion } = require('~/models/Agent');
const { deleteFileByFilter } = require('~/models/File');
const { getCategoriesWithCounts } = require('~/models');
const systemTools = {
[Tools.execute_code]: true,
@@ -42,7 +52,7 @@ const systemTools = {
* @param {ServerRequest} req - The request object.
* @param {AgentCreateParams} req.body - The request body.
* @param {ServerResponse} res - The response object.
* @returns {Agent} 201 - success response - application/json
* @returns {Promise<Agent>} 201 - success response - application/json
*/
const createAgentHandler = async (req, res) => {
try {
@@ -67,6 +77,27 @@ const createAgentHandler = async (req, res) => {
}
const agent = await createAgent(agentData);
// Automatically grant owner permissions to the creator
try {
await grantPermission({
principalType: PrincipalType.USER,
principalId: userId,
resourceType: ResourceType.AGENT,
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_OWNER,
grantedBy: userId,
});
logger.debug(
`[createAgent] Granted owner permissions to user ${userId} for agent ${agent.id}`,
);
} catch (permissionError) {
logger.error(
`[createAgent] Failed to grant owner permissions for agent ${agent.id}:`,
permissionError,
);
}
res.status(201).json(agent);
} catch (error) {
if (error instanceof z.ZodError) {
@@ -89,21 +120,14 @@ const createAgentHandler = async (req, res) => {
* @returns {Promise<Agent>} 200 - success response - application/json
* @returns {Error} 404 - Agent not found
*/
const getAgentHandler = async (req, res) => {
const getAgentHandler = async (req, res, expandProperties = false) => {
try {
const id = req.params.id;
const author = req.user.id;
let query = { id, author };
const globalProject = await getProjectByName(Constants.GLOBAL_PROJECT_NAME, ['agentIds']);
if (globalProject && (globalProject.agentIds?.length ?? 0) > 0) {
query = {
$or: [{ id, $in: globalProject.agentIds }, query],
};
}
const agent = await getAgent(query);
// Permissions are validated by middleware before calling this function
// Simply load the agent by ID
const agent = await getAgent({ id });
if (!agent) {
return res.status(404).json({ error: 'Agent not found' });
@@ -120,23 +144,45 @@ const getAgentHandler = async (req, res) => {
}
agent.author = agent.author.toString();
// @deprecated - isCollaborative replaced by ACL permissions
agent.isCollaborative = !!agent.isCollaborative;
// Check if agent is public
const isPublic = await hasPublicPermission({
resourceType: ResourceType.AGENT,
resourceId: agent._id,
requiredPermissions: PermissionBits.VIEW,
});
agent.isPublic = isPublic;
if (agent.author !== author) {
delete agent.author;
}
if (!agent.isCollaborative && agent.author !== author && req.user.role !== SystemRoles.ADMIN) {
if (!expandProperties) {
// VIEW permission: Basic agent info only
return res.status(200).json({
_id: agent._id,
id: agent.id,
name: agent.name,
description: agent.description,
avatar: agent.avatar,
author: agent.author,
provider: agent.provider,
model: agent.model,
projectIds: agent.projectIds,
// @deprecated - isCollaborative replaced by ACL permissions
isCollaborative: agent.isCollaborative,
isPublic: agent.isPublic,
version: agent.version,
// Safe metadata
createdAt: agent.createdAt,
updatedAt: agent.updatedAt,
});
}
// EDIT permission: Full agent details including sensitive configuration
return res.status(200).json(agent);
} catch (error) {
logger.error('[/Agents/:id] Error retrieving agent', error);
@@ -157,42 +203,22 @@ const updateAgentHandler = async (req, res) => {
try {
const id = req.params.id;
const validatedData = agentUpdateSchema.parse(req.body);
const { projectIds, removeProjectIds, ...updateData } = removeNullishValues(validatedData);
const isAdmin = req.user.role === SystemRoles.ADMIN;
const { _id, ...updateData } = removeNullishValues(validatedData);
const existingAgent = await getAgent({ id });
if (!existingAgent) {
return res.status(404).json({ error: 'Agent not found' });
}
const isAuthor = existingAgent.author.toString() === req.user.id;
const hasEditPermission = existingAgent.isCollaborative || isAdmin || isAuthor;
if (!hasEditPermission) {
return res.status(403).json({
error: 'You do not have permission to modify this non-collaborative agent',
});
}
/** @type {boolean} */
const isProjectUpdate = (projectIds?.length ?? 0) > 0 || (removeProjectIds?.length ?? 0) > 0;
let updatedAgent =
Object.keys(updateData).length > 0
? await updateAgent({ id }, updateData, {
updatingUserId: req.user.id,
skipVersioning: isProjectUpdate,
})
: existingAgent;
if (isProjectUpdate) {
updatedAgent = await updateAgentProjects({
user: req.user,
agentId: id,
projectIds,
removeProjectIds,
});
}
// Add version count to the response
updatedAgent.version = updatedAgent.versions ? updatedAgent.versions.length : 0;
if (updatedAgent.author) {
updatedAgent.author = updatedAgent.author.toString();
@@ -318,6 +344,26 @@ const duplicateAgentHandler = async (req, res) => {
newAgentData.actions = agentActions;
const newAgent = await createAgent(newAgentData);
// Automatically grant owner permissions to the duplicator
try {
await grantPermission({
principalType: PrincipalType.USER,
principalId: userId,
resourceType: ResourceType.AGENT,
resourceId: newAgent._id,
accessRoleId: AccessRoleIds.AGENT_OWNER,
grantedBy: userId,
});
logger.debug(
`[duplicateAgent] Granted owner permissions to user ${userId} for duplicated agent ${newAgent.id}`,
);
} catch (permissionError) {
logger.error(
`[duplicateAgent] Failed to grant owner permissions for duplicated agent ${newAgent.id}:`,
permissionError,
);
}
return res.status(201).json({
agent: newAgent,
actions: newActionsList,
@@ -344,7 +390,7 @@ const deleteAgentHandler = async (req, res) => {
if (!agent) {
return res.status(404).json({ error: 'Agent not found' });
}
await deleteAgent({ id, author: req.user.id });
await deleteAgent({ id });
return res.json({ message: 'Agent deleted' });
} catch (error) {
logger.error('[/Agents/:id] Error deleting Agent', error);
@@ -353,7 +399,7 @@ const deleteAgentHandler = async (req, res) => {
};
/**
*
* Lists agents using ACL-aware permissions (ownership + explicit shares).
* @route GET /Agents
* @param {object} req - Express Request
* @param {object} req.query - Request query
@@ -362,9 +408,65 @@ const deleteAgentHandler = async (req, res) => {
*/
const getListAgentsHandler = async (req, res) => {
try {
const data = await getListAgents({
author: req.user.id,
const userId = req.user.id;
const { category, search, limit, cursor, promoted } = req.query;
let requiredPermission = req.query.requiredPermission;
if (typeof requiredPermission === 'string') {
requiredPermission = parseInt(requiredPermission, 10);
if (isNaN(requiredPermission)) {
requiredPermission = PermissionBits.VIEW;
}
} else if (typeof requiredPermission !== 'number') {
requiredPermission = PermissionBits.VIEW;
}
// Base filter
const filter = {};
// Handle category filter - only apply if category is defined
if (category !== undefined && category.trim() !== '') {
filter.category = category;
}
// Handle promoted filter - only from query param
if (promoted === '1') {
filter.is_promoted = true;
} else if (promoted === '0') {
filter.is_promoted = { $ne: true };
}
// Handle search filter
if (search && search.trim() !== '') {
filter.$or = [
{ name: { $regex: search.trim(), $options: 'i' } },
{ description: { $regex: search.trim(), $options: 'i' } },
];
}
// Get agent IDs the user has VIEW access to via ACL
const accessibleIds = await findAccessibleResources({
userId,
role: req.user.role,
resourceType: ResourceType.AGENT,
requiredPermissions: requiredPermission,
});
const publiclyAccessibleIds = await findPubliclyAccessibleResources({
resourceType: ResourceType.AGENT,
requiredPermissions: PermissionBits.VIEW,
});
// Use the new ACL-aware function
const data = await getListAgentsByAccess({
accessibleIds,
otherParams: filter,
limit,
after: cursor,
});
if (data?.data?.length) {
data.data = data.data.map((agent) => {
if (publiclyAccessibleIds.some((id) => id.equals(agent._id))) {
agent.isPublic = true;
}
return agent;
});
}
return res.json(data);
} catch (error) {
logger.error('[/Agents] Error listing Agents', error);
@@ -385,6 +487,7 @@ const getListAgentsHandler = async (req, res) => {
*/
const uploadAgentAvatarHandler = async (req, res) => {
try {
const appConfig = req.config;
filterFile({ req, file: req.file, image: true, isAvatar: true });
const { agent_id } = req.params;
if (!agent_id) {
@@ -398,7 +501,7 @@ const uploadAgentAvatarHandler = async (req, res) => {
return res.status(404).json({ error: 'Agent not found' });
}
const isAuthor = existingAgent.author.toString() === req.user.id;
const isAuthor = existingAgent.author.toString() === req.user.id.toString();
const hasEditPermission = existingAgent.isCollaborative || isAdmin || isAuthor;
if (!hasEditPermission) {
@@ -408,9 +511,7 @@ const uploadAgentAvatarHandler = async (req, res) => {
}
const buffer = await fs.readFile(req.file.path);
const fileStrategy = req.app.locals.fileStrategy;
const fileStrategy = getFileStrategy(appConfig, { isAvatar: true });
const resizedBuffer = await resizeAvatar({
userId: req.user.id,
input: buffer,
@@ -506,7 +607,7 @@ const revertAgentVersionHandler = async (req, res) => {
return res.status(404).json({ error: 'Agent not found' });
}
const isAuthor = existingAgent.author.toString() === req.user.id;
const isAuthor = existingAgent.author.toString() === req.user.id.toString();
const hasEditPermission = existingAgent.isCollaborative || isAdmin || isAuthor;
if (!hasEditPermission) {
@@ -531,7 +632,48 @@ const revertAgentVersionHandler = async (req, res) => {
res.status(500).json({ error: error.message });
}
};
/**
* Get all agent categories with counts
*
* @param {Object} _req - Express request object (unused)
* @param {Object} res - Express response object
*/
const getAgentCategories = async (_req, res) => {
try {
const categories = await getCategoriesWithCounts();
const promotedCount = await countPromotedAgents();
const formattedCategories = categories.map((category) => ({
value: category.value,
label: category.label,
count: category.agentCount,
description: category.description,
}));
if (promotedCount > 0) {
formattedCategories.unshift({
value: 'promoted',
label: 'Promoted',
count: promotedCount,
description: 'Our recommended agents',
});
}
formattedCategories.push({
value: 'all',
label: 'All',
description: 'All available agents',
});
res.status(200).json(formattedCategories);
} catch (error) {
logger.error('[/Agents/Marketplace] Error fetching agent categories:', error);
res.status(500).json({
error: 'Failed to fetch agent categories',
userMessage: 'Unable to load categories. Please refresh the page.',
suggestion: 'Try refreshing the page or check your network connection',
});
}
};
module.exports = {
createAgent: createAgentHandler,
getAgent: getAgentHandler,
@@ -541,4 +683,5 @@ module.exports = {
getListAgents: getListAgentsHandler,
uploadAgentAvatar: uploadAgentAvatarHandler,
revertAgentVersion: revertAgentVersionHandler,
getAgentCategories,
};
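
The listing handler above parses `requiredPermission` from the query string and falls back to VIEW when the value is missing or not numeric. A minimal sketch of that parsing and of how the permission bits compose (bit values 1/2/4/8 are taken from the middleware docs later in this diff; the standalone helper name is illustrative only):

// Sketch only: bit values as documented elsewhere in this diff (1=view, 2=edit, 4=delete, 8=share).
const PermissionBits = { VIEW: 1, EDIT: 2, DELETE: 4, SHARE: 8 };
const FULL_ACCESS =
  PermissionBits.VIEW | PermissionBits.EDIT | PermissionBits.DELETE | PermissionBits.SHARE; // 15

// Roughly mirrors the handler's query parsing: anything non-numeric falls back to VIEW.
function parseRequiredPermission(raw) {
  const parsed = typeof raw === 'string' ? parseInt(raw, 10) : raw;
  return typeof parsed === 'number' && !Number.isNaN(parsed) ? parsed : PermissionBits.VIEW;
}

console.log(parseRequiredPermission('15') === FULL_ACCESS); // true
console.log(parseRequiredPermission(undefined)); // 1 (VIEW)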

View File

@@ -1,5 +1,6 @@
const mongoose = require('mongoose');
const { v4: uuidv4 } = require('uuid');
const { nanoid } = require('nanoid');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { agentSchema } = require('@librechat/data-schemas');
@@ -41,7 +42,27 @@ jest.mock('~/models/File', () => ({
deleteFileByFilter: jest.fn(),
}));
const { createAgent: createAgentHandler, updateAgent: updateAgentHandler } = require('./v1');
jest.mock('~/server/services/PermissionService', () => ({
findAccessibleResources: jest.fn().mockResolvedValue([]),
findPubliclyAccessibleResources: jest.fn().mockResolvedValue([]),
grantPermission: jest.fn(),
hasPublicPermission: jest.fn().mockResolvedValue(false),
}));
jest.mock('~/models', () => ({
getCategoriesWithCounts: jest.fn(),
}));
const {
createAgent: createAgentHandler,
updateAgent: updateAgentHandler,
getListAgents: getListAgentsHandler,
} = require('./v1');
const {
findAccessibleResources,
findPubliclyAccessibleResources,
} = require('~/server/services/PermissionService');
/**
* @type {import('mongoose').Model<import('@librechat/data-schemas').IAgent>}
@@ -79,6 +100,7 @@ describe('Agent Controllers - Mass Assignment Protection', () => {
},
body: {},
params: {},
query: {},
app: {
locals: {
fileStrategy: 'local',
@@ -235,6 +257,81 @@ describe('Agent Controllers - Mass Assignment Protection', () => {
expect(agentInDb.tool_resources.invalid_resource).toBeUndefined();
});
test('should handle support_contact with empty strings', async () => {
const dataWithEmptyContact = {
provider: 'openai',
model: 'gpt-4',
name: 'Agent with Empty Contact',
support_contact: {
name: '',
email: '',
},
};
mockReq.body = dataWithEmptyContact;
await createAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(201);
const createdAgent = mockRes.json.mock.calls[0][0];
expect(createdAgent.name).toBe('Agent with Empty Contact');
expect(createdAgent.support_contact).toBeDefined();
expect(createdAgent.support_contact.name).toBe('');
expect(createdAgent.support_contact.email).toBe('');
});
test('should handle support_contact with valid email', async () => {
const dataWithValidContact = {
provider: 'openai',
model: 'gpt-4',
name: 'Agent with Valid Contact',
support_contact: {
name: 'Support Team',
email: 'support@example.com',
},
};
mockReq.body = dataWithValidContact;
await createAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(201);
const createdAgent = mockRes.json.mock.calls[0][0];
expect(createdAgent.support_contact).toBeDefined();
expect(createdAgent.support_contact.name).toBe('Support Team');
expect(createdAgent.support_contact.email).toBe('support@example.com');
});
test('should reject support_contact with invalid email', async () => {
const dataWithInvalidEmail = {
provider: 'openai',
model: 'gpt-4',
name: 'Agent with Invalid Email',
support_contact: {
name: 'Support',
email: 'not-an-email',
},
};
mockReq.body = dataWithInvalidEmail;
await createAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(400);
expect(mockRes.json).toHaveBeenCalledWith(
expect.objectContaining({
error: 'Invalid request data',
details: expect.arrayContaining([
expect.objectContaining({
path: ['support_contact', 'email'],
}),
]),
}),
);
});
test('should handle avatar validation', async () => {
const dataWithAvatar = {
provider: 'openai',
@@ -372,52 +469,6 @@ describe('Agent Controllers - Mass Assignment Protection', () => {
expect(agentInDb.id).toBe(existingAgentId);
});
test('should reject update from non-author when not collaborative', async () => {
const differentUserId = new mongoose.Types.ObjectId().toString();
mockReq.user.id = differentUserId; // Different user
mockReq.params.id = existingAgentId;
mockReq.body = {
name: 'Unauthorized Update',
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(403);
expect(mockRes.json).toHaveBeenCalledWith({
error: 'You do not have permission to modify this non-collaborative agent',
});
// Verify agent was not modified in database
const agentInDb = await Agent.findOne({ id: existingAgentId });
expect(agentInDb.name).toBe('Original Agent');
});
test('should allow update from non-author when collaborative', async () => {
// First make the agent collaborative
await Agent.updateOne({ id: existingAgentId }, { isCollaborative: true });
const differentUserId = new mongoose.Types.ObjectId().toString();
mockReq.user.id = differentUserId; // Different user
mockReq.params.id = existingAgentId;
mockReq.body = {
name: 'Collaborative Update',
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.status).not.toHaveBeenCalledWith(403);
expect(mockRes.json).toHaveBeenCalled();
const updatedAgent = mockRes.json.mock.calls[0][0];
expect(updatedAgent.name).toBe('Collaborative Update');
// Author field should be removed for non-author
expect(updatedAgent.author).toBeUndefined();
// Verify in database
const agentInDb = await Agent.findOne({ id: existingAgentId });
expect(agentInDb.name).toBe('Collaborative Update');
});
test('should allow admin to update any agent', async () => {
const adminUserId = new mongoose.Types.ObjectId().toString();
mockReq.user.id = adminUserId;
@@ -498,6 +549,28 @@ describe('Agent Controllers - Mass Assignment Protection', () => {
expect(mockRes.json).toHaveBeenCalledWith({ error: 'Agent not found' });
});
test('should include version field in update response', async () => {
mockReq.user.id = existingAgentAuthorId.toString();
mockReq.params.id = existingAgentId;
mockReq.body = {
name: 'Updated with Version Check',
};
await updateAgentHandler(mockReq, mockRes);
expect(mockRes.json).toHaveBeenCalled();
const updatedAgent = mockRes.json.mock.calls[0][0];
// Verify version field is included and is a number
expect(updatedAgent).toHaveProperty('version');
expect(typeof updatedAgent.version).toBe('number');
expect(updatedAgent.version).toBeGreaterThanOrEqual(1);
// Verify in database
const agentInDb = await Agent.findOne({ id: existingAgentId });
expect(updatedAgent.version).toBe(agentInDb.versions.length);
});
test('should handle validation errors properly', async () => {
mockReq.user.id = existingAgentAuthorId.toString();
mockReq.params.id = existingAgentId;
@@ -555,45 +628,6 @@ describe('Agent Controllers - Mass Assignment Protection', () => {
expect(agentInDb.__v).not.toBe(99);
});
test('should prevent privilege escalation through isCollaborative', async () => {
// Create a non-collaborative agent
const authorId = new mongoose.Types.ObjectId();
const agent = await Agent.create({
id: `agent_${uuidv4()}`,
name: 'Private Agent',
provider: 'openai',
model: 'gpt-4',
author: authorId,
isCollaborative: false,
versions: [
{
name: 'Private Agent',
provider: 'openai',
model: 'gpt-4',
createdAt: new Date(),
updatedAt: new Date(),
},
],
});
// Try to make it collaborative as a different user
const attackerId = new mongoose.Types.ObjectId().toString();
mockReq.user.id = attackerId;
mockReq.params.id = agent.id;
mockReq.body = {
isCollaborative: true, // Trying to escalate privileges
};
await updateAgentHandler(mockReq, mockRes);
// Should be rejected
expect(mockRes.status).toHaveBeenCalledWith(403);
// Verify in database that it's still not collaborative
const agentInDb = await Agent.findOne({ id: agent.id });
expect(agentInDb.isCollaborative).toBe(false);
});
test('should prevent author hijacking', async () => {
const originalAuthorId = new mongoose.Types.ObjectId();
const attackerId = new mongoose.Types.ObjectId();
@@ -656,4 +690,373 @@ describe('Agent Controllers - Mass Assignment Protection', () => {
expect(agentInDb.futureFeature).toBeUndefined();
});
});
describe('getListAgentsHandler - Security Tests', () => {
let userA, userB;
let agentA1, agentA2, agentA3, agentB1;
beforeEach(async () => {
await Agent.deleteMany({});
jest.clearAllMocks();
// Create two test users
userA = new mongoose.Types.ObjectId();
userB = new mongoose.Types.ObjectId();
// Create agents for User A
agentA1 = await Agent.create({
id: `agent_${nanoid(12)}`,
name: 'Agent A1',
description: 'User A agent 1',
provider: 'openai',
model: 'gpt-4',
author: userA,
versions: [
{
name: 'Agent A1',
description: 'User A agent 1',
provider: 'openai',
model: 'gpt-4',
createdAt: new Date(),
updatedAt: new Date(),
},
],
});
agentA2 = await Agent.create({
id: `agent_${nanoid(12)}`,
name: 'Agent A2',
description: 'User A agent 2',
provider: 'openai',
model: 'gpt-4',
author: userA,
versions: [
{
name: 'Agent A2',
description: 'User A agent 2',
provider: 'openai',
model: 'gpt-4',
createdAt: new Date(),
updatedAt: new Date(),
},
],
});
agentA3 = await Agent.create({
id: `agent_${nanoid(12)}`,
name: 'Agent A3',
description: 'User A agent 3',
provider: 'openai',
model: 'gpt-4',
author: userA,
category: 'productivity',
versions: [
{
name: 'Agent A3',
description: 'User A agent 3',
provider: 'openai',
model: 'gpt-4',
category: 'productivity',
createdAt: new Date(),
updatedAt: new Date(),
},
],
});
// Create an agent for User B
agentB1 = await Agent.create({
id: `agent_${nanoid(12)}`,
name: 'Agent B1',
description: 'User B agent 1',
provider: 'openai',
model: 'gpt-4',
author: userB,
versions: [
{
name: 'Agent B1',
description: 'User B agent 1',
provider: 'openai',
model: 'gpt-4',
createdAt: new Date(),
updatedAt: new Date(),
},
],
});
});
test('should return empty list when user has no accessible agents', async () => {
// User B has no permissions and no owned agents
mockReq.user.id = userB.toString();
findAccessibleResources.mockResolvedValue([]);
findPubliclyAccessibleResources.mockResolvedValue([]);
await getListAgentsHandler(mockReq, mockRes);
expect(findAccessibleResources).toHaveBeenCalledWith({
userId: userB.toString(),
role: 'USER',
resourceType: 'agent',
requiredPermissions: 1, // VIEW permission
});
expect(mockRes.json).toHaveBeenCalledWith({
object: 'list',
data: [],
first_id: null,
last_id: null,
has_more: false,
after: null,
});
});
test('should not return other users agents when accessibleIds is empty', async () => {
// User B trying to see agents with no permissions
mockReq.user.id = userB.toString();
findAccessibleResources.mockResolvedValue([]);
findPubliclyAccessibleResources.mockResolvedValue([]);
await getListAgentsHandler(mockReq, mockRes);
const response = mockRes.json.mock.calls[0][0];
expect(response.data).toHaveLength(0);
// Verify User A's agents are not included
const agentIds = response.data.map((a) => a.id);
expect(agentIds).not.toContain(agentA1.id);
expect(agentIds).not.toContain(agentA2.id);
expect(agentIds).not.toContain(agentA3.id);
});
test('should only return agents user has access to', async () => {
// User B has access to one of User A's agents
mockReq.user.id = userB.toString();
findAccessibleResources.mockResolvedValue([agentA1._id]);
findPubliclyAccessibleResources.mockResolvedValue([]);
await getListAgentsHandler(mockReq, mockRes);
const response = mockRes.json.mock.calls[0][0];
expect(response.data).toHaveLength(1);
expect(response.data[0].id).toBe(agentA1.id);
expect(response.data[0].name).toBe('Agent A1');
});
test('should return multiple accessible agents', async () => {
// User B has access to multiple agents
mockReq.user.id = userB.toString();
findAccessibleResources.mockResolvedValue([agentA1._id, agentA3._id, agentB1._id]);
findPubliclyAccessibleResources.mockResolvedValue([]);
await getListAgentsHandler(mockReq, mockRes);
const response = mockRes.json.mock.calls[0][0];
expect(response.data).toHaveLength(3);
const agentIds = response.data.map((a) => a.id);
expect(agentIds).toContain(agentA1.id);
expect(agentIds).toContain(agentA3.id);
expect(agentIds).toContain(agentB1.id);
expect(agentIds).not.toContain(agentA2.id);
});
test('should apply category filter correctly with ACL', async () => {
// User has access to all agents but filters by category
mockReq.user.id = userB.toString();
mockReq.query.category = 'productivity';
findAccessibleResources.mockResolvedValue([agentA1._id, agentA2._id, agentA3._id]);
findPubliclyAccessibleResources.mockResolvedValue([]);
await getListAgentsHandler(mockReq, mockRes);
const response = mockRes.json.mock.calls[0][0];
expect(response.data).toHaveLength(1);
expect(response.data[0].id).toBe(agentA3.id);
expect(response.data[0].category).toBe('productivity');
});
test('should apply search filter correctly with ACL', async () => {
// User has access to multiple agents but searches for specific one
mockReq.user.id = userB.toString();
mockReq.query.search = 'A2';
findAccessibleResources.mockResolvedValue([agentA1._id, agentA2._id, agentA3._id]);
findPubliclyAccessibleResources.mockResolvedValue([]);
await getListAgentsHandler(mockReq, mockRes);
const response = mockRes.json.mock.calls[0][0];
expect(response.data).toHaveLength(1);
expect(response.data[0].id).toBe(agentA2.id);
});
test('should handle pagination with ACL filtering', async () => {
// Create more agents for pagination testing
const moreAgents = [];
for (let i = 4; i <= 10; i++) {
const agent = await Agent.create({
id: `agent_${nanoid(12)}`,
name: `Agent A${i}`,
description: `User A agent ${i}`,
provider: 'openai',
model: 'gpt-4',
author: userA,
versions: [
{
name: `Agent A${i}`,
description: `User A agent ${i}`,
provider: 'openai',
model: 'gpt-4',
createdAt: new Date(),
updatedAt: new Date(),
},
],
});
moreAgents.push(agent);
}
// User has access to all agents
const allAgentIds = [agentA1, agentA2, agentA3, ...moreAgents].map((a) => a._id);
mockReq.user.id = userB.toString();
mockReq.query.limit = '5';
findAccessibleResources.mockResolvedValue(allAgentIds);
findPubliclyAccessibleResources.mockResolvedValue([]);
await getListAgentsHandler(mockReq, mockRes);
const response = mockRes.json.mock.calls[0][0];
expect(response.data).toHaveLength(5);
expect(response.has_more).toBe(true);
expect(response.after).toBeTruthy();
});
test('should mark publicly accessible agents', async () => {
// User has access to agents, some are public
mockReq.user.id = userB.toString();
findAccessibleResources.mockResolvedValue([agentA1._id, agentA2._id]);
findPubliclyAccessibleResources.mockResolvedValue([agentA2._id]);
await getListAgentsHandler(mockReq, mockRes);
const response = mockRes.json.mock.calls[0][0];
expect(response.data).toHaveLength(2);
const publicAgent = response.data.find((a) => a.id === agentA2.id);
const privateAgent = response.data.find((a) => a.id === agentA1.id);
expect(publicAgent.isPublic).toBe(true);
expect(privateAgent.isPublic).toBeUndefined();
});
test('should handle requiredPermission parameter', async () => {
// Test with different permission levels
mockReq.user.id = userB.toString();
mockReq.query.requiredPermission = '15'; // FULL_ACCESS
findAccessibleResources.mockResolvedValue([agentA1._id]);
findPubliclyAccessibleResources.mockResolvedValue([]);
await getListAgentsHandler(mockReq, mockRes);
expect(findAccessibleResources).toHaveBeenCalledWith({
userId: userB.toString(),
role: 'USER',
resourceType: 'agent',
requiredPermissions: 15,
});
const response = mockRes.json.mock.calls[0][0];
expect(response.data).toHaveLength(1);
});
test('should handle promoted filter with ACL', async () => {
// Create a promoted agent
const promotedAgent = await Agent.create({
id: `agent_${nanoid(12)}`,
name: 'Promoted Agent',
description: 'A promoted agent',
provider: 'openai',
model: 'gpt-4',
author: userA,
is_promoted: true,
versions: [
{
name: 'Promoted Agent',
description: 'A promoted agent',
provider: 'openai',
model: 'gpt-4',
is_promoted: true,
createdAt: new Date(),
updatedAt: new Date(),
},
],
});
mockReq.user.id = userB.toString();
mockReq.query.promoted = '1';
findAccessibleResources.mockResolvedValue([agentA1._id, agentA2._id, promotedAgent._id]);
findPubliclyAccessibleResources.mockResolvedValue([]);
await getListAgentsHandler(mockReq, mockRes);
const response = mockRes.json.mock.calls[0][0];
expect(response.data).toHaveLength(1);
expect(response.data[0].id).toBe(promotedAgent.id);
expect(response.data[0].is_promoted).toBe(true);
});
test('should handle errors gracefully', async () => {
mockReq.user.id = userB.toString();
findAccessibleResources.mockRejectedValue(new Error('Permission service error'));
await getListAgentsHandler(mockReq, mockRes);
expect(mockRes.status).toHaveBeenCalledWith(500);
expect(mockRes.json).toHaveBeenCalledWith({
error: 'Permission service error',
});
});
test('should respect combined filters with ACL', async () => {
// Create agents with specific attributes
const productivityPromoted = await Agent.create({
id: `agent_${nanoid(12)}`,
name: 'Productivity Pro',
description: 'A promoted productivity agent',
provider: 'openai',
model: 'gpt-4',
author: userA,
category: 'productivity',
is_promoted: true,
versions: [
{
name: 'Productivity Pro',
description: 'A promoted productivity agent',
provider: 'openai',
model: 'gpt-4',
category: 'productivity',
is_promoted: true,
createdAt: new Date(),
updatedAt: new Date(),
},
],
});
mockReq.user.id = userB.toString();
mockReq.query.category = 'productivity';
mockReq.query.promoted = '1';
findAccessibleResources.mockResolvedValue([
agentA1._id,
agentA2._id,
agentA3._id,
productivityPromoted._id,
]);
findPubliclyAccessibleResources.mockResolvedValue([]);
await getListAgentsHandler(mockReq, mockRes);
const response = mockRes.json.mock.calls[0][0];
expect(response.data).toHaveLength(1);
expect(response.data[0].id).toBe(productivityPromoted.id);
expect(response.data[0].category).toBe('productivity');
expect(response.data[0].is_promoted).toBe(true);
});
});
});

View File

@@ -1,7 +1,7 @@
const { v4 } = require('uuid');
const { sleep } = require('@librechat/agents');
const { sendEvent } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { sendEvent, getBalanceConfig } = require('@librechat/api');
const {
Time,
Constants,
@@ -47,6 +47,7 @@ const { getOpenAIClient } = require('./helpers');
* @returns {void}
*/
const chatV1 = async (req, res) => {
const appConfig = req.config;
logger.debug('[/assistants/chat/] req.body', req.body);
const {
@@ -152,7 +153,7 @@ const chatV1 = async (req, res) => {
return res.end();
}
await cache.delete(cacheKey);
const cancelledRun = await openai.beta.threads.runs.cancel(thread_id, run_id);
const cancelledRun = await openai.beta.threads.runs.cancel(run_id, { thread_id });
logger.debug('[/assistants/chat/] Cancelled run:', cancelledRun);
} catch (error) {
logger.error('[/assistants/chat/] Error cancelling run', error);
@@ -162,7 +163,7 @@ const chatV1 = async (req, res) => {
let run;
try {
run = await openai.beta.threads.runs.retrieve(thread_id, run_id);
run = await openai.beta.threads.runs.retrieve(run_id, { thread_id });
await recordUsage({
...run.usage,
model: run.model,
@@ -251,8 +252,8 @@ const chatV1 = async (req, res) => {
}
const checkBalanceBeforeRun = async () => {
const balance = req.app?.locals?.balance;
if (!balance?.enabled) {
const balanceConfig = getBalanceConfig(appConfig);
if (!balanceConfig?.enabled) {
return;
}
const transactions =
@@ -623,7 +624,7 @@ const chatV1 = async (req, res) => {
if (!response.run.usage) {
await sleep(3000);
completedRun = await openai.beta.threads.runs.retrieve(thread_id, response.run.id);
completedRun = await openai.beta.threads.runs.retrieve(response.run.id, { thread_id });
if (completedRun.usage) {
await recordUsage({
...completedRun.usage,

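The run calls in this file (and in the error handler and abortRun changes further below) move from the positional `(thread_id, run_id)` form to `(run_id, { thread_id })`, so the thread id now travels in an options object. A hedged sketch of the new call shape, with placeholder ids:

// Sketch, assuming the newer `openai` Node SDK signature this diff migrates to.
const OpenAI = require('openai');
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function cancelAndReadUsage(thread_id, run_id) {
  // run_id is the first positional argument; the thread id is passed as an option.
  const cancelledRun = await openai.beta.threads.runs.cancel(run_id, { thread_id });
  const run = await openai.beta.threads.runs.retrieve(run_id, { thread_id });
  return { cancelled: cancelledRun.status, usage: run.usage };
}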
View File

@@ -1,7 +1,7 @@
const { v4 } = require('uuid');
const { sleep } = require('@librechat/agents');
const { sendEvent } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { sendEvent, getBalanceConfig } = require('@librechat/api');
const {
Time,
Constants,
@@ -38,12 +38,13 @@ const { getOpenAIClient } = require('./helpers');
* @route POST /
* @desc Chat with an assistant
* @access Public
* @param {Express.Request} req - The request object, containing the request data.
* @param {ServerRequest} req - The request object, containing the request data.
* @param {Express.Response} res - The response object, used to send back a response.
* @returns {void}
*/
const chatV2 = async (req, res) => {
logger.debug('[/assistants/chat/] req.body', req.body);
const appConfig = req.config;
/** @type {{files: MongoFile[]}} */
const {
@@ -126,8 +127,8 @@ const chatV2 = async (req, res) => {
}
const checkBalanceBeforeRun = async () => {
const balance = req.app?.locals?.balance;
if (!balance?.enabled) {
const balanceConfig = getBalanceConfig(appConfig);
if (!balanceConfig?.enabled) {
return;
}
const transactions =
@@ -374,9 +375,9 @@ const chatV2 = async (req, res) => {
};
/** @type {undefined | TAssistantEndpoint} */
const config = req.app.locals[endpoint] ?? {};
const config = appConfig.endpoints?.[endpoint] ?? {};
/** @type {undefined | TBaseEndpoint} */
const allConfig = req.app.locals.all;
const allConfig = appConfig.endpoints?.all;
const streamRunManager = new StreamRunManager({
req,
@@ -467,7 +468,7 @@ const chatV2 = async (req, res) => {
if (!response.run.usage) {
await sleep(3000);
completedRun = await openai.beta.threads.runs.retrieve(thread_id, response.run.id);
completedRun = await openai.beta.threads.runs.retrieve(response.run.id, { thread_id });
if (completedRun.usage) {
await recordUsage({
...completedRun.usage,

View File

@@ -22,7 +22,7 @@ const getLogStores = require('~/cache/getLogStores');
/**
* @typedef {Object} ErrorHandlerDependencies
* @property {Express.Request} req - The Express request object
* @property {ServerRequest} req - The Express request object
* @property {Express.Response} res - The Express response object
* @property {() => ErrorHandlerContext} getContext - Function to get the current context
* @property {string} [originPath] - The origin path for the error handler
@@ -108,7 +108,7 @@ const createErrorHandler = ({ req, res, getContext, originPath = '/assistants/ch
return res.end();
}
await cache.delete(cacheKey);
const cancelledRun = await openai.beta.threads.runs.cancel(thread_id, run_id);
const cancelledRun = await openai.beta.threads.runs.cancel(run_id, { thread_id });
logger.debug(`[${originPath}] Cancelled run:`, cancelledRun);
} catch (error) {
logger.error(`[${originPath}] Error cancelling run`, error);
@@ -118,7 +118,7 @@ const createErrorHandler = ({ req, res, getContext, originPath = '/assistants/ch
let run;
try {
run = await openai.beta.threads.runs.retrieve(thread_id, run_id);
run = await openai.beta.threads.runs.retrieve(run_id, { thread_id });
await recordUsage({
...run.usage,
model: run.model,

View File

@@ -11,7 +11,7 @@ const { initializeClient } = require('~/server/services/Endpoints/assistants');
const { getEndpointsConfig } = require('~/server/services/Config');
/**
* @param {Express.Request} req
* @param {ServerRequest} req
* @param {string} [endpoint]
* @returns {Promise<string>}
*/
@@ -173,6 +173,16 @@ const listAssistantsForAzure = async ({ req, res, version, azureConfig = {}, que
};
};
/**
* Initializes the OpenAI client.
* @param {object} params - The parameters object.
* @param {ServerRequest} params.req - The request object.
* @param {ServerResponse} params.res - The response object.
* @param {TEndpointOption} params.endpointOption - The endpoint options.
* @param {boolean} params.initAppClient - Whether to initialize the app client.
* @param {string} params.overrideEndpoint - The endpoint to override.
* @returns {Promise<{ openai: OpenAIClient, openAIApiKey: string; client: import('~/app/clients/OpenAIClient') }>} - The initialized OpenAI client.
*/
async function getOpenAIClient({ req, res, endpointOption, initAppClient, overrideEndpoint }) {
let endpoint = overrideEndpoint ?? req.body.endpoint ?? req.query.endpoint;
const version = await getCurrentVersion(req, endpoint);
@@ -200,6 +210,7 @@ async function getOpenAIClient({ req, res, endpointOption, initAppClient, overri
* @returns {Promise<AssistantListResponse>} 200 - success response - application/json
*/
const fetchAssistants = async ({ req, res, overrideEndpoint }) => {
const appConfig = req.config;
const {
limit = 100,
order = 'desc',
@@ -220,20 +231,20 @@ const fetchAssistants = async ({ req, res, overrideEndpoint }) => {
if (endpoint === EModelEndpoint.assistants) {
({ body } = await listAllAssistants({ req, res, version, query }));
} else if (endpoint === EModelEndpoint.azureAssistants) {
const azureConfig = req.app.locals[EModelEndpoint.azureOpenAI];
const azureConfig = appConfig.endpoints?.[EModelEndpoint.azureOpenAI];
body = await listAssistantsForAzure({ req, res, version, azureConfig, query });
}
if (req.user.role === SystemRoles.ADMIN) {
return body;
} else if (!req.app.locals[endpoint]) {
} else if (!appConfig.endpoints?.[endpoint]) {
return body;
}
body.data = filterAssistants({
userId: req.user.id,
assistants: body.data,
assistantsConfig: req.app.locals[endpoint],
assistantsConfig: appConfig.endpoints?.[endpoint],
});
return body;
};

View File

@@ -197,7 +197,7 @@ const deleteAssistant = async (req, res) => {
await validateAuthor({ req, openai });
const assistant_id = req.params.id;
const deletionStatus = await openai.beta.assistants.del(assistant_id);
const deletionStatus = await openai.beta.assistants.delete(assistant_id);
if (deletionStatus?.deleted) {
await deleteAssistantActions({ req, assistant_id });
}
@@ -258,8 +258,9 @@ function filterAssistantDocs({ documents, userId, assistantsConfig = {} }) {
*/
const getAssistantDocuments = async (req, res) => {
try {
const appConfig = req.config;
const { endpoint } = req.query;
const assistantsConfig = req.app.locals[endpoint];
const assistantsConfig = appConfig.endpoints?.[endpoint];
const documents = await getAssistants(
{},
{
@@ -296,6 +297,7 @@ const getAssistantDocuments = async (req, res) => {
*/
const uploadAssistantAvatar = async (req, res) => {
try {
const appConfig = req.config;
filterFile({ req, file: req.file, image: true, isAvatar: true });
const { assistant_id } = req.params;
if (!assistant_id) {
@@ -337,7 +339,7 @@ const uploadAssistantAvatar = async (req, res) => {
const metadata = {
..._metadata,
avatar: image.filepath,
avatar_source: req.app.locals.fileStrategy,
avatar_source: appConfig.fileStrategy,
};
const promises = [];
@@ -347,7 +349,7 @@ const uploadAssistantAvatar = async (req, res) => {
{
avatar: {
filepath: image.filepath,
source: req.app.locals.fileStrategy,
source: appConfig.fileStrategy,
},
user: req.user.id,
},
@@ -365,7 +367,7 @@ const uploadAssistantAvatar = async (req, res) => {
try {
await fs.unlink(req.file.path);
logger.debug('[/:agent_id/avatar] Temp. image upload file deleted');
} catch (error) {
} catch {
logger.debug('[/:agent_id/avatar] Temp. image upload file already deleted');
}
}

View File

@@ -94,7 +94,7 @@ const createAssistant = async (req, res) => {
/**
* Modifies an assistant.
* @param {object} params
* @param {Express.Request} params.req
* @param {ServerRequest} params.req
* @param {OpenAIClient} params.openai
* @param {string} params.assistant_id
* @param {AssistantUpdateParams} params.updateData
@@ -199,7 +199,7 @@ const updateAssistant = async ({ req, openai, assistant_id, updateData }) => {
/**
* Modifies an assistant with the resource file id.
* @param {object} params
* @param {Express.Request} params.req
* @param {ServerRequest} params.req
* @param {OpenAIClient} params.openai
* @param {string} params.assistant_id
* @param {string} params.tool_resource
@@ -227,7 +227,7 @@ const addResourceFileId = async ({ req, openai, assistant_id, tool_resource, fil
/**
* Deletes a file ID from an assistant's resource.
* @param {object} params
* @param {Express.Request} params.req
* @param {ServerRequest} params.req
* @param {OpenAIClient} params.openai
* @param {string} params.assistant_id
* @param {string} [params.tool_resource]

View File

@@ -22,10 +22,11 @@ const verify2FAWithTempToken = async (req, res) => {
try {
payload = jwt.verify(tempToken, process.env.JWT_SECRET);
} catch (err) {
logger.error('Failed to verify temporary token:', err);
return res.status(401).json({ message: 'Invalid or expired temporary token' });
}
const user = await getUserById(payload.userId);
const user = await getUserById(payload.userId, '+totpSecret +backupCodes');
if (!user || !user.twoFactorEnabled) {
return res.status(400).json({ message: '2FA is not enabled for this user' });
}
@@ -42,11 +43,11 @@ const verify2FAWithTempToken = async (req, res) => {
return res.status(401).json({ message: 'Invalid 2FA code or backup code' });
}
// Prepare user data to return (omit sensitive fields).
const userData = user.toObject ? user.toObject() : { ...user };
delete userData.password;
delete userData.__v;
delete userData.password;
delete userData.totpSecret;
delete userData.backupCodes;
userData.id = user._id.toString();
const authToken = await setAuthTokens(user._id, res);

View File

@@ -35,9 +35,10 @@ const toolAccessPermType = {
*/
const verifyWebSearchAuth = async (req, res) => {
try {
const appConfig = req.config;
const userId = req.user.id;
/** @type {TCustomConfig['webSearch']} */
const webSearchConfig = req.app.locals?.webSearch || {};
const webSearchConfig = appConfig?.webSearch || {};
const result = await loadWebSearchAuth({
userId,
loadAuthValues,
@@ -110,6 +111,7 @@ const verifyToolAuth = async (req, res) => {
*/
const callTool = async (req, res) => {
try {
const appConfig = req.config;
const { toolId = '' } = req.params;
if (!fieldsMap[toolId]) {
logger.warn(`[${toolId}/call] User ${req.user.id} attempted call to invalid tool`);
@@ -155,8 +157,10 @@ const callTool = async (req, res) => {
returnMetadata: true,
processFileURL,
uploadImageBuffer,
fileStrategy: req.app.locals.fileStrategy,
},
webSearch: appConfig.webSearch,
fileStrategy: appConfig.fileStrategy,
imageOutputType: appConfig.imageOutputType,
});
const tool = loadedTools[0];

View File

@@ -8,19 +8,20 @@ const express = require('express');
const passport = require('passport');
const compression = require('compression');
const cookieParser = require('cookie-parser');
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const mongoSanitize = require('express-mongo-sanitize');
const { isEnabled, ErrorController } = require('@librechat/api');
const { connectDb, indexSync } = require('~/db');
const validateImageRequest = require('./middleware/validateImageRequest');
const { jwtLogin, ldapLogin, passportLogin } = require('~/strategies');
const errorController = require('./controllers/ErrorController');
const { updateInterfacePermissions } = require('~/models/interface');
const { checkMigrations } = require('./services/start/migration');
const initializeMCPs = require('./services/initializeMCPs');
const configureSocialLogins = require('./socialLogins');
const AppService = require('./services/AppService');
const { getAppConfig } = require('./services/Config');
const staticCache = require('./utils/staticCache');
const noIndex = require('./middleware/noIndex');
const { seedDatabase } = require('~/models');
const routes = require('./routes');
const { PORT, HOST, ALLOW_SOCIAL_LOGIN, DISABLE_COMPRESSION, TRUST_PROXY } = process.env ?? {};
@@ -46,10 +47,25 @@ const startServer = async () => {
app.disable('x-powered-by');
app.set('trust proxy', trusted_proxy);
await AppService(app);
await seedDatabase();
const indexPath = path.join(app.locals.paths.dist, 'index.html');
const indexHTML = fs.readFileSync(indexPath, 'utf8');
const appConfig = await getAppConfig();
await updateInterfacePermissions(appConfig);
const indexPath = path.join(appConfig.paths.dist, 'index.html');
let indexHTML = fs.readFileSync(indexPath, 'utf8');
// To support serving the application from a sub-directory,
// update the base href when DOMAIN_CLIENT is specified and its path is not the root
if (process.env.DOMAIN_CLIENT) {
const clientUrl = new URL(process.env.DOMAIN_CLIENT);
const baseHref = clientUrl.pathname.endsWith('/')
? clientUrl.pathname
: `${clientUrl.pathname}/`;
if (baseHref !== '/') {
logger.info(`Setting base href to ${baseHref}`);
indexHTML = indexHTML.replace(/base href="\/"/, `base href="${baseHref}"`);
}
}
app.get('/health', (_req, res) => res.status(200).send('OK'));
@@ -67,10 +83,9 @@ const startServer = async () => {
console.warn('Response compression has been disabled via DISABLE_COMPRESSION.');
}
// Serve static assets with aggressive caching
app.use(staticCache(app.locals.paths.dist));
app.use(staticCache(app.locals.paths.fonts));
app.use(staticCache(app.locals.paths.assets));
app.use(staticCache(appConfig.paths.dist));
app.use(staticCache(appConfig.paths.fonts));
app.use(staticCache(appConfig.paths.assets));
if (!ALLOW_SOCIAL_LOGIN) {
console.warn('Social logins are disabled. Set ALLOW_SOCIAL_LOGIN=true to enable them.');
@@ -117,11 +132,12 @@ const startServer = async () => {
app.use('/api/agents', routes.agents);
app.use('/api/banner', routes.banner);
app.use('/api/memories', routes.memories);
app.use('/api/permissions', routes.accessPermissions);
app.use('/api/tags', routes.tags);
app.use('/api/mcp', routes.mcp);
// Add the error controller one more time after all routes
app.use(errorController);
app.use(ErrorController);
app.use((req, res) => {
res.set({
@@ -132,7 +148,8 @@ const startServer = async () => {
const lang = req.cookies.lang || req.headers['accept-language']?.split(',')[0] || 'en-US';
const saneLang = lang.replace(/"/g, '&quot;');
const updatedIndexHtml = indexHTML.replace(/lang="en-US"/g, `lang="${saneLang}"`);
let updatedIndexHtml = indexHTML.replace(/lang="en-US"/g, `lang="${saneLang}"`);
res.type('html');
res.send(updatedIndexHtml);
});
@@ -146,7 +163,7 @@ const startServer = async () => {
logger.info(`Server listening at http://${host == '0.0.0.0' ? 'localhost' : host}:${port}`);
}
initializeMCPs(app);
initializeMCPs().then(() => checkMigrations());
});
};
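
The startup block above rewrites the `<base href>` so the SPA resolves assets correctly when served from a sub-path. A worked example of the transformation, using an assumed DOMAIN_CLIENT of https://example.com/librechat:

// Sketch of the rewrite above, with an assumed DOMAIN_CLIENT value.
const indexHTML = '<head><base href="/"></head>';
const clientUrl = new URL('https://example.com/librechat');
const baseHref = clientUrl.pathname.endsWith('/') ? clientUrl.pathname : `${clientUrl.pathname}/`;
const rewritten =
  baseHref !== '/' ? indexHTML.replace(/base href="\/"/, `base href="${baseHref}"`) : indexHTML;
console.log(rewritten); // <head><base href="/librechat/"></head>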

View File

@@ -3,9 +3,27 @@ const request = require('supertest');
const { MongoMemoryServer } = require('mongodb-memory-server');
const mongoose = require('mongoose');
jest.mock('~/server/services/Config/loadCustomConfig', () => {
return jest.fn(() => Promise.resolve({}));
});
jest.mock('~/server/services/Config', () => ({
loadCustomConfig: jest.fn(() => Promise.resolve({})),
getAppConfig: jest.fn().mockResolvedValue({
paths: {
uploads: '/tmp',
dist: '/tmp/dist',
fonts: '/tmp/fonts',
assets: '/tmp/assets',
},
fileStrategy: 'local',
imageOutputType: 'PNG',
}),
setCachedTools: jest.fn(),
}));
jest.mock('~/app/clients/tools', () => ({
createOpenAIImageTools: jest.fn(() => []),
createYouTubeTools: jest.fn(() => []),
manifestToolMap: {},
toolkits: [],
}));
describe('Server Configuration', () => {
// Increase the default timeout to allow for Mongo cleanup
@@ -31,6 +49,22 @@ describe('Server Configuration', () => {
});
beforeAll(async () => {
// Create the required directories and files for the test
const fs = require('fs');
const path = require('path');
const dirs = ['/tmp/dist', '/tmp/fonts', '/tmp/assets'];
dirs.forEach((dir) => {
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
});
fs.writeFileSync(
path.join('/tmp/dist', 'index.html'),
'<!DOCTYPE html><html><head><title>LibreChat</title></head><body><div id="root"></div></body></html>',
);
mongoServer = await MongoMemoryServer.create();
process.env.MONGO_URI = mongoServer.getUri();
process.env.PORT = '0'; // Use a random available port
@@ -92,7 +126,7 @@ async function healthCheckPoll(app, retries = 0) {
if (response.status === 200) {
return; // App is healthy
}
} catch (error) {
} catch {
// Ignore connection errors during polling
}

View File

@@ -1,6 +1,6 @@
const { logger } = require('@librechat/data-schemas');
const { countTokens, isEnabled, sendEvent } = require('@librechat/api');
const { isAssistantsEndpoint, ErrorTypes } = require('librechat-data-provider');
const { isAssistantsEndpoint, ErrorTypes, Constants } = require('librechat-data-provider');
const { truncateText, smartTruncateText } = require('~/app/clients/prompts');
const clearPendingReq = require('~/cache/clearPendingReq');
const { sendError } = require('~/server/middleware/error');
@@ -11,6 +11,10 @@ const { abortRun } = require('./abortRun');
const abortDataMap = new WeakMap();
/**
* @param {string} abortKey
* @returns {boolean}
*/
function cleanupAbortController(abortKey) {
if (!abortControllers.has(abortKey)) {
return false;
@@ -71,6 +75,20 @@ function cleanupAbortController(abortKey) {
return true;
}
/**
* @param {string} abortKey
* @returns {function(): void}
*/
function createCleanUpHandler(abortKey) {
return function () {
try {
cleanupAbortController(abortKey);
} catch {
// Ignore cleanup errors
}
};
}
async function abortMessage(req, res) {
let { abortKey, endpoint } = req.body;
@@ -172,11 +190,15 @@ const createAbortController = (req, res, getAbortData, getReqData) => {
/**
* @param {TMessage} userMessage
* @param {string} responseMessageId
* @param {boolean} [isNewConvo]
*/
const onStart = (userMessage, responseMessageId) => {
const onStart = (userMessage, responseMessageId, isNewConvo) => {
sendEvent(res, { message: userMessage, created: true });
const abortKey = userMessage?.conversationId ?? req.user.id;
const prelimAbortKey = userMessage?.conversationId ?? req.user.id;
const abortKey = isNewConvo
? `${prelimAbortKey}${Constants.COMMON_DIVIDER}${Constants.NEW_CONVO}`
: prelimAbortKey;
getReqData({ abortKey });
const prevRequest = abortControllers.get(abortKey);
const { overrideUserMessageId } = req?.body ?? {};
@@ -194,16 +216,7 @@ const createAbortController = (req, res, getAbortData, getReqData) => {
};
abortControllers.set(addedAbortKey, { abortController, ...minimalOptions });
// Use a simple function for cleanup to avoid capturing context
const cleanupHandler = () => {
try {
cleanupAbortController(addedAbortKey);
} catch (e) {
// Ignore cleanup errors
}
};
const cleanupHandler = createCleanUpHandler(addedAbortKey);
res.on('finish', cleanupHandler);
return;
}
@@ -216,16 +229,7 @@ const createAbortController = (req, res, getAbortData, getReqData) => {
};
abortControllers.set(abortKey, { abortController, ...minimalOptions });
// Use a simple function for cleanup to avoid capturing context
const cleanupHandler = () => {
try {
cleanupAbortController(abortKey);
} catch (e) {
// Ignore cleanup errors
}
};
const cleanupHandler = createCleanUpHandler(abortKey);
res.on('finish', cleanupHandler);
};
@@ -364,15 +368,7 @@ const handleAbortError = async (res, req, error, data) => {
};
}
// Create a simple callback without capturing parent scope
const callback = async () => {
try {
cleanupAbortController(conversationId);
} catch (e) {
// Ignore cleanup errors
}
};
const callback = createCleanUpHandler(conversationId);
await sendError(req, res, options, callback);
};

View File

@@ -47,7 +47,7 @@ async function abortRun(req, res) {
try {
await cache.set(cacheKey, 'cancelled', three_minutes);
const cancelledRun = await openai.beta.threads.runs.cancel(thread_id, run_id);
const cancelledRun = await openai.beta.threads.runs.cancel(run_id, { thread_id });
logger.debug('[abortRun] Cancelled run:', cancelledRun);
} catch (error) {
logger.error('[abortRun] Error cancelling run', error);
@@ -60,7 +60,7 @@ async function abortRun(req, res) {
}
try {
const run = await openai.beta.threads.runs.retrieve(thread_id, run_id);
const run = await openai.beta.threads.runs.retrieve(run_id, { thread_id });
await recordUsage({
...run.usage,
model: run.model,

View File

@@ -0,0 +1,97 @@
const { logger } = require('@librechat/data-schemas');
const { Constants, isAgentsEndpoint, ResourceType } = require('librechat-data-provider');
const { canAccessResource } = require('./canAccessResource');
const { getAgent } = require('~/models/Agent');
/**
* Agent ID resolver function for agent_id from request body
* Resolves custom agent ID (e.g., "agent_abc123") to MongoDB ObjectId
* This is used specifically for chat routes where agent_id comes from request body
*
* @param {string} agentCustomId - Custom agent ID from request body
* @returns {Promise<Object|null>} Agent document with _id field, or null if not found
*/
const resolveAgentIdFromBody = async (agentCustomId) => {
// Handle ephemeral agents - they don't need permission checks
if (agentCustomId === Constants.EPHEMERAL_AGENT_ID) {
return null; // No permission check needed for ephemeral agents
}
return await getAgent({ id: agentCustomId });
};
/**
* Middleware factory that creates middleware to check agent access permissions from request body.
* This middleware is specifically designed for chat routes where the agent_id comes from req.body
* instead of route parameters.
*
* @param {Object} options - Configuration options
* @param {number} options.requiredPermission - The permission bit required (1=view, 2=edit, 4=delete, 8=share)
* @returns {Function} Express middleware function
*
* @example
* // Basic usage for agent chat (requires VIEW permission)
* router.post('/chat',
* canAccessAgentFromBody({ requiredPermission: PermissionBits.VIEW }),
* buildEndpointOption,
* chatController
* );
*/
const canAccessAgentFromBody = (options) => {
const { requiredPermission } = options;
// Validate required options
if (!requiredPermission || typeof requiredPermission !== 'number') {
throw new Error('canAccessAgentFromBody: requiredPermission is required and must be a number');
}
return async (req, res, next) => {
try {
const { endpoint, agent_id } = req.body;
let agentId = agent_id;
if (!isAgentsEndpoint(endpoint)) {
agentId = Constants.EPHEMERAL_AGENT_ID;
}
if (!agentId) {
return res.status(400).json({
error: 'Bad Request',
message: 'agent_id is required in request body',
});
}
// Skip permission checks for ephemeral agents
if (agentId === Constants.EPHEMERAL_AGENT_ID) {
return next();
}
const agentAccessMiddleware = canAccessResource({
resourceType: ResourceType.AGENT,
requiredPermission,
resourceIdParam: 'agent_id', // This will be ignored since we use custom resolver
idResolver: () => resolveAgentIdFromBody(agentId),
});
const tempReq = {
...req,
params: {
...req.params,
agent_id: agentId,
},
};
return agentAccessMiddleware(tempReq, res, next);
} catch (error) {
logger.error('Failed to validate agent access permissions', error);
return res.status(500).json({
error: 'Internal Server Error',
message: 'Failed to validate agent access permissions',
});
}
};
};
module.exports = {
canAccessAgentFromBody,
};
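
Beyond the route wiring shown in the JSDoc above, the notable behavior is that requests whose body endpoint is not an agents endpoint resolve to the ephemeral agent id and skip the ACL lookup entirely. A short sketch of how the middleware might be mounted (require paths and the PermissionBits source are assumptions based on identifiers used elsewhere in this diff):

// Sketch only; require paths are assumptions for illustration.
const express = require('express');
const { PermissionBits } = require('librechat-data-provider');
const { canAccessAgentFromBody } = require('~/server/middleware/canAccessAgentFromBody');

const router = express.Router();
router.post(
  '/chat',
  // Non-agents endpoints pass straight through as ephemeral; real agent ids go through the ACL check.
  canAccessAgentFromBody({ requiredPermission: PermissionBits.VIEW }),
  (req, res) => res.json({ ok: true }),
);

module.exports = router;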

Some files were not shown because too many files have changed in this diff.