Compare commits

...

45 Commits

Author SHA1 Message Date
Danny Avila
9449f36ead docs: add AGENTS.md for project structure and development guidelines 2025-10-26 12:32:21 +01:00
Danny Avila
9495520f6f 📦 chore: update vite to v6.4.1 and @playwright/test to v1.56.1 (#10227)
* 📦 chore: update vite to v6.4.1

* 📦 chore: update @playwright/test to v1.56.1
2025-10-22 22:22:57 +02:00
Sebastien Bruel
87d7ee4b0e 🌐 feat: Configurable Domain and Port for Vite Dev Server (#10180) 2025-10-22 22:04:49 +02:00
Danny Avila
d8d5d59d92 ♻️ refactor: Message Cache Clearing Logic into Reusable Helper (#10226) 2025-10-22 22:02:29 +02:00
Danny Avila
e3d33fed8d 📦 chore: update @librechat/agents to v2.4.86 (#10216) 2025-10-22 16:51:58 +02:00
github-actions[bot]
cbf52eabe3 🌍 i18n: Update translation.json with latest translations (#10175)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-22 09:53:21 +02:00
Danny Avila
36f0365fd4 🧮 feat: Enhance Model Pricing Coverage and Pattern Matching (#10173)
* updated gpt-5-pro

It is available here and on OpenRouter:
https://platform.openai.com/docs/models/gpt-5-pro

* feat: Add gpt-5-pro pricing
- Implemented handling for the new gpt-5-pro model in the getValueKey function.
- Updated tests to ensure correct behavior for gpt-5-pro across various scenarios.
- Adjusted token limits and multipliers for gpt-5-pro in the tokens utility files.
- Enhanced model matching functionality to include gpt-5-pro variations.
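
A hedged sketch of the pattern matching described above (`getValueKey` and the gpt-5-pro variations are named in the commit; the structure below is illustrative, not the actual tx.js implementation):

```javascript
// Sketch only: resolve model-name variations to one canonical pricing key.
// The real getValueKey lives in the token-pricing utilities (tx.js / tokens.ts).
function getValueKey(model) {
  const name = String(model).toLowerCase();
  // variations such as "openai/gpt-5-pro-2025-10-06" should resolve to one key
  if (name.includes('gpt-5-pro')) {
    return 'gpt-5-pro';
  }
  return undefined;
}
```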

* refactor: optimize model pricing and validation logic

- Added new model pricing entries for llama2, llama3, and qwen variants in tx.js.
- Updated tokenValues to include additional models and their pricing structures.
- Implemented validation tests in tx.spec.js to ensure all models resolve correctly to pricing.
- Refactored getValueKey function to improve model matching and resolution efficiency.
- Removed outdated model entries from tokens.ts to streamline pricing management.

* fix: add missing pricing

* chore: update model pricing for qwen and gemma variants

* chore: update model pricing and add validation for context windows

- Removed outdated model entries from tx.js and updated tokenValues with new models.
- Added a test in tx.spec.js to ensure all models with pricing have corresponding context windows defined in tokens.ts.
- Introduced 'command-text' model pricing in tokens.ts to maintain consistency across model definitions.

* chore: update model names and pricing for AI21 and Amazon models

- Refactored model names in tx.js for AI21 and Amazon models to remove versioning and improve consistency.
- Updated pricing values in tokens.ts to reflect the new model names.
- Added comprehensive tests in tx.spec.js to validate pricing for both short and full model names across AI21 and Amazon models.

* feat: add pricing and validation for Claude Haiku 4.5 model

* chore: increase default max context tokens to 18000 for agents

* feat: add Qwen3 model pricing and validation tests

* chore: reorganize and update Qwen model pricing in tx.js and tokens.ts

---------

Co-authored-by: khfung <68192841+khfung@users.noreply.github.com>
2025-10-19 15:23:27 +02:00
Federico Ruggi
589f119310 🩹 fix: Wrap Attempt to Reconnect OAuth MCP Servers (#10172) 2025-10-18 05:54:05 -04:00
Marco Beretta
d41b07c0af ♻️ refactor: Replace fontSize Recoil atom with Jotai (#10171)
* fix: reapply chat font size on load

* refactor: streamline font size handling in localStorage

* fix: update matchMedia mock to accurately reflect desktop and touchscreen capabilities

* refactor: implement Jotai for font size management and initialize on app load

- Replaced Recoil with Jotai for font size state management across components.
- Added a new `fontSize` atom to handle font size changes and persist them in localStorage.
- Implemented `initializeFontSize` function to apply saved font size on app load.
- Updated relevant components to utilize the new font size atom.

---------

Co-authored-by: ddooochii <ddooochii@gmail.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-18 05:50:34 -04:00
Sean McGrath
114deecc4e 🛠️ chore: Add @radix-ui/react-tooltip to Artifact Dependencies (#10112) 2025-10-16 16:26:14 -04:00
Danny Avila
f59daaeecc 📄 feat: Context Field for Anthropic Documents (PDF) (#10148)
* fix: Remove ephemeral cache control from document encoding function

* refactor: Improve document encoding types and add file context for anthropic messages api

- Added AnthropicDocumentBlock interface to define the structure for documents from the Anthropic provider.
- Updated encodeAndFormatDocuments function to utilize the new type and include optional context for filenames.
- Refactored DocumentResult to use a union type for various document formats, improving type safety and clarity.
2025-10-16 16:24:14 -04:00
Danny Avila
bc77bbd1ba 🪂 refactor: OCR Fallback for "Upload as Text" File Process (#10126) 2025-10-15 09:20:54 -04:00
Danny Avila
c602088178 📱 fix: Improve Mobile Chat Focus Detection and Navigation (#10125) 2025-10-15 08:12:32 -04:00
Danny Avila
3d1cedb85b 📡 refactor: Flush Redis Cache Script (#10087)
* 🔧 feat: Enhance Redis Configuration and Connection Handling in Cache Flush Utility

- Added support for Redis username, password, and CA certificate.
- Improved Redis client creation to handle both cluster and single instance configurations.
- Implemented a helper function to read the Redis CA certificate with error handling.
- Updated connection logic to include timeout and error handling for Redis connections.

* refactor: flush cache if redis URI is defined
2025-10-12 04:15:18 -04:00
Marlon
bf2567bc8f 🏷️ chore: update OpenAI models list in .env.example (#10085)
Remove deprecated OpenAI models and add latest GPT-5, o3/o4, and GPT-4.1 series models based on current API offerings as of October 2025.

Removed deprecated models:
- gpt-4.5-preview (deprecated July 2025)
- o1-preview, o1-mini (deprecated July/October 2025)
- gpt-4-vision-preview (shut down December 2024)
- Dated GPT-3.5 and GPT-4 variants (consolidated into base versions)

Added new flagship models:
- GPT-5 series: gpt-5, gpt-5-mini, gpt-5-nano
- o3/o4 reasoning models: o3, o4-mini, o3-pro, o3-mini
- GPT-4.1 series: gpt-4.1, gpt-4.1-mini, gpt-4.1-nano


Reorganized list with newest models first for better discoverability.

References:
- https://platform.openai.com/docs/models
- https://platform.openai.com/docs/deprecations
2025-10-12 04:13:17 -04:00
Federico Ruggi
5ce67b5b71 📮 feat: Custom OAuth Headers Support for MCP Server Config (#10014)
* add oauth_headers field to mcp options

* wrap fetch to pass oauth headers

* fix order

* consolidate headers passing

* fix tests
2025-10-11 11:17:12 -04:00
WhammyLeaf
cbd217efae 🏷️ feat: Add Custom Deployment Labels and Annotations for Helm (#10076) 2025-10-11 11:15:35 -04:00
François Leblanc
d990fe1d5f 📖 feat: Word Wrapping for Text and Markdown Code Blocks (#10055)
Enables word wrapping for text, markdown, and plaintext code blocks
to prevent horizontal scrolling on long lines. Improves readability
for prose content in fenced code blocks.
2025-10-11 10:39:33 -04:00
Sebastien Bruel
7c9a868d34 📝 feat: Add Markdown Rendering Support for Artifacts (#10049)
* Add Markdown rendering support for artifacts

* Add tests

* Remove custom code for mermaid

* Remove unnecessary dark mode hook

* refactor: optimize mermaid dependencies

- Added support for additional MIME types in artifact templates.
- Updated mermaidDependencies to include new packages: class-variance-authority, clsx, tailwind-merge, and @radix-ui/react-slot.
- Refactored zoom and refresh icons in MermaidDiagram component for improved clarity and maintainability.

* fix: add Markdown support for artifacts rendering

* feat: support 'text/md' as an additional MIME type for Markdown artifacts

* refactor: simplify markdownDependencies structure in artifacts utility

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-11 10:37:35 -04:00
Peter Nancarrow
e9a85d5c65 🗂️ feat: Add Optional Group Field to ModelSpecs Configuration (#9996)
* feat: Add group field to modelSpecs for flexible grouping

* resolve lint issues

* fix test

* docs: enhance modelSpecs group field documentation for clarity

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-11 07:55:06 -04:00
Karthikeyan N
f61afc1124 💸 chore: Update Gemini 2.5 Flash Lite Input Pricing (#10062)
* Update prompt value for gemini-2.5-flash-lite

New input price (text, image, video): $0.10

https://ai.google.dev/gemini-api/docs/pricing

* Fix formatting of gemini-2.5-flash-lite values

Changed 0.10 to 0.1 to follow formatting standards.
2025-10-11 07:53:28 -04:00
Marco Beretta
5566cc499e 🔗 fix: Add branch-specific shared links (targetMessageId) (#10016)
* feat: Enhance shared link functionality with target message support

* refactor: Remove comment on compound index in share schema

* chore: Reorganize imports in ShareButton component for clarity

* refactor: Integrate Recoil for latest message tracking in ShareButton component

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-10 08:42:05 -04:00
Dustin Healy
ded3f2e998 🕸️ fix: Upload to Provider Filetype Filtering for DragDropModal (#10064) 2025-10-10 07:48:31 -04:00
Dustin Healy
7792fcee17 👆🏼 fix: Agent Support for Upload to Provider in DragDropModal (#10063) 2025-10-10 07:43:56 -04:00
github-actions[bot]
f931731ef8 🌍 i18n: Update translation.json with latest translations (#10070)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-10 07:43:11 -04:00
Danny Avila
07d0abc9fd 🖼️ fix: Extract File Context & Persist Attachments (#10069)
- problem: `addImageUrls` had a side effect that was previously leveraged to populate both the `ocr` message field (now `fileContext`) and `client.options.attachments`, which recorded the user's uploaded attachments on the user message when it was saved to the database and returned at the end of the request lifecycle
- solution: created dedicated handling for file context, and ensured `allFiles` is populated with non-provider attachments
2025-10-10 05:35:37 -04:00
Danny Avila
fbe341a171 refactor: Latest Message Tracking with Robust Text Key Generation (#10059)
* chore: enhance logging for latest message actions in message components

* fix: Extract previous convoId from latest text in message helpers and process hooks

- Updated `useMessageHelpers` and `useMessageProcess` to extract `convoId` from the previous text key for improved message handling.
- Refactored `getLengthAndLastTenChars` to `getLengthAndLastNChars` for better flexibility in character length retrieval.
- Introduced `getLatestContentForKey` function to streamline content extraction from messages.

* chore: Enhance logging for clearing latest messages in conversation hooks

* refactor: Update message key formatting for improved URL parameter handling

- Modified `getLatestContentForKey` to change the format from `${text}-${i}` to `${text}&i=${i}` for better URL parameter structure.
- Adjusted `getTextKey` to increase character length retrieval from 12 to 16 in `getLengthAndLastNChars` for enhanced text processing.

* refactor: Simplify convoId extraction and enhance message formatting

- Updated `useMessageHelpers` and `useMessageProcess` to extract `convoId` using a new format for improved clarity.
- Refactored `getLatestContentForKey` to streamline content formatting and ensure consistent use of `Constants.COMMON_DIVIDER` for better message structure.
- Removed redundant length and last character extraction logic from `getLengthAndLastNChars` for cleaner code.

* chore: linting

* chore: Simplify pre-commit hook by removing unnecessary lines
2025-10-10 04:22:16 -04:00
Danny Avila
20282f32c8 📦 chore: Bump nodemailer to v7.0.9 (#10045)
* chore: add build script for client package to streamline builds

* chore: update nodemailer dependency to version 7.0.9
2025-10-09 03:45:10 -04:00
José Pedro Silva
6fa3db2969 👑 feat: Add OIDC Claim-Based Admin Role Assignment (#9170)
* feat: Add support for users to be admins when logging in using OpenID

* fix: Linting issues

* fix: whitespace

* chore: add unit tests for OIDC_ADMIN_ROLE

* refactor: Replace custom property retrieval function with lodash's get for improved readability and maintainability

* feat: Enhance OpenID role extraction and error handling in setupOpenId function

- Improved role validation to check for both array and string types.
- Added detailed error messages for missing or invalid role paths in tokens.
- Expanded unit tests to cover various scenarios for nested role extraction and error handling.

* fix: Improve error handling for role extraction in OpenID strategy

- Enhanced validation to check for invalid role types (array or string).
- Updated error messages for clarity when roles are missing or of incorrect type.
- Added unit tests to cover scenarios where roles return invalid types (object, number).

* feat: Implement user role demotion in OpenID strategy when admin role is absent from token

- Added logic to demote users from 'ADMIN' to 'USER' if the admin role is not present in the token.
- Enhanced logging to capture role changes for better traceability.
- Introduced unit tests to verify the demotion behavior and ensure correct handling when admin role environment variables are not configured.

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-09 03:35:22 -04:00
Dustin Healy
ff027e8243 👨‍🔧 fix: Direct Provider Attachment Support for Agents (#10035)
* fix: show direct upload option on applicable agents

* fix: allow agent file upload handler to process direct upload files (no tool_resource)
2025-10-09 03:31:04 -04:00
MarcAmick
e9b678dd6a ⚖️ fix: Add Configurable File Size Cap for Conversation Imports (#10012)
* Check the file size of an imported conversation against a configured maximum, to prevent a large upload from bringing down the application

chore: remove non-English localization, as it needs to be added via Locize

* feat: Implement file size validation for conversation imports to prevent oversized uploads

---------

Co-authored-by: Marc Amick <MarcAmick@jhu.edu>
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-07 14:47:21 -04:00
github-actions[bot]
bb7a0274fa 🌍 i18n: Update translation.json with latest translations (#9995)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-10-07 14:14:18 -04:00
Marco Beretta
a5189052ec fix: Accessibility, UI consistency, dialog & avatar refactors (#9975)
* 🔧 refactor: Improve accessibility and styling in ChatGroupItem and FilterPrompts components

* 🔧 fix: Add button type and keyboard accessibility to dropdown menu trigger in ChatGroupItem

* 🔧 fix(757): Enhance accessibility by updating aria-labels and adding localization for prompt groups

* 🔧 fix(618): Update version to 0.3.1 and enhance accessibility in InfoHoverCard component

* 🔧 fix(618): Update aria-label in InfoHoverCard to use dynamic text prop for improved accessibility

* 🔧 fix: Enhance accessibility by updating aria-labels and roles in Conversations components

* 🔧 fix(620): Enhance accessibility by adding tabIndex to Tabs.Content components in ArtifactTabs, Settings, and Speech components

* refactor: remove RevokeKeysButton component and update related components for accessibility

- Deleted RevokeKeysButton component.
- Updated SharedLinks and General components to use Label for accessibility.
- Enhanced Personalization component with aria-labelledby and aria-describedby attributes.
- Refactored ConversationModeSwitch to use ToggleSwitch for better state management.
- Improved AutoSendTextSelector with local state management and accessibility attributes.
- Replaced Switch components with ToggleSwitch in various Speech and TTS components for consistency.
- Added aria-labelledby attributes to Dropdown components for better accessibility.
- Updated translation.json to include new localization keys and improved existing ones.
- Enhanced Slider component to support aria attributes for better accessibility.

* 🔧 fix: Enhance user feedback for API key operations with success and error messages

* 🔧 fix: Update aria-labels in Avatar component for improved localization and accessibility

* 🔧 fix: Refactor handleFile and handleDrop functions for improved readability and maintainability
2025-10-07 14:12:49 -04:00
Danny Avila
bcd97aad2f 📎 feat: Direct Provider Attachment Support for Multimodal Content (#9994)
* 📎 feat: Direct Provider Attachment Support for Multimodal Content

* 📑 feat: Anthropic Direct Provider Upload (#9072)

* feat: implement Anthropic native PDF support with document preservation

- Add comprehensive debug logging throughout PDF processing pipeline
- Refactor attachment processing to separate image and document handling
- Create distinct addImageURLs(), addDocuments(), and processAttachments() methods
- Fix critical bugs in stream handling and parameter passing
- Add streamToBuffer utility for proper stream-to-buffer conversion
- Remove api/agents submodule from repository

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* chore: remove out of scope formatting changes

* fix: stop duplication of file in chat on end of response stream

* chore: bring back file search and ocr options

* chore: localize upload to provider string in file menu

* refactor: change createMenuItems args to fit new pattern introduced by anthropic-native-pdf-support

* feat: add cache point for pdfs processed by anthropic endpoint since they are unlikely to change and should benefit from caching

* feat: combine Upload Image into Upload to Provider since they both perform direct upload and change provider upload icon to reflect multimodal upload

* feat: add citations support according to docs

* refactor: remove redundant 'document' check since documents are handled properly by formatMessage in the agents repo now

* refactor: change upload logic so anthropic endpoint isn't exempted from normal upload path using Agents for consistency with the rest of the upload logic

* fix: include width and height in return from uploadLocalFile so images are correctly identified when going through an AgentUpload in addImageURLs

* chore: remove client specific handling since the direct provider stuff is handled by the agent client

* feat: handle documents in AgentClient so no need for change to agents repo

* chore: removed unused changes

* chore: remove auto generated comments from OG commit

* feat: add logic for agents to use direct to provider uploads if supported (currently just anthropic)

* fix: reintroduce role check to fix render error because of undefined value for Content Part

* fix: actually fix render bug by using proper isCreatedByUser check and making sure our mutation of formattedMessage.content is consistent

---------

Co-authored-by: Andres Restrepo <andres@thelinuxkid.com>
Co-authored-by: Claude <noreply@anthropic.com>

📁 feat: Send Attachments Directly to Provider (OpenAI) (#9098)

* refactor: change references from direct upload to direct attach to better reflect functionality

Since we now use the base64 encoding strategy rather than the Files/File API for sending attachments directly to the provider, the upload nomenclature no longer makes sense. direct_attach better describes the different methods of sending attachments to providers anyway, even if we later introduce direct upload support.

* feat: add upload to provider option for openai (and agent) ui

* chore: move anthropic pdf validator over to packages/api

* feat: simple pdf validation according to openai docs

* feat: add provider agnostic validatePdf logic to start handling multiple endpoints

* feat: add handling for openai specific documentPart formatting

* refactor: move require statement to proper place at top of file

* chore: add in openAI endpoint for the rest of the document handling logic

* feat: add direct attach support for azureOpenAI endpoint and agents

* feat: add pdf validation for azureOpenAI endpoint

* refactor: unify all the endpoint checks with isDocumentSupportedEndpoint

* refactor: consolidate Upload to Provider vs Upload image logic for clarity

* refactor: remove anthropic from anthropic_multimodal fileType since we support multiple providers now

🗂️ feat: Send Attachments Directly to Provider (Google) (#9100)

* feat: add validation for google PDFs and add google endpoint as a document supporting endpoint

* feat: add proper pdf formatting for google endpoints (requires PR #14 in agents)

* feat: add multimodal support for google endpoint attachments

* feat: add audio file svg

* fix: refactor attachments logic so multi-attachment messages work properly

* feat: add video file svg

* fix: allows for followup questions of uploaded multimodal attachments

* fix: remove incorrect final message filtering that was breaking Attachment component rendering

fix: manually rename 'documents' to 'Documents' in git, since the change wasn't picked up due to case insensitivity in the directory name

fix: add logic so filepicker for a google agent has proper filetype filtering

🛫 refactor: Move Encoding Logic to packages/api (#9182)

* refactor: move audio encode over to TS

* refactor: audio encoding now functional in LC again

* refactor: move video encode over to TS

* refactor: move document encode over to TS

* refactor: video encoding now functional in LC again

* refactor: document encoding now functional in LC again

* fix: extend file type options in AttachFileMenu to include 'google_multimodal' and update dependency array to include agent?.provider

* feat: only accept pdfs if responses api is enabled for openai convos

chore: address ESLint comments

chore: add missing audio mimetype

* fix: type safety for message content parts and improve null handling

* chore: reorder AttachFileMenuProps for consistency and clarity

* chore: import order in AttachFileMenu

* fix: improve null handling for text parts in parseTextParts function

* fix: remove no longer used unsupported capability error message for file uploads

* fix: OpenAI Direct File Attachment Format

* fix: update encodeAndFormatDocuments to support OpenAI Responses API and enhance document result types

* refactor: broaden providers supported for documents

* feat: enhance DragDrop context and modal to support document uploads based on provider capabilities

* fix: reorder import statements for consistency in video encoding module

---------

Co-authored-by: Dustin Healy <54083382+dustinhealy@users.noreply.github.com>
2025-10-06 17:30:16 -04:00
Danny Avila
9c77f53454 🔀 refactor: Only Cleanup Meili Sync if actually Synced 2025-10-05 22:41:40 -04:00
Danny Avila
31a283a4fe 🔍 feat: Add Serper as Scraper Provider and Firecrawl Version Support (#9984)
* 🔧 chore: Update @librechat/agents to v2.4.84 in package.json and package-lock.json

* feat: Serper as new scraperProvider for Web Search and add firecrawlVersion support

* fix: TWebSearchKeys and ensure unique API keys extraction

* chore: Add build:packages script to streamline package builds
2025-10-05 20:34:05 -04:00
Danny Avila
857c054a9a 🗑️ feat: Cleanup for Orphaned MeiliSearch Documents (#9980)
- Added a new function `deleteDocumentsWithoutUserField` to remove documents lacking the user field from the specified MeiliSearch index.
- Integrated this function into the `ensureFilterableAttributes` method to clean up orphaned documents when existing messages or conversations are found without a user field.

🔧 feat: Enhance Index Synchronization Logic

- Updated `ensureFilterableAttributes` to return an object indicating whether settings were updated and if orphaned documents were found.
- Integrated orphaned document cleanup directly into the index synchronization process without forcing a full re-sync unless settings were updated.
- Improved logging for clarity on index configuration updates and orphaned document handling.

🔧 feat: Improve Flow State Management in Index Synchronization

- Refactored flow state management logic to ensure cleanup occurs after synchronization, regardless of success or error.
- Enhanced logging for flow state cleanup to provide better visibility into the synchronization process.
- Streamlined the structure of the index synchronization function for improved readability.
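
A hedged sketch of the cleanup described above (`deleteDocumentsWithoutUserField` is named in the commit; the MeiliSearch client call and filter syntax below are assumptions):

```javascript
// Sketch only: remove orphaned documents that lack a `user` field.
// Assumes `user` was made a filterable attribute via ensureFilterableAttributes.
async function deleteDocumentsWithoutUserField(index) {
  return index.deleteDocuments({ filter: 'user NOT EXISTS' });
}
```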
2025-10-05 15:54:47 -04:00
Danny Avila
c9103a1708 🤖 feat: Add Z.AI GLM Context Window & Pricing (#9979)
* fix: update @librechat/agents to v2.4.83 to handle reasoning edge case encountered with GLM models

* feat: GLM Context Window & Pricing Support

* feat: Add support for glm4 model in token values and tests
2025-10-05 09:08:29 -04:00
Danny Avila
7288449011 🫴 refactor: Add Broader Support for GPT-OSS Naming (#9978) 2025-10-05 07:02:09 -04:00
alfo-dev
7897801fbc 🧱 fix: DALL-E Proxy Bypass (#9971) 2025-10-05 06:56:21 -04:00
Danny Avila
838fb53208 🔃 refactor: Decouple Effects from AppService, move to data-schemas (#9974)
* chore: linting for `loadCustomConfig`

* refactor: decouple CDN init and variable/health checks from AppService

* refactor: move AppService to packages/data-schemas

* chore: update AppConfig import path to use data-schemas

* chore: update JsonSchemaType import path to use data-schemas

* refactor: update UserController to import webSearchKeys and redefine FunctionTool typedef

* chore: remove AppService.js

* refactor: update AppConfig interface to use Partial<TCustomConfig> and make paths and fileStrategies optional

* refactor: update checkConfig function to accept Partial<TCustomConfig>

* chore: fix types

* refactor: move handleRateLimits to startup checks as is an effect

* test: remove outdated rate limit tests from AppService.spec and add new handleRateLimits tests in checks.spec
2025-10-05 06:37:57 -04:00
Danny Avila
9ff608e6af 📦 chore: fix packages/api peer dependencies (#9973) 2025-10-04 16:43:22 -04:00
Danny Avila
1b8a0bfaee ⚙️ chore: Resolve Build Warning, Package Cleanup, Robust Temp Chat Time (#9962)
* ⚙️ chore: Resolve Build Warning and `keyvMongo` types

* 🔄 chore: Update mongodb version to ^6.14.2 in package.json and package-lock.json

* chore: remove @langchain/openai dep

* 🔄 refactor: Change log level from warn to debug for missing endpoint config

* 🔄 refactor: Improve temp chat expiration date calculation in tests and implementation
2025-10-04 01:53:37 -04:00
Federico Ruggi
c0ed738aed 🚉 feat: MCP Registry Individual Server Init (2) (#9940)
* initialize servers sequentially

* adjust for exported properties that are not nullable anymore

* use underscore separator

* mock with set

* customize init timeout via env var

* refactor for readability, use loaded conns for tool functions

* address PR comments

* clean up fire-and-forget

* fix tests
2025-10-03 16:01:34 -04:00
Theo N. Truong
0e5bb6f98c 🔄 refactor: Migrate Cache Logic to TypeScript (#9771)
* Refactor: Moved Redis cache infra logic into `packages/api`
- Moved cacheFactory and redisClients from `api/cache` into `packages/api/src/cache` so that features in `packages/api` can use cache without importing backward from the backend.
- Converted all moved files into TS with proper typing.
- Created integration tests to run against actual Redis servers for redisClients and cacheFactory.
- Added a GitHub workflow to run integration tests for the cache feature.
- Bug fix: keyvRedisClient now implements the PING feature properly.

* chore: consolidate imports in getLogStores.js

* chore: reorder imports

* chore: re-add fs-extra as dev dep.

* chore: reorder imports in cacheConfig.ts, cacheFactory.ts, and keyvMongo.ts

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-10-02 09:33:58 -04:00
311 changed files with 10410 additions and 4624 deletions

View File

@@ -196,7 +196,7 @@ GOOGLE_KEY=user_provided
#============#
OPENAI_API_KEY=user_provided
# OPENAI_MODELS=o1,o1-mini,o1-preview,gpt-4o,gpt-4.5-preview,chatgpt-4o-latest,gpt-4o-mini,gpt-3.5-turbo-0125,gpt-3.5-turbo-0301,gpt-3.5-turbo,gpt-4,gpt-4-0613,gpt-4-vision-preview,gpt-3.5-turbo-0613,gpt-3.5-turbo-16k-0613,gpt-4-0125-preview,gpt-4-turbo-preview,gpt-4-1106-preview,gpt-3.5-turbo-1106,gpt-3.5-turbo-instruct,gpt-3.5-turbo-instruct-0914,gpt-3.5-turbo-16k
# OPENAI_MODELS=gpt-5,gpt-5-codex,gpt-5-mini,gpt-5-nano,o3-pro,o3,o4-mini,gpt-4.1,gpt-4.1-mini,gpt-4.1-nano,o3-mini,o1-pro,o1,gpt-4o,gpt-4o-mini
DEBUG_OPENAI=false
@@ -459,6 +459,9 @@ OPENID_CALLBACK_URL=/oauth/openid/callback
OPENID_REQUIRED_ROLE=
OPENID_REQUIRED_ROLE_TOKEN_KIND=
OPENID_REQUIRED_ROLE_PARAMETER_PATH=
OPENID_ADMIN_ROLE=
OPENID_ADMIN_ROLE_PARAMETER_PATH=
OPENID_ADMIN_ROLE_TOKEN_KIND=
# Set to determine which user info property returned from OpenID Provider to store as the User's username
OPENID_USERNAME_CLAIM=
# Set to determine which user info property returned from OpenID Provider to store as the User's name
@@ -650,6 +653,12 @@ HELP_AND_FAQ_URL=https://librechat.ai
# Google tag manager id
#ANALYTICS_GTM_ID=user provided google tag manager id
# Limit conversation file imports to a maximum size in bytes, so that a large
# upload cannot max out the container's memory. Uncomment the line below and
# supply a file size in bytes, e.g. 250 MiB:
# CONVERSATION_IMPORT_MAX_FILE_SIZE_BYTES=262144000
#===============#
# REDIS Options #
#===============#

View File

@@ -0,0 +1,78 @@
name: Cache Integration Tests
on:
  pull_request:
    branches:
      - main
      - dev
      - release/*
    paths:
      - 'packages/api/src/cache/**'
      - 'redis-config/**'
      - '.github/workflows/cache-integration-tests.yml'
jobs:
  cache_integration_tests:
    name: Run Cache Integration Tests
    timeout-minutes: 30
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Use Node.js 20.x
        uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'npm'
      - name: Install Redis tools
        run: |
          sudo apt-get update
          sudo apt-get install -y redis-server redis-tools
      - name: Start Single Redis Instance
        run: |
          redis-server --daemonize yes --port 6379
          sleep 2
          # Verify single Redis is running
          redis-cli -p 6379 ping || exit 1
      - name: Start Redis Cluster
        working-directory: redis-config
        run: |
          chmod +x start-cluster.sh stop-cluster.sh
          ./start-cluster.sh
          sleep 10
          # Verify cluster is running
          redis-cli -p 7001 cluster info || exit 1
          redis-cli -p 7002 cluster info || exit 1
          redis-cli -p 7003 cluster info || exit 1
      - name: Install dependencies
        run: npm ci
      - name: Build packages
        run: |
          npm run build:data-provider
          npm run build:data-schemas
          npm run build:api
      - name: Run cache integration tests
        working-directory: packages/api
        env:
          NODE_ENV: test
          USE_REDIS: true
          REDIS_URI: redis://127.0.0.1:6379
          REDIS_CLUSTER_URI: redis://127.0.0.1:7001,redis://127.0.0.1:7002,redis://127.0.0.1:7003
        run: npm run test:cache:integration
      - name: Stop Redis Cluster
        if: always()
        working-directory: redis-config
        run: ./stop-cluster.sh || true
      - name: Stop Single Redis Instance
        if: always()
        run: redis-cli -p 6379 shutdown || true

View File

@@ -1,5 +1,2 @@
#!/usr/bin/env sh
set -e
. "$(dirname -- "$0")/_/husky.sh"
[ -n "$CI" ] && exit 0
npx lint-staged --config ./.husky/lint-staged.config.js

AGENTS.md (new file)
View File

@@ -0,0 +1,233 @@
# LibreChat AGENTS.md
LibreChat is a multi-provider AI chat platform featuring agents, tools, and multimodal interactions. This file provides guidance for AI coding agents working on the codebase.
## Project Overview
LibreChat is a monorepo-based full-stack application providing a unified interface for multiple AI models and providers (OpenAI, Anthropic, Google Gemini, Azure, AWS Bedrock, Groq, Mistral, and more).
## Workspace Structure
LibreChat uses npm workspaces to organize code into distinct packages with clear responsibilities:
```
LibreChat/
├── api/ # Express.js backend (CJS, transitioning to TypeScript)
├── client/ # React/Vite frontend application
└── packages/
├── api/ # @librechat/api - Backend TypeScript package
├── client/ # @librechat/client - Frontend shared components
├── data-provider/ # librechat-data-provider - Shared frontend/backend
└── data-schemas/ # @librechat/data-schemas - Backend schemas & models
```
### Workspace Responsibilities
#### `/api` - Express.js Backend
- **Language**: CommonJS JavaScript (transitioning to TypeScript via shared packages)
- **Purpose**: Main Express.js server, routes, middleware, legacy services
- **Note**: Legacy workspace; **new backend logic should go in `packages/api` instead**
- **Uses**: `librechat-data-provider`, `@librechat/api`, `@librechat/data-schemas`
#### `/client` - React Frontend Application
- **Language**: TypeScript + React
- **Purpose**: Main web application UI
- **Uses**: `librechat-data-provider`, `@librechat/client`
#### `/packages/api` - `@librechat/api`
- **Language**: TypeScript (full support)
- **Purpose**: Backend-only package for all new backend logic
- **Used by**: `/api` workspace and potentially other backend projects
- **Key Modules**: Agents, MCP, tools, file handling, endpoints, authentication, caching, middleware
- **Critical**: **All new backend logic should be coded here first and foremost for full TypeScript support**
- **Depends on**: `@librechat/data-schemas`
#### `/packages/client` - `@librechat/client`
- **Language**: TypeScript + React
- **Purpose**: Reusable React components, hooks, and utilities
- **Used by**: `/client` workspace and other LibreChat team frontend repositories
- **Exports**: Common components, hooks, SVG icons, theme utilities, localization
#### `/packages/data-provider` - `librechat-data-provider`
- **Language**: TypeScript
- **Purpose**: **App-wide shared package** used by both frontend and backend
- **Scope**: Universal - used everywhere
- **Exports**: Data services, API endpoints, type definitions, utilities, React Query hooks
- **Note**: Foundation package that all other workspaces depend on (a short import sketch follows)
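
As an illustration, backend files elsewhere in this changeset consume it by package name; a minimal sketch using exports that appear in the diffs below (`EModelEndpoint`, `FileSources`, `Time`):

```javascript
// EModelEndpoint, FileSources, and Time appear in this changeset's diffs;
// the EModelEndpoint.openAI member is an assumption for illustration.
const { EModelEndpoint, FileSources, Time } = require('librechat-data-provider');

const isOpenAI = (endpoint) => endpoint === EModelEndpoint.openAI;
const sourceOf = (file) => file.source ?? FileSources.local;
const ttl = Time.ONE_DAY; // e.g. a cache or session TTL

module.exports = { isOpenAI, sourceOf, ttl };
```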
#### `/packages/data-schemas` - `@librechat/data-schemas`
- **Language**: TypeScript
- **Purpose**: Backend-only schemas, models, and data validation
- **Used by**: Backend workspaces only (`@librechat/api`, `/api`)
- **Exports**: Mongoose models, Zod schemas, database utilities, configuration schemas
## Build System & Dependencies
### Build Order
The build system has a specific dependency chain that must be followed:
```bash
# Full frontend build command:
npm run frontend
```
**Execution order:**
1. `librechat-data-provider` - Foundation package (all depend on this)
2. `@librechat/data-schemas` - Required by `@librechat/api`
3. `@librechat/api` - Backend TypeScript package
4. `@librechat/client` - Frontend shared components
5. `/client` - Final frontend compilation
### Individual Build Commands
Packages can be built separately as needed:
```bash
npm run build:data-provider # Build librechat-data-provider
npm run build:data-schemas # Build @librechat/data-schemas
npm run build:api # Build @librechat/api
npm run build:client-package # Build @librechat/client
npm run build:client # Build /client frontend app
npm run build:packages # Build all packages (excludes /client app)
```
**Note**: Not all packages need rebuilding for every change - only rebuild when files in that specific workspace are modified.
## Development Workflow
### Running Development Servers
```bash
# Backend
npm run backend:dev # Runs /api workspace with nodemon
# Frontend
npm run frontend:dev # Runs /client with Vite dev server
# Both (not recommended for active development)
npm run dev
```
### Where to Place New Code
#### Backend Logic
- **Primary location**: `/packages/api/src/` - Full TypeScript support
- **Legacy location**: `/api/` - Only for modifying existing CJS code
- **Strategy**: Prefer `@librechat/api` for all new features, utilities, and services
#### Frontend Components
- **Reusable components**: `/packages/client/src/components/`
- **App-specific components**: `/client/src/components/`
- **Strategy**: If it could be reused in other frontend projects, put it in `@librechat/client`
#### Shared Types & Utilities
- **Universal (frontend + backend)**: `/packages/data-provider/src/`
- **Backend-only**: `/packages/data-schemas/src/` or `/packages/api/src/`
- **Frontend-only**: `/packages/client/src/`
#### Database Models & Schemas
- **Always**: `/packages/data-schemas/src/models/` or `/packages/data-schemas/src/schema/` (a hypothetical sketch follows)
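
A minimal sketch of what such a model file might look like; `Widget`, its fields, and the Zod schema are hypothetical, not actual exports of `@librechat/data-schemas`:

```javascript
// Hypothetical example: the model name and fields are illustrative only.
const mongoose = require('mongoose');
const { z } = require('zod');

// Zod schema for validating input before it reaches the database
const widgetZodSchema = z.object({
  name: z.string().min(1),
  user: z.string(),
});

const widgetSchema = new mongoose.Schema(
  {
    name: { type: String, required: true },
    user: { type: String, index: true },
  },
  { timestamps: true },
);

const Widget = mongoose.models.Widget || mongoose.model('Widget', widgetSchema);

module.exports = { Widget, widgetZodSchema };
```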
## Technology Stack
### Backend
- **Runtime**: Node.js
- **Framework**: Express.js
- **Database**: MongoDB with Mongoose ODM
- **Authentication**: Passport.js (OAuth2, OpenID, LDAP, SAML, JWT)
- **AI Integration**: LangChain, direct SDK integrations
- **Streaming**: Server-Sent Events (SSE)
### Frontend
- **Framework**: React 18
- **Build Tool**: Vite
- **State Management**: Recoil
- **Styling**: TailwindCSS
- **UI Components**: Radix UI, Headless UI
- **Data Fetching**: TanStack Query (React Query)
- **HTTP Client**: Axios
### Package Manager
- **Primary**: npm (workspaces)
- **Alternative**: pnpm (compatible), bun (experimental scripts available)
## Configuration
### Environment Variables
- `.env` - Local development (never commit)
- `docker-compose.override.yml.example` - Example for Docker environment configuration
- Validated on server startup
### librechat.yaml
Main configuration for:
- AI provider endpoints
- Model configurations
- Agent and tool settings
- Authentication options
## Code Quality & Standards
### General Guidelines
- Use TypeScript for all new code (especially in `packages/`)
- Follow existing ESLint/Prettier configurations (formatting issues will be fixed manually)
- Keep functions focused and modular
- Handle errors appropriately with proper HTTP status codes
### File Naming
- React components: `PascalCase.tsx` (e.g., `ChatInterface.tsx`)
- Utilities: `camelCase.ts` (e.g., `formatMessage.ts`)
- Test files: `*.spec.ts` or `*.test.ts`
## Key Architectural Patterns
### Multi-Provider Pattern
Abstract AI provider implementations to support multiple services uniformly (a sketch follows the list below):
- Provider services implement common interfaces
- Handle streaming via SSE
- Support both completion and chat endpoints
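
A hedged sketch of the shape of this pattern, not LibreChat's actual classes (though a `sendCompletion(payload, opts)` method does appear in BaseClient in the diffs below):

```javascript
// Sketch: every provider client exposes the same minimal surface; tokens are
// pushed to the caller as they arrive and, in the real app, relayed over SSE.
class BaseProviderClient {
  // eslint-disable-next-line no-unused-vars
  async sendCompletion(payload, opts) {
    throw new Error('Not implemented');
  }
}

class ExampleProviderClient extends BaseProviderClient {
  async sendCompletion(payload, { onProgress } = {}) {
    const tokens = ['Hello', ', ', 'world'];
    for (const token of tokens) {
      onProgress?.(token); // each token would be written out as an SSE event
    }
    return tokens.join('');
  }
}

module.exports = { BaseProviderClient, ExampleProviderClient };
```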
### Agent System
- Built on LangChain
- Agent definitions in `packages/api/src/agents/`
- Tools in `packages/api/src/tools/`
- MCP (Model Context Protocol) support in `packages/api/src/mcp/`
### Data Layer
- `librechat-data-provider` provides unified data access
- React Query for frontend data fetching (see the hook sketch after this list)
- Backend services use Mongoose models from `@librechat/data-schemas`
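
As a hedged illustration of the frontend side (the hook name and endpoint path below are hypothetical, not confirmed exports of `librechat-data-provider`):

```javascript
// Hypothetical custom hook in the style the data layer encourages.
const { useQuery } = require('@tanstack/react-query');

function useGetConversations() {
  return useQuery({
    queryKey: ['conversations'],
    queryFn: async () => {
      const res = await fetch('/api/convos'); // endpoint path is an assumption
      if (!res.ok) {
        throw new Error(`Request failed: ${res.status}`);
      }
      return res.json();
    },
  });
}

module.exports = { useGetConversations };
```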
## Testing
```bash
npm run test:api # Backend tests
npm run test:client # Frontend tests
npm run e2e # Playwright E2E tests
```
## Common Pitfalls
1. **Wrong workspace for backend code**: New backend logic belongs in `/packages/api`, not `/api`
- The `/api` workspace is still necessary as it's used to run the Express.js server, but all new logic should be written in TypeScript in `/packages/api` as much as possible. Existing logic should also be converted to TypeScript if possible.
2. **Build order matters**: Can't build `@librechat/api` before `@librechat/data-schemas`
3. **Unnecessary rebuilds**: Only rebuild packages when their source files change
4. **Import paths**: Use package names (`@librechat/api`), not relative paths across workspaces (see the snippet after this list)
5. **TypeScript vs CJS**: `/api` is still CJS; use `packages/api` for TypeScript
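
For pitfall 4, a short illustration (`getBalanceConfig` is an import that appears in this changeset's diffs):

```javascript
// Cross-workspace import: always go through the published package name.
const { getBalanceConfig } = require('@librechat/api'); // correct
// const { getBalanceConfig } = require('../../packages/api/src/...'); // avoid: relative path across workspaces
```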
## Getting Help
- **Documentation**: https://docs.librechat.ai
- **Discord**: Active community support
- **GitHub Issues**: Bug reports and feature requests
- **Discussions**: General questions and ideas
---
**Remember**: The monorepo structure exists to enforce separation of concerns. Respect workspace boundaries and build dependencies. When in doubt about where code should live, consider:
- Is it backend logic? → `packages/api/src/`
- Is it a database model? → `packages/data-schemas/src/`
- Is it shared between frontend/backend? → `packages/data-provider/src/`
- Is it a reusable React component? → `packages/client/src/`
- Is it app-specific UI? → `client/src/`

View File

@@ -1,20 +1,29 @@
const crypto = require('crypto');
const fetch = require('node-fetch');
const { logger } = require('@librechat/data-schemas');
const { getBalanceConfig } = require('@librechat/api');
const {
  supportsBalanceCheck,
  isAgentsEndpoint,
  isParamEndpoint,
  EModelEndpoint,
  getBalanceConfig,
  extractFileContext,
  encodeAndFormatAudios,
  encodeAndFormatVideos,
  encodeAndFormatDocuments,
} = require('@librechat/api');
const {
  Constants,
  ErrorTypes,
  FileSources,
  ContentTypes,
  excludedKeys,
  ErrorTypes,
  Constants,
  EModelEndpoint,
  isParamEndpoint,
  isAgentsEndpoint,
  supportsBalanceCheck,
} = require('librechat-data-provider');
const { getMessages, saveMessage, updateMessage, saveConvo, getConvo } = require('~/models');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { checkBalance } = require('~/models/balanceMethods');
const { truncateToolCallOutputs } = require('./prompts');
const countTokens = require('~/server/utils/countTokens');
const { getFiles } = require('~/models/File');
const TextStream = require('./TextStream');
@@ -1198,8 +1207,135 @@ class BaseClient {
    return await this.sendCompletion(payload, opts);
  }

  async addDocuments(message, attachments) {
    const documentResult = await encodeAndFormatDocuments(
      this.options.req,
      attachments,
      {
        provider: this.options.agent?.provider,
        useResponsesApi: this.options.agent?.model_parameters?.useResponsesApi,
      },
      getStrategyFunctions,
    );
    message.documents =
      documentResult.documents && documentResult.documents.length
        ? documentResult.documents
        : undefined;
    return documentResult.files;
  }

  async addVideos(message, attachments) {
    const videoResult = await encodeAndFormatVideos(
      this.options.req,
      attachments,
      this.options.agent.provider,
      getStrategyFunctions,
    );
    message.videos =
      videoResult.videos && videoResult.videos.length ? videoResult.videos : undefined;
    return videoResult.files;
  }

  async addAudios(message, attachments) {
    const audioResult = await encodeAndFormatAudios(
      this.options.req,
      attachments,
      this.options.agent.provider,
      getStrategyFunctions,
    );
    message.audios =
      audioResult.audios && audioResult.audios.length ? audioResult.audios : undefined;
    return audioResult.files;
  }

  /**
   * Extracts text context from attachments and sets it on the message.
   * This handles text that was already extracted from files (OCR, transcriptions, document text, etc.)
   * @param {TMessage} message - The message to add context to
   * @param {MongoFile[]} attachments - Array of file attachments
   * @returns {Promise<void>}
   */
  async addFileContextToMessage(message, attachments) {
    const fileContext = await extractFileContext({
      attachments,
      req: this.options?.req,
      tokenCountFn: (text) => countTokens(text),
    });
    if (fileContext) {
      message.fileContext = fileContext;
    }
  }

  async processAttachments(message, attachments) {
    const categorizedAttachments = {
      images: [],
      videos: [],
      audios: [],
      documents: [],
    };
    const allFiles = [];
    for (const file of attachments) {
      /** @type {FileSources} */
      const source = file.source ?? FileSources.local;
      if (source === FileSources.text) {
        allFiles.push(file);
        continue;
      }
      if (file.embedded === true || file.metadata?.fileIdentifier != null) {
        allFiles.push(file);
        continue;
      }
      if (file.type.startsWith('image/')) {
        categorizedAttachments.images.push(file);
      } else if (file.type === 'application/pdf') {
        categorizedAttachments.documents.push(file);
        allFiles.push(file);
      } else if (file.type.startsWith('video/')) {
        categorizedAttachments.videos.push(file);
        allFiles.push(file);
      } else if (file.type.startsWith('audio/')) {
        categorizedAttachments.audios.push(file);
        allFiles.push(file);
      }
    }
    const [imageFiles] = await Promise.all([
      categorizedAttachments.images.length > 0
        ? this.addImageURLs(message, categorizedAttachments.images)
        : Promise.resolve([]),
      categorizedAttachments.documents.length > 0
        ? this.addDocuments(message, categorizedAttachments.documents)
        : Promise.resolve([]),
      categorizedAttachments.videos.length > 0
        ? this.addVideos(message, categorizedAttachments.videos)
        : Promise.resolve([]),
      categorizedAttachments.audios.length > 0
        ? this.addAudios(message, categorizedAttachments.audios)
        : Promise.resolve([]),
    ]);
    allFiles.push(...imageFiles);
    const seenFileIds = new Set();
    const uniqueFiles = [];
    for (const file of allFiles) {
      if (file.file_id && !seenFileIds.has(file.file_id)) {
        seenFileIds.add(file.file_id);
        uniqueFiles.push(file);
      } else if (!file.file_id) {
        uniqueFiles.push(file);
      }
    }
    return uniqueFiles;
  }

  /**
   *
   * @param {TMessage[]} _messages
   * @returns {Promise<TMessage[]>}
   */
@@ -1248,7 +1384,8 @@ class BaseClient {
          {},
        );
        await this.addImageURLs(message, files, this.visionMode);
        await this.addFileContextToMessage(message, files);
        await this.processAttachments(message, files);
        this.message_file_map[message.messageId] = files;
        return message;

View File

@@ -43,7 +43,6 @@ const { runTitleChain } = require('./chains');
const { extractBaseURL } = require('~/utils');
const { tokenSplit } = require('./document');
const BaseClient = require('./BaseClient');
const { createLLM } = require('./llm');
class OpenAIClient extends BaseClient {
  constructor(apiKey, options = {}) {
@@ -614,65 +613,8 @@ class OpenAIClient extends BaseClient {
    return (reply ?? '').trim();
  }

  initializeLLM({
    model = openAISettings.model.default,
    modelName,
    temperature = 0.2,
    max_tokens,
    streaming,
  }) {
    const modelOptions = {
      modelName: modelName ?? model,
      temperature,
      user: this.user,
    };
    if (max_tokens) {
      modelOptions.max_tokens = max_tokens;
    }
    const configOptions = {};
    if (this.langchainProxy) {
      configOptions.basePath = this.langchainProxy;
    }
    if (this.useOpenRouter) {
      configOptions.basePath = 'https://openrouter.ai/api/v1';
      configOptions.baseOptions = {
        headers: {
          'HTTP-Referer': 'https://librechat.ai',
          'X-Title': 'LibreChat',
        },
      };
    }
    const { headers } = this.options;
    if (headers && typeof headers === 'object' && !Array.isArray(headers)) {
      configOptions.baseOptions = {
        headers: resolveHeaders({
          headers: {
            ...headers,
            ...configOptions?.baseOptions?.headers,
          },
        }),
      };
    }
    if (this.options.proxy) {
      configOptions.httpAgent = new HttpsProxyAgent(this.options.proxy);
      configOptions.httpsAgent = new HttpsProxyAgent(this.options.proxy);
    }
    const llm = createLLM({
      modelOptions,
      configOptions,
      openAIApiKey: this.apiKey,
      azure: this.azure,
      streaming,
    });
    return llm;
  initializeLLM() {
    throw new Error('Deprecated');
  }
/**

View File

@@ -1,81 +0,0 @@
const { ChatOpenAI } = require('@langchain/openai');
const { isEnabled, sanitizeModelName, constructAzureURL } = require('@librechat/api');

/**
 * Creates a new instance of a language model (LLM) for chat interactions.
 *
 * @param {Object} options - The options for creating the LLM.
 * @param {ModelOptions} options.modelOptions - The options specific to the model, including modelName, temperature, presence_penalty, frequency_penalty, and other model-related settings.
 * @param {ConfigOptions} options.configOptions - Configuration options for the API requests, including proxy settings and custom headers.
 * @param {Callbacks} [options.callbacks] - Callback functions for managing the lifecycle of the LLM, including token buffers, context, and initial message count.
 * @param {boolean} [options.streaming=false] - Determines if the LLM should operate in streaming mode.
 * @param {string} options.openAIApiKey - The API key for OpenAI, used for authentication.
 * @param {AzureOptions} [options.azure={}] - Optional Azure-specific configurations. If provided, Azure configurations take precedence over OpenAI configurations.
 *
 * @returns {ChatOpenAI} An instance of the ChatOpenAI class, configured with the provided options.
 *
 * @example
 * const llm = createLLM({
 *   modelOptions: { modelName: 'gpt-4o-mini', temperature: 0.2 },
 *   configOptions: { basePath: 'https://example.api/path' },
 *   callbacks: { onMessage: handleMessage },
 *   openAIApiKey: 'your-api-key'
 * });
 */
function createLLM({
  modelOptions,
  configOptions,
  callbacks,
  streaming = false,
  openAIApiKey,
  azure = {},
}) {
  let credentials = { openAIApiKey };
  let configuration = {
    apiKey: openAIApiKey,
    ...(configOptions.basePath && { baseURL: configOptions.basePath }),
  };

  /** @type {AzureOptions} */
  let azureOptions = {};
  if (azure) {
    const useModelName = isEnabled(process.env.AZURE_USE_MODEL_AS_DEPLOYMENT_NAME);
    credentials = {};
    configuration = {};
    azureOptions = azure;
    azureOptions.azureOpenAIApiDeploymentName = useModelName
      ? sanitizeModelName(modelOptions.modelName)
      : azureOptions.azureOpenAIApiDeploymentName;
  }

  if (azure && process.env.AZURE_OPENAI_DEFAULT_MODEL) {
    modelOptions.modelName = process.env.AZURE_OPENAI_DEFAULT_MODEL;
  }

  if (azure && configOptions.basePath) {
    const azureURL = constructAzureURL({
      baseURL: configOptions.basePath,
      azureOptions,
    });
    azureOptions.azureOpenAIBasePath = azureURL.split(
      `/${azureOptions.azureOpenAIApiDeploymentName}`,
    )[0];
  }

  return new ChatOpenAI(
    {
      streaming,
      credentials,
      configuration,
      ...azureOptions,
      ...modelOptions,
      ...credentials,
      callbacks,
    },
    configOptions,
  );
}

module.exports = createLLM;

View File

@@ -1,7 +1,5 @@
const createLLM = require('./createLLM');
const createCoherePayload = require('./createCoherePayload');

module.exports = {
  createLLM,
  createCoherePayload,
};

View File

@@ -1,31 +0,0 @@
require('dotenv').config();
const { ChatOpenAI } = require('@langchain/openai');
const { getBufferString, ConversationSummaryBufferMemory } = require('langchain/memory');

const chatPromptMemory = new ConversationSummaryBufferMemory({
  llm: new ChatOpenAI({ modelName: 'gpt-4o-mini', temperature: 0 }),
  maxTokenLimit: 10,
  returnMessages: true,
});

(async () => {
  await chatPromptMemory.saveContext({ input: 'hi my name\'s Danny' }, { output: 'whats up' });
  await chatPromptMemory.saveContext({ input: 'not much you' }, { output: 'not much' });
  await chatPromptMemory.saveContext(
    { input: 'are you excited for the olympics?' },
    { output: 'not really' },
  );

  // We can also utilize the predict_new_summary method directly.
  const messages = await chatPromptMemory.chatHistory.getMessages();
  console.log('MESSAGES\n\n');
  console.log(JSON.stringify(messages));

  const previous_summary = '';
  const predictSummary = await chatPromptMemory.predictNewSummary(messages, previous_summary);
  console.log('SUMMARY\n\n');
  console.log(JSON.stringify(getBufferString([{ role: 'system', content: predictSummary }])));

  // const { history } = await chatPromptMemory.loadMemoryVariables({});
  // console.log('HISTORY\n\n');
  // console.log(JSON.stringify(history));
})();

View File

@@ -3,6 +3,7 @@ const { EModelEndpoint, ArtifactModes } = require('librechat-data-provider');
const { generateShadcnPrompt } = require('~/app/clients/prompts/shadcn-docs/generate');
const { components } = require('~/app/clients/prompts/shadcn-docs/components');
/** @deprecated */
// eslint-disable-next-line no-unused-vars
const artifactsPromptV1 = dedent`The assistant can create and reference artifacts during conversations.
@@ -115,6 +116,7 @@ Here are some examples of correct usage of artifacts:
</assistant_response>
</example>
</examples>`;
const artifactsPrompt = dedent`The assistant can create and reference artifacts during conversations.
Artifacts are for substantial, self-contained content that users might modify or reuse, displayed in a separate UI window for clarity.
@@ -165,6 +167,10 @@ Artifacts are for substantial, self-contained content that users might modify or
  - SVG: "image/svg+xml"
    - The user interface will render the Scalable Vector Graphics (SVG) image within the artifact tags.
    - The assistant should specify the viewbox of the SVG rather than defining a width/height
  - Markdown: "text/markdown" or "text/md"
    - The user interface will render Markdown content placed within the artifact tags.
    - Supports standard Markdown syntax including headers, lists, links, images, code blocks, tables, and more.
    - Both "text/markdown" and "text/md" are accepted as valid MIME types for Markdown content.
  - Mermaid Diagrams: "application/vnd.mermaid"
    - The user interface will render Mermaid diagrams placed within the artifact tags.
  - React Components: "application/vnd.react"
@@ -366,6 +372,10 @@ Artifacts are for substantial, self-contained content that users might modify or
  - SVG: "image/svg+xml"
    - The user interface will render the Scalable Vector Graphics (SVG) image within the artifact tags.
    - The assistant should specify the viewbox of the SVG rather than defining a width/height
  - Markdown: "text/markdown" or "text/md"
    - The user interface will render Markdown content placed within the artifact tags.
    - Supports standard Markdown syntax including headers, lists, links, images, code blocks, tables, and more.
    - Both "text/markdown" and "text/md" are accepted as valid MIME types for Markdown content.
  - Mermaid Diagrams: "application/vnd.mermaid"
    - The user interface will render Mermaid diagrams placed within the artifact tags.
  - React Components: "application/vnd.react"

View File

@@ -1,9 +1,8 @@
const { z } = require('zod');
const path = require('path');
const OpenAI = require('openai');
const fetch = require('node-fetch');
const { v4: uuidv4 } = require('uuid');
const { ProxyAgent } = require('undici');
const { ProxyAgent, fetch } = require('undici');
const { Tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { getImageBasename } = require('@librechat/api');

View File

@@ -30,7 +30,6 @@ jest.mock('~/server/services/Config', () => ({
  }),
}));
const { BaseLLM } = require('@langchain/openai');
const { Calculator } = require('@langchain/community/tools/calculator');
const { User } = require('~/db/models');
@@ -172,7 +171,6 @@ describe('Tool Handlers', () => {
  beforeAll(async () => {
    const toolMap = await loadTools({
      user: fakeUser._id,
      model: BaseLLM,
      tools: sampleTools,
      returnMap: true,
      useSpecs: true,
@@ -266,7 +264,6 @@ describe('Tool Handlers', () => {
    it('returns an empty object when no tools are requested', async () => {
      toolFunctions = await loadTools({
        user: fakeUser._id,
        model: BaseLLM,
        returnMap: true,
        useSpecs: true,
      });
@@ -276,7 +273,6 @@ describe('Tool Handlers', () => {
      process.env.SD_WEBUI_URL = mockCredential;
      toolFunctions = await loadTools({
        user: fakeUser._id,
        model: BaseLLM,
        tools: ['stable-diffusion'],
        functions: true,
        returnMap: true,

View File

@@ -1,108 +0,0 @@
const KeyvRedis = require('@keyv/redis').default;
const { Keyv } = require('keyv');
const { RedisStore } = require('rate-limit-redis');
const { Time } = require('librechat-data-provider');
const { logger } = require('@librechat/data-schemas');
const { RedisStore: ConnectRedis } = require('connect-redis');
const MemoryStore = require('memorystore')(require('express-session'));
const { keyvRedisClient, ioredisClient, GLOBAL_PREFIX_SEPARATOR } = require('./redisClients');
const { cacheConfig } = require('./cacheConfig');
const { violationFile } = require('./keyvFiles');
/**
* Creates a cache instance using Redis or a fallback store. Suitable for general caching needs.
* @param {string} namespace - The cache namespace.
* @param {number} [ttl] - Time to live for cache entries.
* @param {object} [fallbackStore] - Optional fallback store if Redis is not used.
* @returns {Keyv} Cache instance.
*/
const standardCache = (namespace, ttl = undefined, fallbackStore = undefined) => {
if (
cacheConfig.USE_REDIS &&
!cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES?.includes(namespace)
) {
try {
const keyvRedis = new KeyvRedis(keyvRedisClient);
const cache = new Keyv(keyvRedis, { namespace, ttl });
keyvRedis.namespace = cacheConfig.REDIS_KEY_PREFIX;
keyvRedis.keyPrefixSeparator = GLOBAL_PREFIX_SEPARATOR;
cache.on('error', (err) => {
logger.error(`Cache error in namespace ${namespace}:`, err);
});
return cache;
} catch (err) {
logger.error(`Failed to create Redis cache for namespace ${namespace}:`, err);
throw err;
}
}
if (fallbackStore) return new Keyv({ store: fallbackStore, namespace, ttl });
return new Keyv({ namespace, ttl });
};
/**
* Creates a cache instance for storing violation data.
* Uses a file-based fallback store if Redis is not enabled.
* @param {string} namespace - The cache namespace for violations.
* @param {number} [ttl] - Time to live for cache entries.
* @returns {Keyv} Cache instance for violations.
*/
const violationCache = (namespace, ttl = undefined) => {
return standardCache(`violations:${namespace}`, ttl, violationFile);
};
/**
* Creates a session cache instance using Redis or in-memory store.
* @param {string} namespace - The session namespace.
* @param {number} [ttl] - Time to live for session entries.
* @returns {MemoryStore | ConnectRedis} Session store instance.
*/
const sessionCache = (namespace, ttl = undefined) => {
namespace = namespace.endsWith(':') ? namespace : `${namespace}:`;
if (!cacheConfig.USE_REDIS) return new MemoryStore({ ttl, checkPeriod: Time.ONE_DAY });
const store = new ConnectRedis({ client: ioredisClient, ttl, prefix: namespace });
if (ioredisClient) {
ioredisClient.on('error', (err) => {
logger.error(`Session store Redis error for namespace ${namespace}:`, err);
});
}
return store;
};
/**
* Creates a rate limiter cache using Redis.
* @param {string} prefix - The key prefix for rate limiting.
* @returns {RedisStore|undefined} RedisStore instance or undefined if Redis is not used.
*/
const limiterCache = (prefix) => {
if (!prefix) throw new Error('prefix is required');
if (!cacheConfig.USE_REDIS) return undefined;
prefix = prefix.endsWith(':') ? prefix : `${prefix}:`;
try {
if (!ioredisClient) {
logger.warn(`Redis client not available for rate limiter with prefix ${prefix}`);
return undefined;
}
return new RedisStore({ sendCommand, prefix });
} catch (err) {
logger.error(`Failed to create Redis rate limiter for prefix ${prefix}:`, err);
return undefined;
}
};
const sendCommand = (...args) => {
if (!ioredisClient) {
logger.warn('Redis client not available for command execution');
return Promise.reject(new Error('Redis client not available'));
}
return ioredisClient.call(...args).catch((err) => {
logger.error('Redis command execution failed:', err);
throw err;
});
};
module.exports = { standardCache, sessionCache, violationCache, limiterCache };
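For orientation, a minimal consumer-side sketch of these four factories (assuming express, express-session, and express-rate-limit as the consumers; the wiring below is illustrative, not lifted from the app):

const express = require('express');
const session = require('express-session');
const rateLimit = require('express-rate-limit');
const { standardCache, sessionCache, violationCache, limiterCache } = require('./cacheFactory');

const app = express();

// General-purpose Keyv cache; Redis-backed when cacheConfig.USE_REDIS is set.
const tokenCache = standardCache('tokens', 60 * 1000);

// Violation data lands under the `violations:` prefix, with a file-based fallback store.
const loginViolations = violationCache('logins');

// connect-redis store when Redis is enabled, otherwise an in-memory session store.
app.use(
  session({
    store: sessionCache('sessions'),
    secret: process.env.SESSION_SECRET || 'dev-secret', // placeholder secret for the sketch
    resave: false,
    saveUninitialized: false,
  }),
);

// limiterCache returns undefined without Redis, so express-rate-limit
// falls back to its default in-memory store.
app.use(rateLimit({ windowMs: 60 * 1000, max: 100, store: limiterCache('api') }));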


@@ -1,432 +0,0 @@
const { Time } = require('librechat-data-provider');
// Mock dependencies first
const mockKeyvRedis = {
namespace: '',
keyPrefixSeparator: '',
};
const mockKeyv = jest.fn().mockReturnValue({
mock: 'keyv',
on: jest.fn(),
});
const mockConnectRedis = jest.fn().mockReturnValue({ mock: 'connectRedis' });
const mockMemoryStore = jest.fn().mockReturnValue({ mock: 'memoryStore' });
const mockRedisStore = jest.fn().mockReturnValue({ mock: 'redisStore' });
const mockIoredisClient = {
call: jest.fn(),
on: jest.fn(),
};
const mockKeyvRedisClient = {};
const mockViolationFile = {};
// Mock modules before requiring the main module
jest.mock('@keyv/redis', () => ({
default: jest.fn().mockImplementation(() => mockKeyvRedis),
}));
jest.mock('keyv', () => ({
Keyv: mockKeyv,
}));
jest.mock('./cacheConfig', () => ({
cacheConfig: {
USE_REDIS: false,
REDIS_KEY_PREFIX: 'test',
FORCED_IN_MEMORY_CACHE_NAMESPACES: [],
},
}));
jest.mock('./redisClients', () => ({
keyvRedisClient: mockKeyvRedisClient,
ioredisClient: mockIoredisClient,
GLOBAL_PREFIX_SEPARATOR: '::',
}));
jest.mock('./keyvFiles', () => ({
violationFile: mockViolationFile,
}));
jest.mock('connect-redis', () => ({ RedisStore: mockConnectRedis }));
jest.mock('memorystore', () => jest.fn(() => mockMemoryStore));
jest.mock('rate-limit-redis', () => ({
RedisStore: mockRedisStore,
}));
jest.mock('@librechat/data-schemas', () => ({
logger: {
error: jest.fn(),
warn: jest.fn(),
info: jest.fn(),
},
}));
// Import after mocking
const { standardCache, sessionCache, violationCache, limiterCache } = require('./cacheFactory');
const { cacheConfig } = require('./cacheConfig');
describe('cacheFactory', () => {
beforeEach(() => {
jest.clearAllMocks();
// Reset cache config mock
cacheConfig.USE_REDIS = false;
cacheConfig.REDIS_KEY_PREFIX = 'test';
cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES = [];
});
describe('standardCache', () => {
it('should create Redis cache when USE_REDIS is true', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'test-namespace';
const ttl = 3600;
standardCache(namespace, ttl);
expect(require('@keyv/redis').default).toHaveBeenCalledWith(mockKeyvRedisClient);
expect(mockKeyv).toHaveBeenCalledWith(mockKeyvRedis, { namespace, ttl });
expect(mockKeyvRedis.namespace).toBe(cacheConfig.REDIS_KEY_PREFIX);
expect(mockKeyvRedis.keyPrefixSeparator).toBe('::');
});
it('should create Redis cache with undefined ttl when not provided', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'test-namespace';
standardCache(namespace);
expect(mockKeyv).toHaveBeenCalledWith(mockKeyvRedis, { namespace, ttl: undefined });
});
it('should use fallback store when USE_REDIS is false and fallbackStore is provided', () => {
cacheConfig.USE_REDIS = false;
const namespace = 'test-namespace';
const ttl = 3600;
const fallbackStore = { some: 'store' };
standardCache(namespace, ttl, fallbackStore);
expect(mockKeyv).toHaveBeenCalledWith({ store: fallbackStore, namespace, ttl });
});
it('should create default Keyv instance when USE_REDIS is false and no fallbackStore', () => {
cacheConfig.USE_REDIS = false;
const namespace = 'test-namespace';
const ttl = 3600;
standardCache(namespace, ttl);
expect(mockKeyv).toHaveBeenCalledWith({ namespace, ttl });
});
it('should handle namespace and ttl as undefined', () => {
cacheConfig.USE_REDIS = false;
standardCache();
expect(mockKeyv).toHaveBeenCalledWith({ namespace: undefined, ttl: undefined });
});
it('should use fallback when namespace is in FORCED_IN_MEMORY_CACHE_NAMESPACES', () => {
cacheConfig.USE_REDIS = true;
cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES = ['forced-memory'];
const namespace = 'forced-memory';
const ttl = 3600;
standardCache(namespace, ttl);
expect(require('@keyv/redis').default).not.toHaveBeenCalled();
expect(mockKeyv).toHaveBeenCalledWith({ namespace, ttl });
});
it('should use Redis when namespace is not in FORCED_IN_MEMORY_CACHE_NAMESPACES', () => {
cacheConfig.USE_REDIS = true;
cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES = ['other-namespace'];
const namespace = 'test-namespace';
const ttl = 3600;
standardCache(namespace, ttl);
expect(require('@keyv/redis').default).toHaveBeenCalledWith(mockKeyvRedisClient);
expect(mockKeyv).toHaveBeenCalledWith(mockKeyvRedis, { namespace, ttl });
});
it('should throw error when Redis cache creation fails', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'test-namespace';
const ttl = 3600;
const testError = new Error('Redis connection failed');
const KeyvRedis = require('@keyv/redis').default;
KeyvRedis.mockImplementationOnce(() => {
throw testError;
});
expect(() => standardCache(namespace, ttl)).toThrow('Redis connection failed');
const { logger } = require('@librechat/data-schemas');
expect(logger.error).toHaveBeenCalledWith(
`Failed to create Redis cache for namespace ${namespace}:`,
testError,
);
expect(mockKeyv).not.toHaveBeenCalled();
});
});
describe('violationCache', () => {
it('should create violation cache with prefixed namespace', () => {
const namespace = 'test-violations';
const ttl = 7200;
// We can't easily mock the internal standardCache call since it's in the same module,
// but we can test that the function executes without throwing
expect(() => violationCache(namespace, ttl)).not.toThrow();
});
it('should create violation cache with undefined ttl', () => {
const namespace = 'test-violations';
violationCache(namespace);
// The function should call standardCache with a `violations:`-prefixed namespace.
// Since we can't easily mock the internal standardCache call, we test the behavior
expect(() => violationCache(namespace)).not.toThrow();
});
it('should handle undefined namespace', () => {
expect(() => violationCache(undefined)).not.toThrow();
});
});
describe('sessionCache', () => {
it('should return MemoryStore when USE_REDIS is false', () => {
cacheConfig.USE_REDIS = false;
const namespace = 'sessions';
const ttl = 86400;
const result = sessionCache(namespace, ttl);
expect(mockMemoryStore).toHaveBeenCalledWith({ ttl, checkPeriod: Time.ONE_DAY });
expect(result).toBe(mockMemoryStore());
});
it('should return ConnectRedis when USE_REDIS is true', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
const ttl = 86400;
const result = sessionCache(namespace, ttl);
expect(mockConnectRedis).toHaveBeenCalledWith({
client: mockIoredisClient,
ttl,
prefix: `${namespace}:`,
});
expect(result).toBe(mockConnectRedis());
});
it('should add colon to namespace if not present', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
sessionCache(namespace);
expect(mockConnectRedis).toHaveBeenCalledWith({
client: mockIoredisClient,
ttl: undefined,
prefix: 'sessions:',
});
});
it('should not add colon to namespace if already present', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions:';
sessionCache(namespace);
expect(mockConnectRedis).toHaveBeenCalledWith({
client: mockIoredisClient,
ttl: undefined,
prefix: 'sessions:',
});
});
it('should handle undefined ttl', () => {
cacheConfig.USE_REDIS = false;
const namespace = 'sessions';
sessionCache(namespace);
expect(mockMemoryStore).toHaveBeenCalledWith({
ttl: undefined,
checkPeriod: Time.ONE_DAY,
});
});
it('should throw error when ConnectRedis constructor fails', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
const ttl = 86400;
// Mock ConnectRedis to throw an error during construction
const redisError = new Error('Redis connection failed');
mockConnectRedis.mockImplementationOnce(() => {
throw redisError;
});
// The error should propagate up, not be caught
expect(() => sessionCache(namespace, ttl)).toThrow('Redis connection failed');
// Verify that MemoryStore was NOT used as fallback
expect(mockMemoryStore).not.toHaveBeenCalled();
});
it('should register error handler but let errors propagate to Express', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
// Create a mock session store with middleware methods
const mockSessionStore = {
get: jest.fn(),
set: jest.fn(),
destroy: jest.fn(),
};
mockConnectRedis.mockReturnValue(mockSessionStore);
const store = sessionCache(namespace);
// Verify error handler was registered
expect(mockIoredisClient.on).toHaveBeenCalledWith('error', expect.any(Function));
// Get the error handler
const errorHandler = mockIoredisClient.on.mock.calls.find((call) => call[0] === 'error')[1];
// Simulate an error from Redis during a session operation
const redisError = new Error('Socket closed unexpectedly');
// The error handler should log but not swallow the error
const { logger } = require('@librechat/data-schemas');
errorHandler(redisError);
expect(logger.error).toHaveBeenCalledWith(
`Session store Redis error for namespace ${namespace}::`,
redisError,
);
// Now simulate what happens when session middleware tries to use the store
const callback = jest.fn();
mockSessionStore.get.mockImplementation((sid, cb) => {
cb(new Error('Redis connection lost'));
});
// Call the store's get method (as Express session would)
store.get('test-session-id', callback);
// The error should be passed to the callback, not swallowed
expect(callback).toHaveBeenCalledWith(new Error('Redis connection lost'));
});
it('should handle null ioredisClient gracefully', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
// Temporarily set ioredisClient to null (simulating connection not established)
const originalClient = require('./redisClients').ioredisClient;
require('./redisClients').ioredisClient = null;
// ConnectRedis might accept null client but would fail on first use
// The important thing is it doesn't throw uncaught exceptions during construction
const store = sessionCache(namespace);
expect(store).toBeDefined();
// Restore original client
require('./redisClients').ioredisClient = originalClient;
});
});
describe('limiterCache', () => {
it('should return undefined when USE_REDIS is false', () => {
cacheConfig.USE_REDIS = false;
const result = limiterCache('prefix');
expect(result).toBeUndefined();
});
it('should return RedisStore when USE_REDIS is true', () => {
cacheConfig.USE_REDIS = true;
const result = limiterCache('rate-limit');
expect(mockRedisStore).toHaveBeenCalledWith({
sendCommand: expect.any(Function),
prefix: `rate-limit:`,
});
expect(result).toBe(mockRedisStore());
});
it('should add colon to prefix if not present', () => {
cacheConfig.USE_REDIS = true;
limiterCache('rate-limit');
expect(mockRedisStore).toHaveBeenCalledWith({
sendCommand: expect.any(Function),
prefix: 'rate-limit:',
});
});
it('should not add colon to prefix if already present', () => {
cacheConfig.USE_REDIS = true;
limiterCache('rate-limit:');
expect(mockRedisStore).toHaveBeenCalledWith({
sendCommand: expect.any(Function),
prefix: 'rate-limit:',
});
});
it('should pass sendCommand function that calls ioredisClient.call', async () => {
cacheConfig.USE_REDIS = true;
mockIoredisClient.call.mockResolvedValue('test-value');
limiterCache('rate-limit');
const sendCommandCall = mockRedisStore.mock.calls[0][0];
const sendCommand = sendCommandCall.sendCommand;
// Test that sendCommand properly delegates to ioredisClient.call
const args = ['GET', 'test-key'];
const result = await sendCommand(...args);
expect(mockIoredisClient.call).toHaveBeenCalledWith(...args);
expect(result).toBe('test-value');
});
it('should handle sendCommand errors properly', async () => {
cacheConfig.USE_REDIS = true;
// Mock the call method to reject with an error
const testError = new Error('Redis error');
mockIoredisClient.call.mockRejectedValue(testError);
limiterCache('rate-limit');
const sendCommandCall = mockRedisStore.mock.calls[0][0];
const sendCommand = sendCommandCall.sendCommand;
// Test that sendCommand properly handles errors
const args = ['GET', 'test-key'];
await expect(sendCommand(...args)).rejects.toThrow('Redis error');
expect(mockIoredisClient.call).toHaveBeenCalledWith(...args);
});
it('should handle undefined prefix', () => {
cacheConfig.USE_REDIS = true;
expect(() => limiterCache()).toThrow('prefix is required');
});
});
});
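The spec relies on Jest's hoisting behavior: jest.mock calls are lifted above imports, so cacheFactory's dependencies are already replaced by the time the module is first required. A stripped-down sketch of the same pattern (module paths as in the spec above):

// jest.mock is hoisted, so this factory runs before the require below.
jest.mock('./redisClients', () => ({
  ioredisClient: { call: jest.fn(), on: jest.fn() },
  keyvRedisClient: {},
  GLOBAL_PREFIX_SEPARATOR: '::',
}));

// cacheFactory sees the mocked clients on its first require.
const { limiterCache } = require('./cacheFactory');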


@@ -1,9 +1,13 @@
const { cacheConfig } = require('./cacheConfig');
const { Keyv } = require('keyv');
const { CacheKeys, ViolationTypes, Time } = require('librechat-data-provider');
const { logFile } = require('./keyvFiles');
const keyvMongo = require('./keyvMongo');
const { standardCache, sessionCache, violationCache } = require('./cacheFactory');
const { Time, CacheKeys, ViolationTypes } = require('librechat-data-provider');
const {
logFile,
keyvMongo,
cacheConfig,
sessionCache,
standardCache,
violationCache,
} = require('@librechat/api');
const namespaces = {
[ViolationTypes.GENERAL]: new Keyv({ store: logFile, namespace: 'violations' }),

api/cache/index.js

@@ -1,5 +1,4 @@
const keyvFiles = require('./keyvFiles');
const getLogStores = require('./getLogStores');
const logViolation = require('./logViolation');
module.exports = { ...keyvFiles, getLogStores, logViolation };
module.exports = { getLogStores, logViolation };


@@ -1,9 +0,0 @@
const { KeyvFile } = require('keyv-file');
const logFile = new KeyvFile({ filename: './data/logs.json' }).setMaxListeners(20);
const violationFile = new KeyvFile({ filename: './data/violations.json' }).setMaxListeners(20);
module.exports = {
logFile,
violationFile,
};


@@ -29,12 +29,64 @@ class MeiliSearchClient {
}
}
/**
* Deletes documents from MeiliSearch index that are missing the user field
* @param {import('meilisearch').Index} index - MeiliSearch index instance
* @param {string} indexName - Name of the index for logging
* @returns {Promise<number>} - Number of documents deleted
*/
async function deleteDocumentsWithoutUserField(index, indexName) {
let deletedCount = 0;
let offset = 0;
const batchSize = 1000;
try {
while (true) {
const searchResult = await index.search('', {
limit: batchSize,
offset: offset,
});
if (searchResult.hits.length === 0) {
break;
}
const idsToDelete = searchResult.hits.filter((hit) => !hit.user).map((hit) => hit.id);
if (idsToDelete.length > 0) {
logger.info(
`[indexSync] Deleting ${idsToDelete.length} documents without user field from ${indexName} index`,
);
await index.deleteDocuments(idsToDelete);
deletedCount += idsToDelete.length;
}
if (searchResult.hits.length < batchSize) {
break;
}
offset += batchSize;
}
if (deletedCount > 0) {
logger.info(`[indexSync] Deleted ${deletedCount} orphaned documents from ${indexName} index`);
}
} catch (error) {
logger.error(`[indexSync] Error deleting documents from ${indexName}:`, error);
}
return deletedCount;
}
/**
* Ensures indexes have proper filterable attributes configured and checks if documents have user field
* @param {MeiliSearch} client - MeiliSearch client instance
* @returns {Promise<boolean>} - true if configuration was updated or re-sync is needed
* @returns {Promise<{settingsUpdated: boolean, orphanedDocsFound: boolean}>} - Status of what was done
*/
async function ensureFilterableAttributes(client) {
let settingsUpdated = false;
let hasOrphanedDocs = false;
try {
// Check and update messages index
try {
@@ -47,16 +99,17 @@ async function ensureFilterableAttributes(client) {
filterableAttributes: ['user'],
});
logger.info('[indexSync] Messages index configured for user filtering');
logger.info('[indexSync] Index configuration updated. Full re-sync will be triggered.');
return true;
settingsUpdated = true;
}
// Check if existing documents have user field indexed
try {
const searchResult = await messagesIndex.search('', { limit: 1 });
if (searchResult.hits.length > 0 && !searchResult.hits[0].user) {
logger.info('[indexSync] Existing messages missing user field, re-sync needed');
return true;
logger.info(
'[indexSync] Existing messages missing user field, will clean up orphaned documents...',
);
hasOrphanedDocs = true;
}
} catch (searchError) {
logger.debug('[indexSync] Could not check message documents:', searchError.message);
@@ -78,16 +131,17 @@ async function ensureFilterableAttributes(client) {
filterableAttributes: ['user'],
});
logger.info('[indexSync] Convos index configured for user filtering');
logger.info('[indexSync] Index configuration updated. Full re-sync will be triggered.');
return true;
settingsUpdated = true;
}
// Check if existing documents have user field indexed
try {
const searchResult = await convosIndex.search('', { limit: 1 });
if (searchResult.hits.length > 0 && !searchResult.hits[0].user) {
logger.info('[indexSync] Existing conversations missing user field, re-sync needed');
return true;
logger.info(
'[indexSync] Existing conversations missing user field, will clean up orphaned documents...',
);
hasOrphanedDocs = true;
}
} catch (searchError) {
logger.debug('[indexSync] Could not check conversation documents:', searchError.message);
@@ -97,101 +151,143 @@ async function ensureFilterableAttributes(client) {
logger.warn('[indexSync] Could not check/update convos index settings:', error.message);
}
}
// If either index has orphaned documents, clean them up (but don't force resync)
if (hasOrphanedDocs) {
try {
const messagesIndex = client.index('messages');
await deleteDocumentsWithoutUserField(messagesIndex, 'messages');
} catch (error) {
logger.debug('[indexSync] Could not clean up messages:', error.message);
}
try {
const convosIndex = client.index('convos');
await deleteDocumentsWithoutUserField(convosIndex, 'convos');
} catch (error) {
logger.debug('[indexSync] Could not clean up convos:', error.message);
}
logger.info('[indexSync] Orphaned documents cleaned up without forcing resync.');
}
if (settingsUpdated) {
logger.info('[indexSync] Index settings updated. Full re-sync will be triggered.');
}
} catch (error) {
logger.error('[indexSync] Error ensuring filterable attributes:', error);
}
return false;
return { settingsUpdated, orphanedDocsFound: hasOrphanedDocs };
}
/**
* Performs the actual sync operations for messages and conversations
* @param {FlowStateManager} flowManager - Flow state manager instance
* @param {string} flowId - Flow identifier
* @param {string} flowType - Flow type
*/
async function performSync() {
const client = MeiliSearchClient.getInstance();
async function performSync(flowManager, flowId, flowType) {
try {
const client = MeiliSearchClient.getInstance();
const { status } = await client.health();
if (status !== 'available') {
throw new Error('Meilisearch not available');
}
if (indexingDisabled === true) {
logger.info('[indexSync] Indexing is disabled, skipping...');
return { messagesSync: false, convosSync: false };
}
/** Ensures indexes have proper filterable attributes configured */
const configUpdated = await ensureFilterableAttributes(client);
let messagesSync = false;
let convosSync = false;
// If configuration was just updated or documents are missing user field, force a full re-sync
if (configUpdated) {
logger.info('[indexSync] Forcing full re-sync to ensure user field is properly indexed...');
// Reset sync flags to force full re-sync
await Message.collection.updateMany({ _meiliIndex: true }, { $set: { _meiliIndex: false } });
await Conversation.collection.updateMany(
{ _meiliIndex: true },
{ $set: { _meiliIndex: false } },
);
}
// Check if we need to sync messages
const messageProgress = await Message.getSyncProgress();
if (!messageProgress.isComplete || configUpdated) {
logger.info(
`[indexSync] Messages need syncing: ${messageProgress.totalProcessed}/${messageProgress.totalDocuments} indexed`,
);
// Check if we should do a full sync or incremental
const messageCount = await Message.countDocuments();
const messagesIndexed = messageProgress.totalProcessed;
const syncThreshold = parseInt(process.env.MEILI_SYNC_THRESHOLD || '1000', 10);
if (messageCount - messagesIndexed > syncThreshold) {
logger.info('[indexSync] Starting full message sync due to large difference');
await Message.syncWithMeili();
messagesSync = true;
} else if (messageCount !== messagesIndexed) {
logger.warn('[indexSync] Messages out of sync, performing incremental sync');
await Message.syncWithMeili();
messagesSync = true;
const { status } = await client.health();
if (status !== 'available') {
throw new Error('Meilisearch not available');
}
} else {
logger.info(
`[indexSync] Messages are fully synced: ${messageProgress.totalProcessed}/${messageProgress.totalDocuments}`,
);
}
// Check if we need to sync conversations
const convoProgress = await Conversation.getSyncProgress();
if (!convoProgress.isComplete || configUpdated) {
logger.info(
`[indexSync] Conversations need syncing: ${convoProgress.totalProcessed}/${convoProgress.totalDocuments} indexed`,
);
const convoCount = await Conversation.countDocuments();
const convosIndexed = convoProgress.totalProcessed;
const syncThreshold = parseInt(process.env.MEILI_SYNC_THRESHOLD || '1000', 10);
if (convoCount - convosIndexed > syncThreshold) {
logger.info('[indexSync] Starting full conversation sync due to large difference');
await Conversation.syncWithMeili();
convosSync = true;
} else if (convoCount !== convosIndexed) {
logger.warn('[indexSync] Convos out of sync, performing incremental sync');
await Conversation.syncWithMeili();
convosSync = true;
if (indexingDisabled === true) {
logger.info('[indexSync] Indexing is disabled, skipping...');
return { messagesSync: false, convosSync: false };
}
} else {
logger.info(
`[indexSync] Conversations are fully synced: ${convoProgress.totalProcessed}/${convoProgress.totalDocuments}`,
);
}
return { messagesSync, convosSync };
/** Ensures indexes have proper filterable attributes configured */
const { settingsUpdated, orphanedDocsFound: _orphanedDocsFound } =
await ensureFilterableAttributes(client);
let messagesSync = false;
let convosSync = false;
// Only reset flags if settings were actually updated (not just for orphaned doc cleanup)
if (settingsUpdated) {
logger.info(
'[indexSync] Settings updated. Forcing full re-sync to reindex with new configuration...',
);
// Reset sync flags to force full re-sync
await Message.collection.updateMany({ _meiliIndex: true }, { $set: { _meiliIndex: false } });
await Conversation.collection.updateMany(
{ _meiliIndex: true },
{ $set: { _meiliIndex: false } },
);
}
// Check if we need to sync messages
const messageProgress = await Message.getSyncProgress();
if (!messageProgress.isComplete || settingsUpdated) {
logger.info(
`[indexSync] Messages need syncing: ${messageProgress.totalProcessed}/${messageProgress.totalDocuments} indexed`,
);
// Check if we should do a full sync or incremental
const messageCount = await Message.countDocuments();
const messagesIndexed = messageProgress.totalProcessed;
const syncThreshold = parseInt(process.env.MEILI_SYNC_THRESHOLD || '1000', 10);
if (messageCount - messagesIndexed > syncThreshold) {
logger.info('[indexSync] Starting full message sync due to large difference');
await Message.syncWithMeili();
messagesSync = true;
} else if (messageCount !== messagesIndexed) {
logger.warn('[indexSync] Messages out of sync, performing incremental sync');
await Message.syncWithMeili();
messagesSync = true;
}
} else {
logger.info(
`[indexSync] Messages are fully synced: ${messageProgress.totalProcessed}/${messageProgress.totalDocuments}`,
);
}
// Check if we need to sync conversations
const convoProgress = await Conversation.getSyncProgress();
if (!convoProgress.isComplete || settingsUpdated) {
logger.info(
`[indexSync] Conversations need syncing: ${convoProgress.totalProcessed}/${convoProgress.totalDocuments} indexed`,
);
const convoCount = await Conversation.countDocuments();
const convosIndexed = convoProgress.totalProcessed;
const syncThreshold = parseInt(process.env.MEILI_SYNC_THRESHOLD || '1000', 10);
if (convoCount - convosIndexed > syncThreshold) {
logger.info('[indexSync] Starting full conversation sync due to large difference');
await Conversation.syncWithMeili();
convosSync = true;
} else if (convoCount !== convosIndexed) {
logger.warn('[indexSync] Convos out of sync, performing incremental sync');
await Conversation.syncWithMeili();
convosSync = true;
}
} else {
logger.info(
`[indexSync] Conversations are fully synced: ${convoProgress.totalProcessed}/${convoProgress.totalDocuments}`,
);
}
return { messagesSync, convosSync };
} finally {
if (indexingDisabled === true) {
logger.info('[indexSync] Indexing is disabled, skipping cleanup...');
} else if (flowManager && flowId && flowType) {
try {
await flowManager.deleteFlow(flowId, flowType);
logger.debug('[indexSync] Flow state cleaned up');
} catch (cleanupErr) {
logger.debug('[indexSync] Could not clean up flow state:', cleanupErr.message);
}
}
}
}
/**
@@ -204,24 +300,26 @@ async function indexSync() {
logger.info('[indexSync] Starting index synchronization check...');
// Get or create FlowStateManager instance
const flowsCache = getLogStores(CacheKeys.FLOWS);
if (!flowsCache) {
logger.warn('[indexSync] Flows cache not available, falling back to direct sync');
return await performSync(null, null, null);
}
const flowManager = new FlowStateManager(flowsCache, {
ttl: 60000 * 10, // 10 minutes TTL for sync operations
});
// Use a unique flow ID for the sync operation
const flowId = 'meili-index-sync';
const flowType = 'MEILI_SYNC';
try {
// Get or create FlowStateManager instance
const flowsCache = getLogStores(CacheKeys.FLOWS);
if (!flowsCache) {
logger.warn('[indexSync] Flows cache not available, falling back to direct sync');
return await performSync();
}
const flowManager = new FlowStateManager(flowsCache, {
ttl: 60000 * 10, // 10 minutes TTL for sync operations
});
// Use a unique flow ID for the sync operation
const flowId = 'meili-index-sync';
const flowType = 'MEILI_SYNC';
// This will only execute the handler if no other instance is running the sync
const result = await flowManager.createFlowWithHandler(flowId, flowType, performSync);
const result = await flowManager.createFlowWithHandler(flowId, flowType, () =>
performSync(flowManager, flowId, flowType),
);
if (result.messagesSync || result.convosSync) {
logger.info('[indexSync] Sync completed successfully');
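Taken together, the change turns indexSync into a lock-guarded operation: createFlowWithHandler runs performSync only when no other instance holds the 'meili-index-sync' flow, and the new finally block releases that flow even when the sync throws. In sketch form (the lock semantics of FlowStateManager are assumed from how this diff uses it):

const flowManager = new FlowStateManager(flowsCache, { ttl: 60000 * 10 });

// Only one instance executes the handler; the flow record acts as a distributed lock.
const result = await flowManager.createFlowWithHandler('meili-index-sync', 'MEILI_SYNC', () =>
  performSync(flowManager, 'meili-index-sync', 'MEILI_SYNC'),
);
// performSync's finally block deletes the flow, so a crash cannot wedge future syncs.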


@@ -1,4 +1,4 @@
const { matchModelName } = require('@librechat/api');
const { matchModelName, findMatchingPattern } = require('@librechat/api');
const defaultRate = 6;
/**
@@ -6,44 +6,58 @@ const defaultRate = 6;
* source: https://aws.amazon.com/bedrock/pricing/
* */
const bedrockValues = {
// Basic llama2 patterns
// Basic llama2 patterns (base defaults to smallest variant)
llama2: { prompt: 0.75, completion: 1.0 },
'llama-2': { prompt: 0.75, completion: 1.0 },
'llama2-13b': { prompt: 0.75, completion: 1.0 },
'llama2:13b': { prompt: 0.75, completion: 1.0 },
'llama2:70b': { prompt: 1.95, completion: 2.56 },
'llama2-70b': { prompt: 1.95, completion: 2.56 },
// Basic llama3 patterns
// Basic llama3 patterns (base defaults to smallest variant)
llama3: { prompt: 0.3, completion: 0.6 },
'llama-3': { prompt: 0.3, completion: 0.6 },
'llama3-8b': { prompt: 0.3, completion: 0.6 },
'llama3:8b': { prompt: 0.3, completion: 0.6 },
'llama3-70b': { prompt: 2.65, completion: 3.5 },
'llama3:70b': { prompt: 2.65, completion: 3.5 },
// llama3-x-Nb pattern
// llama3-x-Nb pattern (base defaults to smallest variant)
'llama3-1': { prompt: 0.22, completion: 0.22 },
'llama3-1-8b': { prompt: 0.22, completion: 0.22 },
'llama3-1-70b': { prompt: 0.72, completion: 0.72 },
'llama3-1-405b': { prompt: 2.4, completion: 2.4 },
'llama3-2': { prompt: 0.1, completion: 0.1 },
'llama3-2-1b': { prompt: 0.1, completion: 0.1 },
'llama3-2-3b': { prompt: 0.15, completion: 0.15 },
'llama3-2-11b': { prompt: 0.16, completion: 0.16 },
'llama3-2-90b': { prompt: 0.72, completion: 0.72 },
'llama3-3': { prompt: 2.65, completion: 3.5 },
'llama3-3-70b': { prompt: 2.65, completion: 3.5 },
// llama3.x:Nb pattern
// llama3.x:Nb pattern (base defaults to smallest variant)
'llama3.1': { prompt: 0.22, completion: 0.22 },
'llama3.1:8b': { prompt: 0.22, completion: 0.22 },
'llama3.1:70b': { prompt: 0.72, completion: 0.72 },
'llama3.1:405b': { prompt: 2.4, completion: 2.4 },
'llama3.2': { prompt: 0.1, completion: 0.1 },
'llama3.2:1b': { prompt: 0.1, completion: 0.1 },
'llama3.2:3b': { prompt: 0.15, completion: 0.15 },
'llama3.2:11b': { prompt: 0.16, completion: 0.16 },
'llama3.2:90b': { prompt: 0.72, completion: 0.72 },
'llama3.3': { prompt: 2.65, completion: 3.5 },
'llama3.3:70b': { prompt: 2.65, completion: 3.5 },
// llama-3.x-Nb pattern
// llama-3.x-Nb pattern (base defaults to smallest variant)
'llama-3.1': { prompt: 0.22, completion: 0.22 },
'llama-3.1-8b': { prompt: 0.22, completion: 0.22 },
'llama-3.1-70b': { prompt: 0.72, completion: 0.72 },
'llama-3.1-405b': { prompt: 2.4, completion: 2.4 },
'llama-3.2': { prompt: 0.1, completion: 0.1 },
'llama-3.2-1b': { prompt: 0.1, completion: 0.1 },
'llama-3.2-3b': { prompt: 0.15, completion: 0.15 },
'llama-3.2-11b': { prompt: 0.16, completion: 0.16 },
'llama-3.2-90b': { prompt: 0.72, completion: 0.72 },
'llama-3.3': { prompt: 2.65, completion: 3.5 },
'llama-3.3-70b': { prompt: 2.65, completion: 3.5 },
'mistral-7b': { prompt: 0.15, completion: 0.2 },
'mistral-small': { prompt: 0.15, completion: 0.2 },
@@ -52,15 +66,19 @@ const bedrockValues = {
'mistral-large-2407': { prompt: 3.0, completion: 9.0 },
'command-text': { prompt: 1.5, completion: 2.0 },
'command-light': { prompt: 0.3, completion: 0.6 },
'ai21.j2-mid-v1': { prompt: 12.5, completion: 12.5 },
'ai21.j2-ultra-v1': { prompt: 18.8, completion: 18.8 },
'ai21.jamba-instruct-v1:0': { prompt: 0.5, completion: 0.7 },
'amazon.titan-text-lite-v1': { prompt: 0.15, completion: 0.2 },
'amazon.titan-text-express-v1': { prompt: 0.2, completion: 0.6 },
'amazon.titan-text-premier-v1:0': { prompt: 0.5, completion: 1.5 },
'amazon.nova-micro-v1:0': { prompt: 0.035, completion: 0.14 },
'amazon.nova-lite-v1:0': { prompt: 0.06, completion: 0.24 },
'amazon.nova-pro-v1:0': { prompt: 0.8, completion: 3.2 },
// AI21 models
'j2-mid': { prompt: 12.5, completion: 12.5 },
'j2-ultra': { prompt: 18.8, completion: 18.8 },
'jamba-instruct': { prompt: 0.5, completion: 0.7 },
// Amazon Titan models
'titan-text-lite': { prompt: 0.15, completion: 0.2 },
'titan-text-express': { prompt: 0.2, completion: 0.6 },
'titan-text-premier': { prompt: 0.5, completion: 1.5 },
// Amazon Nova models
'nova-micro': { prompt: 0.035, completion: 0.14 },
'nova-lite': { prompt: 0.06, completion: 0.24 },
'nova-pro': { prompt: 0.8, completion: 3.2 },
'nova-premier': { prompt: 2.5, completion: 12.5 },
'deepseek.r1': { prompt: 1.35, completion: 5.4 },
};
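The short keys above rely on pattern matching rather than exact ids, so full Bedrock model ids resolve to the same entries as their short names. The tests later in this diff pin full and short ids to identical pricing, which with findMatchingPattern amounts to resolutions like:

getValueKey('amazon.nova-pro-v1:0');      // -> 'nova-pro'
getValueKey('amazon.titan-text-lite-v1'); // -> 'titan-text-lite'
getValueKey('ai21.j2-ultra-v1');          // -> 'j2-ultra'
getValueKey('ai21.jamba-instruct-v1:0');  // -> 'jamba-instruct'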
@@ -71,89 +89,136 @@ const bedrockValues = {
*/
const tokenValues = Object.assign(
{
// Legacy token size mappings (generic patterns - check LAST)
'8k': { prompt: 30, completion: 60 },
'32k': { prompt: 60, completion: 120 },
'4k': { prompt: 1.5, completion: 2 },
'16k': { prompt: 3, completion: 4 },
// Generic fallback patterns (check LAST)
'claude-': { prompt: 0.8, completion: 2.4 },
deepseek: { prompt: 0.28, completion: 0.42 },
command: { prompt: 0.38, completion: 0.38 },
gemma: { prompt: 0.02, completion: 0.04 }, // Base pattern (using gemma-3n-e4b pricing)
gemini: { prompt: 0.5, completion: 1.5 },
'gpt-oss': { prompt: 0.05, completion: 0.2 },
// Specific model variants (check FIRST - more specific patterns at end)
'gpt-3.5-turbo-1106': { prompt: 1, completion: 2 },
'o4-mini': { prompt: 1.1, completion: 4.4 },
'o3-mini': { prompt: 1.1, completion: 4.4 },
o3: { prompt: 2, completion: 8 },
'o1-mini': { prompt: 1.1, completion: 4.4 },
'o1-preview': { prompt: 15, completion: 60 },
o1: { prompt: 15, completion: 60 },
'gpt-3.5-turbo-0125': { prompt: 0.5, completion: 1.5 },
'gpt-4-1106': { prompt: 10, completion: 30 },
'gpt-4.1': { prompt: 2, completion: 8 },
'gpt-4.1-nano': { prompt: 0.1, completion: 0.4 },
'gpt-4.1-mini': { prompt: 0.4, completion: 1.6 },
'gpt-4.1': { prompt: 2, completion: 8 },
'gpt-4.5': { prompt: 75, completion: 150 },
'gpt-4o-mini': { prompt: 0.15, completion: 0.6 },
'gpt-5': { prompt: 1.25, completion: 10 },
'gpt-5-mini': { prompt: 0.25, completion: 2 },
'gpt-5-nano': { prompt: 0.05, completion: 0.4 },
'gpt-4o': { prompt: 2.5, completion: 10 },
'gpt-4o-2024-05-13': { prompt: 5, completion: 15 },
'gpt-4-1106': { prompt: 10, completion: 30 },
'gpt-3.5-turbo-0125': { prompt: 0.5, completion: 1.5 },
'claude-3-opus': { prompt: 15, completion: 75 },
'gpt-4o-mini': { prompt: 0.15, completion: 0.6 },
'gpt-5': { prompt: 1.25, completion: 10 },
'gpt-5-nano': { prompt: 0.05, completion: 0.4 },
'gpt-5-mini': { prompt: 0.25, completion: 2 },
'gpt-5-pro': { prompt: 15, completion: 120 },
o1: { prompt: 15, completion: 60 },
'o1-mini': { prompt: 1.1, completion: 4.4 },
'o1-preview': { prompt: 15, completion: 60 },
o3: { prompt: 2, completion: 8 },
'o3-mini': { prompt: 1.1, completion: 4.4 },
'o4-mini': { prompt: 1.1, completion: 4.4 },
'claude-instant': { prompt: 0.8, completion: 2.4 },
'claude-2': { prompt: 8, completion: 24 },
'claude-2.1': { prompt: 8, completion: 24 },
'claude-3-haiku': { prompt: 0.25, completion: 1.25 },
'claude-3-sonnet': { prompt: 3, completion: 15 },
'claude-3-opus': { prompt: 15, completion: 75 },
'claude-3-5-haiku': { prompt: 0.8, completion: 4 },
'claude-3.5-haiku': { prompt: 0.8, completion: 4 },
'claude-3-5-sonnet': { prompt: 3, completion: 15 },
'claude-3.5-sonnet': { prompt: 3, completion: 15 },
'claude-3-7-sonnet': { prompt: 3, completion: 15 },
'claude-3.7-sonnet': { prompt: 3, completion: 15 },
'claude-3-5-haiku': { prompt: 0.8, completion: 4 },
'claude-3.5-haiku': { prompt: 0.8, completion: 4 },
'claude-3-haiku': { prompt: 0.25, completion: 1.25 },
'claude-sonnet-4': { prompt: 3, completion: 15 },
'claude-haiku-4-5': { prompt: 1, completion: 5 },
'claude-opus-4': { prompt: 15, completion: 75 },
'claude-2.1': { prompt: 8, completion: 24 },
'claude-2': { prompt: 8, completion: 24 },
'claude-instant': { prompt: 0.8, completion: 2.4 },
'claude-': { prompt: 0.8, completion: 2.4 },
'command-r-plus': { prompt: 3, completion: 15 },
'claude-sonnet-4': { prompt: 3, completion: 15 },
'command-r': { prompt: 0.5, completion: 1.5 },
'command-r-plus': { prompt: 3, completion: 15 },
'command-text': { prompt: 1.5, completion: 2.0 },
'deepseek-reasoner': { prompt: 0.28, completion: 0.42 },
deepseek: { prompt: 0.28, completion: 0.42 },
/* cohere doesn't have rates for the older command models,
so this was from https://artificialanalysis.ai/models/command-light/providers */
command: { prompt: 0.38, completion: 0.38 },
gemma: { prompt: 0, completion: 0 }, // https://ai.google.dev/pricing
'gemma-2': { prompt: 0, completion: 0 }, // https://ai.google.dev/pricing
'gemma-3': { prompt: 0, completion: 0 }, // https://ai.google.dev/pricing
'gemma-3-27b': { prompt: 0, completion: 0 }, // https://ai.google.dev/pricing
'gemini-2.0-flash-lite': { prompt: 0.075, completion: 0.3 },
'gemini-2.0-flash': { prompt: 0.1, completion: 0.4 },
'gemini-2.0': { prompt: 0, completion: 0 }, // https://ai.google.dev/pricing
'gemini-2.5-pro': { prompt: 1.25, completion: 10 },
'gemini-2.5-flash': { prompt: 0.3, completion: 2.5 },
'gemini-2.5-flash-lite': { prompt: 0.075, completion: 0.4 },
'gemini-2.5': { prompt: 0, completion: 0 }, // Free for a period of time
'gemini-1.5-flash-8b': { prompt: 0.075, completion: 0.3 },
'gemini-1.5-flash': { prompt: 0.15, completion: 0.6 },
'deepseek-r1': { prompt: 0.4, completion: 2.0 },
'deepseek-v3': { prompt: 0.2, completion: 0.8 },
'gemma-2': { prompt: 0.01, completion: 0.03 }, // Base pattern (using gemma-2-9b pricing)
'gemma-3': { prompt: 0.02, completion: 0.04 }, // Base pattern (using gemma-3n-e4b pricing)
'gemma-3-27b': { prompt: 0.09, completion: 0.16 },
'gemini-1.5': { prompt: 2.5, completion: 10 },
'gemini-1.5-flash': { prompt: 0.15, completion: 0.6 },
'gemini-1.5-flash-8b': { prompt: 0.075, completion: 0.3 },
'gemini-2.0': { prompt: 0.1, completion: 0.4 }, // Base pattern (using 2.0-flash pricing)
'gemini-2.0-flash': { prompt: 0.1, completion: 0.4 },
'gemini-2.0-flash-lite': { prompt: 0.075, completion: 0.3 },
'gemini-2.5': { prompt: 0.3, completion: 2.5 }, // Base pattern (using 2.5-flash pricing)
'gemini-2.5-flash': { prompt: 0.3, completion: 2.5 },
'gemini-2.5-flash-lite': { prompt: 0.1, completion: 0.4 },
'gemini-2.5-pro': { prompt: 1.25, completion: 10 },
'gemini-pro-vision': { prompt: 0.5, completion: 1.5 },
gemini: { prompt: 0.5, completion: 1.5 },
'grok-2-vision-1212': { prompt: 2.0, completion: 10.0 },
'grok-2-vision-latest': { prompt: 2.0, completion: 10.0 },
'grok-2-vision': { prompt: 2.0, completion: 10.0 },
grok: { prompt: 2.0, completion: 10.0 }, // Base pattern defaults to grok-2
'grok-beta': { prompt: 5.0, completion: 15.0 },
'grok-vision-beta': { prompt: 5.0, completion: 15.0 },
'grok-2': { prompt: 2.0, completion: 10.0 },
'grok-2-1212': { prompt: 2.0, completion: 10.0 },
'grok-2-latest': { prompt: 2.0, completion: 10.0 },
'grok-2': { prompt: 2.0, completion: 10.0 },
'grok-3-mini-fast': { prompt: 0.6, completion: 4 },
'grok-3-mini': { prompt: 0.3, completion: 0.5 },
'grok-3-fast': { prompt: 5.0, completion: 25.0 },
'grok-2-vision': { prompt: 2.0, completion: 10.0 },
'grok-2-vision-1212': { prompt: 2.0, completion: 10.0 },
'grok-2-vision-latest': { prompt: 2.0, completion: 10.0 },
'grok-3': { prompt: 3.0, completion: 15.0 },
'grok-3-fast': { prompt: 5.0, completion: 25.0 },
'grok-3-mini': { prompt: 0.3, completion: 0.5 },
'grok-3-mini-fast': { prompt: 0.6, completion: 4 },
'grok-4': { prompt: 3.0, completion: 15.0 },
'grok-beta': { prompt: 5.0, completion: 15.0 },
'mistral-large': { prompt: 2.0, completion: 6.0 },
'pixtral-large': { prompt: 2.0, completion: 6.0 },
'mistral-saba': { prompt: 0.2, completion: 0.6 },
codestral: { prompt: 0.3, completion: 0.9 },
'ministral-8b': { prompt: 0.1, completion: 0.1 },
'ministral-3b': { prompt: 0.04, completion: 0.04 },
// GPT-OSS models
'ministral-8b': { prompt: 0.1, completion: 0.1 },
'mistral-nemo': { prompt: 0.15, completion: 0.15 },
'mistral-saba': { prompt: 0.2, completion: 0.6 },
'pixtral-large': { prompt: 2.0, completion: 6.0 },
'mistral-large': { prompt: 2.0, completion: 6.0 },
'mixtral-8x22b': { prompt: 0.65, completion: 0.65 },
kimi: { prompt: 0.14, completion: 2.49 }, // Base pattern (using kimi-k2 pricing)
// GPT-OSS models (specific sizes)
'gpt-oss:20b': { prompt: 0.05, completion: 0.2 },
'gpt-oss-20b': { prompt: 0.05, completion: 0.2 },
'gpt-oss:120b': { prompt: 0.15, completion: 0.6 },
'gpt-oss-120b': { prompt: 0.15, completion: 0.6 },
// GLM models (Zhipu AI) - general to specific
glm4: { prompt: 0.1, completion: 0.1 },
'glm-4': { prompt: 0.1, completion: 0.1 },
'glm-4-32b': { prompt: 0.1, completion: 0.1 },
'glm-4.5': { prompt: 0.35, completion: 1.55 },
'glm-4.5-air': { prompt: 0.14, completion: 0.86 },
'glm-4.5v': { prompt: 0.6, completion: 1.8 },
'glm-4.6': { prompt: 0.5, completion: 1.75 },
// Qwen models
qwen: { prompt: 0.08, completion: 0.33 }, // Qwen base pattern (using qwen2.5-72b pricing)
'qwen2.5': { prompt: 0.08, completion: 0.33 }, // Qwen 2.5 base pattern
'qwen-turbo': { prompt: 0.05, completion: 0.2 },
'qwen-plus': { prompt: 0.4, completion: 1.2 },
'qwen-max': { prompt: 1.6, completion: 6.4 },
'qwq-32b': { prompt: 0.15, completion: 0.4 },
// Qwen3 models
qwen3: { prompt: 0.035, completion: 0.138 }, // Qwen3 base pattern (using qwen3-4b pricing)
'qwen3-8b': { prompt: 0.035, completion: 0.138 },
'qwen3-14b': { prompt: 0.05, completion: 0.22 },
'qwen3-30b-a3b': { prompt: 0.06, completion: 0.22 },
'qwen3-32b': { prompt: 0.05, completion: 0.2 },
'qwen3-235b-a22b': { prompt: 0.08, completion: 0.55 },
// Qwen3 VL (Vision-Language) models
'qwen3-vl-8b-thinking': { prompt: 0.18, completion: 2.1 },
'qwen3-vl-8b-instruct': { prompt: 0.18, completion: 0.69 },
'qwen3-vl-30b-a3b': { prompt: 0.29, completion: 1.0 },
'qwen3-vl-235b-a22b': { prompt: 0.3, completion: 1.2 },
// Qwen3 specialized models
'qwen3-max': { prompt: 1.2, completion: 6 },
'qwen3-coder': { prompt: 0.22, completion: 0.95 },
'qwen3-coder-30b-a3b': { prompt: 0.06, completion: 0.25 },
'qwen3-coder-plus': { prompt: 1, completion: 5 },
'qwen3-coder-flash': { prompt: 0.3, completion: 1.5 },
'qwen3-next-80b-a3b': { prompt: 0.1, completion: 0.8 },
},
bedrockValues,
);
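The prompt/completion values are rates per one million tokens (matching the cited pricing pages), so turning a usage record into a dollar figure is a two-term product. A hedged sketch, with estimateCost as a hypothetical helper rather than an existing export:

// Sketch: cost in USD for one call, assuming rates are USD per 1M tokens.
function estimateCost(model, promptTokens, completionTokens) {
  const key = getValueKey(model);
  const rate = key ? tokenValues[key] : undefined;
  if (!rate) return undefined;
  return (promptTokens / 1e6) * rate.prompt + (completionTokens / 1e6) * rate.completion;
}

estimateCost('gpt-4o-2024-08-06', 1000, 500); // 0.0025 + 0.005 = 0.0075 USD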
@@ -184,67 +249,39 @@ const cacheTokenValues = {
* @returns {string|undefined} The key corresponding to the model name, or undefined if no match is found.
*/
const getValueKey = (model, endpoint) => {
if (!model || typeof model !== 'string') {
return undefined;
}
// Use findMatchingPattern directly against tokenValues for efficient lookup
if (!endpoint || (typeof endpoint === 'string' && !tokenValues[endpoint])) {
const matchedKey = findMatchingPattern(model, tokenValues);
if (matchedKey) {
return matchedKey;
}
}
// Fallback: use matchModelName for edge cases and legacy handling
const modelName = matchModelName(model, endpoint);
if (!modelName) {
return undefined;
}
// Legacy token size mappings and aliases for older models
if (modelName.includes('gpt-3.5-turbo-16k')) {
return '16k';
} else if (modelName.includes('gpt-3.5-turbo-0125')) {
return 'gpt-3.5-turbo-0125';
} else if (modelName.includes('gpt-3.5-turbo-1106')) {
return 'gpt-3.5-turbo-1106';
} else if (modelName.includes('gpt-3.5')) {
return '4k';
} else if (modelName.includes('o4-mini')) {
return 'o4-mini';
} else if (modelName.includes('o4')) {
return 'o4';
} else if (modelName.includes('o3-mini')) {
return 'o3-mini';
} else if (modelName.includes('o3')) {
return 'o3';
} else if (modelName.includes('o1-preview')) {
return 'o1-preview';
} else if (modelName.includes('o1-mini')) {
return 'o1-mini';
} else if (modelName.includes('o1')) {
return 'o1';
} else if (modelName.includes('gpt-4.5')) {
return 'gpt-4.5';
} else if (modelName.includes('gpt-4.1-nano')) {
return 'gpt-4.1-nano';
} else if (modelName.includes('gpt-4.1-mini')) {
return 'gpt-4.1-mini';
} else if (modelName.includes('gpt-4.1')) {
return 'gpt-4.1';
} else if (modelName.includes('gpt-4o-2024-05-13')) {
return 'gpt-4o-2024-05-13';
} else if (modelName.includes('gpt-5-nano')) {
return 'gpt-5-nano';
} else if (modelName.includes('gpt-5-mini')) {
return 'gpt-5-mini';
} else if (modelName.includes('gpt-5')) {
return 'gpt-5';
} else if (modelName.includes('gpt-4o-mini')) {
return 'gpt-4o-mini';
} else if (modelName.includes('gpt-4o')) {
return 'gpt-4o';
} else if (modelName.includes('gpt-4-vision')) {
return 'gpt-4-1106';
} else if (modelName.includes('gpt-4-1106')) {
return 'gpt-4-1106';
return 'gpt-4-1106'; // Alias for gpt-4-vision
} else if (modelName.includes('gpt-4-0125')) {
return 'gpt-4-1106';
return 'gpt-4-1106'; // Alias for gpt-4-0125
} else if (modelName.includes('gpt-4-turbo')) {
return 'gpt-4-1106';
return 'gpt-4-1106'; // Alias for gpt-4-turbo
} else if (modelName.includes('gpt-4-32k')) {
return '32k';
} else if (modelName.includes('gpt-4')) {
return '8k';
} else if (tokenValues[modelName]) {
return modelName;
}
return undefined;
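The net effect of the rewrite: the pattern lookup against tokenValues handles most models directly, and the if/else chain survives only for legacy size aliases and edge cases. Resolutions confirmed by the tests below and by the alias comments above:

getValueKey('openai/gpt-5-pro');       // -> 'gpt-5-pro'    (pattern match)
getValueKey('openai/gpt-oss:120b');    // -> 'gpt-oss:120b' (specific size wins)
getValueKey('gpt-oss-570b');           // -> 'gpt-oss'      (generic fallback)
getValueKey('gpt-3.5-turbo-16k-0613'); // -> '16k'          (legacy size alias)
getValueKey('gpt-4-turbo');            // -> 'gpt-4-1106'   (alias for gpt-4-turbo)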


@@ -1,3 +1,4 @@
const { maxTokensMap } = require('@librechat/api');
const { EModelEndpoint } = require('librechat-data-provider');
const {
defaultRate,
@@ -113,6 +114,14 @@ describe('getValueKey', () => {
expect(getValueKey('gpt-5-nano-2025-01-30-0130')).toBe('gpt-5-nano');
});
it('should return "gpt-5-pro" for model type of "gpt-5-pro"', () => {
expect(getValueKey('gpt-5-pro-2025-01-30')).toBe('gpt-5-pro');
expect(getValueKey('openai/gpt-5-pro')).toBe('gpt-5-pro');
expect(getValueKey('gpt-5-pro-0130')).toBe('gpt-5-pro');
expect(getValueKey('gpt-5-pro-2025-01-30-0130')).toBe('gpt-5-pro');
expect(getValueKey('gpt-5-pro-preview')).toBe('gpt-5-pro');
});
it('should return "gpt-4o" for model type of "gpt-4o"', () => {
expect(getValueKey('gpt-4o-2024-08-06')).toBe('gpt-4o');
expect(getValueKey('gpt-4o-2024-08-06-0718')).toBe('gpt-4o');
@@ -184,6 +193,16 @@ describe('getValueKey', () => {
expect(getValueKey('claude-3.5-haiku-turbo')).toBe('claude-3.5-haiku');
expect(getValueKey('claude-3.5-haiku-0125')).toBe('claude-3.5-haiku');
});
it('should return expected value keys for "gpt-oss" models', () => {
expect(getValueKey('openai/gpt-oss-120b')).toBe('gpt-oss-120b');
expect(getValueKey('openai/gpt-oss:120b')).toBe('gpt-oss:120b');
expect(getValueKey('openai/gpt-oss-570b')).toBe('gpt-oss');
expect(getValueKey('gpt-oss-570b')).toBe('gpt-oss');
expect(getValueKey('groq/gpt-oss-1080b')).toBe('gpt-oss');
expect(getValueKey('gpt-oss-20b')).toBe('gpt-oss-20b');
expect(getValueKey('oai/gpt-oss:20b')).toBe('gpt-oss:20b');
});
});
describe('getMultiplier', () => {
@@ -278,6 +297,20 @@ describe('getMultiplier', () => {
);
});
it('should return the correct multiplier for gpt-5-pro', () => {
const valueKey = getValueKey('gpt-5-pro-2025-01-30');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-5-pro'].prompt);
expect(getMultiplier({ valueKey, tokenType: 'completion' })).toBe(
tokenValues['gpt-5-pro'].completion,
);
expect(getMultiplier({ model: 'gpt-5-pro-preview', tokenType: 'prompt' })).toBe(
tokenValues['gpt-5-pro'].prompt,
);
expect(getMultiplier({ model: 'openai/gpt-5-pro', tokenType: 'completion' })).toBe(
tokenValues['gpt-5-pro'].completion,
);
});
it('should return the correct multiplier for gpt-4o', () => {
const valueKey = getValueKey('gpt-4o-2024-08-06');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-4o'].prompt);
@@ -394,6 +427,18 @@ describe('getMultiplier', () => {
expect(getMultiplier({ model: key, tokenType: 'completion' })).toBe(expectedCompletion);
});
});
it('should return correct multipliers for GLM models', () => {
const models = ['glm-4.6', 'glm-4.5v', 'glm-4.5-air', 'glm-4.5', 'glm-4-32b', 'glm-4', 'glm4'];
models.forEach((key) => {
const expectedPrompt = tokenValues[key].prompt;
const expectedCompletion = tokenValues[key].completion;
expect(getMultiplier({ valueKey: key, tokenType: 'prompt' })).toBe(expectedPrompt);
expect(getMultiplier({ valueKey: key, tokenType: 'completion' })).toBe(expectedCompletion);
expect(getMultiplier({ model: key, tokenType: 'prompt' })).toBe(expectedPrompt);
expect(getMultiplier({ model: key, tokenType: 'completion' })).toBe(expectedCompletion);
});
});
});
describe('AWS Bedrock Model Tests', () => {
@@ -449,6 +494,249 @@ describe('AWS Bedrock Model Tests', () => {
});
});
describe('Amazon Model Tests', () => {
describe('Amazon Nova Models', () => {
it('should return correct pricing for nova-premier', () => {
expect(getMultiplier({ model: 'nova-premier', tokenType: 'prompt' })).toBe(
tokenValues['nova-premier'].prompt,
);
expect(getMultiplier({ model: 'nova-premier', tokenType: 'completion' })).toBe(
tokenValues['nova-premier'].completion,
);
expect(getMultiplier({ model: 'amazon.nova-premier-v1:0', tokenType: 'prompt' })).toBe(
tokenValues['nova-premier'].prompt,
);
expect(getMultiplier({ model: 'amazon.nova-premier-v1:0', tokenType: 'completion' })).toBe(
tokenValues['nova-premier'].completion,
);
});
it('should return correct pricing for nova-pro', () => {
expect(getMultiplier({ model: 'nova-pro', tokenType: 'prompt' })).toBe(
tokenValues['nova-pro'].prompt,
);
expect(getMultiplier({ model: 'nova-pro', tokenType: 'completion' })).toBe(
tokenValues['nova-pro'].completion,
);
expect(getMultiplier({ model: 'amazon.nova-pro-v1:0', tokenType: 'prompt' })).toBe(
tokenValues['nova-pro'].prompt,
);
expect(getMultiplier({ model: 'amazon.nova-pro-v1:0', tokenType: 'completion' })).toBe(
tokenValues['nova-pro'].completion,
);
});
it('should return correct pricing for nova-lite', () => {
expect(getMultiplier({ model: 'nova-lite', tokenType: 'prompt' })).toBe(
tokenValues['nova-lite'].prompt,
);
expect(getMultiplier({ model: 'nova-lite', tokenType: 'completion' })).toBe(
tokenValues['nova-lite'].completion,
);
expect(getMultiplier({ model: 'amazon.nova-lite-v1:0', tokenType: 'prompt' })).toBe(
tokenValues['nova-lite'].prompt,
);
expect(getMultiplier({ model: 'amazon.nova-lite-v1:0', tokenType: 'completion' })).toBe(
tokenValues['nova-lite'].completion,
);
});
it('should return correct pricing for nova-micro', () => {
expect(getMultiplier({ model: 'nova-micro', tokenType: 'prompt' })).toBe(
tokenValues['nova-micro'].prompt,
);
expect(getMultiplier({ model: 'nova-micro', tokenType: 'completion' })).toBe(
tokenValues['nova-micro'].completion,
);
expect(getMultiplier({ model: 'amazon.nova-micro-v1:0', tokenType: 'prompt' })).toBe(
tokenValues['nova-micro'].prompt,
);
expect(getMultiplier({ model: 'amazon.nova-micro-v1:0', tokenType: 'completion' })).toBe(
tokenValues['nova-micro'].completion,
);
});
it('should match both short and full model names to the same pricing', () => {
const models = ['nova-micro', 'nova-lite', 'nova-pro', 'nova-premier'];
const fullModels = [
'amazon.nova-micro-v1:0',
'amazon.nova-lite-v1:0',
'amazon.nova-pro-v1:0',
'amazon.nova-premier-v1:0',
];
models.forEach((shortModel, i) => {
const fullModel = fullModels[i];
const shortPrompt = getMultiplier({ model: shortModel, tokenType: 'prompt' });
const fullPrompt = getMultiplier({ model: fullModel, tokenType: 'prompt' });
const shortCompletion = getMultiplier({ model: shortModel, tokenType: 'completion' });
const fullCompletion = getMultiplier({ model: fullModel, tokenType: 'completion' });
expect(shortPrompt).toBe(fullPrompt);
expect(shortCompletion).toBe(fullCompletion);
expect(shortPrompt).toBe(tokenValues[shortModel].prompt);
expect(shortCompletion).toBe(tokenValues[shortModel].completion);
});
});
});
describe('Amazon Titan Models', () => {
it('should return correct pricing for titan-text-premier', () => {
expect(getMultiplier({ model: 'titan-text-premier', tokenType: 'prompt' })).toBe(
tokenValues['titan-text-premier'].prompt,
);
expect(getMultiplier({ model: 'titan-text-premier', tokenType: 'completion' })).toBe(
tokenValues['titan-text-premier'].completion,
);
expect(getMultiplier({ model: 'amazon.titan-text-premier-v1:0', tokenType: 'prompt' })).toBe(
tokenValues['titan-text-premier'].prompt,
);
expect(
getMultiplier({ model: 'amazon.titan-text-premier-v1:0', tokenType: 'completion' }),
).toBe(tokenValues['titan-text-premier'].completion);
});
it('should return correct pricing for titan-text-express', () => {
expect(getMultiplier({ model: 'titan-text-express', tokenType: 'prompt' })).toBe(
tokenValues['titan-text-express'].prompt,
);
expect(getMultiplier({ model: 'titan-text-express', tokenType: 'completion' })).toBe(
tokenValues['titan-text-express'].completion,
);
expect(getMultiplier({ model: 'amazon.titan-text-express-v1', tokenType: 'prompt' })).toBe(
tokenValues['titan-text-express'].prompt,
);
expect(
getMultiplier({ model: 'amazon.titan-text-express-v1', tokenType: 'completion' }),
).toBe(tokenValues['titan-text-express'].completion);
});
it('should return correct pricing for titan-text-lite', () => {
expect(getMultiplier({ model: 'titan-text-lite', tokenType: 'prompt' })).toBe(
tokenValues['titan-text-lite'].prompt,
);
expect(getMultiplier({ model: 'titan-text-lite', tokenType: 'completion' })).toBe(
tokenValues['titan-text-lite'].completion,
);
expect(getMultiplier({ model: 'amazon.titan-text-lite-v1', tokenType: 'prompt' })).toBe(
tokenValues['titan-text-lite'].prompt,
);
expect(getMultiplier({ model: 'amazon.titan-text-lite-v1', tokenType: 'completion' })).toBe(
tokenValues['titan-text-lite'].completion,
);
});
it('should match both short and full model names to the same pricing', () => {
const models = ['titan-text-lite', 'titan-text-express', 'titan-text-premier'];
const fullModels = [
'amazon.titan-text-lite-v1',
'amazon.titan-text-express-v1',
'amazon.titan-text-premier-v1:0',
];
models.forEach((shortModel, i) => {
const fullModel = fullModels[i];
const shortPrompt = getMultiplier({ model: shortModel, tokenType: 'prompt' });
const fullPrompt = getMultiplier({ model: fullModel, tokenType: 'prompt' });
const shortCompletion = getMultiplier({ model: shortModel, tokenType: 'completion' });
const fullCompletion = getMultiplier({ model: fullModel, tokenType: 'completion' });
expect(shortPrompt).toBe(fullPrompt);
expect(shortCompletion).toBe(fullCompletion);
expect(shortPrompt).toBe(tokenValues[shortModel].prompt);
expect(shortCompletion).toBe(tokenValues[shortModel].completion);
});
});
});
});
describe('AI21 Model Tests', () => {
describe('AI21 J2 Models', () => {
it('should return correct pricing for j2-mid', () => {
expect(getMultiplier({ model: 'j2-mid', tokenType: 'prompt' })).toBe(
tokenValues['j2-mid'].prompt,
);
expect(getMultiplier({ model: 'j2-mid', tokenType: 'completion' })).toBe(
tokenValues['j2-mid'].completion,
);
expect(getMultiplier({ model: 'ai21.j2-mid-v1', tokenType: 'prompt' })).toBe(
tokenValues['j2-mid'].prompt,
);
expect(getMultiplier({ model: 'ai21.j2-mid-v1', tokenType: 'completion' })).toBe(
tokenValues['j2-mid'].completion,
);
});
it('should return correct pricing for j2-ultra', () => {
expect(getMultiplier({ model: 'j2-ultra', tokenType: 'prompt' })).toBe(
tokenValues['j2-ultra'].prompt,
);
expect(getMultiplier({ model: 'j2-ultra', tokenType: 'completion' })).toBe(
tokenValues['j2-ultra'].completion,
);
expect(getMultiplier({ model: 'ai21.j2-ultra-v1', tokenType: 'prompt' })).toBe(
tokenValues['j2-ultra'].prompt,
);
expect(getMultiplier({ model: 'ai21.j2-ultra-v1', tokenType: 'completion' })).toBe(
tokenValues['j2-ultra'].completion,
);
});
it('should match both short and full model names to the same pricing', () => {
const models = ['j2-mid', 'j2-ultra'];
const fullModels = ['ai21.j2-mid-v1', 'ai21.j2-ultra-v1'];
models.forEach((shortModel, i) => {
const fullModel = fullModels[i];
const shortPrompt = getMultiplier({ model: shortModel, tokenType: 'prompt' });
const fullPrompt = getMultiplier({ model: fullModel, tokenType: 'prompt' });
const shortCompletion = getMultiplier({ model: shortModel, tokenType: 'completion' });
const fullCompletion = getMultiplier({ model: fullModel, tokenType: 'completion' });
expect(shortPrompt).toBe(fullPrompt);
expect(shortCompletion).toBe(fullCompletion);
expect(shortPrompt).toBe(tokenValues[shortModel].prompt);
expect(shortCompletion).toBe(tokenValues[shortModel].completion);
});
});
});
describe('AI21 Jamba Models', () => {
it('should return correct pricing for jamba-instruct', () => {
expect(getMultiplier({ model: 'jamba-instruct', tokenType: 'prompt' })).toBe(
tokenValues['jamba-instruct'].prompt,
);
expect(getMultiplier({ model: 'jamba-instruct', tokenType: 'completion' })).toBe(
tokenValues['jamba-instruct'].completion,
);
expect(getMultiplier({ model: 'ai21.jamba-instruct-v1:0', tokenType: 'prompt' })).toBe(
tokenValues['jamba-instruct'].prompt,
);
expect(getMultiplier({ model: 'ai21.jamba-instruct-v1:0', tokenType: 'completion' })).toBe(
tokenValues['jamba-instruct'].completion,
);
});
it('should match both short and full model names to the same pricing', () => {
const shortPrompt = getMultiplier({ model: 'jamba-instruct', tokenType: 'prompt' });
const fullPrompt = getMultiplier({
model: 'ai21.jamba-instruct-v1:0',
tokenType: 'prompt',
});
const shortCompletion = getMultiplier({ model: 'jamba-instruct', tokenType: 'completion' });
const fullCompletion = getMultiplier({
model: 'ai21.jamba-instruct-v1:0',
tokenType: 'completion',
});
expect(shortPrompt).toBe(fullPrompt);
expect(shortCompletion).toBe(fullCompletion);
expect(shortPrompt).toBe(tokenValues['jamba-instruct'].prompt);
expect(shortCompletion).toBe(tokenValues['jamba-instruct'].completion);
});
});
});
describe('Deepseek Model Tests', () => {
const deepseekModels = ['deepseek-chat', 'deepseek-coder', 'deepseek-reasoner', 'deepseek.r1'];
@@ -480,6 +768,187 @@ describe('Deepseek Model Tests', () => {
});
});
describe('Qwen3 Model Tests', () => {
describe('Qwen3 Base Models', () => {
it('should return correct pricing for qwen3 base pattern', () => {
expect(getMultiplier({ model: 'qwen3', tokenType: 'prompt' })).toBe(
tokenValues['qwen3'].prompt,
);
expect(getMultiplier({ model: 'qwen3', tokenType: 'completion' })).toBe(
tokenValues['qwen3'].completion,
);
});
it('should return correct pricing for qwen3-4b (falls back to qwen3)', () => {
expect(getMultiplier({ model: 'qwen3-4b', tokenType: 'prompt' })).toBe(
tokenValues['qwen3'].prompt,
);
expect(getMultiplier({ model: 'qwen3-4b', tokenType: 'completion' })).toBe(
tokenValues['qwen3'].completion,
);
});
it('should return correct pricing for qwen3-8b', () => {
expect(getMultiplier({ model: 'qwen3-8b', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-8b'].prompt,
);
expect(getMultiplier({ model: 'qwen3-8b', tokenType: 'completion' })).toBe(
tokenValues['qwen3-8b'].completion,
);
});
it('should return correct pricing for qwen3-14b', () => {
expect(getMultiplier({ model: 'qwen3-14b', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-14b'].prompt,
);
expect(getMultiplier({ model: 'qwen3-14b', tokenType: 'completion' })).toBe(
tokenValues['qwen3-14b'].completion,
);
});
it('should return correct pricing for qwen3-235b-a22b', () => {
expect(getMultiplier({ model: 'qwen3-235b-a22b', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-235b-a22b'].prompt,
);
expect(getMultiplier({ model: 'qwen3-235b-a22b', tokenType: 'completion' })).toBe(
tokenValues['qwen3-235b-a22b'].completion,
);
});
it('should handle model name variations with provider prefixes', () => {
const models = [
{ input: 'qwen3', expected: 'qwen3' },
{ input: 'qwen3-4b', expected: 'qwen3' },
{ input: 'qwen3-8b', expected: 'qwen3-8b' },
{ input: 'qwen3-32b', expected: 'qwen3-32b' },
];
models.forEach(({ input, expected }) => {
const withPrefix = `alibaba/${input}`;
expect(getMultiplier({ model: withPrefix, tokenType: 'prompt' })).toBe(
tokenValues[expected].prompt,
);
expect(getMultiplier({ model: withPrefix, tokenType: 'completion' })).toBe(
tokenValues[expected].completion,
);
});
});
});
describe('Qwen3 VL (Vision-Language) Models', () => {
it('should return correct pricing for qwen3-vl-8b-thinking', () => {
expect(getMultiplier({ model: 'qwen3-vl-8b-thinking', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-vl-8b-thinking'].prompt,
);
expect(getMultiplier({ model: 'qwen3-vl-8b-thinking', tokenType: 'completion' })).toBe(
tokenValues['qwen3-vl-8b-thinking'].completion,
);
});
it('should return correct pricing for qwen3-vl-8b-instruct', () => {
expect(getMultiplier({ model: 'qwen3-vl-8b-instruct', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-vl-8b-instruct'].prompt,
);
expect(getMultiplier({ model: 'qwen3-vl-8b-instruct', tokenType: 'completion' })).toBe(
tokenValues['qwen3-vl-8b-instruct'].completion,
);
});
it('should return correct pricing for qwen3-vl-30b-a3b', () => {
expect(getMultiplier({ model: 'qwen3-vl-30b-a3b', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-vl-30b-a3b'].prompt,
);
expect(getMultiplier({ model: 'qwen3-vl-30b-a3b', tokenType: 'completion' })).toBe(
tokenValues['qwen3-vl-30b-a3b'].completion,
);
});
it('should return correct pricing for qwen3-vl-235b-a22b', () => {
expect(getMultiplier({ model: 'qwen3-vl-235b-a22b', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-vl-235b-a22b'].prompt,
);
expect(getMultiplier({ model: 'qwen3-vl-235b-a22b', tokenType: 'completion' })).toBe(
tokenValues['qwen3-vl-235b-a22b'].completion,
);
});
});
describe('Qwen3 Specialized Models', () => {
it('should return correct pricing for qwen3-max', () => {
expect(getMultiplier({ model: 'qwen3-max', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-max'].prompt,
);
expect(getMultiplier({ model: 'qwen3-max', tokenType: 'completion' })).toBe(
tokenValues['qwen3-max'].completion,
);
});
it('should return correct pricing for qwen3-coder', () => {
expect(getMultiplier({ model: 'qwen3-coder', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-coder'].prompt,
);
expect(getMultiplier({ model: 'qwen3-coder', tokenType: 'completion' })).toBe(
tokenValues['qwen3-coder'].completion,
);
});
it('should return correct pricing for qwen3-coder-plus', () => {
expect(getMultiplier({ model: 'qwen3-coder-plus', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-coder-plus'].prompt,
);
expect(getMultiplier({ model: 'qwen3-coder-plus', tokenType: 'completion' })).toBe(
tokenValues['qwen3-coder-plus'].completion,
);
});
it('should return correct pricing for qwen3-coder-flash', () => {
expect(getMultiplier({ model: 'qwen3-coder-flash', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-coder-flash'].prompt,
);
expect(getMultiplier({ model: 'qwen3-coder-flash', tokenType: 'completion' })).toBe(
tokenValues['qwen3-coder-flash'].completion,
);
});
it('should return correct pricing for qwen3-next-80b-a3b', () => {
expect(getMultiplier({ model: 'qwen3-next-80b-a3b', tokenType: 'prompt' })).toBe(
tokenValues['qwen3-next-80b-a3b'].prompt,
);
expect(getMultiplier({ model: 'qwen3-next-80b-a3b', tokenType: 'completion' })).toBe(
tokenValues['qwen3-next-80b-a3b'].completion,
);
});
});
describe('Qwen3 Model Variations', () => {
it('should handle all qwen3 models with provider prefixes', () => {
const models = ['qwen3', 'qwen3-8b', 'qwen3-max', 'qwen3-coder', 'qwen3-vl-8b-instruct'];
const prefixes = ['alibaba', 'qwen', 'openrouter'];
models.forEach((model) => {
prefixes.forEach((prefix) => {
const fullModel = `${prefix}/${model}`;
expect(getMultiplier({ model: fullModel, tokenType: 'prompt' })).toBe(
tokenValues[model].prompt,
);
expect(getMultiplier({ model: fullModel, tokenType: 'completion' })).toBe(
tokenValues[model].completion,
);
});
});
});
it('should handle qwen3-4b falling back to qwen3 base pattern', () => {
const testCases = ['qwen3-4b', 'alibaba/qwen3-4b', 'qwen/qwen3-4b-preview'];
testCases.forEach((model) => {
expect(getMultiplier({ model, tokenType: 'prompt' })).toBe(tokenValues['qwen3'].prompt);
expect(getMultiplier({ model, tokenType: 'completion' })).toBe(
tokenValues['qwen3'].completion,
);
});
});
});
});
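// Sketch of the longest-match fallback the Qwen3 tests above exercise
// (QWEN_KEYS and resolveQwenKey are assumed names; the actual resolution is
// getValueKey over tokenValues):
const QWEN_KEYS = ['qwen3-vl-8b-instruct', 'qwen3-8b', 'qwen3-max', 'qwen3'];
const resolveQwenKey = (model) => {
const name = model.split('/').pop(); // drop provider prefixes like 'alibaba/'
return QWEN_KEYS.find((key) => name === key || name.startsWith(`${key}-`)) ?? null;
};
// resolveQwenKey('qwen/qwen3-4b-preview') → 'qwen3' (no 'qwen3-4b' entry, so the base pattern wins)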
describe('getCacheMultiplier', () => {
it('should return the correct cache multiplier for a given valueKey and cacheType', () => {
expect(getCacheMultiplier({ valueKey: 'claude-3-5-sonnet', cacheType: 'write' })).toBe(
@@ -772,6 +1241,110 @@ describe('Grok Model Tests - Pricing', () => {
});
});
describe('GLM Model Tests', () => {
it('should return expected value keys for GLM models', () => {
expect(getValueKey('glm-4.6')).toBe('glm-4.6');
expect(getValueKey('glm-4.5')).toBe('glm-4.5');
expect(getValueKey('glm-4.5v')).toBe('glm-4.5v');
expect(getValueKey('glm-4.5-air')).toBe('glm-4.5-air');
expect(getValueKey('glm-4-32b')).toBe('glm-4-32b');
expect(getValueKey('glm-4')).toBe('glm-4');
expect(getValueKey('glm4')).toBe('glm4');
});
it('should match GLM model variations with provider prefixes', () => {
expect(getValueKey('z-ai/glm-4.6')).toBe('glm-4.6');
expect(getValueKey('z-ai/glm-4.5')).toBe('glm-4.5');
expect(getValueKey('z-ai/glm-4.5-air')).toBe('glm-4.5-air');
expect(getValueKey('z-ai/glm-4.5v')).toBe('glm-4.5v');
expect(getValueKey('z-ai/glm-4-32b')).toBe('glm-4-32b');
expect(getValueKey('zai/glm-4.6')).toBe('glm-4.6');
expect(getValueKey('zai/glm-4.5')).toBe('glm-4.5');
expect(getValueKey('zai/glm-4.5-air')).toBe('glm-4.5-air');
expect(getValueKey('zai/glm-4.5v')).toBe('glm-4.5v');
expect(getValueKey('zai-org/GLM-4.6')).toBe('glm-4.6');
expect(getValueKey('zai-org/GLM-4.5')).toBe('glm-4.5');
expect(getValueKey('zai-org/GLM-4.5-Air')).toBe('glm-4.5-air');
expect(getValueKey('zai-org/GLM-4.5V')).toBe('glm-4.5v');
expect(getValueKey('zai-org/GLM-4-32B-0414')).toBe('glm-4-32b');
});
it('should match GLM model variations with suffixes', () => {
expect(getValueKey('glm-4.6-fp8')).toBe('glm-4.6');
expect(getValueKey('zai-org/GLM-4.6-FP8')).toBe('glm-4.6');
expect(getValueKey('zai-org/GLM-4.5-Air-FP8')).toBe('glm-4.5-air');
});
it('should prioritize more specific GLM model patterns', () => {
expect(getValueKey('glm-4.5-air-something')).toBe('glm-4.5-air');
expect(getValueKey('glm-4.5-something')).toBe('glm-4.5');
expect(getValueKey('glm-4.5v-something')).toBe('glm-4.5v');
});
it('should return correct multipliers for all GLM models', () => {
expect(getMultiplier({ model: 'glm-4.6', tokenType: 'prompt' })).toBe(
tokenValues['glm-4.6'].prompt,
);
expect(getMultiplier({ model: 'glm-4.6', tokenType: 'completion' })).toBe(
tokenValues['glm-4.6'].completion,
);
expect(getMultiplier({ model: 'glm-4.5v', tokenType: 'prompt' })).toBe(
tokenValues['glm-4.5v'].prompt,
);
expect(getMultiplier({ model: 'glm-4.5v', tokenType: 'completion' })).toBe(
tokenValues['glm-4.5v'].completion,
);
expect(getMultiplier({ model: 'glm-4.5-air', tokenType: 'prompt' })).toBe(
tokenValues['glm-4.5-air'].prompt,
);
expect(getMultiplier({ model: 'glm-4.5-air', tokenType: 'completion' })).toBe(
tokenValues['glm-4.5-air'].completion,
);
expect(getMultiplier({ model: 'glm-4.5', tokenType: 'prompt' })).toBe(
tokenValues['glm-4.5'].prompt,
);
expect(getMultiplier({ model: 'glm-4.5', tokenType: 'completion' })).toBe(
tokenValues['glm-4.5'].completion,
);
expect(getMultiplier({ model: 'glm-4-32b', tokenType: 'prompt' })).toBe(
tokenValues['glm-4-32b'].prompt,
);
expect(getMultiplier({ model: 'glm-4-32b', tokenType: 'completion' })).toBe(
tokenValues['glm-4-32b'].completion,
);
expect(getMultiplier({ model: 'glm-4', tokenType: 'prompt' })).toBe(
tokenValues['glm-4'].prompt,
);
expect(getMultiplier({ model: 'glm-4', tokenType: 'completion' })).toBe(
tokenValues['glm-4'].completion,
);
expect(getMultiplier({ model: 'glm4', tokenType: 'prompt' })).toBe(tokenValues['glm4'].prompt);
expect(getMultiplier({ model: 'glm4', tokenType: 'completion' })).toBe(
tokenValues['glm4'].completion,
);
});
it('should return correct multipliers for GLM models with provider prefixes', () => {
expect(getMultiplier({ model: 'z-ai/glm-4.6', tokenType: 'prompt' })).toBe(
tokenValues['glm-4.6'].prompt,
);
expect(getMultiplier({ model: 'zai/glm-4.5-air', tokenType: 'completion' })).toBe(
tokenValues['glm-4.5-air'].completion,
);
expect(getMultiplier({ model: 'zai-org/GLM-4.5V', tokenType: 'prompt' })).toBe(
tokenValues['glm-4.5v'].prompt,
);
});
});
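// The suffix and prefix tests above imply a most-specific-first match over GLM keys.
// A minimal sketch (GLM_KEYS and glmKey are assumed names; the real logic is getValueKey):
const GLM_KEYS = ['glm-4.6', 'glm-4.5-air', 'glm-4.5v', 'glm-4.5', 'glm-4-32b', 'glm-4', 'glm4'];
const glmKey = (model) => GLM_KEYS.find((k) => model.toLowerCase().includes(k)) ?? null;
// glmKey('zai-org/GLM-4.5-Air-FP8') → 'glm-4.5-air'; glmKey('glm-4.6-fp8') → 'glm-4.6'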
describe('Claude Model Tests', () => {
it('should return correct prompt and completion rates for Claude 4 models', () => {
expect(getMultiplier({ model: 'claude-sonnet-4', tokenType: 'prompt' })).toBe(
@@ -788,6 +1361,37 @@ describe('Claude Model Tests', () => {
);
});
it('should return correct prompt and completion rates for Claude Haiku 4.5', () => {
expect(getMultiplier({ model: 'claude-haiku-4-5', tokenType: 'prompt' })).toBe(
tokenValues['claude-haiku-4-5'].prompt,
);
expect(getMultiplier({ model: 'claude-haiku-4-5', tokenType: 'completion' })).toBe(
tokenValues['claude-haiku-4-5'].completion,
);
});
it('should handle Claude Haiku 4.5 model name variations', () => {
const modelVariations = [
'claude-haiku-4-5',
'claude-haiku-4-5-20250420',
'claude-haiku-4-5-latest',
'anthropic/claude-haiku-4-5',
'claude-haiku-4-5/anthropic',
'claude-haiku-4-5-preview',
];
modelVariations.forEach((model) => {
const valueKey = getValueKey(model);
expect(valueKey).toBe('claude-haiku-4-5');
expect(getMultiplier({ model, tokenType: 'prompt' })).toBe(
tokenValues['claude-haiku-4-5'].prompt,
);
expect(getMultiplier({ model, tokenType: 'completion' })).toBe(
tokenValues['claude-haiku-4-5'].completion,
);
});
});
it('should handle Claude 4 model name variations with different prefixes and suffixes', () => {
const modelVariations = [
'claude-sonnet-4',
@@ -865,3 +1469,119 @@ describe('Claude Model Tests', () => {
});
});
});
describe('tokens.ts and tx.js sync validation', () => {
it('should resolve all models in maxTokensMap to pricing via getValueKey', () => {
const tokensKeys = Object.keys(maxTokensMap[EModelEndpoint.openAI]);
const txKeys = Object.keys(tokenValues);
const unresolved = [];
tokensKeys.forEach((key) => {
// Skip legacy token size mappings (e.g., '4k', '8k', '16k', '32k')
if (/^\d+k$/.test(key)) return;
// Skip generic pattern keys (end with '-' or ':')
if (key.endsWith('-') || key.endsWith(':')) return;
// Try to resolve via getValueKey
const resolvedKey = getValueKey(key);
// If it resolves and the resolved key has pricing, success
if (resolvedKey && txKeys.includes(resolvedKey)) return;
// If it resolves to a legacy key (4k, 8k, etc), also OK
if (resolvedKey && /^\d+k$/.test(resolvedKey)) return;
// If we get here, this model can't get pricing - flag it
unresolved.push({
key,
resolvedKey: resolvedKey || 'undefined',
context: maxTokensMap[EModelEndpoint.openAI][key],
});
});
if (unresolved.length > 0) {
console.log('\nModels that cannot resolve to pricing via getValueKey:');
unresolved.forEach(({ key, resolvedKey, context }) => {
console.log(` - '${key}' → '${resolvedKey}' (context: ${context})`);
});
}
expect(unresolved).toEqual([]);
});
it('should not have redundant dated variants with same pricing and context as base model', () => {
const txKeys = Object.keys(tokenValues);
const redundant = [];
txKeys.forEach((key) => {
// Check if this is a dated variant (ends with -YYYY-MM-DD)
if (key.match(/.*-\d{4}-\d{2}-\d{2}$/)) {
const baseKey = key.replace(/-\d{4}-\d{2}-\d{2}$/, '');
if (txKeys.includes(baseKey)) {
const variantPricing = tokenValues[key];
const basePricing = tokenValues[baseKey];
const variantContext = maxTokensMap[EModelEndpoint.openAI][key];
const baseContext = maxTokensMap[EModelEndpoint.openAI][baseKey];
const samePricing =
variantPricing.prompt === basePricing.prompt &&
variantPricing.completion === basePricing.completion;
const sameContext = variantContext === baseContext;
if (samePricing && sameContext) {
redundant.push({
key,
baseKey,
pricing: `${variantPricing.prompt}/${variantPricing.completion}`,
context: variantContext,
});
}
}
}
});
if (redundant.length > 0) {
console.log('\nRedundant dated variants found (same pricing and context as base):');
redundant.forEach(({ key, baseKey, pricing, context }) => {
console.log(` - '${key}' → '${baseKey}' (pricing: ${pricing}, context: ${context})`);
console.log(` Can be removed - pattern matching will handle it`);
});
}
expect(redundant).toEqual([]);
});
it('should have context windows in tokens.ts for all models with pricing in tx.js (openAI catch-all)', () => {
const txKeys = Object.keys(tokenValues);
const missingContext = [];
txKeys.forEach((key) => {
// Skip legacy token size mappings (4k, 8k, 16k, 32k)
if (/^\d+k$/.test(key)) return;
// Check if this model has a context window defined
const context = maxTokensMap[EModelEndpoint.openAI][key];
if (!context) {
const pricing = tokenValues[key];
missingContext.push({
key,
pricing: `${pricing.prompt}/${pricing.completion}`,
});
}
});
if (missingContext.length > 0) {
console.log('\nModels with pricing but missing context in tokens.ts:');
missingContext.forEach(({ key, pricing }) => {
console.log(` - '${key}' (pricing: ${pricing})`);
console.log(` Add to tokens.ts openAIModels/bedrockModels/etc.`);
});
}
expect(missingContext).toEqual([]);
});
});
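// Quick illustration of the date-suffix handling in the redundancy check above
// ('gpt-4o-2024-08-06' is only an example model name):
// /.*-\d{4}-\d{2}-\d{2}$/.test('gpt-4o-2024-08-06') → true
// 'gpt-4o-2024-08-06'.replace(/-\d{4}-\d{2}-\d{2}$/, '') → 'gpt-4o'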

View File

@@ -47,9 +47,8 @@
"@langchain/core": "^0.3.62",
"@langchain/google-genai": "^0.2.13",
"@langchain/google-vertexai": "^0.2.13",
"@langchain/openai": "^0.5.18",
"@langchain/textsplitters": "^0.1.0",
"@librechat/agents": "^2.4.82",
"@librechat/agents": "^2.4.86",
"@librechat/api": "*",
"@librechat/data-schemas": "*",
"@microsoft/microsoft-graph-client": "^3.0.7",
@@ -94,7 +93,7 @@
"multer": "^2.0.2",
"nanoid": "^3.3.7",
"node-fetch": "^2.7.0",
"nodemailer": "^6.9.15",
"nodemailer": "^7.0.9",
"ollama": "^0.5.0",
"openai": "^5.10.1",
"openid-client": "^6.5.0",

View File

@@ -116,11 +116,15 @@ const refreshController = async (req, res) => {
const token = await setAuthTokens(userId, res, session);
// trigger OAuth MCP server reconnection asynchronously (best effort)
void getOAuthReconnectionManager()
.reconnectServers(userId)
.catch((err) => {
logger.error('Error reconnecting OAuth MCP servers:', err);
});
try {
void getOAuthReconnectionManager()
.reconnectServers(userId)
.catch((err) => {
logger.error('[refreshController] Error reconnecting OAuth MCP servers:', err);
});
} catch (err) {
logger.warn(`[refreshController] Cannot attempt OAuth MCP servers reconnection:`, err);
}
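// The outer try/catch guards a synchronous throw from getOAuthReconnectionManager()
// (e.g., if the manager has not been initialized yet), while the chained .catch
// handles asynchronous rejections from reconnectServers(); the two failure modes need both guards.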
res.status(200).send({ token, user });
} else if (req?.query?.retry) {

View File

@@ -1,7 +1,6 @@
const { logger } = require('@librechat/data-schemas');
const { logger, webSearchKeys } = require('@librechat/data-schemas');
const { Tools, CacheKeys, Constants, FileSources } = require('librechat-data-provider');
const {
webSearchKeys,
MCPOAuthHandler,
MCPTokenStorage,
normalizeHttpError,
@@ -328,16 +327,23 @@ const maybeUninstallOAuthMCP = async (userId, pluginKey, appConfig) => {
const revocationEndpointAuthMethodsSupported =
serverConfig.oauth?.revocation_endpoint_auth_methods_supported ??
clientMetadata.revocation_endpoint_auth_methods_supported;
const oauthHeaders = serverConfig.oauth_headers ?? {};
if (tokens?.access_token) {
try {
await MCPOAuthHandler.revokeOAuthToken(serverName, tokens.access_token, 'access', {
serverUrl: serverConfig.url,
clientId: clientInfo.client_id,
clientSecret: clientInfo.client_secret ?? '',
revocationEndpoint,
revocationEndpointAuthMethodsSupported,
});
await MCPOAuthHandler.revokeOAuthToken(
serverName,
tokens.access_token,
'access',
{
serverUrl: serverConfig.url,
clientId: clientInfo.client_id,
clientSecret: clientInfo.client_secret ?? '',
revocationEndpoint,
revocationEndpointAuthMethodsSupported,
},
oauthHeaders,
);
} catch (error) {
logger.error(`Error revoking OAuth access token for ${serverName}:`, error);
}
@@ -345,13 +351,19 @@ const maybeUninstallOAuthMCP = async (userId, pluginKey, appConfig) => {
if (tokens?.refresh_token) {
try {
await MCPOAuthHandler.revokeOAuthToken(serverName, tokens.refresh_token, 'refresh', {
serverUrl: serverConfig.url,
clientId: clientInfo.client_id,
clientSecret: clientInfo.client_secret ?? '',
revocationEndpoint,
revocationEndpointAuthMethodsSupported,
});
await MCPOAuthHandler.revokeOAuthToken(
serverName,
tokens.refresh_token,
'refresh',
{
serverUrl: serverConfig.url,
clientId: clientInfo.client_id,
clientSecret: clientInfo.client_secret ?? '',
revocationEndpoint,
revocationEndpointAuthMethodsSupported,
},
oauthHeaders,
);
} catch (error) {
logger.error(`Error revoking OAuth refresh token for ${serverName}:`, error);
}

View File

@@ -211,16 +211,13 @@ class AgentClient extends BaseClient {
* @returns {Promise<Array<Partial<MongoFile>>>}
*/
async addImageURLs(message, attachments) {
const { files, text, image_urls } = await encodeAndFormat(
const { files, image_urls } = await encodeAndFormat(
this.options.req,
attachments,
this.options.agent.provider,
VisionModes.agents,
);
message.image_urls = image_urls.length ? image_urls : undefined;
if (text && text.length) {
message.ocr = text;
}
return files;
}
@@ -248,19 +245,18 @@ class AgentClient extends BaseClient {
if (this.options.attachments) {
const attachments = await this.options.attachments;
const latestMessage = orderedMessages[orderedMessages.length - 1];
if (this.message_file_map) {
this.message_file_map[orderedMessages[orderedMessages.length - 1].messageId] = attachments;
this.message_file_map[latestMessage.messageId] = attachments;
} else {
this.message_file_map = {
[orderedMessages[orderedMessages.length - 1].messageId]: attachments,
[latestMessage.messageId]: attachments,
};
}
const files = await this.addImageURLs(
orderedMessages[orderedMessages.length - 1],
attachments,
);
await this.addFileContextToMessage(latestMessage, attachments);
const files = await this.processAttachments(latestMessage, attachments);
this.options.attachments = files;
}
@@ -280,21 +276,21 @@ class AgentClient extends BaseClient {
assistantName: this.options?.modelLabel,
});
if (message.ocr && i !== orderedMessages.length - 1) {
if (message.fileContext && i !== orderedMessages.length - 1) {
if (typeof formattedMessage.content === 'string') {
formattedMessage.content = message.ocr + '\n' + formattedMessage.content;
formattedMessage.content = message.fileContext + '\n' + formattedMessage.content;
} else {
const textPart = formattedMessage.content.find((part) => part.type === 'text');
textPart
? (textPart.text = message.ocr + '\n' + textPart.text)
: formattedMessage.content.unshift({ type: 'text', text: message.ocr });
? (textPart.text = message.fileContext + '\n' + textPart.text)
: formattedMessage.content.unshift({ type: 'text', text: message.fileContext });
}
} else if (message.ocr && i === orderedMessages.length - 1) {
systemContent = [systemContent, message.ocr].join('\n');
} else if (message.fileContext && i === orderedMessages.length - 1) {
systemContent = [systemContent, message.fileContext].join('\n');
}
const needsTokenCount =
(this.contextStrategy && !orderedMessages[i].tokenCount) || message.ocr;
(this.contextStrategy && !orderedMessages[i].tokenCount) || message.fileContext;
/* If tokens were never counted, or this is a vision request and the message has files, count again */
if (needsTokenCount || (this.isVisionModel && (message.image_urls || message.files))) {
@@ -1116,8 +1112,8 @@ class AgentClient extends BaseClient {
appConfig.endpoints?.[endpoint] ??
titleProviderConfig.customEndpointConfig;
if (!endpointConfig) {
logger.warn(
'[api/server/controllers/agents/client.js #titleConvo] Error getting endpoint config',
logger.debug(
`[api/server/controllers/agents/client.js #titleConvo] No endpoint config for "${endpoint}"`,
);
}

View File

@@ -10,7 +10,12 @@ const compression = require('compression');
const cookieParser = require('cookie-parser');
const { logger } = require('@librechat/data-schemas');
const mongoSanitize = require('express-mongo-sanitize');
const { isEnabled, ErrorController } = require('@librechat/api');
const {
isEnabled,
ErrorController,
performStartupChecks,
initializeFileStorage,
} = require('@librechat/api');
const { connectDb, indexSync } = require('~/db');
const initializeOAuthReconnectManager = require('./services/initializeOAuthReconnectManager');
const createValidateImageRequest = require('./middleware/validateImageRequest');
@@ -49,9 +54,11 @@ const startServer = async () => {
app.set('trust proxy', trusted_proxy);
await seedDatabase();
const appConfig = await getAppConfig();
initializeFileStorage(appConfig);
await performStartupChecks(appConfig);
await updateInterfacePermissions(appConfig);
const indexPath = path.join(appConfig.paths.dist, 'index.html');
let indexHTML = fs.readFileSync(indexPath, 'utf8');

View File

@@ -1,10 +1,9 @@
const { Keyv } = require('keyv');
const uap = require('ua-parser-js');
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { isEnabled, keyvMongo } = require('@librechat/api');
const { ViolationTypes } = require('librechat-data-provider');
const { removePorts } = require('~/server/utils');
const keyvMongo = require('~/cache/keyvMongo');
const denyRequest = require('./denyRequest');
const { getLogStores } = require('~/cache');
const { findUser } = require('~/models');

View File

@@ -1,6 +1,6 @@
const rateLimit = require('express-rate-limit');
const { limiterCache } = require('@librechat/api');
const { ViolationTypes } = require('librechat-data-provider');
const { limiterCache } = require('~/cache/cacheFactory');
const logViolation = require('~/cache/logViolation');
const getEnvironmentVariables = () => {

View File

@@ -1,6 +1,6 @@
const rateLimit = require('express-rate-limit');
const { limiterCache } = require('@librechat/api');
const { ViolationTypes } = require('librechat-data-provider');
const { limiterCache } = require('~/cache/cacheFactory');
const logViolation = require('~/cache/logViolation');
const getEnvironmentVariables = () => {

View File

@@ -1,7 +1,7 @@
const rateLimit = require('express-rate-limit');
const { limiterCache } = require('@librechat/api');
const { ViolationTypes } = require('librechat-data-provider');
const { removePorts } = require('~/server/utils');
const { limiterCache } = require('~/cache/cacheFactory');
const { logViolation } = require('~/cache');
const { LOGIN_WINDOW = 5, LOGIN_MAX = 7, LOGIN_VIOLATION_SCORE: score } = process.env;

View File

@@ -1,7 +1,7 @@
const rateLimit = require('express-rate-limit');
const { limiterCache } = require('@librechat/api');
const { ViolationTypes } = require('librechat-data-provider');
const denyRequest = require('~/server/middleware/denyRequest');
const { limiterCache } = require('~/cache/cacheFactory');
const { logViolation } = require('~/cache');
const {

View File

@@ -1,7 +1,7 @@
const rateLimit = require('express-rate-limit');
const { limiterCache } = require('@librechat/api');
const { ViolationTypes } = require('librechat-data-provider');
const { removePorts } = require('~/server/utils');
const { limiterCache } = require('~/cache/cacheFactory');
const { logViolation } = require('~/cache');
const { REGISTER_WINDOW = 60, REGISTER_MAX = 5, REGISTRATION_VIOLATION_SCORE: score } = process.env;

View File

@@ -1,7 +1,7 @@
const rateLimit = require('express-rate-limit');
const { limiterCache } = require('@librechat/api');
const { ViolationTypes } = require('librechat-data-provider');
const { removePorts } = require('~/server/utils');
const { limiterCache } = require('~/cache/cacheFactory');
const { logViolation } = require('~/cache');
const {

View File

@@ -1,6 +1,6 @@
const rateLimit = require('express-rate-limit');
const { limiterCache } = require('@librechat/api');
const { ViolationTypes } = require('librechat-data-provider');
const { limiterCache } = require('~/cache/cacheFactory');
const logViolation = require('~/cache/logViolation');
const getEnvironmentVariables = () => {

View File

@@ -1,6 +1,6 @@
const rateLimit = require('express-rate-limit');
const { limiterCache } = require('@librechat/api');
const { ViolationTypes } = require('librechat-data-provider');
const { limiterCache } = require('~/cache/cacheFactory');
const logViolation = require('~/cache/logViolation');
const { TOOL_CALL_VIOLATION_SCORE: score } = process.env;

View File

@@ -1,7 +1,7 @@
const rateLimit = require('express-rate-limit');
const { limiterCache } = require('@librechat/api');
const { ViolationTypes } = require('librechat-data-provider');
const logViolation = require('~/cache/logViolation');
const { limiterCache } = require('~/cache/cacheFactory');
const getEnvironmentVariables = () => {
const TTS_IP_MAX = parseInt(process.env.TTS_IP_MAX) || 100;

View File

@@ -1,6 +1,6 @@
const rateLimit = require('express-rate-limit');
const { limiterCache } = require('@librechat/api');
const { ViolationTypes } = require('librechat-data-provider');
const { limiterCache } = require('~/cache/cacheFactory');
const logViolation = require('~/cache/logViolation');
const getEnvironmentVariables = () => {

View File

@@ -1,7 +1,7 @@
const rateLimit = require('express-rate-limit');
const { limiterCache } = require('@librechat/api');
const { ViolationTypes } = require('librechat-data-provider');
const { removePorts } = require('~/server/utils');
const { limiterCache } = require('~/cache/cacheFactory');
const { logViolation } = require('~/cache');
const {

View File

@@ -127,8 +127,13 @@ describe('MCP Routes', () => {
}),
};
const mockMcpManager = {
getRawConfig: jest.fn().mockReturnValue({}),
};
getLogStores.mockReturnValue({});
require('~/config').getFlowStateManager.mockReturnValue(mockFlowManager);
require('~/config').getMCPManager.mockReturnValue(mockMcpManager);
MCPOAuthHandler.initiateOAuthFlow.mockResolvedValue({
authorizationUrl: 'https://oauth.example.com/auth',
@@ -146,6 +151,7 @@ describe('MCP Routes', () => {
'test-server',
'https://test-server.com',
'test-user-id',
{},
{ clientId: 'test-client-id' },
);
});
@@ -314,6 +320,7 @@ describe('MCP Routes', () => {
};
const mockMcpManager = {
getUserConnection: jest.fn().mockResolvedValue(mockUserConnection),
getRawConfig: jest.fn().mockReturnValue({}),
};
require('~/config').getMCPManager.mockReturnValue(mockMcpManager);
@@ -336,6 +343,7 @@ describe('MCP Routes', () => {
'test-flow-id',
'test-auth-code',
mockFlowManager,
{},
);
expect(MCPTokenStorage.storeTokens).toHaveBeenCalledWith(
expect.objectContaining({
@@ -392,6 +400,11 @@ describe('MCP Routes', () => {
getLogStores.mockReturnValue({});
require('~/config').getFlowStateManager.mockReturnValue(mockFlowManager);
const mockMcpManager = {
getRawConfig: jest.fn().mockReturnValue({}),
};
require('~/config').getMCPManager.mockReturnValue(mockMcpManager);
const response = await request(app).get('/api/mcp/test-server/oauth/callback').query({
code: 'test-auth-code',
state: 'test-flow-id',
@@ -427,6 +440,7 @@ describe('MCP Routes', () => {
const mockMcpManager = {
getUserConnection: jest.fn().mockRejectedValue(new Error('Reconnection failed')),
getRawConfig: jest.fn().mockReturnValue({}),
};
require('~/config').getMCPManager.mockReturnValue(mockMcpManager);
@@ -1234,6 +1248,7 @@ describe('MCP Routes', () => {
getUserConnection: jest.fn().mockResolvedValue({
fetchTools: jest.fn().mockResolvedValue([]),
}),
getRawConfig: jest.fn().mockReturnValue({}),
};
require('~/config').getMCPManager.mockReturnValue(mockMcpManager);
@@ -1281,6 +1296,7 @@ describe('MCP Routes', () => {
.fn()
.mockResolvedValue([{ name: 'test-tool', description: 'Test tool' }]),
}),
getRawConfig: jest.fn().mockReturnValue({}),
};
require('~/config').getMCPManager.mockReturnValue(mockMcpManager);

View File

@@ -115,6 +115,9 @@ router.get('/', async function (req, res) {
sharePointPickerGraphScope: process.env.SHAREPOINT_PICKER_GRAPH_SCOPE,
sharePointPickerSharePointScope: process.env.SHAREPOINT_PICKER_SHAREPOINT_SCOPE,
openidReuseTokens,
conversationImportMaxFileSize: process.env.CONVERSATION_IMPORT_MAX_FILE_SIZE_BYTES
? parseInt(process.env.CONVERSATION_IMPORT_MAX_FILE_SIZE_BYTES, 10)
: 0,
};
const minPasswordLength = parseInt(process.env.MIN_PASSWORD_LENGTH, 10);
@@ -156,7 +159,7 @@ router.get('/', async function (req, res) {
if (
webSearchConfig != null &&
(webSearchConfig.searchProvider ||
webSearchConfig.scraperType ||
webSearchConfig.scraperProvider ||
webSearchConfig.rerankerType)
) {
payload.webSearch = {};
@@ -165,8 +168,8 @@ router.get('/', async function (req, res) {
if (webSearchConfig?.searchProvider) {
payload.webSearch.searchProvider = webSearchConfig.searchProvider;
}
if (webSearchConfig?.scraperType) {
payload.webSearch.scraperType = webSearchConfig.scraperType;
if (webSearchConfig?.scraperProvider) {
payload.webSearch.scraperProvider = webSearchConfig.scraperProvider;
}
if (webSearchConfig?.rerankerType) {
payload.webSearch.rerankerType = webSearchConfig.rerankerType;

View File

@@ -65,6 +65,7 @@ router.get('/:serverName/oauth/initiate', requireJwtAuth, async (req, res) => {
serverName,
serverUrl,
userId,
getOAuthHeaders(serverName),
oauthConfig,
);
@@ -132,7 +133,12 @@ router.get('/:serverName/oauth/callback', async (req, res) => {
});
logger.debug('[MCP OAuth] Completing OAuth flow');
const tokens = await MCPOAuthHandler.completeOAuthFlow(flowId, code, flowManager);
const tokens = await MCPOAuthHandler.completeOAuthFlow(
flowId,
code,
flowManager,
getOAuthHeaders(serverName),
);
logger.info('[MCP OAuth] OAuth flow completed, tokens received in callback route');
/** Persist tokens immediately so reconnection uses fresh credentials */
@@ -538,4 +544,10 @@ router.get('/:serverName/auth-values', requireJwtAuth, async (req, res) => {
}
});
function getOAuthHeaders(serverName) {
const mcpManager = getMCPManager();
const serverConfig = mcpManager.getRawConfig(serverName);
return serverConfig?.oauth_headers ?? {};
}
module.exports = router;

View File

@@ -99,7 +99,8 @@ router.get('/link/:conversationId', requireJwtAuth, async (req, res) => {
router.post('/:conversationId', requireJwtAuth, async (req, res) => {
try {
const created = await createSharedLink(req.user.id, req.params.conversationId);
const { targetMessageId } = req.body;
const created = await createSharedLink(req.user.id, req.params.conversationId, targetMessageId);
if (created) {
res.status(200).json(created);
} else {

View File

@@ -1,198 +0,0 @@
jest.mock('@librechat/data-schemas', () => ({
logger: {
info: jest.fn(),
warn: jest.fn(),
error: jest.fn(),
},
}));
jest.mock('@librechat/api', () => ({
...jest.requireActual('@librechat/api'),
loadDefaultInterface: jest.fn(),
}));
jest.mock('./start/tools', () => ({
loadAndFormatTools: jest.fn().mockReturnValue({}),
}));
jest.mock('./start/checks', () => ({
checkVariables: jest.fn(),
checkHealth: jest.fn(),
checkConfig: jest.fn(),
checkAzureVariables: jest.fn(),
checkWebSearchConfig: jest.fn(),
}));
jest.mock('./Config/loadCustomConfig', () => jest.fn());
const AppService = require('./AppService');
const { loadDefaultInterface } = require('@librechat/api');
describe('AppService interface configuration', () => {
let mockLoadCustomConfig;
beforeEach(() => {
jest.resetModules();
jest.clearAllMocks();
mockLoadCustomConfig = require('./Config/loadCustomConfig');
});
it('should set prompts and bookmarks to true when loadDefaultInterface returns true for both', async () => {
mockLoadCustomConfig.mockResolvedValue({});
loadDefaultInterface.mockResolvedValue({ prompts: true, bookmarks: true });
const result = await AppService();
expect(result).toEqual(
expect.objectContaining({
interfaceConfig: expect.objectContaining({
prompts: true,
bookmarks: true,
}),
}),
);
expect(loadDefaultInterface).toHaveBeenCalled();
});
it('should set prompts and bookmarks to false when loadDefaultInterface returns false for both', async () => {
mockLoadCustomConfig.mockResolvedValue({ interface: { prompts: false, bookmarks: false } });
loadDefaultInterface.mockResolvedValue({ prompts: false, bookmarks: false });
const result = await AppService();
expect(result).toEqual(
expect.objectContaining({
interfaceConfig: expect.objectContaining({
prompts: false,
bookmarks: false,
}),
}),
);
expect(loadDefaultInterface).toHaveBeenCalled();
});
it('should not set prompts and bookmarks when loadDefaultInterface returns undefined for both', async () => {
mockLoadCustomConfig.mockResolvedValue({});
loadDefaultInterface.mockResolvedValue({});
const result = await AppService();
expect(result).toEqual(
expect.objectContaining({
interfaceConfig: expect.anything(),
}),
);
// Verify that prompts and bookmarks are undefined when not provided
expect(result.interfaceConfig.prompts).toBeUndefined();
expect(result.interfaceConfig.bookmarks).toBeUndefined();
expect(loadDefaultInterface).toHaveBeenCalled();
});
it('should set prompts and bookmarks to different values when loadDefaultInterface returns different values', async () => {
mockLoadCustomConfig.mockResolvedValue({ interface: { prompts: true, bookmarks: false } });
loadDefaultInterface.mockResolvedValue({ prompts: true, bookmarks: false });
const result = await AppService();
expect(result).toEqual(
expect.objectContaining({
interfaceConfig: expect.objectContaining({
prompts: true,
bookmarks: false,
}),
}),
);
expect(loadDefaultInterface).toHaveBeenCalled();
});
it('should correctly configure peoplePicker permissions including roles', async () => {
mockLoadCustomConfig.mockResolvedValue({
interface: {
peoplePicker: {
users: true,
groups: true,
roles: true,
},
},
});
loadDefaultInterface.mockResolvedValue({
peoplePicker: {
users: true,
groups: true,
roles: true,
},
});
const result = await AppService();
expect(result).toEqual(
expect.objectContaining({
interfaceConfig: expect.objectContaining({
peoplePicker: expect.objectContaining({
users: true,
groups: true,
roles: true,
}),
}),
}),
);
expect(loadDefaultInterface).toHaveBeenCalled();
});
it('should handle mixed peoplePicker permissions', async () => {
mockLoadCustomConfig.mockResolvedValue({
interface: {
peoplePicker: {
users: true,
groups: false,
roles: true,
},
},
});
loadDefaultInterface.mockResolvedValue({
peoplePicker: {
users: true,
groups: false,
roles: true,
},
});
const result = await AppService();
expect(result).toEqual(
expect.objectContaining({
interfaceConfig: expect.objectContaining({
peoplePicker: expect.objectContaining({
users: true,
groups: false,
roles: true,
}),
}),
}),
);
});
it('should set default peoplePicker permissions when not provided', async () => {
mockLoadCustomConfig.mockResolvedValue({});
loadDefaultInterface.mockResolvedValue({
peoplePicker: {
users: true,
groups: true,
roles: true,
},
});
const result = await AppService();
expect(result).toEqual(
expect.objectContaining({
interfaceConfig: expect.objectContaining({
peoplePicker: expect.objectContaining({
users: true,
groups: true,
roles: true,
}),
}),
}),
);
});
});

View File

@@ -1,11 +1,25 @@
const { logger } = require('@librechat/data-schemas');
const { CacheKeys } = require('librechat-data-provider');
const AppService = require('~/server/services/AppService');
const { logger, AppService } = require('@librechat/data-schemas');
const { loadAndFormatTools } = require('~/server/services/start/tools');
const loadCustomConfig = require('./loadCustomConfig');
const { setCachedTools } = require('./getCachedTools');
const getLogStores = require('~/cache/getLogStores');
const paths = require('~/config/paths');
const BASE_CONFIG_KEY = '_BASE_';
const loadBaseConfig = async () => {
/** @type {TCustomConfig} */
const config = (await loadCustomConfig()) ?? {};
/** @type {Record<string, FunctionTool>} */
const systemTools = loadAndFormatTools({
adminFilter: config.filteredTools,
adminIncluded: config.includedTools,
directory: paths.structuredTools,
});
return AppService({ config, paths, systemTools });
};
/**
* Get the app configuration based on user context
* @param {Object} [options]
@@ -29,7 +43,7 @@ async function getAppConfig(options = {}) {
let baseConfig = await cache.get(BASE_CONFIG_KEY);
if (!baseConfig) {
logger.info('[getAppConfig] App configuration not initialized. Initializing AppService...');
baseConfig = await AppService();
baseConfig = await loadBaseConfig();
if (!baseConfig) {
throw new Error('Failed to initialize app configuration through AppService.');

View File

@@ -1,48 +0,0 @@
const { RateLimitPrefix } = require('librechat-data-provider');
/**
*
* @param {TCustomConfig['rateLimits'] | undefined} rateLimits
*/
const handleRateLimits = (rateLimits) => {
if (!rateLimits) {
return;
}
const rateLimitKeys = {
fileUploads: RateLimitPrefix.FILE_UPLOAD,
conversationsImport: RateLimitPrefix.IMPORT,
tts: RateLimitPrefix.TTS,
stt: RateLimitPrefix.STT,
};
Object.entries(rateLimitKeys).forEach(([key, prefix]) => {
const rateLimit = rateLimits[key];
if (rateLimit) {
setRateLimitEnvVars(prefix, rateLimit);
}
});
};
/**
* Set environment variables for rate limit configurations
*
* @param {string} prefix - Prefix for environment variable names
* @param {object} rateLimit - Rate limit configuration object
*/
const setRateLimitEnvVars = (prefix, rateLimit) => {
const envVarsMapping = {
ipMax: `${prefix}_IP_MAX`,
ipWindowInMinutes: `${prefix}_IP_WINDOW`,
userMax: `${prefix}_USER_MAX`,
userWindowInMinutes: `${prefix}_USER_WINDOW`,
};
Object.entries(envVarsMapping).forEach(([key, envVar]) => {
if (rateLimit[key] !== undefined) {
process.env[envVar] = rateLimit[key];
}
});
};
module.exports = handleRateLimits;

View File

@@ -85,7 +85,9 @@ async function loadConfigModels(req) {
}
if (Array.isArray(models.default)) {
modelsConfig[name] = models.default;
modelsConfig[name] = models.default.map((model) =>
typeof model === 'string' ? model : model.name,
);
}
}
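// Both entry shapes now normalize to plain model names, e.g. (values assumed):
// ['llama3-8b', { name: 'llama3-70b', label: 'Llama 3 70B' }] → ['llama3-8b', 'llama3-70b']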

View File

@@ -254,8 +254,8 @@ describe('loadConfigModels', () => {
// For groq and ollama, since the apiKey is "user_provided", models should not be fetched
// Depending on your implementation's behavior regarding "default" models without fetching,
// you may need to adjust the following assertions:
expect(result.groq).toBe(exampleConfig.endpoints.custom[2].models.default);
expect(result.ollama).toBe(exampleConfig.endpoints.custom[3].models.default);
expect(result.groq).toEqual(exampleConfig.endpoints.custom[2].models.default);
expect(result.ollama).toEqual(exampleConfig.endpoints.custom[3].models.default);
// Verifying fetchModels was not called for groq and ollama
expect(fetchModels).not.toHaveBeenCalledWith(

View File

@@ -5,14 +5,12 @@ const keyBy = require('lodash/keyBy');
const { loadYaml } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const {
CacheKeys,
configSchema,
paramSettings,
EImageOutputType,
agentParamSettings,
validateSettingDefinitions,
} = require('librechat-data-provider');
const getLogStores = require('~/cache/getLogStores');
const projectRoot = path.resolve(__dirname, '..', '..', '..', '..');
const defaultConfigPath = path.resolve(projectRoot, 'librechat.yaml');
@@ -119,7 +117,6 @@ https://www.librechat.ai/docs/configuration/stt_tts`);
.filter((endpoint) => endpoint.customParams)
.forEach((endpoint) => parseCustomParams(endpoint.name, endpoint.customParams));
if (result.data.modelSpecs) {
customConfig.modelSpecs = result.data.modelSpecs;
}

View File

@@ -143,7 +143,7 @@ const initializeAgent = async ({
const agentMaxContextTokens = optionalChainWithEmptyCheck(
maxContextTokens,
getModelMaxTokens(tokensModel, providerEndpointMap[provider], options.endpointTokenConfig),
4096,
18000,
);
if (

View File

@@ -4,7 +4,7 @@ const mime = require('mime');
const axios = require('axios');
const fetch = require('node-fetch');
const { logger } = require('@librechat/data-schemas');
const { getAzureContainerClient } = require('./initialize');
const { getAzureContainerClient } = require('@librechat/api');
const defaultBasePath = 'images';
const { AZURE_STORAGE_PUBLIC_ACCESS = 'true', AZURE_CONTAINER_NAME = 'files' } = process.env;
@@ -30,7 +30,7 @@ async function saveBufferToAzure({
containerName,
}) {
try {
const containerClient = getAzureContainerClient(containerName);
const containerClient = await getAzureContainerClient(containerName);
const access = AZURE_STORAGE_PUBLIC_ACCESS?.toLowerCase() === 'true' ? 'blob' : undefined;
// Create the container if it doesn't exist. This is done per operation.
await containerClient.createIfNotExists({ access });
@@ -84,7 +84,7 @@ async function saveURLToAzure({
*/
async function getAzureURL({ fileName, basePath = defaultBasePath, userId, containerName }) {
try {
const containerClient = getAzureContainerClient(containerName);
const containerClient = await getAzureContainerClient(containerName);
const blobPath = userId ? `${basePath}/${userId}/${fileName}` : `${basePath}/${fileName}`;
const blockBlobClient = containerClient.getBlockBlobClient(blobPath);
return blockBlobClient.url;
@@ -103,7 +103,7 @@ async function getAzureURL({ fileName, basePath = defaultBasePath, userId, conta
*/
async function deleteFileFromAzure(req, file) {
try {
const containerClient = getAzureContainerClient(AZURE_CONTAINER_NAME);
const containerClient = await getAzureContainerClient(AZURE_CONTAINER_NAME);
const blobPath = file.filepath.split(`${AZURE_CONTAINER_NAME}/`)[1];
if (!blobPath.includes(req.user.id)) {
throw new Error('User ID not found in blob path');
@@ -140,7 +140,7 @@ async function streamFileToAzure({
containerName,
}) {
try {
const containerClient = getAzureContainerClient(containerName);
const containerClient = await getAzureContainerClient(containerName);
const access = AZURE_STORAGE_PUBLIC_ACCESS?.toLowerCase() === 'true' ? 'blob' : undefined;
// Create the container if it doesn't exist

View File

@@ -1,9 +1,7 @@
const crud = require('./crud');
const images = require('./images');
const initialize = require('./initialize');
module.exports = {
...crud,
...images,
...initialize,
};

View File

@@ -3,9 +3,9 @@ const path = require('path');
const axios = require('axios');
const fetch = require('node-fetch');
const { logger } = require('@librechat/data-schemas');
const { getFirebaseStorage } = require('@librechat/api');
const { ref, uploadBytes, getDownloadURL, deleteObject } = require('firebase/storage');
const { getBufferMetadata } = require('~/server/utils');
const { getFirebaseStorage } = require('./initialize');
/**
* Deletes a file from Firebase Storage.

View File

@@ -1,9 +1,7 @@
const crud = require('./crud');
const images = require('./images');
const initialize = require('./initialize');
module.exports = {
...crud,
...images,
...initialize,
};

View File

@@ -1,39 +0,0 @@
const firebase = require('firebase/app');
const { getStorage } = require('firebase/storage');
const { logger } = require('@librechat/data-schemas');
let i = 0;
let firebaseApp = null;
const initializeFirebase = () => {
// Return existing instance if already initialized
if (firebaseApp) {
return firebaseApp;
}
const firebaseConfig = {
apiKey: process.env.FIREBASE_API_KEY,
authDomain: process.env.FIREBASE_AUTH_DOMAIN,
projectId: process.env.FIREBASE_PROJECT_ID,
storageBucket: process.env.FIREBASE_STORAGE_BUCKET,
messagingSenderId: process.env.FIREBASE_MESSAGING_SENDER_ID,
appId: process.env.FIREBASE_APP_ID,
};
if (Object.values(firebaseConfig).some((value) => !value)) {
i === 0 && logger.info('[Optional] CDN not initialized.');
i++;
return null;
}
firebaseApp = firebase.initializeApp(firebaseConfig);
logger.info('Firebase CDN initialized');
return firebaseApp;
};
const getFirebaseStorage = () => {
const app = initializeFirebase();
return app ? getStorage(app) : null;
};
module.exports = { initializeFirebase, getFirebaseStorage };

View File

@@ -4,6 +4,7 @@ const axios = require('axios');
const { logger } = require('@librechat/data-schemas');
const { EModelEndpoint } = require('librechat-data-provider');
const { generateShortLivedToken } = require('@librechat/api');
const { resizeImageBuffer } = require('~/server/services/Files/images/resize');
const { getBufferMetadata } = require('~/server/utils');
const paths = require('~/config/paths');
@@ -286,7 +287,18 @@ async function uploadLocalFile({ req, file, file_id }) {
await fs.promises.writeFile(newPath, inputBuffer);
const filepath = path.posix.join('/', 'uploads', req.user.id, path.basename(newPath));
return { filepath, bytes };
let height, width;
if (file.mimetype && file.mimetype.startsWith('image/')) {
try {
const { width: imgWidth, height: imgHeight } = await resizeImageBuffer(inputBuffer, 'high');
height = imgHeight;
width = imgWidth;
} catch (error) {
logger.warn('[uploadLocalFile] Could not get image dimensions:', error.message);
}
}
return { filepath, bytes, height, width };
}
/**

View File

@@ -1,15 +1,15 @@
const fs = require('fs');
const fetch = require('node-fetch');
const { initializeS3 } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { FileSources } = require('librechat-data-provider');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');
const {
PutObjectCommand,
GetObjectCommand,
HeadObjectCommand,
DeleteObjectCommand,
} = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');
const { initializeS3 } = require('./initialize');
const bucketName = process.env.AWS_BUCKET_NAME;
const defaultBasePath = 'images';

View File

@@ -1,9 +1,7 @@
const crud = require('./crud');
const images = require('./images');
const initialize = require('./initialize');
module.exports = {
...crud,
...images,
...initialize,
};

View File

@@ -1,16 +1,14 @@
const axios = require('axios');
const { logAxiosError } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { logAxiosError, processTextWithTokenLimit } = require('@librechat/api');
const {
FileSources,
VisionModes,
ImageDetail,
ContentTypes,
EModelEndpoint,
mergeFileConfig,
} = require('librechat-data-provider');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const countTokens = require('~/server/utils/countTokens');
/**
* Converts a readable stream to a base64 encoded string.
@@ -88,15 +86,14 @@ const blobStorageSources = new Set([FileSources.azure_blob, FileSources.s3]);
* @param {Array<MongoFile>} files - The array of files to encode and format.
* @param {EModelEndpoint} [endpoint] - Optional: The endpoint for the image.
* @param {string} [mode] - Optional: The endpoint mode for the image.
* @returns {Promise<{ text: string; files: MongoFile[]; image_urls: MessageContentImageUrl[] }>} - A promise that resolves to the result object containing the encoded images and file details.
* @returns {Promise<{ files: MongoFile[]; image_urls: MessageContentImageUrl[] }>} - A promise that resolves to the result object containing the encoded images and file details.
*/
async function encodeAndFormat(req, files, endpoint, mode) {
const promises = [];
/** @type {Record<FileSources, Pick<ReturnType<typeof getStrategyFunctions>, 'prepareImagePayload' | 'getDownloadStream'>>} */
const encodingMethods = {};
/** @type {{ text: string; files: MongoFile[]; image_urls: MessageContentImageUrl[] }} */
/** @type {{ files: MongoFile[]; image_urls: MessageContentImageUrl[] }} */
const result = {
text: '',
files: [],
image_urls: [],
};
@@ -105,29 +102,9 @@ async function encodeAndFormat(req, files, endpoint, mode) {
return result;
}
const fileTokenLimit =
req.body?.fileTokenLimit ?? mergeFileConfig(req.config?.fileConfig).fileTokenLimit;
for (let file of files) {
/** @type {FileSources} */
const source = file.source ?? FileSources.local;
if (source === FileSources.text && file.text) {
let fileText = file.text;
const { text: limitedText, wasTruncated } = await processTextWithTokenLimit({
text: fileText,
tokenLimit: fileTokenLimit,
tokenCountFn: (text) => countTokens(text),
});
if (wasTruncated) {
logger.debug(
`[encodeAndFormat] Text content truncated for file: ${file.filename} due to token limits`,
);
}
result.text += `${!result.text ? 'Attached document(s):\n```md' : '\n\n---\n\n'}# "${file.filename}"\n${limitedText}\n`;
}
if (!file.height) {
promises.push([file, null]);
@@ -165,10 +142,6 @@ async function encodeAndFormat(req, files, endpoint, mode) {
promises.push(preparePayload(req, file));
}
if (result.text) {
result.text += '\n```';
}
const detail = req.body.imageDetail ?? ImageDetail.auto;
/** @type {Array<[MongoFile, string]>} */

View File

@@ -508,7 +508,10 @@ const processAgentFileUpload = async ({ req, res, metadata }) => {
const { file } = req;
const appConfig = req.config;
const { agent_id, tool_resource, file_id, temp_file_id = null } = metadata;
if (agent_id && !tool_resource) {
let messageAttachment = !!metadata.message_file;
if (agent_id && !tool_resource && !messageAttachment) {
throw new Error('No tool resource provided for agent file upload');
}
@@ -516,17 +519,11 @@ const processAgentFileUpload = async ({ req, res, metadata }) => {
throw new Error('Image uploads are not supported for file search tool resources');
}
let messageAttachment = !!metadata.message_file;
if (!messageAttachment && !agent_id) {
throw new Error('No agent ID provided for agent file upload');
}
const isImage = file.mimetype.startsWith('image');
if (!isImage && !tool_resource) {
/** Note: this needs to be removed when we can support files to providers */
throw new Error('No tool resource provided for non-image agent file upload');
}
let fileInfoMetadata;
const entity_id = messageAttachment === true ? undefined : agent_id;
const basePath = mime.getType(file.originalname)?.startsWith('image') ? 'images' : 'uploads';
@@ -601,11 +598,22 @@ const processAgentFileUpload = async ({ req, res, metadata }) => {
if (shouldUseOCR && !(await checkCapability(req, AgentCapabilities.ocr))) {
throw new Error('OCR capability is not enabled for Agents');
} else if (shouldUseOCR) {
const { handleFileUpload: uploadOCR } = getStrategyFunctions(
appConfig?.ocr?.strategy ?? FileSources.mistral_ocr,
);
const { text, bytes, filepath: ocrFileURL } = await uploadOCR({ req, file, loadAuthValues });
return await createTextFile({ text, bytes, filepath: ocrFileURL });
try {
const { handleFileUpload: uploadOCR } = getStrategyFunctions(
appConfig?.ocr?.strategy ?? FileSources.mistral_ocr,
);
const {
text,
bytes,
filepath: ocrFileURL,
} = await uploadOCR({ req, file, loadAuthValues });
return await createTextFile({ text, bytes, filepath: ocrFileURL });
} catch (ocrError) {
logger.error(
`[processAgentFileUpload] OCR processing failed for file "${file.originalname}", falling back to text extraction:`,
ocrError,
);
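// Intentionally no rethrow: execution falls through to the standard text-extraction path below.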
}
}
const shouldUseSTT = fileConfig.checkType(

View File

@@ -450,7 +450,7 @@ async function getMCPSetupData(userId) {
logger.error(`[MCP][User: ${userId}] Error getting app connections:`, error);
}
const userConnections = mcpManager.getUserConnections(userId) || new Map();
const oauthServers = mcpManager.getOAuthServers() || new Set();
const oauthServers = mcpManager.getOAuthServers();
return {
mcpConfig,

View File

@@ -170,7 +170,7 @@ describe('tests for the new helper functions used by the MCP connection status e
const mockMCPManager = {
appConnections: { getAll: jest.fn(() => null) },
getUserConnections: jest.fn(() => null),
getOAuthServers: jest.fn(() => null),
getOAuthServers: jest.fn(() => new Set()),
};
mockGetMCPManager.mockReturnValue(mockMCPManager);

View File

@@ -1,75 +0,0 @@
const { logger } = require('@librechat/data-schemas');
const { normalizeEndpointName } = require('@librechat/api');
const { EModelEndpoint } = require('librechat-data-provider');
/**
* Sets up Model Specs from the config (`librechat.yaml`) file.
* @param {TCustomConfig['endpoints']} [endpoints] - The loaded custom configuration for endpoints.
* @param {TCustomConfig['modelSpecs'] | undefined} [modelSpecs] - The loaded custom configuration for model specs.
* @param {TCustomConfig['interface'] | undefined} [interfaceConfig] - The loaded interface configuration.
* @returns {TCustomConfig['modelSpecs'] | undefined} The processed model specs, if any.
*/
function processModelSpecs(endpoints, _modelSpecs, interfaceConfig) {
if (!_modelSpecs) {
return undefined;
}
/** @type {TCustomConfig['modelSpecs']['list']} */
const modelSpecs = [];
/** @type {TCustomConfig['modelSpecs']['list']} */
const list = _modelSpecs.list;
const customEndpoints = endpoints?.[EModelEndpoint.custom] ?? [];
if (interfaceConfig.modelSelect !== true && (_modelSpecs.addedEndpoints?.length ?? 0) > 0) {
logger.warn(
`To utilize \`addedEndpoints\`, which allows provider/model selections alongside model specs, set \`modelSelect: true\` in the interface configuration.
Example:
\`\`\`yaml
interface:
modelSelect: true
\`\`\`
`,
);
}
for (const spec of list) {
if (EModelEndpoint[spec.preset.endpoint] && spec.preset.endpoint !== EModelEndpoint.custom) {
modelSpecs.push(spec);
continue;
} else if (spec.preset.endpoint === EModelEndpoint.custom) {
logger.warn(
`Model Spec with endpoint "${spec.preset.endpoint}" is not supported. You must specify the name of the custom endpoint (case-sensitive, as defined in your config). Skipping model spec...`,
);
continue;
}
const normalizedName = normalizeEndpointName(spec.preset.endpoint);
const endpoint = customEndpoints.find(
(customEndpoint) => normalizedName === normalizeEndpointName(customEndpoint.name),
);
if (!endpoint) {
logger.warn(`Model spec with endpoint "${spec.preset.endpoint}" was skipped: Endpoint not found in configuration. The \`endpoint\` value must exactly match either a system-defined endpoint or a custom endpoint defined by the user.
For more information, see the documentation at https://www.librechat.ai/docs/configuration/librechat_yaml/object_structure/model_specs#endpoint`);
continue;
}
modelSpecs.push({
...spec,
preset: {
...spec.preset,
endpoint: normalizedName,
},
});
}
return {
..._modelSpecs,
list: modelSpecs,
};
}
module.exports = { processModelSpecs };

View File

@@ -1,44 +0,0 @@
const { logger } = require('@librechat/data-schemas');
const { removeNullishValues } = require('librechat-data-provider');
/**
* Loads and maps the Cloudflare Turnstile configuration.
*
* Expected config structure:
*
* turnstile:
* siteKey: "your-site-key-here"
* options:
* language: "auto" // "auto" or an ISO 639-1 language code (e.g. en)
* size: "normal" // Options: "normal", "compact", "flexible", or "invisible"
*
* @param {TCustomConfig | undefined} config - The loaded custom configuration.
* @param {TConfigDefaults} configDefaults - The custom configuration default values.
* @returns {TCustomConfig['turnstile']} The mapped Turnstile configuration.
*/
function loadTurnstileConfig(config, configDefaults) {
const { turnstile: customTurnstile = {} } = config ?? {};
const { turnstile: defaults = {} } = configDefaults;
/** @type {TCustomConfig['turnstile']} */
const loadedTurnstile = removeNullishValues({
siteKey: customTurnstile.siteKey ?? defaults.siteKey,
options: customTurnstile.options ?? defaults.options,
});
const enabled = Boolean(loadedTurnstile.siteKey);
if (enabled) {
logger.info(
'Turnstile is ENABLED with configuration:\n' + JSON.stringify(loadedTurnstile, null, 2),
);
} else {
logger.info('Turnstile is DISABLED (no siteKey provided).');
}
return loadedTurnstile;
}
module.exports = {
loadTurnstileConfig,
};

View File

@@ -10,6 +10,15 @@ const importConversations = async (job) => {
const { filepath, requestUserId } = job;
try {
logger.debug(`user: ${requestUserId} | Importing conversation(s) from file...`);
/* Reject the import if the file exceeds the configured size limit */
const fileInfo = await fs.stat(filepath);
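// Note: the env var is a string here; JS coerces it for the numeric comparison,
// and when it is unset the comparison is false, so no limit is enforced.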
if (fileInfo.size > process.env.CONVERSATION_IMPORT_MAX_FILE_SIZE_BYTES) {
throw new Error(
`File size is ${fileInfo.size} bytes. It exceeds the maximum limit of ${process.env.CONVERSATION_IMPORT_MAX_FILE_SIZE_BYTES} bytes.`,
);
}
const fileData = await fs.readFile(filepath, 'utf8');
const jsonData = JSON.parse(fileData);
const importer = getImporter(jsonData);
@@ -17,6 +26,7 @@ const importConversations = async (job) => {
logger.debug(`user: ${requestUserId} | Finished importing conversations`);
} catch (error) {
logger.error(`user: ${requestUserId} | Failed to import conversation: `, error);
throw error; // rethrow all the way up so the request does not report success
} finally {
try {
await fs.unlink(filepath);

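The new guard reads `CONVERSATION_IMPORT_MAX_FILE_SIZE_BYTES` straight from `process.env`, so the comparison relies on JavaScript coercing the string to a number, and it is skipped entirely when the variable is unset. A standalone sketch of the same check (not LibreChat source), with the coercion made explicit:

```js
const fs = require('fs/promises');

// Hedged sketch: treats an unset or invalid env value as "no limit".
async function assertImportFileSize(filepath) {
  const maxBytes = Number(process.env.CONVERSATION_IMPORT_MAX_FILE_SIZE_BYTES);
  if (!Number.isFinite(maxBytes) || maxBytes <= 0) {
    return; // no limit configured
  }
  const { size } = await fs.stat(filepath);
  if (size > maxBytes) {
    throw new Error(
      `File size is ${size} bytes. It exceeds the maximum limit of ${maxBytes} bytes.`,
    );
  }
}
```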
View File

@@ -1,4 +1,5 @@
const undici = require('undici');
const { get } = require('lodash');
const fetch = require('node-fetch');
const passport = require('passport');
const client = require('openid-client');
@@ -329,6 +330,12 @@ async function setupOpenId() {
: 'OPENID_GENERATE_NONCE=false - Standard flow without explicit nonce or metadata',
});
// Env variables that determine whether an authenticated user is treated as an admin.
// If they are not set, all users are treated as regular users.
const adminRole = process.env.OPENID_ADMIN_ROLE;
const adminRoleParameterPath = process.env.OPENID_ADMIN_ROLE_PARAMETER_PATH;
const adminRoleTokenKind = process.env.OPENID_ADMIN_ROLE_TOKEN_KIND;
const openidLogin = new CustomOpenIDStrategy(
{
config: openidConfig,
@@ -386,20 +393,19 @@ async function setupOpenId() {
} else if (requiredRoleTokenKind === 'id') {
decodedToken = jwtDecode(tokenset.id_token);
}
let roles = get(decodedToken, requiredRoleParameterPath);
if (!roles || (!Array.isArray(roles) && typeof roles !== 'string')) {
logger.error(
`[openidStrategy] Key '${requiredRoleParameterPath}' not found or invalid type in ${requiredRoleTokenKind} token!`,
);
const rolesList =
requiredRoles.length === 1
? `"${requiredRoles[0]}"`
: `one of: ${requiredRoles.map((r) => `"${r}"`).join(', ')}`;
return done(null, false, {
message: `You must have ${rolesList} role to log in.`,
});
}
if (!requiredRoles.some((role) => roles.includes(role))) {
@@ -447,6 +453,50 @@ async function setupOpenId() {
}
}
if (adminRole && adminRoleParameterPath && adminRoleTokenKind) {
let adminRoleObject;
switch (adminRoleTokenKind) {
case 'access':
adminRoleObject = jwtDecode(tokenset.access_token);
break;
case 'id':
adminRoleObject = jwtDecode(tokenset.id_token);
break;
case 'userinfo':
adminRoleObject = userinfo;
break;
default:
logger.error(
`[openidStrategy] Invalid admin role token kind: ${adminRoleTokenKind}. Must be one of 'access', 'id', or 'userinfo'.`,
);
return done(new Error('Invalid admin role token kind'));
}
const adminRoles = get(adminRoleObject, adminRoleParameterPath);
// Accept 3 types of values for the object extracted from adminRoleParameterPath:
// 1. A boolean value indicating if the user is an admin
// 2. A string with a single role name
// 3. An array of role names
if (
adminRoles &&
(adminRoles === true ||
adminRoles === adminRole ||
(Array.isArray(adminRoles) && adminRoles.includes(adminRole)))
) {
user.role = 'ADMIN';
logger.info(
`[openidStrategy] User ${username} is an admin based on role: ${adminRole}`,
);
} else if (user.role === 'ADMIN') {
user.role = 'USER';
logger.info(
`[openidStrategy] User ${username} demoted from admin - role no longer present in token`,
);
}
}
if (!!userinfo && userinfo.picture && !user.avatar?.includes('manual=true')) {
/** @type {string | undefined} */
const imageUrl = userinfo.picture;

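The switch to lodash's `get` (replacing the manual `split('.')`/`reduce` walk) is what makes the nested-path tests below pass; a small illustration, with a hypothetical decoded token:

```js
const { get } = require('lodash');

// Hypothetical decoded JWT payload (Keycloak-style structure).
const decodedToken = {
  resource_access: { 'my-client': { roles: ['app-user', 'viewer'] } },
  realm_access: { roles: ['admin'] },
  is_admin: true,
};

get(decodedToken, 'resource_access.my-client.roles'); // ['app-user', 'viewer'] — hyphenated keys work
get(decodedToken, 'missing.path'); // undefined — rejected as "not found or invalid type"

// The admin check above accepts three value shapes at OPENID_ADMIN_ROLE_PARAMETER_PATH:
get(decodedToken, 'is_admin'); // a boolean
get({ role: 'super-admin' }, 'role'); // a string matching OPENID_ADMIN_ROLE exactly
get(decodedToken, 'realm_access.roles'); // an array containing OPENID_ADMIN_ROLE
```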
View File

@@ -125,6 +125,9 @@ describe('setupOpenId', () => {
process.env.OPENID_REQUIRED_ROLE = 'requiredRole';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'roles';
process.env.OPENID_REQUIRED_ROLE_TOKEN_KIND = 'id';
process.env.OPENID_ADMIN_ROLE = 'admin';
process.env.OPENID_ADMIN_ROLE_PARAMETER_PATH = 'permissions';
process.env.OPENID_ADMIN_ROLE_TOKEN_KIND = 'id';
delete process.env.OPENID_USERNAME_CLAIM;
delete process.env.OPENID_NAME_CLAIM;
delete process.env.PROXY;
@@ -133,6 +136,7 @@ describe('setupOpenId', () => {
// Default jwtDecode mock returns a token that includes the required role.
jwtDecode.mockReturnValue({
roles: ['requiredRole'],
permissions: ['admin'],
});
// By default, assume that no user is found, so createUser will be called
@@ -441,4 +445,475 @@ describe('setupOpenId', () => {
expect(callOptions.usePKCE).toBe(false);
expect(callOptions.params?.code_challenge_method).toBeUndefined();
});
it('should set role to "ADMIN" if OPENID_ADMIN_ROLE is set and user has that role', async () => {
// Act
const { user } = await validate(tokenset);
// Assert: verify that the user role is set to "ADMIN"
expect(user.role).toBe('ADMIN');
});
it('should not set user role if OPENID_ADMIN_ROLE is set but the user does not have that role', async () => {
// Arrange: simulate a token without the admin permission
jwtDecode.mockReturnValue({
roles: ['requiredRole'],
permissions: ['not-admin'],
});
// Act
const { user } = await validate(tokenset);
// Assert: verify that the user role is not defined
expect(user.role).toBeUndefined();
});
it('should demote existing admin user when admin role is removed from token', async () => {
// Arrange: simulate an existing user who is currently an admin
const existingAdminUser = {
_id: 'existingAdminId',
provider: 'openid',
email: tokenset.claims().email,
openidId: tokenset.claims().sub,
username: 'adminuser',
name: 'Admin User',
role: 'ADMIN',
};
findUser.mockImplementation(async (query) => {
if (query.openidId === tokenset.claims().sub || query.email === tokenset.claims().email) {
return existingAdminUser;
}
return null;
});
// Token without admin permission
jwtDecode.mockReturnValue({
roles: ['requiredRole'],
permissions: ['not-admin'],
});
const { logger } = require('@librechat/data-schemas');
// Act
const { user } = await validate(tokenset);
// Assert: verify that the user was demoted
expect(user.role).toBe('USER');
expect(updateUser).toHaveBeenCalledWith(
existingAdminUser._id,
expect.objectContaining({
role: 'USER',
}),
);
expect(logger.info).toHaveBeenCalledWith(
expect.stringContaining('demoted from admin - role no longer present in token'),
);
});
it('should NOT demote admin user when admin role env vars are not configured', async () => {
// Arrange: remove admin role env vars
delete process.env.OPENID_ADMIN_ROLE;
delete process.env.OPENID_ADMIN_ROLE_PARAMETER_PATH;
delete process.env.OPENID_ADMIN_ROLE_TOKEN_KIND;
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallback();
// Simulate an existing admin user
const existingAdminUser = {
_id: 'existingAdminId',
provider: 'openid',
email: tokenset.claims().email,
openidId: tokenset.claims().sub,
username: 'adminuser',
name: 'Admin User',
role: 'ADMIN',
};
findUser.mockImplementation(async (query) => {
if (query.openidId === tokenset.claims().sub || query.email === tokenset.claims().email) {
return existingAdminUser;
}
return null;
});
jwtDecode.mockReturnValue({
roles: ['requiredRole'],
});
// Act
const { user } = await validate(tokenset);
// Assert: verify that the admin user was NOT demoted
expect(user.role).toBe('ADMIN');
expect(updateUser).toHaveBeenCalledWith(
existingAdminUser._id,
expect.objectContaining({
role: 'ADMIN',
}),
);
});
describe('lodash get - nested path extraction', () => {
it('should extract roles from deeply nested token path', async () => {
process.env.OPENID_REQUIRED_ROLE = 'app-user';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'resource_access.my-client.roles';
jwtDecode.mockReturnValue({
resource_access: {
'my-client': {
roles: ['app-user', 'viewer'],
},
},
});
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallback();
const { user } = await validate(tokenset);
expect(user).toBeTruthy();
expect(user.email).toBe(tokenset.claims().email);
});
it('should extract roles from three-level nested path', async () => {
process.env.OPENID_REQUIRED_ROLE = 'editor';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'data.access.permissions.roles';
jwtDecode.mockReturnValue({
data: {
access: {
permissions: {
roles: ['editor', 'reader'],
},
},
},
});
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallback();
const { user } = await validate(tokenset);
expect(user).toBeTruthy();
});
it('should log error and reject login when required role path does not exist in token', async () => {
const { logger } = require('@librechat/data-schemas');
process.env.OPENID_REQUIRED_ROLE = 'app-user';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'resource_access.nonexistent.roles';
jwtDecode.mockReturnValue({
resource_access: {
'my-client': {
roles: ['app-user'],
},
},
});
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallback();
const { user, details } = await validate(tokenset);
expect(logger.error).toHaveBeenCalledWith(
expect.stringContaining(
"Key 'resource_access.nonexistent.roles' not found or invalid type in id token!",
),
);
expect(user).toBe(false);
expect(details.message).toContain('role to log in');
});
it('should handle missing intermediate nested path gracefully', async () => {
const { logger } = require('@librechat/data-schemas');
process.env.OPENID_REQUIRED_ROLE = 'user';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'org.team.roles';
jwtDecode.mockReturnValue({
org: {
other: 'value',
},
});
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallback();
const { user } = await validate(tokenset);
expect(logger.error).toHaveBeenCalledWith(
expect.stringContaining("Key 'org.team.roles' not found or invalid type in id token!"),
);
expect(user).toBe(false);
});
it('should extract admin role from nested path in access token', async () => {
process.env.OPENID_ADMIN_ROLE = 'admin';
process.env.OPENID_ADMIN_ROLE_PARAMETER_PATH = 'realm_access.roles';
process.env.OPENID_ADMIN_ROLE_TOKEN_KIND = 'access';
jwtDecode.mockImplementation((token) => {
if (token === 'fake_access_token') {
return {
realm_access: {
roles: ['admin', 'user'],
},
};
}
return {
roles: ['requiredRole'],
};
});
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallback();
const { user } = await validate(tokenset);
expect(user.role).toBe('ADMIN');
});
it('should extract admin role from nested path in userinfo', async () => {
process.env.OPENID_ADMIN_ROLE = 'admin';
process.env.OPENID_ADMIN_ROLE_PARAMETER_PATH = 'organization.permissions';
process.env.OPENID_ADMIN_ROLE_TOKEN_KIND = 'userinfo';
const userinfoWithNestedGroups = {
...tokenset.claims(),
organization: {
permissions: ['admin', 'write'],
},
};
require('openid-client').fetchUserInfo.mockResolvedValue({
organization: {
permissions: ['admin', 'write'],
},
});
jwtDecode.mockReturnValue({
roles: ['requiredRole'],
});
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallback();
const { user } = await validate({
...tokenset,
claims: () => userinfoWithNestedGroups,
});
expect(user.role).toBe('ADMIN');
});
it('should handle boolean admin role value', async () => {
process.env.OPENID_ADMIN_ROLE = 'admin';
process.env.OPENID_ADMIN_ROLE_PARAMETER_PATH = 'is_admin';
jwtDecode.mockReturnValue({
roles: ['requiredRole'],
is_admin: true,
});
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallback();
const { user } = await validate(tokenset);
expect(user.role).toBe('ADMIN');
});
it('should handle string admin role value matching exactly', async () => {
process.env.OPENID_ADMIN_ROLE = 'super-admin';
process.env.OPENID_ADMIN_ROLE_PARAMETER_PATH = 'role';
jwtDecode.mockReturnValue({
roles: ['requiredRole'],
role: 'super-admin',
});
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallback();
const { user } = await validate(tokenset);
expect(user.role).toBe('ADMIN');
});
it('should not set admin role when string value does not match', async () => {
process.env.OPENID_ADMIN_ROLE = 'super-admin';
process.env.OPENID_ADMIN_ROLE_PARAMETER_PATH = 'role';
jwtDecode.mockReturnValue({
roles: ['requiredRole'],
role: 'regular-user',
});
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallback();
const { user } = await validate(tokenset);
expect(user.role).toBeUndefined();
});
it('should handle array admin role value', async () => {
process.env.OPENID_ADMIN_ROLE = 'site-admin';
process.env.OPENID_ADMIN_ROLE_PARAMETER_PATH = 'app_roles';
jwtDecode.mockReturnValue({
roles: ['requiredRole'],
app_roles: ['user', 'site-admin', 'moderator'],
});
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallback();
const { user } = await validate(tokenset);
expect(user.role).toBe('ADMIN');
});
it('should not set admin when role is not in array', async () => {
process.env.OPENID_ADMIN_ROLE = 'site-admin';
process.env.OPENID_ADMIN_ROLE_PARAMETER_PATH = 'app_roles';
jwtDecode.mockReturnValue({
roles: ['requiredRole'],
app_roles: ['user', 'moderator'],
});
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallback();
const { user } = await validate(tokenset);
expect(user.role).toBeUndefined();
});
it('should handle nested path with special characters in keys', async () => {
process.env.OPENID_REQUIRED_ROLE = 'app-user';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'resource_access.my-app-123.roles';
jwtDecode.mockReturnValue({
resource_access: {
'my-app-123': {
roles: ['app-user'],
},
},
});
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallback();
const { user } = await validate(tokenset);
expect(user).toBeTruthy();
});
it('should handle empty object at nested path', async () => {
const { logger } = require('@librechat/data-schemas');
process.env.OPENID_REQUIRED_ROLE = 'user';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'access.roles';
jwtDecode.mockReturnValue({
access: {},
});
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallback();
const { user } = await validate(tokenset);
expect(logger.error).toHaveBeenCalledWith(
expect.stringContaining("Key 'access.roles' not found or invalid type in id token!"),
);
expect(user).toBe(false);
});
it('should handle null value at intermediate path', async () => {
const { logger } = require('@librechat/data-schemas');
process.env.OPENID_REQUIRED_ROLE = 'user';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'data.roles';
jwtDecode.mockReturnValue({
data: null,
});
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallback();
const { user } = await validate(tokenset);
expect(logger.error).toHaveBeenCalledWith(
expect.stringContaining("Key 'data.roles' not found or invalid type in id token!"),
);
expect(user).toBe(false);
});
it('should reject login with invalid admin role token kind', async () => {
process.env.OPENID_ADMIN_ROLE = 'admin';
process.env.OPENID_ADMIN_ROLE_PARAMETER_PATH = 'roles';
process.env.OPENID_ADMIN_ROLE_TOKEN_KIND = 'invalid';
const { logger } = require('@librechat/data-schemas');
jwtDecode.mockReturnValue({
roles: ['requiredRole', 'admin'],
});
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallback();
await expect(validate(tokenset)).rejects.toThrow('Invalid admin role token kind');
expect(logger.error).toHaveBeenCalledWith(
expect.stringContaining(
"Invalid admin role token kind: invalid. Must be one of 'access', 'id', or 'userinfo'",
),
);
});
it('should reject login when roles path returns invalid type (object)', async () => {
const { logger } = require('@librechat/data-schemas');
process.env.OPENID_REQUIRED_ROLE = 'app-user';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'roles';
jwtDecode.mockReturnValue({
roles: { admin: true, user: false },
});
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallback();
const { user, details } = await validate(tokenset);
expect(logger.error).toHaveBeenCalledWith(
expect.stringContaining("Key 'roles' not found or invalid type in id token!"),
);
expect(user).toBe(false);
expect(details.message).toContain('role to log in');
});
it('should reject login when roles path returns invalid type (number)', async () => {
const { logger } = require('@librechat/data-schemas');
process.env.OPENID_REQUIRED_ROLE = 'user';
process.env.OPENID_REQUIRED_ROLE_PARAMETER_PATH = 'roleCount';
jwtDecode.mockReturnValue({
roleCount: 5,
});
await setupOpenId();
verifyCallback = require('openid-client/passport').__getVerifyCallback();
const { user } = await validate(tokenset);
expect(logger.error).toHaveBeenCalledWith(
expect.stringContaining("Key 'roleCount' not found or invalid type in id token!"),
);
expect(user).toBe(false);
});
});
});

View File

@@ -1054,7 +1054,7 @@
/**
* @exports TWebSearchKeys
* @typedef {import('@librechat/data-schemas').TWebSearchKeys} TWebSearchKeys
* @memberof typedefs
*/
@@ -1103,13 +1103,13 @@
/**
* @exports AppConfig
* @typedef {import('@librechat/data-schemas').AppConfig} AppConfig
* @memberof typedefs
*/
/**
* @exports JsonSchemaType
* @typedef {import('@librechat/data-schemas').JsonSchemaType} JsonSchemaType
* @memberof typedefs
*/
@@ -1371,12 +1371,7 @@
/**
* @exports FunctionTool
* @typedef {import('@librechat/data-schemas').FunctionTool} FunctionTool
* @memberof typedefs
*/

View File

@@ -186,6 +186,19 @@ describe('getModelMaxTokens', () => {
);
});
test('should return correct tokens for gpt-5-pro matches', () => {
expect(getModelMaxTokens('gpt-5-pro')).toBe(maxTokensMap[EModelEndpoint.openAI]['gpt-5-pro']);
expect(getModelMaxTokens('gpt-5-pro-preview')).toBe(
maxTokensMap[EModelEndpoint.openAI]['gpt-5-pro'],
);
expect(getModelMaxTokens('openai/gpt-5-pro')).toBe(
maxTokensMap[EModelEndpoint.openAI]['gpt-5-pro'],
);
expect(getModelMaxTokens('gpt-5-pro-2025-01-30')).toBe(
maxTokensMap[EModelEndpoint.openAI]['gpt-5-pro'],
);
});
test('should return correct tokens for Anthropic models', () => {
const models = [
'claude-2.1',
@@ -396,15 +409,80 @@ describe('getModelMaxTokens', () => {
});
test('should return correct tokens for GPT-OSS models', () => {
const expected = maxTokensMap[EModelEndpoint.openAI]['gpt-oss'];
[
'gpt-oss:20b',
'gpt-oss-20b',
'gpt-oss-120b',
'openai/gpt-oss-20b',
'openai/gpt-oss-120b',
'openai/gpt-oss:120b',
].forEach((name) => {
expect(getModelMaxTokens(name)).toBe(expected);
});
});
test('should return correct tokens for GLM models', () => {
expect(getModelMaxTokens('glm-4.6')).toBe(maxTokensMap[EModelEndpoint.openAI]['glm-4.6']);
expect(getModelMaxTokens('glm-4.5v')).toBe(maxTokensMap[EModelEndpoint.openAI]['glm-4.5v']);
expect(getModelMaxTokens('glm-4.5-air')).toBe(
maxTokensMap[EModelEndpoint.openAI]['glm-4.5-air'],
);
expect(getModelMaxTokens('glm-4.5')).toBe(maxTokensMap[EModelEndpoint.openAI]['glm-4.5']);
expect(getModelMaxTokens('glm-4-32b')).toBe(maxTokensMap[EModelEndpoint.openAI]['glm-4-32b']);
expect(getModelMaxTokens('glm-4')).toBe(maxTokensMap[EModelEndpoint.openAI]['glm-4']);
expect(getModelMaxTokens('glm4')).toBe(maxTokensMap[EModelEndpoint.openAI]['glm4']);
});
test('should return correct tokens for GLM models with provider prefixes', () => {
expect(getModelMaxTokens('z-ai/glm-4.6')).toBe(maxTokensMap[EModelEndpoint.openAI]['glm-4.6']);
expect(getModelMaxTokens('z-ai/glm-4.5')).toBe(maxTokensMap[EModelEndpoint.openAI]['glm-4.5']);
expect(getModelMaxTokens('z-ai/glm-4.5-air')).toBe(
maxTokensMap[EModelEndpoint.openAI]['glm-4.5-air'],
);
expect(getModelMaxTokens('z-ai/glm-4.5v')).toBe(
maxTokensMap[EModelEndpoint.openAI]['glm-4.5v'],
);
expect(getModelMaxTokens('z-ai/glm-4-32b')).toBe(
maxTokensMap[EModelEndpoint.openAI]['glm-4-32b'],
);
expect(getModelMaxTokens('zai/glm-4.6')).toBe(maxTokensMap[EModelEndpoint.openAI]['glm-4.6']);
expect(getModelMaxTokens('zai/glm-4.5-air')).toBe(
maxTokensMap[EModelEndpoint.openAI]['glm-4.5-air'],
);
expect(getModelMaxTokens('zai/glm-4.5v')).toBe(maxTokensMap[EModelEndpoint.openAI]['glm-4.5v']);
expect(getModelMaxTokens('zai-org/GLM-4.6')).toBe(
maxTokensMap[EModelEndpoint.openAI]['glm-4.6'],
);
expect(getModelMaxTokens('zai-org/GLM-4.5')).toBe(
maxTokensMap[EModelEndpoint.openAI]['glm-4.5'],
);
expect(getModelMaxTokens('zai-org/GLM-4.5-Air')).toBe(
maxTokensMap[EModelEndpoint.openAI]['glm-4.5-air'],
);
expect(getModelMaxTokens('zai-org/GLM-4.5V')).toBe(
maxTokensMap[EModelEndpoint.openAI]['glm-4.5v'],
);
expect(getModelMaxTokens('zai-org/GLM-4-32B-0414')).toBe(
maxTokensMap[EModelEndpoint.openAI]['glm-4-32b'],
);
});
test('should return correct tokens for GLM models with suffixes', () => {
expect(getModelMaxTokens('glm-4.6-fp8')).toBe(maxTokensMap[EModelEndpoint.openAI]['glm-4.6']);
expect(getModelMaxTokens('zai-org/GLM-4.6-FP8')).toBe(
maxTokensMap[EModelEndpoint.openAI]['glm-4.6'],
);
expect(getModelMaxTokens('zai-org/GLM-4.5-Air-FP8')).toBe(
maxTokensMap[EModelEndpoint.openAI]['glm-4.5-air'],
);
});
test('should return correct max output tokens for GPT-5 models', () => {
const { getModelMaxOutputTokens } = require('@librechat/api');
['gpt-5', 'gpt-5-mini', 'gpt-5-nano', 'gpt-5-pro'].forEach((model) => {
expect(getModelMaxOutputTokens(model)).toBe(maxOutputTokensMap[EModelEndpoint.openAI][model]);
expect(getModelMaxOutputTokens(model, EModelEndpoint.openAI)).toBe(
maxOutputTokensMap[EModelEndpoint.openAI][model],
@@ -517,6 +595,13 @@ describe('matchModelName', () => {
expect(matchModelName('gpt-5-nano-2025-01-30')).toBe('gpt-5-nano');
});
it('should return the closest matching key for gpt-5-pro matches', () => {
expect(matchModelName('openai/gpt-5-pro')).toBe('gpt-5-pro');
expect(matchModelName('gpt-5-pro-preview')).toBe('gpt-5-pro');
expect(matchModelName('gpt-5-pro-2025-01-30')).toBe('gpt-5-pro');
expect(matchModelName('gpt-5-pro-2025-01-30-0130')).toBe('gpt-5-pro');
});
// Tests for Google models
it('should return the exact model name if it exists in maxTokensMap - Google models', () => {
expect(matchModelName('text-bison-32k', EModelEndpoint.google)).toBe('text-bison-32k');
@@ -767,6 +852,49 @@ describe('Claude Model Tests', () => {
);
});
it('should return correct context length for Claude Haiku 4.5', () => {
expect(getModelMaxTokens('claude-haiku-4-5', EModelEndpoint.anthropic)).toBe(
maxTokensMap[EModelEndpoint.anthropic]['claude-haiku-4-5'],
);
expect(getModelMaxTokens('claude-haiku-4-5')).toBe(
maxTokensMap[EModelEndpoint.anthropic]['claude-haiku-4-5'],
);
});
it('should handle Claude Haiku 4.5 model name variations', () => {
const modelVariations = [
'claude-haiku-4-5',
'claude-haiku-4-5-20250420',
'claude-haiku-4-5-latest',
'anthropic/claude-haiku-4-5',
'claude-haiku-4-5/anthropic',
'claude-haiku-4-5-preview',
];
modelVariations.forEach((model) => {
const modelKey = findMatchingPattern(model, maxTokensMap[EModelEndpoint.anthropic]);
expect(modelKey).toBe('claude-haiku-4-5');
expect(getModelMaxTokens(model, EModelEndpoint.anthropic)).toBe(
maxTokensMap[EModelEndpoint.anthropic]['claude-haiku-4-5'],
);
});
});
it('should match model names correctly for Claude Haiku 4.5', () => {
const modelVariations = [
'claude-haiku-4-5',
'claude-haiku-4-5-20250420',
'claude-haiku-4-5-latest',
'anthropic/claude-haiku-4-5',
'claude-haiku-4-5/anthropic',
'claude-haiku-4-5-preview',
];
modelVariations.forEach((model) => {
expect(matchModelName(model, EModelEndpoint.anthropic)).toBe('claude-haiku-4-5');
});
});
it('should handle Claude 4 model name variations with different prefixes and suffixes', () => {
const modelVariations = [
'claude-sonnet-4',
@@ -858,3 +986,206 @@ describe('Kimi Model Tests', () => {
});
});
});
describe('Qwen3 Model Tests', () => {
describe('getModelMaxTokens', () => {
test('should return correct tokens for Qwen3 base pattern', () => {
expect(getModelMaxTokens('qwen3')).toBe(maxTokensMap[EModelEndpoint.openAI]['qwen3']);
});
test('should return correct tokens for qwen3-4b (falls back to qwen3)', () => {
expect(getModelMaxTokens('qwen3-4b')).toBe(maxTokensMap[EModelEndpoint.openAI]['qwen3']);
});
test('should return correct tokens for Qwen3 base models', () => {
expect(getModelMaxTokens('qwen3-8b')).toBe(maxTokensMap[EModelEndpoint.openAI]['qwen3-8b']);
expect(getModelMaxTokens('qwen3-14b')).toBe(maxTokensMap[EModelEndpoint.openAI]['qwen3-14b']);
expect(getModelMaxTokens('qwen3-32b')).toBe(maxTokensMap[EModelEndpoint.openAI]['qwen3-32b']);
expect(getModelMaxTokens('qwen3-235b-a22b')).toBe(
maxTokensMap[EModelEndpoint.openAI]['qwen3-235b-a22b'],
);
});
test('should return correct tokens for Qwen3 VL (Vision-Language) models', () => {
expect(getModelMaxTokens('qwen3-vl-8b-thinking')).toBe(
maxTokensMap[EModelEndpoint.openAI]['qwen3-vl-8b-thinking'],
);
expect(getModelMaxTokens('qwen3-vl-8b-instruct')).toBe(
maxTokensMap[EModelEndpoint.openAI]['qwen3-vl-8b-instruct'],
);
expect(getModelMaxTokens('qwen3-vl-30b-a3b')).toBe(
maxTokensMap[EModelEndpoint.openAI]['qwen3-vl-30b-a3b'],
);
expect(getModelMaxTokens('qwen3-vl-235b-a22b')).toBe(
maxTokensMap[EModelEndpoint.openAI]['qwen3-vl-235b-a22b'],
);
});
test('should return correct tokens for Qwen3 specialized models', () => {
expect(getModelMaxTokens('qwen3-max')).toBe(maxTokensMap[EModelEndpoint.openAI]['qwen3-max']);
expect(getModelMaxTokens('qwen3-coder')).toBe(
maxTokensMap[EModelEndpoint.openAI]['qwen3-coder'],
);
expect(getModelMaxTokens('qwen3-coder-30b-a3b')).toBe(
maxTokensMap[EModelEndpoint.openAI]['qwen3-coder-30b-a3b'],
);
expect(getModelMaxTokens('qwen3-coder-plus')).toBe(
maxTokensMap[EModelEndpoint.openAI]['qwen3-coder-plus'],
);
expect(getModelMaxTokens('qwen3-coder-flash')).toBe(
maxTokensMap[EModelEndpoint.openAI]['qwen3-coder-flash'],
);
expect(getModelMaxTokens('qwen3-next-80b-a3b')).toBe(
maxTokensMap[EModelEndpoint.openAI]['qwen3-next-80b-a3b'],
);
});
test('should handle Qwen3 models with provider prefixes', () => {
expect(getModelMaxTokens('alibaba/qwen3')).toBe(maxTokensMap[EModelEndpoint.openAI]['qwen3']);
expect(getModelMaxTokens('alibaba/qwen3-4b')).toBe(
maxTokensMap[EModelEndpoint.openAI]['qwen3'],
);
expect(getModelMaxTokens('qwen/qwen3-8b')).toBe(
maxTokensMap[EModelEndpoint.openAI]['qwen3-8b'],
);
expect(getModelMaxTokens('openrouter/qwen3-max')).toBe(
maxTokensMap[EModelEndpoint.openAI]['qwen3-max'],
);
expect(getModelMaxTokens('alibaba/qwen3-vl-8b-instruct')).toBe(
maxTokensMap[EModelEndpoint.openAI]['qwen3-vl-8b-instruct'],
);
expect(getModelMaxTokens('qwen/qwen3-coder')).toBe(
maxTokensMap[EModelEndpoint.openAI]['qwen3-coder'],
);
});
test('should handle Qwen3 models with suffixes', () => {
expect(getModelMaxTokens('qwen3-preview')).toBe(maxTokensMap[EModelEndpoint.openAI]['qwen3']);
expect(getModelMaxTokens('qwen3-4b-preview')).toBe(
maxTokensMap[EModelEndpoint.openAI]['qwen3'],
);
expect(getModelMaxTokens('qwen3-8b-latest')).toBe(
maxTokensMap[EModelEndpoint.openAI]['qwen3-8b'],
);
expect(getModelMaxTokens('qwen3-max-2024')).toBe(
maxTokensMap[EModelEndpoint.openAI]['qwen3-max'],
);
});
});
describe('matchModelName', () => {
test('should match exact Qwen3 model names', () => {
expect(matchModelName('qwen3')).toBe('qwen3');
expect(matchModelName('qwen3-4b')).toBe('qwen3');
expect(matchModelName('qwen3-8b')).toBe('qwen3-8b');
expect(matchModelName('qwen3-vl-8b-thinking')).toBe('qwen3-vl-8b-thinking');
expect(matchModelName('qwen3-max')).toBe('qwen3-max');
expect(matchModelName('qwen3-coder')).toBe('qwen3-coder');
});
test('should match Qwen3 model variations with provider prefixes', () => {
expect(matchModelName('alibaba/qwen3')).toBe('qwen3');
expect(matchModelName('alibaba/qwen3-4b')).toBe('qwen3');
expect(matchModelName('qwen/qwen3-8b')).toBe('qwen3-8b');
expect(matchModelName('openrouter/qwen3-max')).toBe('qwen3-max');
expect(matchModelName('alibaba/qwen3-vl-8b-instruct')).toBe('qwen3-vl-8b-instruct');
expect(matchModelName('qwen/qwen3-coder')).toBe('qwen3-coder');
});
test('should match Qwen3 model variations with suffixes', () => {
expect(matchModelName('qwen3-preview')).toBe('qwen3');
expect(matchModelName('qwen3-4b-preview')).toBe('qwen3');
expect(matchModelName('qwen3-8b-latest')).toBe('qwen3-8b');
expect(matchModelName('qwen3-max-2024')).toBe('qwen3-max');
expect(matchModelName('qwen3-coder-v1')).toBe('qwen3-coder');
});
});
});
describe('GLM Model Tests (Zhipu AI)', () => {
describe('getModelMaxTokens', () => {
test('should return correct tokens for GLM models', () => {
expect(getModelMaxTokens('glm-4.6')).toBe(200000);
expect(getModelMaxTokens('glm-4.5v')).toBe(66000);
expect(getModelMaxTokens('glm-4.5-air')).toBe(131000);
expect(getModelMaxTokens('glm-4.5')).toBe(131000);
expect(getModelMaxTokens('glm-4-32b')).toBe(128000);
expect(getModelMaxTokens('glm-4')).toBe(128000);
expect(getModelMaxTokens('glm4')).toBe(128000);
});
test('should handle partial matches for GLM models with provider prefixes', () => {
expect(getModelMaxTokens('z-ai/glm-4.6')).toBe(200000);
expect(getModelMaxTokens('z-ai/glm-4.5')).toBe(131000);
expect(getModelMaxTokens('z-ai/glm-4.5-air')).toBe(131000);
expect(getModelMaxTokens('z-ai/glm-4.5v')).toBe(66000);
expect(getModelMaxTokens('z-ai/glm-4-32b')).toBe(128000);
expect(getModelMaxTokens('zai/glm-4.6')).toBe(200000);
expect(getModelMaxTokens('zai/glm-4.5')).toBe(131000);
expect(getModelMaxTokens('zai/glm-4.5-air')).toBe(131000);
expect(getModelMaxTokens('zai/glm-4.5v')).toBe(66000);
expect(getModelMaxTokens('zai-org/GLM-4.6')).toBe(200000);
expect(getModelMaxTokens('zai-org/GLM-4.5')).toBe(131000);
expect(getModelMaxTokens('zai-org/GLM-4.5-Air')).toBe(131000);
expect(getModelMaxTokens('zai-org/GLM-4.5V')).toBe(66000);
expect(getModelMaxTokens('zai-org/GLM-4-32B-0414')).toBe(128000);
});
test('should handle GLM model variations with suffixes', () => {
expect(getModelMaxTokens('glm-4.6-fp8')).toBe(200000);
expect(getModelMaxTokens('zai-org/GLM-4.6-FP8')).toBe(200000);
expect(getModelMaxTokens('zai-org/GLM-4.5-Air-FP8')).toBe(131000);
});
test('should prioritize more specific GLM patterns', () => {
expect(getModelMaxTokens('glm-4.5-air-custom')).toBe(131000);
expect(getModelMaxTokens('glm-4.5-custom')).toBe(131000);
expect(getModelMaxTokens('glm-4.5v-custom')).toBe(66000);
});
});
describe('matchModelName', () => {
test('should match exact GLM model names', () => {
expect(matchModelName('glm-4.6')).toBe('glm-4.6');
expect(matchModelName('glm-4.5v')).toBe('glm-4.5v');
expect(matchModelName('glm-4.5-air')).toBe('glm-4.5-air');
expect(matchModelName('glm-4.5')).toBe('glm-4.5');
expect(matchModelName('glm-4-32b')).toBe('glm-4-32b');
expect(matchModelName('glm-4')).toBe('glm-4');
expect(matchModelName('glm4')).toBe('glm4');
});
test('should match GLM model variations with provider prefixes', () => {
expect(matchModelName('z-ai/glm-4.6')).toBe('glm-4.6');
expect(matchModelName('z-ai/glm-4.5')).toBe('glm-4.5');
expect(matchModelName('z-ai/glm-4.5-air')).toBe('glm-4.5-air');
expect(matchModelName('z-ai/glm-4.5v')).toBe('glm-4.5v');
expect(matchModelName('z-ai/glm-4-32b')).toBe('glm-4-32b');
expect(matchModelName('zai/glm-4.6')).toBe('glm-4.6');
expect(matchModelName('zai/glm-4.5')).toBe('glm-4.5');
expect(matchModelName('zai/glm-4.5-air')).toBe('glm-4.5-air');
expect(matchModelName('zai/glm-4.5v')).toBe('glm-4.5v');
expect(matchModelName('zai-org/GLM-4.6')).toBe('glm-4.6');
expect(matchModelName('zai-org/GLM-4.5')).toBe('glm-4.5');
expect(matchModelName('zai-org/GLM-4.5-Air')).toBe('glm-4.5-air');
expect(matchModelName('zai-org/GLM-4.5V')).toBe('glm-4.5v');
expect(matchModelName('zai-org/GLM-4-32B-0414')).toBe('glm-4-32b');
});
test('should match GLM model variations with suffixes', () => {
expect(matchModelName('glm-4.6-fp8')).toBe('glm-4.6');
expect(matchModelName('zai-org/GLM-4.6-FP8')).toBe('glm-4.6');
expect(matchModelName('zai-org/GLM-4.5-Air-FP8')).toBe('glm-4.5-air');
});
test('should handle case-insensitive matching for GLM models', () => {
expect(matchModelName('zai-org/GLM-4.6')).toBe('glm-4.6');
expect(matchModelName('zai-org/GLM-4.5V')).toBe('glm-4.5v');
expect(matchModelName('zai-org/GLM-4-32B-0414')).toBe('glm-4-32b');
});
});
});

View File

@@ -136,6 +136,7 @@
"babel-plugin-transform-import-meta": "^2.3.2",
"babel-plugin-transform-vite-meta-env": "^1.0.3",
"eslint-plugin-jest": "^28.11.0",
"fs-extra": "^11.3.2",
"identity-obj-proxy": "^3.0.0",
"jest": "^29.7.0",
"jest-canvas-mock": "^2.5.2",
@@ -148,7 +149,7 @@
"tailwindcss": "^3.4.1",
"ts-jest": "^29.2.5",
"typescript": "^5.3.3",
"vite": "^6.3.6",
"vite": "^6.4.1",
"vite-plugin-compression2": "^2.2.1",
"vite-plugin-node-polyfills": "^0.23.0",
"vite-plugin-pwa": "^0.21.2"

View File

@@ -1,3 +1,4 @@
import { useEffect } from 'react';
import { RecoilRoot } from 'recoil';
import { DndProvider } from 'react-dnd';
import { RouterProvider } from 'react-router-dom';
@@ -8,6 +9,7 @@ import { Toast, ThemeProvider, ToastProvider } from '@librechat/client';
import { QueryClient, QueryClientProvider, QueryCache } from '@tanstack/react-query';
import { ScreenshotProvider, useApiErrorBoundary } from './hooks';
import { getThemeFromEnv } from './utils/getThemeFromEnv';
import { initializeFontSize } from '~/store/fontSize';
import { LiveAnnouncer } from '~/a11y';
import { router } from './routes';
@@ -24,6 +26,10 @@ const App = () => {
}),
});
useEffect(() => {
initializeFontSize();
}, []);
// Load theme from environment variables if available
const envTheme = getThemeFromEnv();

View File

@@ -1,23 +1,38 @@
import React, { createContext, useContext, useMemo } from 'react';
import type { EModelEndpoint } from 'librechat-data-provider';
import { useGetEndpointsQuery } from '~/data-provider';
import { getEndpointField } from '~/utils/endpoints';
import { useChatContext } from './ChatContext';
interface DragDropContextValue {
conversationId: string | null | undefined;
agentId: string | null | undefined;
endpoint: string | null | undefined;
endpointType?: EModelEndpoint | undefined;
}
const DragDropContext = createContext<DragDropContextValue | undefined>(undefined);
export function DragDropProvider({ children }: { children: React.ReactNode }) {
const { conversation } = useChatContext();
const { data: endpointsConfig } = useGetEndpointsQuery();
const endpointType = useMemo(() => {
return (
getEndpointField(endpointsConfig, conversation?.endpoint, 'type') ||
(conversation?.endpoint as EModelEndpoint | undefined)
);
}, [conversation?.endpoint, endpointsConfig]);
/** Context value only created when conversation fields change */
const contextValue = useMemo<DragDropContextValue>(
() => ({
conversationId: conversation?.conversationId,
agentId: conversation?.agent_id,
endpoint: conversation?.endpoint,
endpointType: endpointType,
}),
[conversation?.conversationId, conversation?.agent_id, conversation?.endpoint, endpointType],
);
return <DragDropContext.Provider value={contextValue}>{children}</DragDropContext.Provider>;

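The `endpointType` fallback pattern here (also used in `AttachFileChat` below) resolves a custom endpoint's declared `type` and falls back to the endpoint name itself for system endpoints. A simplified stand-in for `getEndpointField`, with illustrative config:

```js
// Simplified stand-in (the real helper lives in ~/utils/endpoints).
const getEndpointField = (endpointsConfig, endpoint, field) =>
  endpoint ? endpointsConfig?.[endpoint]?.[field] : undefined;

const endpointsConfig = { 'My OpenRouter': { type: 'custom' }, openAI: {} };

getEndpointField(endpointsConfig, 'My OpenRouter', 'type') || 'My OpenRouter'; // 'custom'
getEndpointField(endpointsConfig, 'openAI', 'type') || 'openAI'; // 'openAI' — no type field, falls back to the name
```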
View File

@@ -11,9 +11,9 @@ import {
AgentListResponse,
} from 'librechat-data-provider';
import type t from 'librechat-data-provider';
import { renderAgentAvatar, clearMessagesCache } from '~/utils';
import { useLocalize, useDefaultConvo } from '~/hooks';
import { useChatContext } from '~/Providers';
interface SupportContact {
name?: string;
@@ -56,10 +56,7 @@ const AgentDetail: React.FC<AgentDetailProps> = ({ agent, isOpen, onClose }) =>
localStorage.setItem(`${LocalStorageKeys.AGENT_ID_PREFIX}0`, agent.id);
clearMessagesCache(queryClient, conversation?.conversationId);
queryClient.invalidateQueries([QueryKeys.messages]);
/** Template with agent configuration */

View File

@@ -4,7 +4,7 @@ import { useOutletContext } from 'react-router-dom';
import { useQueryClient } from '@tanstack/react-query';
import { useSearchParams, useParams, useNavigate } from 'react-router-dom';
import { TooltipAnchor, Button, NewChatIcon, useMediaQuery } from '@librechat/client';
import { PermissionTypes, Permissions, QueryKeys } from 'librechat-data-provider';
import type t from 'librechat-data-provider';
import type { ContextType } from '~/common';
import { useDocumentTitle, useHasAccess, useLocalize, TranslationKeys } from '~/hooks';
@@ -13,11 +13,11 @@ import MarketplaceAdminSettings from './MarketplaceAdminSettings';
import { SidePanelProvider, useChatContext } from '~/Providers';
import { SidePanelGroup } from '~/components/SidePanel';
import { OpenSidebar } from '~/components/Chat/Menus';
import { cn, clearMessagesCache } from '~/utils';
import CategoryTabs from './CategoryTabs';
import AgentDetail from './AgentDetail';
import SearchBar from './SearchBar';
import AgentGrid from './AgentGrid';
import store from '~/store';
interface AgentMarketplaceProps {
@@ -224,10 +224,7 @@ const AgentMarketplace: React.FC<AgentMarketplaceProps> = ({ className = '' }) =
window.open('/c/new', '_blank');
return;
}
clearMessagesCache(queryClient, conversation?.conversationId);
queryClient.invalidateQueries([QueryKeys.messages]);
newConversation();
};

View File

@@ -144,9 +144,10 @@ export const ArtifactCodeEditor = function ({
}
return {
...sharedOptions,
activeFile: '/' + fileKey,
bundlerURL: template === 'static' ? config.staticBundlerURL : config.bundlerURL,
};
}, [config, template, fileKey]);
const [readOnly, setReadOnly] = useState(isSubmitting ?? false);
useEffect(() => {
setReadOnly(isSubmitting ?? false);

View File

@@ -13,7 +13,6 @@ export const ArtifactPreview = memo(function ({
files,
fileKey,
template,
isMermaid,
sharedProps,
previewRef,
currentCode,
@@ -21,7 +20,6 @@ export const ArtifactPreview = memo(function ({
}: {
files: ArtifactFiles;
fileKey: string;
isMermaid: boolean;
template: SandpackProviderProps['template'];
sharedProps: Partial<SandpackProviderProps>;
previewRef: React.MutableRefObject<SandpackPreviewRef>;
@@ -56,15 +54,6 @@ export const ArtifactPreview = memo(function ({
return _options;
}, [startupConfig, template]);
const style: PreviewProps['style'] | undefined = useMemo(() => {
if (isMermaid) {
return {
backgroundColor: '#282C34',
};
}
return;
}, [isMermaid]);
if (Object.keys(artifactFiles).length === 0) {
return null;
}
@@ -84,7 +73,6 @@ export const ArtifactPreview = memo(function ({
showRefreshButton={false}
tabIndex={0}
ref={previewRef}
style={style}
/>
</SandpackProvider>
);

View File

@@ -8,17 +8,14 @@ import { useAutoScroll } from '~/hooks/Artifacts/useAutoScroll';
import { ArtifactCodeEditor } from './ArtifactCodeEditor';
import { useGetStartupConfig } from '~/data-provider';
import { ArtifactPreview } from './ArtifactPreview';
import { MermaidMarkdown } from './MermaidMarkdown';
import { cn } from '~/utils';
export default function ArtifactTabs({
artifact,
isMermaid,
editorRef,
previewRef,
}: {
artifact: Artifact;
isMermaid: boolean;
editorRef: React.MutableRefObject<CodeEditorRef>;
previewRef: React.MutableRefObject<SandpackPreviewRef>;
}) {
@@ -44,26 +41,22 @@ export default function ArtifactTabs({
value="code"
id="artifacts-code"
className={cn('flex-grow overflow-auto')}
tabIndex={-1}
>
<ArtifactCodeEditor
files={files}
fileKey={fileKey}
template={template}
artifact={artifact}
editorRef={editorRef}
sharedProps={sharedProps}
/>
</Tabs.Content>
<Tabs.Content value="preview" className="flex-grow overflow-auto">
<Tabs.Content value="preview" className="flex-grow overflow-auto" tabIndex={-1}>
<ArtifactPreview
files={files}
fileKey={fileKey}
template={template}
isMermaid={isMermaid}
previewRef={previewRef}
sharedProps={sharedProps}
currentCode={currentCode}

View File

@@ -27,7 +27,6 @@ export default function Artifacts() {
const {
activeTab,
isMermaid,
setActiveTab,
currentIndex,
cycleArtifact,
@@ -116,7 +115,6 @@ export default function Artifacts() {
</div>
{/* Content */}
<ArtifactTabs
isMermaid={isMermaid}
artifact={currentArtifact}
editorRef={editorRef as React.MutableRefObject<CodeEditorRef>}
previewRef={previewRef as React.MutableRefObject<SandpackPreviewRef>}

View File

@@ -1,11 +0,0 @@
import { CodeMarkdown } from './Code';
export function MermaidMarkdown({
content,
isSubmitting,
}: {
content: string;
isSubmitting: boolean;
}) {
return <CodeMarkdown content={`\`\`\`mermaid\n${content}\`\`\``} isSubmitting={isSubmitting} />;
}

View File

@@ -19,9 +19,11 @@ export function BrowserVoiceDropdown() {
}
};
const labelId = 'browser-voice-dropdown-label';
return (
<div className="flex items-center justify-between">
<div id={labelId}>{localize('com_nav_voice_select')}</div>
<Dropdown
key={`browser-voice-dropdown-${voices.length}`}
value={voice ?? ''}
@@ -30,6 +32,7 @@ export function BrowserVoiceDropdown() {
sizeClasses="min-w-[200px] !max-w-[400px] [--anchor-max-width:400px]"
testId="BrowserVoiceDropdown"
className="z-50"
aria-labelledby={labelId}
/>
</div>
);
@@ -48,9 +51,11 @@ export function ExternalVoiceDropdown() {
}
};
const labelId = 'external-voice-dropdown-label';
return (
<div className="flex items-center justify-between">
<div id={labelId}>{localize('com_nav_voice_select')}</div>
<Dropdown
key={`external-voice-dropdown-${voices.length}`}
value={voice ?? ''}
@@ -59,6 +64,7 @@ export function ExternalVoiceDropdown() {
sizeClasses="min-w-[200px] !max-w-[400px] [--anchor-max-width:400px]"
testId="ExternalVoiceDropdown"
className="z-50"
aria-labelledby={labelId}
/>
</div>
);

View File

@@ -2,13 +2,15 @@ import { memo, useMemo } from 'react';
import {
Constants,
supportsFiles,
EModelEndpoint,
mergeFileConfig,
isAgentsEndpoint,
isAssistantsEndpoint,
fileConfig as defaultFileConfig,
} from 'librechat-data-provider';
import type { EndpointFileConfig, TConversation } from 'librechat-data-provider';
import { useGetFileConfig, useGetEndpointsQuery } from '~/data-provider';
import { getEndpointField } from '~/utils/endpoints';
import AttachFileMenu from './AttachFileMenu';
import AttachFile from './AttachFile';
@@ -20,7 +22,7 @@ function AttachFileChat({
conversation: TConversation | null;
}) {
const conversationId = conversation?.conversationId ?? Constants.NEW_CONVO;
const { endpoint } = conversation ?? { endpoint: null };
const isAgents = useMemo(() => isAgentsEndpoint(endpoint), [endpoint]);
const isAssistants = useMemo(() => isAssistantsEndpoint(endpoint), [endpoint]);
@@ -28,6 +30,15 @@ function AttachFileChat({
select: (data) => mergeFileConfig(data),
});
const { data: endpointsConfig } = useGetEndpointsQuery();
const endpointType = useMemo(() => {
return (
getEndpointField(endpointsConfig, endpoint, 'type') ||
(endpoint as EModelEndpoint | undefined)
);
}, [endpoint, endpointsConfig]);
const endpointFileConfig = fileConfig.endpoints[endpoint ?? ''] as EndpointFileConfig | undefined;
const endpointSupportsFiles: boolean = supportsFiles[endpointType ?? endpoint ?? ''] ?? false;
const isUploadDisabled = (disableInputs || endpointFileConfig?.disabled) ?? false;
@@ -37,7 +48,9 @@ function AttachFileChat({
} else if (isAgents || (endpointSupportsFiles && !isUploadDisabled)) {
return (
<AttachFileMenu
endpoint={endpoint}
disabled={disableInputs}
endpointType={endpointType}
conversationId={conversationId}
agentId={conversation?.agent_id}
endpointFileConfig={endpointFileConfig}

View File

@@ -1,8 +1,19 @@
import React, { useRef, useState, useMemo } from 'react';
import * as Ariakit from '@ariakit/react';
import { useRecoilState } from 'recoil';
import {
FileSearch,
ImageUpIcon,
FileType2Icon,
FileImageIcon,
TerminalSquareIcon,
} from 'lucide-react';
import {
EToolResources,
EModelEndpoint,
defaultAgentCapabilities,
isDocumentSupportedProvider,
} from 'librechat-data-provider';
import {
FileUpload,
TooltipAnchor,
@@ -26,15 +37,19 @@ import { MenuItemProps } from '~/common';
import { cn } from '~/utils';
interface AttachFileMenuProps {
agentId?: string | null;
endpoint?: string | null;
disabled?: boolean | null;
conversationId: string;
endpointType?: EModelEndpoint;
endpointFileConfig?: EndpointFileConfig;
}
const AttachFileMenu = ({
agentId,
endpoint,
disabled,
endpointType,
conversationId,
endpointFileConfig,
}: AttachFileMenuProps) => {
@@ -55,44 +70,75 @@ const AttachFileMenu = ({
overrideEndpointFileConfig: endpointFileConfig,
toolResource,
});
const { data: startupConfig } = useGetStartupConfig();
const sharePointEnabled = startupConfig?.sharePointFilePickerEnabled;
const [isSharePointDialogOpen, setIsSharePointDialogOpen] = useState(false);
const { agentsConfig } = useGetAgentsConfig();
/** TODO: Ephemeral Agent Capabilities
* Allow defining agent capabilities on a per-endpoint basis
* Use definition for agents endpoint for ephemeral agents
* */
const capabilities = useAgentCapabilities(agentsConfig?.capabilities ?? defaultAgentCapabilities);
const { fileSearchAllowedByAgent, codeAllowedByAgent, provider } = useAgentToolPermissions(
agentId,
ephemeralAgent,
);
const handleUploadClick = (
fileType?: 'image' | 'document' | 'multimodal' | 'google_multimodal',
) => {
if (!inputRef.current) {
return;
}
inputRef.current.value = '';
if (fileType === 'image') {
inputRef.current.accept = 'image/*';
} else if (fileType === 'document') {
inputRef.current.accept = '.pdf,application/pdf';
} else if (fileType === 'multimodal') {
inputRef.current.accept = 'image/*,.pdf,application/pdf';
} else if (fileType === 'google_multimodal') {
inputRef.current.accept = 'image/*,.pdf,application/pdf,video/*,audio/*';
} else {
inputRef.current.accept = '';
}
inputRef.current.click();
inputRef.current.accept = '';
};
const dropdownItems = useMemo(() => {
const createMenuItems = (
onAction: (fileType?: 'image' | 'document' | 'multimodal' | 'google_multimodal') => void,
) => {
const items: MenuItemProps[] = [];
const currentProvider = provider || endpoint;
if (isDocumentSupportedProvider(currentProvider || endpointType)) {
items.push({
label: localize('com_ui_upload_provider'),
onClick: () => {
setToolResource(undefined);
onAction(
(provider || endpoint) === EModelEndpoint.google ? 'google_multimodal' : 'multimodal',
);
},
icon: <FileImageIcon className="icon-md" />,
});
} else {
items.push({
label: localize('com_ui_upload_image_input'),
onClick: () => {
setToolResource(undefined);
onAction('image');
},
icon: <ImageUpIcon className="icon-md" />,
});
}
if (capabilities.contextEnabled) {
items.push({
@@ -156,8 +202,11 @@ const AttachFileMenu = ({
return localItems;
}, [
localize,
endpoint,
provider,
endpointType,
capabilities,
setToolResource,
setEphemeralAgent,
sharePointEnabled,

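The `accept` strings set in `handleUploadClick` above map each upload choice to a native file-picker filter; a condensed view of that mapping, copied from the branches above:

```js
// Per-fileType accept filters (google_multimodal additionally allows video/audio).
const acceptByFileType = {
  image: 'image/*',
  document: '.pdf,application/pdf',
  multimodal: 'image/*,.pdf,application/pdf',
  google_multimodal: 'image/*,.pdf,application/pdf,video/*,audio/*',
};
```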
View File

@@ -1,8 +1,19 @@
import React, { useMemo } from 'react';
import { useRecoilValue } from 'recoil';
import { OGDialog, OGDialogTemplate } from '@librechat/client';
import {
EToolResources,
EModelEndpoint,
defaultAgentCapabilities,
isDocumentSupportedProvider,
} from 'librechat-data-provider';
import {
ImageUpIcon,
FileSearch,
FileType2Icon,
FileImageIcon,
TerminalSquareIcon,
} from 'lucide-react';
import {
useAgentToolPermissions,
useAgentCapabilities,
@@ -34,22 +45,45 @@ const DragDropModal = ({ onOptionSelect, setShowModal, files, isVisible }: DragD
* Use definition for agents endpoint for ephemeral agents
* */
const capabilities = useAgentCapabilities(agentsConfig?.capabilities ?? defaultAgentCapabilities);
const { conversationId, agentId } = useDragDropContext();
const { conversationId, agentId, endpoint, endpointType } = useDragDropContext();
const ephemeralAgent = useRecoilValue(ephemeralAgentByConvoId(conversationId ?? ''));
const { fileSearchAllowedByAgent, codeAllowedByAgent, provider } = useAgentToolPermissions(
agentId,
ephemeralAgent,
);
const options = useMemo(() => {
const _options: FileOption[] = [];
const currentProvider = provider || endpoint;
// Check if provider supports document upload
if (isDocumentSupportedProvider(currentProvider || endpointType)) {
const isGoogleProvider = currentProvider === EModelEndpoint.google;
const validFileTypes = isGoogleProvider
? files.every(
(file) =>
file.type?.startsWith('image/') ||
file.type?.startsWith('video/') ||
file.type?.startsWith('audio/') ||
file.type === 'application/pdf',
)
: files.every((file) => file.type?.startsWith('image/') || file.type === 'application/pdf');
_options.push({
label: localize('com_ui_upload_provider'),
value: undefined,
icon: <FileImageIcon className="icon-md" />,
condition: validFileTypes,
});
} else {
// Only show image upload option if all files are images and provider doesn't support documents
_options.push({
label: localize('com_ui_upload_image_input'),
value: undefined,
icon: <ImageUpIcon className="icon-md" />,
condition: files.every((file) => file.type?.startsWith('image/')),
});
}
if (capabilities.fileSearchEnabled && fileSearchAllowedByAgent) {
_options.push({
label: localize('com_ui_upload_file_search'),
@@ -73,7 +107,16 @@ const DragDropModal = ({ onOptionSelect, setShowModal, files, isVisible }: DragD
}
return _options;
}, [
files,
localize,
provider,
endpoint,
endpointType,
capabilities,
codeAllowedByAgent,
fileSearchAllowedByAgent,
]);
if (!isVisible) {
return null;

View File

@@ -2,7 +2,12 @@ import React, { useMemo } from 'react';
import type { ModelSelectorProps } from '~/common';
import { ModelSelectorProvider, useModelSelectorContext } from './ModelSelectorContext';
import { ModelSelectorChatProvider } from './ModelSelectorChatContext';
import {
renderModelSpecs,
renderEndpoints,
renderSearchResults,
renderCustomGroups,
} from './components';
import { getSelectedIcon, getDisplayValue } from './utils';
import { CustomMenu as Menu } from './CustomMenu';
import DialogManager from './DialogManager';
@@ -86,8 +91,15 @@ function ModelSelectorContent() {
renderSearchResults(searchResults, localize, searchValue)
) : (
<>
{/* Render ungrouped modelSpecs (no group field) */}
{renderModelSpecs(
modelSpecs?.filter((spec) => !spec.group) || [],
selectedValues.modelSpec || '',
)}
{/* Render endpoints (will include grouped specs matching endpoint names) */}
{renderEndpoints(mappedEndpoints ?? [])}
{/* Render custom groups (specs with group field not matching any endpoint) */}
{renderCustomGroups(modelSpecs || [], mappedEndpoints ?? [])}
</>
)}
</Menu>

View File

@@ -0,0 +1,66 @@
import React from 'react';
import type { TModelSpec } from 'librechat-data-provider';
import { CustomMenu as Menu } from '../CustomMenu';
import { ModelSpecItem } from './ModelSpecItem';
import { useModelSelectorContext } from '../ModelSelectorContext';
interface CustomGroupProps {
groupName: string;
specs: TModelSpec[];
}
export function CustomGroup({ groupName, specs }: CustomGroupProps) {
const { selectedValues } = useModelSelectorContext();
const { modelSpec: selectedSpec } = selectedValues;
if (!specs || specs.length === 0) {
return null;
}
return (
<Menu
id={`custom-group-${groupName}-menu`}
key={`custom-group-${groupName}`}
className="transition-opacity duration-200 ease-in-out"
label={
<div className="group flex w-full flex-shrink cursor-pointer items-center justify-between rounded-xl px-1 py-1 text-sm">
<div className="flex items-center gap-2">
<span className="truncate text-left">{groupName}</span>
</div>
</div>
}
>
{specs.map((spec: TModelSpec) => (
<ModelSpecItem key={spec.name} spec={spec} isSelected={selectedSpec === spec.name} />
))}
</Menu>
);
}
export function renderCustomGroups(
modelSpecs: TModelSpec[],
mappedEndpoints: Array<{ value: string }>,
) {
// Get all endpoint values to exclude them from custom groups
const endpointValues = new Set(mappedEndpoints.map((ep) => ep.value));
// Group specs by their group field (excluding endpoint-matched groups and ungrouped)
const customGroups = modelSpecs.reduce(
(acc, spec) => {
if (!spec.group || endpointValues.has(spec.group)) {
return acc;
}
if (!acc[spec.group]) {
acc[spec.group] = [];
}
acc[spec.group].push(spec);
return acc;
},
{} as Record<string, TModelSpec[]>,
);
// Render each custom group
return Object.entries(customGroups).map(([groupName, specs]) => (
<CustomGroup key={groupName} groupName={groupName} specs={specs} />
));
}

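A walk-through of the grouping rules above with hypothetical specs and endpoints, showing where each spec ends up:

```js
const mappedEndpoints = [{ value: 'openAI' }, { value: 'anthropic' }];
const modelSpecs = [
  { name: 'fast' },                       // no group: rendered by renderModelSpecs
  { name: 'gpt', group: 'openAI' },       // group matches an endpoint: rendered inside that endpoint's menu
  { name: 'legal', group: 'Legal Team' }, // custom group: rendered by renderCustomGroups
];

// Same reduce as renderCustomGroups above.
const endpointValues = new Set(mappedEndpoints.map((ep) => ep.value));
const customGroups = modelSpecs.reduce((acc, spec) => {
  if (!spec.group || endpointValues.has(spec.group)) {
    return acc;
  }
  (acc[spec.group] = acc[spec.group] || []).push(spec);
  return acc;
}, {});
// => { 'Legal Team': [{ name: 'legal', group: 'Legal Team' }] }
```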
View File

@@ -2,10 +2,12 @@ import { useMemo } from 'react';
import { SettingsIcon } from 'lucide-react';
import { TooltipAnchor, Spinner } from '@librechat/client';
import { EModelEndpoint, isAgentsEndpoint, isAssistantsEndpoint } from 'librechat-data-provider';
import type { TModelSpec } from 'librechat-data-provider';
import type { Endpoint } from '~/common';
import { CustomMenu as Menu, CustomMenuItem as MenuItem } from '../CustomMenu';
import { useModelSelectorContext } from '../ModelSelectorContext';
import { renderEndpointModels } from './EndpointModelItem';
import { ModelSpecItem } from './ModelSpecItem';
import { filterModels } from '../utils';
import { useLocalize } from '~/hooks';
import { cn } from '~/utils';
@@ -57,6 +59,7 @@ export function EndpointItem({ endpoint }: EndpointItemProps) {
const {
agentsMap,
assistantsMap,
modelSpecs,
selectedValues,
handleOpenKeyDialog,
handleSelectEndpoint,
@@ -64,7 +67,19 @@ export function EndpointItem({ endpoint }: EndpointItemProps) {
setEndpointSearchValue,
endpointRequiresUserKey,
} = useModelSelectorContext();
const {
model: selectedModel,
endpoint: selectedEndpoint,
modelSpec: selectedSpec,
} = selectedValues;
// Filter modelSpecs for this endpoint (by group matching endpoint value)
const endpointSpecs = useMemo(() => {
if (!modelSpecs || !modelSpecs.length) {
return [];
}
return modelSpecs.filter((spec: TModelSpec) => spec.group === endpoint.value);
}, [modelSpecs, endpoint.value]);
const searchValue = endpointSearchValues[endpoint.value] || '';
const isUserProvided = useMemo(() => endpointRequiresUserKey(endpoint.value), [endpoint.value]);
@@ -138,10 +153,17 @@ export function EndpointItem({ endpoint }: EndpointItemProps) {
<div className="flex items-center justify-center p-2">
<Spinner />
</div>
) : (
<>
{/* Render modelSpecs for this endpoint */}
{endpointSpecs.map((spec: TModelSpec) => (
<ModelSpecItem key={spec.name} spec={spec} isSelected={selectedSpec === spec.name} />
))}
{/* Render endpoint models */}
{filteredModels
? renderEndpointModels(endpoint, endpoint.models || [], selectedModel, filteredModels)
: endpoint.models && renderEndpointModels(endpoint, endpoint.models, selectedModel)}
</>
)}
</Menu>
);

View File

@@ -36,38 +36,42 @@ export function EndpointModelItem({ modelId, endpoint, isSelected }: EndpointMod
<MenuItem
key={modelId}
onClick={() => handleSelectModel(endpoint, modelId ?? '')}
className="flex h-8 w-full cursor-pointer items-center justify-start rounded-lg px-3 py-2 text-sm"
className="flex w-full cursor-pointer items-center justify-between rounded-lg px-2 text-sm"
>
<div className="flex items-center gap-2">
<div className="flex w-full min-w-0 items-center gap-2 px-1 py-1">
{avatarUrl ? (
<div className="flex h-5 w-5 items-center justify-center overflow-hidden rounded-full">
<div className="flex h-5 w-5 flex-shrink-0 items-center justify-center overflow-hidden rounded-full">
<img src={avatarUrl} alt={modelName ?? ''} className="h-full w-full object-cover" />
</div>
) : (isAgentsEndpoint(endpoint.value) || isAssistantsEndpoint(endpoint.value)) &&
endpoint.icon ? (
<div className="flex h-5 w-5 items-center justify-center overflow-hidden rounded-full">
<div className="flex h-5 w-5 flex-shrink-0 items-center justify-center overflow-hidden rounded-full">
{endpoint.icon}
</div>
) : null}
-<span>{modelName}</span>
+<span className="truncate text-left">{modelName}</span>
+{isGlobal && (
+<EarthIcon className="ml-auto size-4 flex-shrink-0 self-center text-green-400" />
+)}
</div>
-{isGlobal && <EarthIcon className="ml-auto size-4 text-green-400" />}
{isSelected && (
-<svg
-width="16"
-height="16"
-viewBox="0 0 24 24"
-fill="none"
-xmlns="http://www.w3.org/2000/svg"
-className="block"
->
-<path
-fillRule="evenodd"
-clipRule="evenodd"
-d="M2 12C2 6.47715 6.47715 2 12 2C17.5228 2 22 6.47715 22 12C22 17.5228 17.5228 22 12 22C6.47715 22 2 17.5228 2 12ZM16.0755 7.93219C16.5272 8.25003 16.6356 8.87383 16.3178 9.32549L11.5678 16.0755C11.3931 16.3237 11.1152 16.4792 10.8123 16.4981C10.5093 16.517 10.2142 16.3973 10.0101 16.1727L7.51006 13.4227C7.13855 13.014 7.16867 12.3816 7.57733 12.0101C7.98598 11.6386 8.61843 11.6687 8.98994 12.0773L10.6504 13.9039L14.6822 8.17451C15 7.72284 15.6238 7.61436 16.0755 7.93219Z"
-fill="currentColor"
-/>
-</svg>
+<div className="flex-shrink-0 self-center">
+<svg
+width="16"
+height="16"
+viewBox="0 0 24 24"
+fill="none"
+xmlns="http://www.w3.org/2000/svg"
+className="block"
+>
+<path
+fillRule="evenodd"
+clipRule="evenodd"
+d="M2 12C2 6.47715 6.47715 2 12 2C17.5228 2 22 6.47715 22 12C22 17.5228 17.5228 22 12 22C6.47715 22 2 17.5228 2 12ZM16.0755 7.93219C16.5272 8.25003 16.6356 8.87383 16.3178 9.32549L11.5678 16.0755C11.3931 16.3237 11.1152 16.4792 10.8123 16.4981C10.5093 16.517 10.2142 16.3973 10.0101 16.1727L7.51006 13.4227C7.13855 13.014 7.16867 12.3816 7.57733 12.0101C7.98598 11.6386 8.61843 11.6687 8.98994 12.0773L10.6504 13.9039L14.6822 8.17451C15 7.72284 15.6238 7.61436 16.0755 7.93219Z"
+fill="currentColor"
+/>
+</svg>
+</div>
)}
</MenuItem>
);

View File

@@ -2,3 +2,4 @@ export * from './ModelSpecItem';
export * from './EndpointModelItem';
export * from './EndpointItem';
export * from './SearchResults';
+export * from './CustomGroup';

View File

@@ -1,8 +1,8 @@
+import { QueryKeys } from 'librechat-data-provider';
import { useQueryClient } from '@tanstack/react-query';
-import { QueryKeys, Constants } from 'librechat-data-provider';
import { TooltipAnchor, Button, NewChatIcon } from '@librechat/client';
-import type { TMessage } from 'librechat-data-provider';
import { useChatContext } from '~/Providers';
+import { clearMessagesCache } from '~/utils';
import { useLocalize } from '~/hooks';
export default function HeaderNewChat() {
@@ -15,10 +15,7 @@ export default function HeaderNewChat() {
window.open('/c/new', '_blank');
return;
}
-queryClient.setQueryData<TMessage[]>(
-[QueryKeys.messages, conversation?.conversationId ?? Constants.NEW_CONVO],
-[],
-);
+clearMessagesCache(queryClient, conversation?.conversationId);
queryClient.invalidateQueries([QueryKeys.messages]);
newConversation();
};
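The clearMessagesCache helper itself is not shown in this diff; judging from the inlined call it replaces, a minimal sketch of its plausible shape would be:

import type { QueryClient } from '@tanstack/react-query';
import { QueryKeys, Constants } from 'librechat-data-provider';
import type { TMessage } from 'librechat-data-provider';

// Sketch only: resets the cached message list for the given conversation,
// falling back to the new-conversation key exactly as the inlined code did.
export function clearMessagesCache(queryClient: QueryClient, conversationId?: string | null) {
  queryClient.setQueryData<TMessage[]>(
    [QueryKeys.messages, conversationId ?? Constants.NEW_CONVO],
    [],
  );
}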

View File

@@ -57,7 +57,7 @@ const Part = memo(
</>
);
} else if (part.type === ContentTypes.TEXT) {
-const text = typeof part.text === 'string' ? part.text : part.text.value;
+const text = typeof part.text === 'string' ? part.text : part.text?.value;
if (typeof text !== 'string') {
return null;
@@ -71,7 +71,7 @@ const Part = memo(
</Container>
);
} else if (part.type === ContentTypes.THINK) {
-const reasoning = typeof part.think === 'string' ? part.think : part.think.value;
+const reasoning = typeof part.think === 'string' ? part.think : part.think?.value;
if (typeof reasoning !== 'string') {
return null;
}

View File

@@ -37,7 +37,7 @@ const LogContent: React.FC<LogContentProps> = ({ output = '', renderImages, atta
attachments?.forEach((attachment) => {
const { width, height, filepath = null } = attachment as TFile & TAttachmentMetadata;
const isImage =
-imageExtRegex.test(attachment.filename) &&
+imageExtRegex.test(attachment.filename ?? '') &&
width != null &&
height != null &&
filepath != null;
@@ -56,21 +56,25 @@ const LogContent: React.FC<LogContentProps> = ({ output = '', renderImages, atta
const renderAttachment = (file: TAttachment) => {
const now = new Date();
-const expiresAt = typeof file.expiresAt === 'number' ? new Date(file.expiresAt) : null;
+const expiresAt =
+'expiresAt' in file && typeof file.expiresAt === 'number' ? new Date(file.expiresAt) : null;
const isExpired = expiresAt ? isAfter(now, expiresAt) : false;
+const filename = file.filename || '';
if (isExpired) {
-return `${file.filename} ${localize('com_download_expired')}`;
+return `${filename} ${localize('com_download_expired')}`;
}
+const filepath = file.filepath || '';
// const expirationText = expiresAt
// ? ` ${localize('com_download_expires', { 0: format(expiresAt, 'MM/dd/yy HH:mm') })}`
// : ` ${localize('com_click_to_download')}`;
return (
-<LogLink href={file.filepath} filename={file.filename}>
+<LogLink href={filepath} filename={filename}>
{'- '}
-{file.filename} {localize('com_click_to_download')}
+{filename} {localize('com_click_to_download')}
</LogLink>
);
};
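The 'expiresAt' in file guard narrows the attachment union before the typeof check, which is what makes the property access type-safe; an isolated illustration of the pattern, with simplified stand-in types rather than the real TAttachment:

// Simplified stand-ins for the attachment union.
type FileAttachment = { filename?: string; expiresAt?: number };
type ToolAttachment = { toolCallId: string };
type Attachment = FileAttachment | ToolAttachment;

function expiryOf(file: Attachment): Date | null {
  // `in` narrows `file` to FileAttachment; `typeof` then guards the value.
  return 'expiresAt' in file && typeof file.expiresAt === 'number'
    ? new Date(file.expiresAt)
    : null;
}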

View File

@@ -1,10 +1,12 @@
import React, { useMemo } from 'react';
+import { useAtomValue } from 'jotai';
import { useRecoilValue } from 'recoil';
import type { TMessageContentParts } from 'librechat-data-provider';
import type { TMessageProps, TMessageIcon } from '~/common';
import { useMessageHelpers, useLocalize, useAttachments } from '~/hooks';
import MessageIcon from '~/components/Chat/Messages/MessageIcon';
import ContentParts from './Content/ContentParts';
+import { fontSizeAtom } from '~/store/fontSize';
import SiblingSwitch from './SiblingSwitch';
import MultiMessage from './MultiMessage';
import HoverButtons from './HoverButtons';
@@ -36,7 +38,7 @@ export default function Message(props: TMessageProps) {
regenerateMessage,
} = useMessageHelpers(props);
-const fontSize = useRecoilValue(store.fontSize);
+const fontSize = useAtomValue(fontSizeAtom);
const maximizeChatSpace = useRecoilValue(store.maximizeChatSpace);
const { children, messageId = null, isCreatedByUser } = message ?? {};
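The new ~/store/fontSize module does not appear in this diff; assuming it persists the value to localStorage, a plausible minimal sketch using Jotai's atomWithStorage would be (the storage key and default value are guesses):

import { atomWithStorage } from 'jotai/utils';

// Hypothetical sketch of ~/store/fontSize: a persisted Tailwind text-size class.
export const fontSizeAtom = atomWithStorage<string>('fontSize', 'text-base');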

View File

@@ -1,10 +1,12 @@
import { useState } from 'react';
+import { useAtomValue } from 'jotai';
import { useRecoilValue } from 'recoil';
import { CSSTransition } from 'react-transition-group';
import type { TMessage } from 'librechat-data-provider';
import { useScreenshot, useMessageScrolling, useLocalize } from '~/hooks';
import ScrollToBottom from '~/components/Messages/ScrollToBottom';
import { MessagesViewProvider } from '~/Providers';
+import { fontSizeAtom } from '~/store/fontSize';
import MultiMessage from './MultiMessage';
import { cn } from '~/utils';
import store from '~/store';
@@ -15,7 +17,7 @@ function MessagesViewContent({
messagesTree?: TMessage[] | null;
}) {
const localize = useLocalize();
-const fontSize = useRecoilValue(store.fontSize);
+const fontSize = useAtomValue(fontSizeAtom);
const { screenshotTargetRef } = useScreenshot();
const scrollButtonPreference = useRecoilValue(store.showScrollButton);
const [currentEditId, setCurrentEditId] = useState<number | string | null>(-1);

View File

@@ -1,10 +1,12 @@
import { useMemo } from 'react';
+import { useAtomValue } from 'jotai';
import { useRecoilValue } from 'recoil';
import { useAuthContext, useLocalize } from '~/hooks';
import type { TMessageProps, TMessageIcon } from '~/common';
import MinimalHoverButtons from '~/components/Chat/Messages/MinimalHoverButtons';
import Icon from '~/components/Chat/Messages/MessageIcon';
import SearchContent from './Content/SearchContent';
+import { fontSizeAtom } from '~/store/fontSize';
import SearchButtons from './SearchButtons';
import SubRow from './SubRow';
import { cn } from '~/utils';
@@ -34,8 +36,8 @@ const MessageBody = ({ message, messageLabel, fontSize }) => (
);
export default function SearchMessage({ message }: Pick<TMessageProps, 'message'>) {
+const fontSize = useAtomValue(fontSizeAtom);
const UsernameDisplay = useRecoilValue<boolean>(store.UsernameDisplay);
-const fontSize = useRecoilValue(store.fontSize);
const { user } = useAuthContext();
const localize = useLocalize();

View File

@@ -1,4 +1,5 @@
import React, { useCallback, useMemo, memo } from 'react';
+import { useAtomValue } from 'jotai';
import { useRecoilValue } from 'recoil';
import { type TMessage } from 'librechat-data-provider';
import type { TMessageProps, TMessageIcon } from '~/common';
@@ -9,6 +10,7 @@ import HoverButtons from '~/components/Chat/Messages/HoverButtons';
import MessageIcon from '~/components/Chat/Messages/MessageIcon';
import { Plugin } from '~/components/Messages/Content';
import SubRow from '~/components/Chat/Messages/SubRow';
+import { fontSizeAtom } from '~/store/fontSize';
import { MessageContext } from '~/Providers';
import { useMessageActions } from '~/hooks';
import { cn, logger } from '~/utils';
@@ -58,8 +60,8 @@ const MessageRender = memo(
isMultiMessage,
setCurrentEditId,
});
+const fontSize = useAtomValue(fontSizeAtom);
const maximizeChatSpace = useRecoilValue(store.maximizeChatSpace);
-const fontSize = useRecoilValue(store.fontSize);
const handleRegenerateMessage = useCallback(() => regenerateMessage(), [regenerateMessage]);
const hasNoChildren = !(msg?.children?.length ?? 0);
@@ -97,7 +99,10 @@ const MessageRender = memo(
() =>
showCardRender && !isLatestMessage
? () => {
-logger.log(`Message Card click: Setting ${msg?.messageId} as latest message`);
+logger.log(
+'latest_message',
+`Message Card click: Setting ${msg?.messageId} as latest message`,
+);
logger.dir(msg);
setLatestMessage(msg!);
}

View File

@@ -28,6 +28,8 @@ const LoadingSpinner = memo(() => {
);
});
+LoadingSpinner.displayName = 'LoadingSpinner';
const DateLabel: FC<{ groupName: string }> = memo(({ groupName }) => {
const localize = useLocalize();
return (
@@ -74,6 +76,7 @@ const Conversations: FC<ConversationsProps> = ({
isLoading,
isSearchLoading,
}) => {
+const localize = useLocalize();
const isSmallScreen = useMediaQuery('(max-width: 768px)');
const convoHeight = isSmallScreen ? 44 : 34;
@@ -181,7 +184,7 @@ const Conversations: FC<ConversationsProps> = ({
{isSearchLoading ? (
<div className="flex flex-1 items-center justify-center">
<Spinner className="text-text-primary" />
<span className="ml-2 text-text-primary">Loading...</span>
<span className="ml-2 text-text-primary">{localize('com_ui_loading')}</span>
</div>
) : (
<div className="flex-1">

View File

@@ -135,8 +135,9 @@ export default function Conversation({ conversation, retainView, toggleNav }: Co
'group relative flex h-12 w-full items-center rounded-lg transition-colors duration-200 md:h-9',
isActiveConvo ? 'bg-surface-active-alt' : 'hover:bg-surface-active-alt',
)}
role="listitem"
tabIndex={0}
role="button"
tabIndex={renaming ? -1 : 0}
aria-label={`${title || localize('com_ui_untitled')} conversation`}
onClick={(e) => {
if (renaming) {
return;
@@ -149,7 +150,8 @@ export default function Conversation({ conversation, retainView, toggleNav }: Co
if (renaming) {
return;
}
-if (e.key === 'Enter') {
+if (e.key === 'Enter' || e.key === ' ') {
+e.preventDefault();
handleNavigation(false);
}
}}

View File

@@ -40,8 +40,7 @@ const ConvoLink: React.FC<ConvoLinkProps> = ({
e.stopPropagation();
onRename();
}}
role="button"
aria-label={isSmallScreen ? undefined : title || localize('com_ui_untitled')}
aria-label={title || localize('com_ui_untitled')}
>
{title || localize('com_ui_untitled')}
</div>

View File

@@ -201,6 +201,7 @@ function ConvoOptions({
<Menu.MenuButton
id={`conversation-menu-${conversationId}`}
aria-label={localize('com_nav_convo_menu_options')}
+aria-readonly={undefined}
className={cn(
'inline-flex h-7 w-7 items-center justify-center gap-2 rounded-md border-none p-0 text-sm font-medium ring-ring-primary transition-all duration-200 ease-in-out focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-offset-2 disabled:opacity-50',
isActiveConvo === true || isPopoverActive

View File

@@ -1,4 +1,5 @@
import React, { useState, useEffect } from 'react';
+import { useRecoilValue } from 'recoil';
import { QRCodeSVG } from 'qrcode.react';
import { Copy, CopyCheck } from 'lucide-react';
import { useGetSharedLinkQuery } from 'librechat-data-provider/react-query';
@@ -6,6 +7,7 @@ import { OGDialogTemplate, Button, Spinner, OGDialog } from '@librechat/client';
import { useLocalize, useCopyToClipboard } from '~/hooks';
import SharedLinkButton from './SharedLinkButton';
import { cn } from '~/utils';
+import store from '~/store';
export default function ShareButton({
conversationId,
@@ -24,8 +26,9 @@ export default function ShareButton({
const [showQR, setShowQR] = useState(false);
const [sharedLink, setSharedLink] = useState('');
const [isCopying, setIsCopying] = useState(false);
-const { data: share, isLoading } = useGetSharedLinkQuery(conversationId);
const copyLink = useCopyToClipboard({ text: sharedLink });
+const latestMessage = useRecoilValue(store.latestMessageFamily(0));
+const { data: share, isLoading } = useGetSharedLinkQuery(conversationId);
useEffect(() => {
if (share?.shareId !== undefined) {
@@ -39,6 +42,7 @@ export default function ShareButton({
<SharedLinkButton
share={share}
conversationId={conversationId}
+targetMessageId={latestMessage?.messageId}
setShareDialogOpen={onOpenChange}
showQR={showQR}
setShowQR={setShowQR}

View File

@@ -21,6 +21,7 @@ import { useLocalize } from '~/hooks';
export default function SharedLinkButton({
share,
conversationId,
+targetMessageId,
setShareDialogOpen,
showQR,
setShowQR,
@@ -28,6 +29,7 @@ export default function SharedLinkButton({
}: {
share: TSharedLinkGetResponse | undefined;
conversationId: string;
+targetMessageId?: string;
setShareDialogOpen: React.Dispatch<React.SetStateAction<boolean>>;
showQR: boolean;
setShowQR: (showQR: boolean) => void;
@@ -86,7 +88,7 @@ export default function SharedLinkButton({
};
const createShareLink = async () => {
-const share = await mutate({ conversationId });
+const share = await mutate({ conversationId, targetMessageId });
const newLink = generateShareLink(share.shareId);
setSharedLink(newLink);
};
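Taken together with the ShareButton change above, the share mutation's payload now optionally pins the share to a specific message; a sketch of the payload shape this diff implies (the interface name is illustrative, not from the source):

// Assumed shape; CreateSharePayload is a hypothetical name.
interface CreateSharePayload {
  conversationId: string;
  targetMessageId?: string; // when set, the share is anchored to this message
}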

View File

@@ -1,16 +1,33 @@
import React, { useState } from 'react';
import { useForm, FormProvider } from 'react-hook-form';
-import { OGDialogTemplate, OGDialog, Dropdown, useToastContext } from '@librechat/client';
+import {
+OGDialog,
+OGDialogContent,
+OGDialogHeader,
+OGDialogTitle,
+OGDialogFooter,
+Dropdown,
+useToastContext,
+Button,
+Label,
+OGDialogTrigger,
+Spinner,
+} from '@librechat/client';
import { EModelEndpoint, alternateName, isAssistantsEndpoint } from 'librechat-data-provider';
+import {
+useRevokeAllUserKeysMutation,
+useRevokeUserKeyMutation,
+} from 'librechat-data-provider/react-query';
import type { TDialogProps } from '~/common';
import { useGetEndpointsQuery } from '~/data-provider';
-import { RevokeKeysButton } from '~/components/Nav';
import { useUserKey, useLocalize } from '~/hooks';
+import { NotificationSeverity } from '~/common';
import CustomConfig from './CustomEndpoint';
import GoogleConfig from './GoogleConfig';
import OpenAIConfig from './OpenAIConfig';
import OtherConfig from './OtherConfig';
import HelpText from './HelpText';
+import { logger } from '~/utils';
const endpointComponents = {
[EModelEndpoint.google]: GoogleConfig,
@@ -42,6 +59,94 @@ const EXPIRY = {
NEVER: { label: 'never', value: 0 },
};
+const RevokeKeysButton = ({
+endpoint,
+disabled,
+setDialogOpen,
+}: {
+endpoint: string;
+disabled: boolean;
+setDialogOpen: (open: boolean) => void;
+}) => {
+const localize = useLocalize();
+const [open, setOpen] = useState(false);
+const { showToast } = useToastContext();
+const revokeKeyMutation = useRevokeUserKeyMutation(endpoint);
+const revokeKeysMutation = useRevokeAllUserKeysMutation();
+const handleSuccess = () => {
+showToast({
+message: localize('com_ui_revoke_key_success'),
+status: NotificationSeverity.SUCCESS,
+});
+if (!setDialogOpen) {
+return;
+}
+setDialogOpen(false);
+};
+const handleError = () => {
+showToast({
+message: localize('com_ui_revoke_key_error'),
+status: NotificationSeverity.ERROR,
+});
+};
+const onClick = () => {
+revokeKeyMutation.mutate(
+{},
+{
+onSuccess: handleSuccess,
+onError: handleError,
+},
+);
+};
+const isLoading = revokeKeyMutation.isLoading || revokeKeysMutation.isLoading;
+return (
+<div className="flex items-center justify-between">
+<OGDialog open={open} onOpenChange={setOpen}>
+<OGDialogTrigger asChild>
+<Button
+variant="destructive"
+className="flex items-center justify-center rounded-lg transition-colors duration-200"
+onClick={() => setOpen(true)}
+disabled={disabled}
+>
+{localize('com_ui_revoke')}
+</Button>
+</OGDialogTrigger>
+<OGDialogContent className="max-w-[450px]">
+<OGDialogHeader>
+<OGDialogTitle>{localize('com_ui_revoke_key_endpoint', { 0: endpoint })}</OGDialogTitle>
+</OGDialogHeader>
+<div className="py-4">
+<Label className="text-left text-sm font-medium">
+{localize('com_ui_revoke_key_confirm')}
+</Label>
+</div>
+<OGDialogFooter>
+<Button variant="outline" onClick={() => setOpen(false)}>
+{localize('com_ui_cancel')}
+</Button>
+<Button
+variant="destructive"
+onClick={onClick}
+disabled={isLoading}
+className="bg-destructive text-white transition-all duration-200 hover:bg-destructive/80"
+>
+{isLoading ? <Spinner /> : localize('com_ui_revoke')}
+</Button>
+</OGDialogFooter>
+</OGDialogContent>
+</OGDialog>
+</div>
+);
+};
const SetKeyDialog = ({
open,
onOpenChange,
@@ -83,7 +188,7 @@ const SetKeyDialog = ({
const submit = () => {
const selectedOption = expirationOptions.find((option) => option.label === expiresAtLabel);
-let expiresAt;
+let expiresAt: number | null;
if (selectedOption?.value === 0) {
expiresAt = null;
@@ -92,8 +197,20 @@ const SetKeyDialog = ({
}
const saveKey = (key: string) => {
-saveUserKey(key, expiresAt);
-onOpenChange(false);
+try {
+saveUserKey(key, expiresAt);
+showToast({
+message: localize('com_ui_save_key_success'),
+status: NotificationSeverity.SUCCESS,
+});
+onOpenChange(false);
+} catch (error) {
+logger.error('Error saving user key:', error);
+showToast({
+message: localize('com_ui_save_key_error'),
+status: NotificationSeverity.ERROR,
+});
+}
};
if (formSet.has(endpoint) || formSet.has(endpointType ?? '')) {
@@ -148,6 +265,14 @@ const SetKeyDialog = ({
return;
}
+if (!userKey.trim()) {
+showToast({
+message: localize('com_ui_key_required'),
+status: NotificationSeverity.ERROR,
+});
+return;
+}
saveKey(userKey);
setUserKey('');
};
@@ -159,56 +284,54 @@ const SetKeyDialog = ({
return (
<OGDialog open={open} onOpenChange={onOpenChange}>
-<OGDialogTemplate
-title={`${localize('com_endpoint_config_key_for')} ${alternateName[endpoint] ?? endpoint}`}
-className="w-11/12 max-w-2xl"
-showCancelButton={false}
-main={
-<div className="grid w-full items-center gap-2">
-<small className="text-red-600">
-{expiryTime === 'never'
-? localize('com_endpoint_config_key_never_expires')
-: `${localize('com_endpoint_config_key_encryption')} ${new Date(
-expiryTime ?? 0,
-).toLocaleString()}`}
-</small>
-<Dropdown
-label="Expires "
-value={expiresAtLabel}
-onChange={handleExpirationChange}
-options={expirationOptions.map((option) => option.label)}
-sizeClasses="w-[185px]"
-portal={false}
+<OGDialogContent className="w-11/12 max-w-2xl">
+<OGDialogHeader>
+<OGDialogTitle>
+{`${localize('com_endpoint_config_key_for')} ${alternateName[endpoint] ?? endpoint}`}
+</OGDialogTitle>
+</OGDialogHeader>
+<div className="grid w-full items-center gap-2 py-4">
+<small className="text-red-600">
+{expiryTime === 'never'
+? localize('com_endpoint_config_key_never_expires')
+: `${localize('com_endpoint_config_key_encryption')} ${new Date(
+expiryTime ?? 0,
+).toLocaleString()}`}
+</small>
+<Dropdown
+label="Expires "
+value={expiresAtLabel}
+onChange={handleExpirationChange}
+options={expirationOptions.map((option) => option.label)}
+sizeClasses="w-[185px]"
+portal={false}
+/>
+<div className="mt-2" />
+<FormProvider {...methods}>
+<EndpointComponent
+userKey={userKey}
+setUserKey={setUserKey}
+endpoint={
+endpoint === EModelEndpoint.gptPlugins && (config?.azure ?? false)
+? EModelEndpoint.azureOpenAI
+: endpoint
+}
+userProvideURL={userProvideURL}
+/>
-<div className="mt-2" />
-<FormProvider {...methods}>
-<EndpointComponent
-userKey={userKey}
-setUserKey={setUserKey}
-endpoint={
-endpoint === EModelEndpoint.gptPlugins && (config?.azure ?? false)
-? EModelEndpoint.azureOpenAI
-: endpoint
-}
-userProvideURL={userProvideURL}
-/>
-</FormProvider>
-<HelpText endpoint={endpoint} />
-</div>
-}
-selection={{
-selectHandler: submit,
-selectClasses: 'btn btn-primary',
-selectText: localize('com_ui_submit'),
-}}
-leftButtons={
+</FormProvider>
+<HelpText endpoint={endpoint} />
+</div>
+<OGDialogFooter>
<RevokeKeysButton
endpoint={endpoint}
disabled={!(expiryTime ?? '')}
setDialogOpen={onOpenChange}
/>
-}
-/>
+<Button variant="submit" onClick={submit}>
+{localize('com_ui_submit')}
+</Button>
+</OGDialogFooter>
+</OGDialogContent>
</OGDialog>
);
};

Some files were not shown because too many files have changed in this diff.