Compare commits

..

3 Commits

Author SHA1 Message Date
Danny Avila
4f9c50e9fe 🎨 style: Theming and Consistency Improvements for Agent Marketplace
style: AccessRolesPicker to use DropdownPopup, theming, import order, localization

refactor: Update localization keys for Agent Marketplace in NewChat component, remove duplicate key

style: Adjust layout and font size in NewChat component for Agent Marketplace button

style: theming in AgentGrid

style: Update theming and text colors across Agent components for improved consistency

chore: import order

style: Replace Dialog with OGDialog and update content components in AgentDetail

refactor: Simplify AgentDetail component by removing dropdown menu and replacing it with a copy link button

style: Enhance scrollbar visibility and layout in AgentMarketplace and CategoryTabs components

style: Adjust layout in AgentMarketplace component by removing unnecessary padding from the container

style: Refactor layout in AgentMarketplace component by reorganizing hero section and sticky wrapper for improved structure and a collapsible header effect

style: Improve responsiveness and layout in AgentMarketplace component by adjusting header visibility and modifying container styles based on screen size

fix: Update localization key for no categories message in CategoryTabs component and corresponding test

style: Add className prop to OpenSidebar component for improved styling flexibility and update Header to utilize it for responsive design

style: Enhance layout and scrolling behavior in CategoryTabs component by adding scroll snap properties and adjusting class names for improved user experience
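
A minimal sketch of the scroll-snap approach (assuming Tailwind utility classes, which the client uses; the component shape is illustrative):

  // CategoryTabs sketch: horizontal strip where each tab snaps into place
  export function CategoryTabsStrip({ categories }: { categories: string[] }) {
    return (
      <div className="flex snap-x snap-mandatory gap-2 overflow-x-auto">
        {categories.map((category) => (
          <button key={category} className="snap-start whitespace-nowrap px-3 py-1">
            {category}
          </button>
        ))}
      </div>
    );
  }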

style: Update AgentGrid component layout and skeleton structure for improved visual consistency and responsiveness
2025-07-24 23:54:16 -04:00
Praneeth
1b45cc47c0 🏪 feat: Agent Marketplace
bugfix: Enhance Agent and AgentCategory schemas with new fields for category, support contact, and promotion status

refactored and moved agent category methods and schema to data-schemas package

🔧 fix: Merge and Rebase Conflicts

- Move AgentCategory from api/models to @packages/data-schemas structure
  - Add schema, types, methods, and model following codebase conventions
  - Implement auto-seeding of default categories during AppService startup
  - Update marketplace controller to use new data-schemas methods
  - Remove old model file and standalone seed script
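
The auto-seeding above can be made idempotent with upserts — a minimal sketch (ensureDefaultCategories and the com_agents_category_* label keys appear elsewhere in this PR; the default entries and field names here are illustrative):

  import type { Model } from 'mongoose';

  interface IAgentCategory {
    value: string;
    label: string; // i18n key, e.g. 'com_agents_category_general'
    description?: string;
    order?: number;
  }

  /** Safe to run on every AppService startup: $setOnInsert only writes new docs. */
  export async function ensureDefaultCategories(AgentCategory: Model<IAgentCategory>) {
    const defaults: IAgentCategory[] = [
      { value: 'general', label: 'com_agents_category_general', order: 0 },
      { value: 'finance', label: 'com_agents_category_finance', order: 1 },
    ];
    await Promise.all(
      defaults.map((c) =>
        AgentCategory.updateOne({ value: c.value }, { $setOnInsert: c }, { upsert: true }),
      ),
    );
  }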

refactor: unify agent marketplace to single endpoint with cursor pagination

  - Replace multiple marketplace routes with unified /marketplace endpoint
  - Add query string controls: category, search, limit, cursor, promoted, requiredPermission
  - Implement cursor-based pagination replacing page-based system
  - Integrate ACL permissions for proper access control
  - Fix ObjectId constructor error in Agent model
  - Update React components to use unified useGetMarketplaceAgentsQuery hook
  - Enhance type safety and remove deprecated useDynamicAgentQuery
  - Update tests for new marketplace architecture
  - Known issues: "see more" button after category switching + unit tests
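
A minimal sketch of the cursor pagination described above (the query-string controls are from this PR; the Agent model access and field names are assumptions, and ACL/requiredPermission conditions would be merged into the same filter):

  import type { Model } from 'mongoose';

  interface ListParams {
    category?: string;
    search?: string;
    promoted?: boolean;
    cursor?: string; // _id of the last item from the previous page
    limit?: number;
  }

  export async function listMarketplaceAgents(Agent: Model<any>, params: ListParams) {
    const { category, search, promoted, cursor, limit = 20 } = params;
    const filter: Record<string, unknown> = {};
    if (category) filter.category = category;
    if (promoted) filter.is_promoted = true; // field name assumed
    if (search) filter.name = { $regex: search, $options: 'i' };
    if (cursor) filter._id = { $gt: cursor };

    // Fetch one extra row to learn whether another page exists
    const rows = await Agent.find(filter).sort({ _id: 1 }).limit(limit + 1).lean();
    const hasMore = rows.length > limit;
    const data = hasMore ? rows.slice(0, limit) : rows;
    return { data, nextCursor: hasMore ? String(data[data.length - 1]._id) : null };
  }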

feat: add icon property to ProcessedAgentCategory interface

- Add useMarketplaceAgentsInfiniteQuery and useGetAgentCategoriesQuery to client/src/data-provider/Agents/
  - Replace manual pagination in AgentGrid with infinite query pattern
  - Update imports to use local data provider instead of librechat-data-provider
  - Add proper permission handling with PERMISSION_BITS.VIEW/EDIT constants
  - Improve agent access control by adding requiredPermission validation in backend
  - Remove manual cursor/state management in favor of infinite query built-ins
  - Maintain existing search and category filtering functionality
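
The infinite-query pattern replacing manual cursor state, sketched (hook and endpoint names are from this PR; the params and response shape are assumed to match the cursor sketch above):

  import { useInfiniteQuery } from '@tanstack/react-query';

  type Agent = { id: string; name: string };
  type AgentPage = { data: Agent[]; nextCursor: string | null };

  export function useMarketplaceAgentsInfiniteQuery(params: { category?: string; search?: string }) {
    return useInfiniteQuery<AgentPage>({
      queryKey: ['marketplace-agents', params],
      queryFn: async ({ pageParam }) => {
        const qs = new URLSearchParams();
        if (params.category) qs.set('category', params.category);
        if (params.search) qs.set('search', params.search);
        if (pageParam) qs.set('cursor', String(pageParam));
        const res = await fetch(`/api/agents?${qs.toString()}`);
        return res.json();
      },
      getNextPageParam: (last) => last.nextCursor ?? undefined,
    });
  }

A "see more" button then just calls the hook's fetchNextPage(), with hasNextPage derived from nextCursor — which is the built-in state management replacing the manual cursor handling.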

refactor: consolidate agent marketplace endpoints into main agents API and improve data management consistency

  - Remove dedicated marketplace controller and routes, merging functionality into main agents v1 API
  - Add countPromotedAgents function to Agent model for promoted agents count
  - Enhance getListAgents handler with marketplace filtering (category, search, promoted status)
  - Move getAgentCategories from marketplace to v1 controller with same functionality
  - Update agent mutations to invalidate marketplace queries and handle multiple permission levels
  - Improve cache management by updating all agent query variants (VIEW/EDIT permissions)
  - Consolidate agent data access patterns for better maintainability and consistency
  - Remove duplicate marketplace route definitions and middleware

selected view-only agents injected in the dropdown

fix: remove minlength validation for support contact name in agent schema

feat: add validation and error messages for agent name in AgentConfig and AgentPanel

fix: update agent permission check logic in AgentPanel to simplify condition

Fix linting WIP

Fix Unit tests WIP

ESLint fixes

eslint fix

refactor: enhance isDuplicateVersion function in Agent model for improved comparison logic

- Introduced handling for undefined/null values in array and object comparisons.
- Normalized array comparisons to treat undefined/null as empty arrays.
- Added deep comparison for objects and improved handling of primitive values.
- Enhanced projectIds comparison to ensure consistent MongoDB ObjectId handling.
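
A sketch of the comparison rules listed above (illustrative helpers, not the actual isDuplicateVersion code):

  // undefined/null arrays compare as empty
  function normalizedArray<T>(arr?: T[] | null): T[] {
    return arr ?? [];
  }

  // deep compare for plain objects/arrays; primitives fall through to ===
  function deepEqual(a: unknown, b: unknown): boolean {
    if (a === b) return true;
    if (a == null || b == null) return a == b; // treats null and undefined alike
    if (typeof a !== 'object' || typeof b !== 'object') return false;
    const ka = Object.keys(a);
    const kb = Object.keys(b);
    if (ka.length !== kb.length) return false;
    return ka.every((k) =>
      deepEqual((a as Record<string, unknown>)[k], (b as Record<string, unknown>)[k]),
    );
  }

  // ObjectIds stringified first, so ObjectId-vs-string comparisons behave consistently
  function sameProjectIds(a?: unknown[] | null, b?: unknown[] | null): boolean {
    const sa = normalizedArray(a).map(String).sort();
    const sb = normalizedArray(b).map(String).sort();
    return sa.length === sb.length && sa.every((v, i) => v === sb[i]);
  }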

refactor: remove redundant properties from IAgent interface in agent schema

chore: update localization for agent detail component and clean up imports

ci: update access middleware tests

chore: remove unused PermissionTypes import from Role model

ci: update AclEntry model tests

ci: update button accessibility labels in AgentDetail tests

refactor: update exhaustive dep. lint warning

🔧 fix: Fixed agent actions access

feat: Add role-level permissions for agent sharing people picker

  - Add PEOPLE_PICKER permission type with VIEW_USERS and VIEW_GROUPS permissions
  - Create custom middleware for query-aware permission validation
  - Implement permission-based type filtering in PeoplePicker component
  - Hide people picker UI when user lacks permissions, show only public toggle
  - Support granular access: users-only, groups-only, or mixed search modes
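
A sketch of the query-aware middleware (PEOPLE_PICKER, VIEW_USERS, and VIEW_GROUPS are from this PR; the checkAccess helper and its signature are assumptions):

  import type { NextFunction, Request, Response } from 'express';

  declare function checkAccess(user: unknown, type: 'PEOPLE_PICKER', perm: string): boolean; // assumed helper

  const REQUIRED: Record<string, string[]> = {
    user: ['VIEW_USERS'],
    group: ['VIEW_GROUPS'],
    mixed: ['VIEW_USERS', 'VIEW_GROUPS'],
  };

  /** Which permission is required depends on the ?type= query param. */
  export function canSearchPrincipals(req: Request, res: Response, next: NextFunction) {
    const user = (req as Request & { user?: unknown }).user;
    const type = typeof req.query.type === 'string' ? req.query.type : 'mixed';
    const required = REQUIRED[type] ?? REQUIRED.mixed;
    if (!required.every((perm) => checkAccess(user, 'PEOPLE_PICKER', perm))) {
      return res.status(403).json({ message: 'Insufficient people picker permissions' });
    }
    next();
  }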

refactor: Replace marketplace interface config with permission-based system

  - Add MARKETPLACE permission type to handle marketplace access control
  - Update interface configuration to use role-based marketplace settings (admin/user)
  - Replace direct marketplace boolean config with permission-based checks
  - Modify frontend components to use marketplace permissions instead of interface config
  - Update agent query hooks to use marketplace permissions for determining permission levels
  - Add marketplace configuration structure similar to peoplePicker in YAML config
  - Backend now sets MARKETPLACE permissions based on interface configuration
  - When marketplace enabled: users get agents with EDIT permissions in dropdown lists (builder mode)
  - When marketplace disabled: users get agents with VIEW permissions in dropdown lists (browse mode)
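
The resulting client-side selection, sketched (PERMISSION_BITS is from this PR; the bit values are assumptions):

  const PERMISSION_BITS = { VIEW: 1, EDIT: 2 } as const; // bit values assumed

  /** Marketplace on → builder mode (EDIT) in dropdowns; off → browse mode (VIEW). */
  export function dropdownPermissionLevel(hasMarketplaceAccess: boolean): number {
    return hasMarketplaceAccess ? PERMISSION_BITS.EDIT : PERMISSION_BITS.VIEW;
  }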

🔧 fix: Redirect to New Chat if No Marketplace Access and Required Agent Name Placeholder (#8213)

* Fix: Fix the redirect to new chat page if access to marketplace is denied

* Fixed the required agent name placeholder

---------

Co-authored-by: Atef Bellaaj <slalom.bellaaj@external.daimlertruck.com>

chore: fix tests, remove unnecessary imports

refactor: Implement permission checks for file access via agents

- Updated `hasAccessToFilesViaAgent` to utilize permission checks for VIEW and EDIT access.
- Replaced project-based access validation with permission-based checks.
- Enhanced tests to cover new permission logic and ensure proper access control for files associated with agents.
- Cleaned up imports and initialized models in test files for consistency.

refactor: Enhance test setup and cleanup for file access control

- Introduced modelsToCleanup array to track models added during tests for proper cleanup.
- Updated afterAll hooks in test files to ensure all collections are cleared and only added models are deleted.
- Improved consistency in model initialization across test files.
- Added comments for clarity on cleanup processes and test data management.

chore: Update Jest configuration and test setup for improved timeout handling

- Added a global test timeout of 30 seconds in jest.config.js.
- Configured jest.setTimeout in jestSetup.js to allow individual test overrides if needed.
- Enhanced test reliability by ensuring consistent timeout settings across all tests.
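
A sketch of the two pieces (testTimeout and jest.setTimeout are standard Jest APIs):

  // jest.config.js
  module.exports = {
    testTimeout: 30000, // global 30-second default applied to every test
  };

  // jestSetup.js — individual tests can still override with jest.setTimeout(...)
  jest.setTimeout(30000);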

refactor: Implement file access filtering based on agent permissions

- Introduced `filterFilesByAgentAccess` function to filter files based on user access through agents.
- Updated `getFiles` and `primeFiles` functions to utilize the new filtering logic.
- Moved `hasAccessToFilesViaAgent` function from the File model to permission services, adjusting imports accordingly
- Enhanced tests to ensure proper access control and filtering behavior for files associated with agents.
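
A sketch of the filtering flow (filterFilesByAgentAccess and hasAccessToFilesViaAgent are named in this PR; their exact signatures are assumptions):

  declare function hasAccessToFilesViaAgent(
    userId: string,
    fileIds: string[],
    agentId: string,
  ): Promise<Set<string>>; // assumed: resolves to the subset of file IDs the user may access

  export async function filterFilesByAgentAccess<T extends { file_id: string }>(
    userId: string,
    agentId: string,
    files: T[],
  ): Promise<T[]> {
    const accessible = await hasAccessToFilesViaAgent(
      userId,
      files.map((f) => f.file_id),
      agentId,
    );
    return files.filter((f) => accessible.has(f.file_id));
  }

getFiles and primeFiles then pass their results through this filter.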

fix: make support_contact field a nested object rather than a sub-document

refactor: Update support_contact field initialization in agent model

- Removed handling for empty support_contact object in createAgent function.
- Changed default value of support_contact in agent schema to undefined.
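
The intended shape after these changes, sketched as types (illustrative, not the schema source):

  interface SupportContact {
    name?: string; // minlength validation removed (see fix above)
    email?: string; // empty string allowed (see validation feat below)
  }

  interface IAgent {
    // Plain nested object rather than a Mongoose sub-document: no _id, no
    // sub-document casting, and a default of undefined so the field is simply
    // absent unless explicitly provided
    support_contact?: SupportContact;
  }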

test: Add comprehensive tests for support_contact field handling and versioning

refactor: remove unused avatar upload mutation field and add informational toast for success

chore: add missing SidePanelProvider for AgentMarketplace and organize imports

fix: resolve agent selection race condition in marketplace HandleStartChat
- Set agent in localStorage before newConversation to prevent useSelectorEffects from auto-selecting previous agent
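
A sketch of the ordering fix (the localStorage key and the newConversation signature are assumptions):

  function handleStartChat(
    agentId: string,
    newConversation: (opts: { agent_id: string }) => void,
  ) {
    // Write first: useSelectorEffects reads this key while the new conversation
    // mounts, so writing it afterwards would re-select the previous agent.
    localStorage.setItem('lastSelectedAgent', agentId); // key name illustrative
    newConversation({ agent_id: agentId });
  }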

fix: resolve agent dropdown showing raw ID instead of agent info from URL

  - Add proactive agent fetching when agent_id is present in URL parameters
  - Inject fetched agent into agents cache so dropdowns display proper name/avatar
  - Use useAgentsMap dependency to ensure proper cache initialization timing
  - Prevents raw agent IDs from showing in UI when visiting shared agent links
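
A sketch of the proactive fetch and cache injection (the query key, endpoint path, and response shape are assumptions):

  import { useEffect } from 'react';
  import { useQueryClient } from '@tanstack/react-query';

  type Agent = { id: string; name: string; avatar?: string };

  export function useHydrateUrlAgent(agentId?: string, agentsMap?: Record<string, Agent>) {
    const queryClient = useQueryClient();
    useEffect(() => {
      // Wait for the agents map cache to initialize; only fetch when missing
      if (!agentId || agentsMap == null || agentsMap[agentId] != null) {
        return;
      }
      fetch(`/api/agents/${agentId}`) // endpoint path assumed
        .then((res) => res.json())
        .then((agent: Agent) => {
          queryClient.setQueryData(['agents'], (prev?: Record<string, Agent>) => ({
            ...prev,
            [agentId]: agent,
          }));
        });
    }, [agentId, agentsMap, queryClient]);
  }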

Fix: Agents endpoint renamed to "My Agent" to avoid confusion with Marketplace agents.

chore: fix ESLint issues and Test Mocks

ci: update permissions structure in loadDefaultInterface tests

- Refactored permissions for MEMORY and added new permissions for MARKETPLACE and PEOPLE_PICKER.
- Ensured consistent structure for permissions across different types.

feat: update support_contact validation to allow empty email strings
2025-07-24 18:25:01 -04:00
Danny Avila
2f3bbc3b34 🔐 feat: Granular Role-based Permissions + Entra ID Group Discovery (#7804)
WIP: pre-granular-permissions commit

feat: Add category and support contact fields to Agent schema and UI components

Revert "feat: Add category and support contact fields to Agent schema and UI components"

This reverts commit c43a52b4c9.

Fix: Update import for renderHook in useAgentCategories.spec.tsx

fix: Update icon rendering in AgentCategoryDisplay tests to use empty spans

refactor: Improve category synchronization logic and clean up AgentConfig component

refactor: Remove unused UI flow translations from translation.json

feat: agent marketplace features

🔐 feat: Granular Role-based Permissions + Entra ID Group Discovery (#7804)
2025-07-24 10:47:35 -04:00
1499 changed files with 24021 additions and 84783 deletions

View File

@@ -15,20 +15,6 @@ HOST=localhost
PORT=3080
MONGO_URI=mongodb://127.0.0.1:27017/LibreChat
#The maximum number of connections in the connection pool. */
MONGO_MAX_POOL_SIZE=
#The minimum number of connections in the connection pool. */
MONGO_MIN_POOL_SIZE=
#The maximum number of connections that may be in the process of being established concurrently by the connection pool. */
MONGO_MAX_CONNECTING=
#The maximum number of milliseconds that a connection can remain idle in the pool before being removed and closed. */
MONGO_MAX_IDLE_TIME_MS=
#The maximum time in milliseconds that a thread can wait for a connection to become available. */
MONGO_WAIT_QUEUE_TIMEOUT_MS=
# Set to false to disable automatic index creation for all models associated with this connection. */
MONGO_AUTO_INDEX=
# Set to `false` to disable Mongoose automatically calling `createCollection()` on every model created on this connection. */
MONGO_AUTO_CREATE=
DOMAIN_CLIENT=http://localhost:3080
DOMAIN_SERVER=http://localhost:3080
@@ -40,13 +26,6 @@ NO_INDEX=true
# Defaulted to 1.
TRUST_PROXY=1
# Minimum password length for user authentication
# Default: 8
# Note: When using LDAP authentication, you may want to set this to 1
# to bypass local password validation, as LDAP servers handle their own
# password policies.
# MIN_PASSWORD_LENGTH=8
#===============#
# JSON Logging #
#===============#
@@ -163,10 +142,10 @@ GOOGLE_KEY=user_provided
# GOOGLE_AUTH_HEADER=true
# Gemini API (AI Studio)
# GOOGLE_MODELS=gemini-2.5-pro,gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.0-flash,gemini-2.0-flash-lite
# GOOGLE_MODELS=gemini-2.5-pro,gemini-2.5-flash,gemini-2.5-flash-lite-preview-06-17,gemini-2.0-flash,gemini-2.0-flash-lite
# Vertex AI
# GOOGLE_MODELS=gemini-2.5-pro,gemini-2.5-flash,gemini-2.5-flash-lite,gemini-2.0-flash-001,gemini-2.0-flash-lite-001
# GOOGLE_MODELS=gemini-2.5-pro,gemini-2.5-flash,gemini-2.5-flash-lite-preview-06-17,gemini-2.0-flash-001,gemini-2.0-flash-lite-001
# GOOGLE_TITLE_MODEL=gemini-2.0-flash-lite-001
@@ -196,7 +175,7 @@ GOOGLE_KEY=user_provided
#============#
OPENAI_API_KEY=user_provided
# OPENAI_MODELS=gpt-5,gpt-5-codex,gpt-5-mini,gpt-5-nano,o3-pro,o3,o4-mini,gpt-4.1,gpt-4.1-mini,gpt-4.1-nano,o3-mini,o1-pro,o1,gpt-4o,gpt-4o-mini
# OPENAI_MODELS=o1,o1-mini,o1-preview,gpt-4o,gpt-4.5-preview,chatgpt-4o-latest,gpt-4o-mini,gpt-3.5-turbo-0125,gpt-3.5-turbo-0301,gpt-3.5-turbo,gpt-4,gpt-4-0613,gpt-4-vision-preview,gpt-3.5-turbo-0613,gpt-3.5-turbo-16k-0613,gpt-4-0125-preview,gpt-4-turbo-preview,gpt-4-1106-preview,gpt-3.5-turbo-1106,gpt-3.5-turbo-instruct,gpt-3.5-turbo-instruct-0914,gpt-3.5-turbo-16k
DEBUG_OPENAI=false
@@ -459,15 +438,10 @@ OPENID_CALLBACK_URL=/oauth/openid/callback
OPENID_REQUIRED_ROLE=
OPENID_REQUIRED_ROLE_TOKEN_KIND=
OPENID_REQUIRED_ROLE_PARAMETER_PATH=
OPENID_ADMIN_ROLE=
OPENID_ADMIN_ROLE_PARAMETER_PATH=
OPENID_ADMIN_ROLE_TOKEN_KIND=
# Set to determine which user info property returned from OpenID Provider to store as the User's username
OPENID_USERNAME_CLAIM=
# Set to determine which user info property returned from OpenID Provider to store as the User's name
OPENID_NAME_CLAIM=
# Optional audience parameter for OpenID authorization requests
OPENID_AUDIENCE=
OPENID_BUTTON_LABEL=
OPENID_IMAGE_URL=
@@ -489,21 +463,6 @@ OPENID_ON_BEHALF_FLOW_USERINFO_SCOPE="user.read" # example for Scope Needed for
# Set to true to use the OpenID Connect end session endpoint for logout
OPENID_USE_END_SESSION_ENDPOINT=
#========================#
# SharePoint Integration #
#========================#
# Requires Entra ID (OpenID) authentication to be configured
# Enable SharePoint file picker in chat and agent panels
# ENABLE_SHAREPOINT_FILEPICKER=true
# SharePoint tenant base URL (e.g., https://yourtenant.sharepoint.com)
# SHAREPOINT_BASE_URL=https://yourtenant.sharepoint.com
# Microsoft Graph API And SharePoint scopes for file picker
# SHAREPOINT_PICKER_SHAREPOINT_SCOPE==https://yourtenant.sharepoint.com/AllSites.Read
# SHAREPOINT_PICKER_GRAPH_SCOPE=Files.Read.All
#========================#
# SAML
# Note: If OpenID is enabled, SAML authentication will be automatically disabled.
@@ -653,12 +612,6 @@ HELP_AND_FAQ_URL=https://librechat.ai
# Google tag manager id
#ANALYTICS_GTM_ID=user provided google tag manager id
# limit conversation file imports to a certain number of bytes in size to avoid the container
# maxing out memory limitations by unremarking this line and supplying a file size in bytes
# such as the below example of 250 mib
# CONVERSATION_IMPORT_MAX_FILE_SIZE_BYTES=262144000
#===============#
# REDIS Options #
#===============#
@@ -676,10 +629,6 @@ HELP_AND_FAQ_URL=https://librechat.ai
# REDIS_URI=rediss://127.0.0.1:6380
# REDIS_CA=/path/to/ca-cert.pem
# Elasticache may need to use an alternate dnsLookup for TLS connections. see "Special Note: Aws Elasticache Clusters with TLS" on this webpage: https://www.npmjs.com/package/ioredis
# Enable alternative dnsLookup for redis
# REDIS_USE_ALTERNATIVE_DNS_LOOKUP=true
# Redis authentication (if required)
# REDIS_USERNAME=your_redis_username
# REDIS_PASSWORD=your_redis_password
@@ -693,15 +642,6 @@ HELP_AND_FAQ_URL=https://librechat.ai
# Redis connection limits
# REDIS_MAX_LISTENERS=40
# Redis ping interval in seconds (0 = disabled, >0 = enabled)
# When set to a positive integer, Redis clients will ping the server at this interval to keep connections alive
# When unset or 0, no pinging is performed (recommended for most use cases)
# REDIS_PING_INTERVAL=300
# Force specific cache namespaces to use in-memory storage even when Redis is enabled
# Comma-separated list of CacheKeys (e.g., ROLES,MESSAGES)
# FORCED_IN_MEMORY_CACHE_NAMESPACES=ROLES,MESSAGES
#==================================================#
# Others #
#==================================================#
@@ -762,16 +702,3 @@ OPENWEATHER_API_KEY=
# JINA_API_KEY=your_jina_api_key
# or
# COHERE_API_KEY=your_cohere_api_key
#======================#
# MCP Configuration #
#======================#
# Treat 401/403 responses as OAuth requirement when no oauth metadata found
# MCP_OAUTH_ON_AUTH_ERROR=true
# Timeout for OAuth detection requests in milliseconds
# MCP_OAUTH_DETECTION_TIMEOUT=5000
# Cache connection status checks for this many milliseconds to avoid expensive verification
# MCP_CONNECTION_CHECK_TTL=60000

View File

@@ -147,7 +147,7 @@ Apply the following naming conventions to branches, labels, and other Git-relate
## 8. Module Import Conventions
- `npm` packages first,
- from longest line (top) to shortest (bottom)
- from shortest line (top) to longest (bottom)
- Followed by typescript types (pertains to data-provider and client workspaces)
- longest line (top) to shortest (bottom)
@@ -157,8 +157,6 @@ Apply the following naming conventions to branches, labels, and other Git-relate
- longest line (top) to shortest (bottom)
- imports with alias `~` treated the same as relative import with respect to line length
**Note:** ESLint will automatically enforce these import conventions when you run `npm run lint --fix` or through pre-commit hooks.
---
Please ensure that you adapt this summary to fit the specific context and nuances of your project.

View File

@@ -1,78 +0,0 @@
name: Cache Integration Tests
on:
pull_request:
branches:
- main
- dev
- release/*
paths:
- 'packages/api/src/cache/**'
- 'redis-config/**'
- '.github/workflows/cache-integration-tests.yml'
jobs:
cache_integration_tests:
name: Run Cache Integration Tests
timeout-minutes: 30
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Use Node.js 20.x
uses: actions/setup-node@v4
with:
node-version: 20
cache: 'npm'
- name: Install Redis tools
run: |
sudo apt-get update
sudo apt-get install -y redis-server redis-tools
- name: Start Single Redis Instance
run: |
redis-server --daemonize yes --port 6379
sleep 2
# Verify single Redis is running
redis-cli -p 6379 ping || exit 1
- name: Start Redis Cluster
working-directory: redis-config
run: |
chmod +x start-cluster.sh stop-cluster.sh
./start-cluster.sh
sleep 10
# Verify cluster is running
redis-cli -p 7001 cluster info || exit 1
redis-cli -p 7002 cluster info || exit 1
redis-cli -p 7003 cluster info || exit 1
- name: Install dependencies
run: npm ci
- name: Build packages
run: |
npm run build:data-provider
npm run build:data-schemas
npm run build:api
- name: Run cache integration tests
working-directory: packages/api
env:
NODE_ENV: test
USE_REDIS: true
REDIS_URI: redis://127.0.0.1:6379
REDIS_CLUSTER_URI: redis://127.0.0.1:7001,redis://127.0.0.1:7002,redis://127.0.0.1:7003
run: npm run test:cache:integration
- name: Stop Redis Cluster
if: always()
working-directory: redis-config
run: ./stop-cluster.sh || true
- name: Stop Single Redis Instance
if: always()
run: redis-cli -p 6379 shutdown || true

View File

@@ -1,11 +1,6 @@
name: Publish `@librechat/client` to NPM
on:
push:
branches:
- main
paths:
- 'packages/client/package.json'
workflow_dispatch:
inputs:
reason:
@@ -22,37 +17,16 @@ jobs:
- name: Use Node.js
uses: actions/setup-node@v4
with:
node-version: '20.x'
- name: Install client dependencies
run: cd packages/client && npm ci
- name: Build client
run: cd packages/client && npm run build
- name: Set up npm authentication
run: echo "//registry.npmjs.org/:_authToken=${{ secrets.PUBLISH_NPM_TOKEN }}" > ~/.npmrc
- name: Check version change
id: check
working-directory: packages/client
node-version: '18.x'
- name: Check if client package exists
run: |
PACKAGE_VERSION=$(node -p "require('./package.json').version")
PUBLISHED_VERSION=$(npm view @librechat/client version 2>/dev/null || echo "0.0.0")
if [ "$PACKAGE_VERSION" = "$PUBLISHED_VERSION" ]; then
echo "No version change, skipping publish"
echo "skip=true" >> $GITHUB_OUTPUT
if [ -d "packages/client" ]; then
echo "Client package directory found"
else
echo "Version changed, proceeding with publish"
echo "skip=false" >> $GITHUB_OUTPUT
echo "Client package directory not found - workflow ready for future use"
exit 0
fi
- name: Pack package
if: steps.check.outputs.skip != 'true'
working-directory: packages/client
run: npm pack
- name: Publish
if: steps.check.outputs.skip != 'true'
working-directory: packages/client
run: npm publish *.tgz --access public
- name: Placeholder for future publishing
run: echo "Client package publishing workflow is ready"

View File

@@ -1,4 +1,4 @@
name: Publish `librechat-data-provider` to NPM
name: Node.js Package
on:
push:
@@ -6,12 +6,6 @@ on:
- main
paths:
- 'packages/data-provider/package.json'
workflow_dispatch:
inputs:
reason:
description: 'Reason for manual trigger'
required: false
default: 'Manual publish requested'
jobs:
build:
@@ -20,7 +14,7 @@ jobs:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
node-version: 16
- run: cd packages/data-provider && npm ci
- run: cd packages/data-provider && npm run build
@@ -31,7 +25,7 @@ jobs:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
node-version: 16
registry-url: 'https://registry.npmjs.org'
- run: cd packages/data-provider && npm ci
- run: cd packages/data-provider && npm run build

View File

@@ -22,7 +22,7 @@ jobs:
- name: Use Node.js
uses: actions/setup-node@v4
with:
node-version: '20.x'
node-version: '18.x'
- name: Install dependencies
run: cd packages/data-schemas && npm ci

View File

@@ -0,0 +1,95 @@
name: Generate Release Changelog PR
on:
push:
tags:
- 'v*.*.*'
workflow_dispatch:
jobs:
generate-release-changelog-pr:
permissions:
contents: write # Needed for pushing commits and creating branches.
pull-requests: write
runs-on: ubuntu-latest
steps:
# 1. Checkout the repository (with full history).
- name: Checkout Repository
uses: actions/checkout@v4
with:
fetch-depth: 0
# 2. Generate the release changelog using our custom configuration.
- name: Generate Release Changelog
id: generate_release
uses: mikepenz/release-changelog-builder-action@v5.1.0
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
configuration: ".github/configuration-release.json"
owner: ${{ github.repository_owner }}
repo: ${{ github.event.repository.name }}
outputFile: CHANGELOG-release.md
# 3. Update the main CHANGELOG.md:
# - If it doesn't exist, create it with a basic header.
# - Remove the "Unreleased" section (if present).
# - Prepend the new release changelog above previous releases.
# - Remove all temporary files before committing.
- name: Update CHANGELOG.md
run: |
# Determine the release tag, e.g. "v1.2.3"
TAG=${GITHUB_REF##*/}
echo "Using release tag: $TAG"
# Ensure CHANGELOG.md exists; if not, create a basic header.
if [ ! -f CHANGELOG.md ]; then
echo "# Changelog" > CHANGELOG.md
echo "" >> CHANGELOG.md
echo "All notable changes to this project will be documented in this file." >> CHANGELOG.md
echo "" >> CHANGELOG.md
fi
echo "Updating CHANGELOG.md…"
# Remove the "Unreleased" section (from "## [Unreleased]" until the first occurrence of '---') if it exists.
if grep -q "^## \[Unreleased\]" CHANGELOG.md; then
awk '/^## \[Unreleased\]/{flag=1} flag && /^---/{flag=0; next} !flag' CHANGELOG.md > CHANGELOG.cleaned
else
cp CHANGELOG.md CHANGELOG.cleaned
fi
# Split the cleaned file into:
# - header.md: content before the first release header ("## [v...").
# - tail.md: content from the first release header onward.
awk '/^## \[v/{exit} {print}' CHANGELOG.cleaned > header.md
awk 'f{print} /^## \[v/{f=1; print}' CHANGELOG.cleaned > tail.md
# Combine header, the new release changelog, and the tail.
echo "Combining updated changelog parts..."
cat header.md CHANGELOG-release.md > CHANGELOG.md.new
echo "" >> CHANGELOG.md.new
cat tail.md >> CHANGELOG.md.new
mv CHANGELOG.md.new CHANGELOG.md
# Remove temporary files.
rm -f CHANGELOG.cleaned header.md tail.md CHANGELOG-release.md
echo "Final CHANGELOG.md content:"
cat CHANGELOG.md
# 4. Create (or update) the Pull Request with the updated CHANGELOG.md.
- name: Create Pull Request
uses: peter-evans/create-pull-request@v7
with:
token: ${{ secrets.GITHUB_TOKEN }}
sign-commits: true
commit-message: "chore: update CHANGELOG for release ${{ github.ref_name }}"
base: main
branch: "changelog/${{ github.ref_name }}"
reviewers: danny-avila
title: "📜 docs: Changelog for release ${{ github.ref_name }}"
body: |
**Description**:
- This PR updates the CHANGELOG.md by removing the "Unreleased" section and adding new release notes for release ${{ github.ref_name }} above previous releases.

View File

@@ -0,0 +1,107 @@
name: Generate Unreleased Changelog PR
on:
schedule:
- cron: "0 0 * * 1" # Runs every Monday at 00:00 UTC
workflow_dispatch:
jobs:
generate-unreleased-changelog-pr:
permissions:
contents: write # Needed for pushing commits and creating branches.
pull-requests: write
runs-on: ubuntu-latest
steps:
# 1. Checkout the repository on main.
- name: Checkout Repository on Main
uses: actions/checkout@v4
with:
ref: main
fetch-depth: 0
# 4. Get the latest version tag.
- name: Get Latest Tag
id: get_latest_tag
run: |
LATEST_TAG=$(git describe --tags $(git rev-list --tags --max-count=1) || echo "none")
echo "Latest tag: $LATEST_TAG"
echo "tag=$LATEST_TAG" >> $GITHUB_OUTPUT
# 5. Generate the Unreleased changelog.
- name: Generate Unreleased Changelog
id: generate_unreleased
uses: mikepenz/release-changelog-builder-action@v5.1.0
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
configuration: ".github/configuration-unreleased.json"
owner: ${{ github.repository_owner }}
repo: ${{ github.event.repository.name }}
outputFile: CHANGELOG-unreleased.md
fromTag: ${{ steps.get_latest_tag.outputs.tag }}
toTag: main
# 7. Update CHANGELOG.md with the new Unreleased section.
- name: Update CHANGELOG.md
id: update_changelog
run: |
# Create CHANGELOG.md if it doesn't exist.
if [ ! -f CHANGELOG.md ]; then
echo "# Changelog" > CHANGELOG.md
echo "" >> CHANGELOG.md
echo "All notable changes to this project will be documented in this file." >> CHANGELOG.md
echo "" >> CHANGELOG.md
fi
echo "Updating CHANGELOG.md…"
# Extract content before the "## [Unreleased]" (or first version header if missing).
if grep -q "^## \[Unreleased\]" CHANGELOG.md; then
awk '/^## \[Unreleased\]/{exit} {print}' CHANGELOG.md > CHANGELOG_TMP.md
else
awk '/^## \[v/{exit} {print}' CHANGELOG.md > CHANGELOG_TMP.md
fi
# Append the generated Unreleased changelog.
echo "" >> CHANGELOG_TMP.md
cat CHANGELOG-unreleased.md >> CHANGELOG_TMP.md
echo "" >> CHANGELOG_TMP.md
# Append the remainder of the original changelog (starting from the first version header).
awk 'f{print} /^## \[v/{f=1; print}' CHANGELOG.md >> CHANGELOG_TMP.md
# Replace the old file with the updated file.
mv CHANGELOG_TMP.md CHANGELOG.md
# Remove the temporary generated file.
rm -f CHANGELOG-unreleased.md
echo "Final CHANGELOG.md:"
cat CHANGELOG.md
# 8. Check if CHANGELOG.md has any updates.
- name: Check for CHANGELOG.md changes
id: changelog_changes
run: |
if git diff --quiet CHANGELOG.md; then
echo "has_changes=false" >> $GITHUB_OUTPUT
else
echo "has_changes=true" >> $GITHUB_OUTPUT
fi
# 9. Create (or update) the Pull Request only if there are changes.
- name: Create Pull Request
if: steps.changelog_changes.outputs.has_changes == 'true'
uses: peter-evans/create-pull-request@v7
with:
token: ${{ secrets.GITHUB_TOKEN }}
base: main
branch: "changelog/unreleased-update"
sign-commits: true
commit-message: "action: update Unreleased changelog"
title: "📜 docs: Unreleased Changelog"
body: |
**Description**:
- This PR updates the Unreleased section in CHANGELOG.md.
- It compares the current main branch with the latest version tag (determined as ${{ steps.get_latest_tag.outputs.tag }}),
regenerates the Unreleased changelog, removes any old Unreleased block, and inserts the new content.

View File

@@ -4,13 +4,12 @@ name: Build Helm Charts on Tag
on:
push:
tags:
- "chart-*"
- "*"
jobs:
release:
permissions:
contents: write
packages: write
runs-on: ubuntu-latest
steps:
- name: Checkout
@@ -27,49 +26,15 @@ jobs:
uses: azure/setup-helm@v4
env:
GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
- name: Build Subchart Deps
run: |
cd helm/librechat
helm dependency build
cd ../librechat-rag-api
helm dependency build
cd helm/librechat-rag-api
helm dependency build
- name: Get Chart Version
id: chart-version
run: |
CHART_VERSION=$(echo "${{ github.ref_name }}" | cut -d'-' -f2)
echo "CHART_VERSION=${CHART_VERSION}" >> "$GITHUB_OUTPUT"
# Log in to GitHub Container Registry
- name: Log in to GitHub Container Registry
uses: docker/login-action@v3
- name: Run chart-releaser
uses: helm/chart-releaser-action@v1.6.0
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# Run Helm OCI Charts Releaser
# This is for the librechat chart
- name: Release Helm OCI Charts for librechat
uses: appany/helm-oci-chart-releaser@v0.4.2
with:
name: librechat
repository: ${{ github.actor }}/librechat-chart
tag: ${{ steps.chart-version.outputs.CHART_VERSION }}
path: helm/librechat
registry: ghcr.io
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}
# this is for the librechat-rag-api chart
- name: Release Helm OCI Charts for librechat-rag-api
uses: appany/helm-oci-chart-releaser@v0.4.2
with:
name: librechat-rag-api
repository: ${{ github.actor }}/librechat-chart
tag: ${{ steps.chart-version.outputs.CHART_VERSION }}
path: helm/librechat-rag-api
registry: ghcr.io
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}
charts_dir: helm
skip_existing: true
env:
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

View File

@@ -1,18 +1,11 @@
name: Detect Unused i18next Strings
# This workflow checks for unused i18n keys in translation files.
# It has special handling for:
# - com_ui_special_var_* keys that are dynamically constructed
# - com_agents_category_* keys that are stored in the database and used dynamically
on:
pull_request:
paths:
- "client/src/**"
- "api/**"
- "packages/data-provider/src/**"
- "packages/client/**"
- "packages/data-schemas/src/**"
jobs:
detect-unused-i18n-keys:
@@ -30,7 +23,7 @@ jobs:
# Define paths
I18N_FILE="client/src/locales/en/translation.json"
SOURCE_DIRS=("client/src" "api" "packages/data-provider/src" "packages/client" "packages/data-schemas/src")
SOURCE_DIRS=("client/src" "api" "packages/data-provider/src")
# Check if translation file exists
if [[ ! -f "$I18N_FILE" ]]; then
@@ -58,31 +51,6 @@ jobs:
fi
done
# Also check if the key is directly used somewhere
if [[ "$FOUND" == false ]]; then
for DIR in "${SOURCE_DIRS[@]}"; do
if grep -r --include=\*.{js,jsx,ts,tsx} -q "$KEY" "$DIR"; then
FOUND=true
break
fi
done
fi
# Special case for agent category keys that are dynamically used from database
elif [[ "$KEY" == com_agents_category_* ]]; then
# Check if agent category localization is being used
for DIR in "${SOURCE_DIRS[@]}"; do
# Check for dynamic category label/description usage
if grep -r --include=\*.{js,jsx,ts,tsx} -E "category\.(label|description).*startsWith.*['\"]com_" "$DIR" > /dev/null 2>&1 || \
# Check for the method that defines these keys
grep -r --include=\*.{js,jsx,ts,tsx} "ensureDefaultCategories" "$DIR" > /dev/null 2>&1 || \
# Check for direct usage in agentCategory.ts
grep -r --include=\*.ts -E "label:.*['\"]$KEY['\"]" "$DIR" > /dev/null 2>&1 || \
grep -r --include=\*.ts -E "description:.*['\"]$KEY['\"]" "$DIR" > /dev/null 2>&1; then
FOUND=true
break
fi
done
# Also check if the key is directly used somewhere
if [[ "$FOUND" == false ]]; then
for DIR in "${SOURCE_DIRS[@]}"; do

View File

@@ -48,7 +48,7 @@ jobs:
# 2. Download translation files from locize.
- name: Download Translations from locize
uses: locize/download@v2
uses: locize/download@v1
with:
project-id: ${{ secrets.LOCIZE_PROJECT_ID }}
path: "client/src/locales"

View File

@@ -7,7 +7,6 @@ on:
- 'package-lock.json'
- 'client/**'
- 'api/**'
- 'packages/client/**'
jobs:
detect-unused-packages:
@@ -29,7 +28,7 @@ jobs:
- name: Validate JSON files
run: |
for FILE in package.json client/package.json api/package.json packages/client/package.json; do
for FILE in package.json client/package.json api/package.json; do
if [[ -f "$FILE" ]]; then
jq empty "$FILE" || (echo "::error title=Invalid JSON::$FILE is invalid" && exit 1)
fi
@@ -64,31 +63,12 @@ jobs:
local folder=$1
local output_file=$2
if [[ -d "$folder" ]]; then
# Extract require() statements
grep -rEho "require\\(['\"]([a-zA-Z0-9@/._-]+)['\"]\\)" "$folder" --include=\*.{js,ts,tsx,jsx,mjs,cjs} | \
grep -rEho "require\\(['\"]([a-zA-Z0-9@/._-]+)['\"]\\)" "$folder" --include=\*.{js,ts,mjs,cjs} | \
sed -E "s/require\\(['\"]([a-zA-Z0-9@/._-]+)['\"]\\)/\1/" > "$output_file"
# Extract ES6 imports - various patterns
# import x from 'module'
grep -rEho "import .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]" "$folder" --include=\*.{js,ts,tsx,jsx,mjs,cjs} | \
grep -rEho "import .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]" "$folder" --include=\*.{js,ts,mjs,cjs} | \
sed -E "s/import .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]/\1/" >> "$output_file"
# import 'module' (side-effect imports)
grep -rEho "import ['\"]([a-zA-Z0-9@/._-]+)['\"]" "$folder" --include=\*.{js,ts,tsx,jsx,mjs,cjs} | \
sed -E "s/import ['\"]([a-zA-Z0-9@/._-]+)['\"]/\1/" >> "$output_file"
# export { x } from 'module' or export * from 'module'
grep -rEho "export .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]" "$folder" --include=\*.{js,ts,tsx,jsx,mjs,cjs} | \
sed -E "s/export .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]/\1/" >> "$output_file"
# import type { x } from 'module' (TypeScript)
grep -rEho "import type .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]" "$folder" --include=\*.{ts,tsx} | \
sed -E "s/import type .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]/\1/" >> "$output_file"
# Remove subpath imports but keep the base package
# e.g., '@tanstack/react-query/devtools' becomes '@tanstack/react-query'
sed -i -E 's|^(@?[a-zA-Z0-9-]+(/[a-zA-Z0-9-]+)?)/.*|\1|' "$output_file"
sort -u "$output_file" -o "$output_file"
else
touch "$output_file"
@@ -98,80 +78,13 @@ jobs:
extract_deps_from_code "." root_used_code.txt
extract_deps_from_code "client" client_used_code.txt
extract_deps_from_code "api" api_used_code.txt
# Extract dependencies used by @librechat/client package
extract_deps_from_code "packages/client" packages_client_used_code.txt
- name: Get @librechat/client dependencies
id: get-librechat-client-deps
run: |
if [[ -f "packages/client/package.json" ]]; then
# Get all dependencies from @librechat/client (dependencies, devDependencies, and peerDependencies)
DEPS=$(jq -r '.dependencies // {} | keys[]' packages/client/package.json 2>/dev/null || echo "")
DEV_DEPS=$(jq -r '.devDependencies // {} | keys[]' packages/client/package.json 2>/dev/null || echo "")
PEER_DEPS=$(jq -r '.peerDependencies // {} | keys[]' packages/client/package.json 2>/dev/null || echo "")
# Combine all dependencies
echo "$DEPS" > librechat_client_deps.txt
echo "$DEV_DEPS" >> librechat_client_deps.txt
echo "$PEER_DEPS" >> librechat_client_deps.txt
# Also include dependencies that are imported in packages/client
cat packages_client_used_code.txt >> librechat_client_deps.txt
# Remove empty lines and sort
grep -v '^$' librechat_client_deps.txt | sort -u > temp_deps.txt
mv temp_deps.txt librechat_client_deps.txt
else
touch librechat_client_deps.txt
fi
- name: Extract Workspace Dependencies
id: extract-workspace-deps
run: |
# Function to get dependencies from a workspace package that are used by another package
get_workspace_package_deps() {
local package_json=$1
local output_file=$2
# Get all workspace dependencies (starting with @librechat/)
if [[ -f "$package_json" ]]; then
local workspace_deps=$(jq -r '.dependencies // {} | to_entries[] | select(.key | startswith("@librechat/")) | .key' "$package_json" 2>/dev/null || echo "")
# For each workspace dependency, get its dependencies
for dep in $workspace_deps; do
# Convert @librechat/api to packages/api
local workspace_path=$(echo "$dep" | sed 's/@librechat\//packages\//')
local workspace_package_json="${workspace_path}/package.json"
if [[ -f "$workspace_package_json" ]]; then
# Extract all dependencies from the workspace package
jq -r '.dependencies // {} | keys[]' "$workspace_package_json" 2>/dev/null >> "$output_file"
# Also extract peerDependencies
jq -r '.peerDependencies // {} | keys[]' "$workspace_package_json" 2>/dev/null >> "$output_file"
fi
done
fi
if [[ -f "$output_file" ]]; then
sort -u "$output_file" -o "$output_file"
else
touch "$output_file"
fi
}
# Get workspace dependencies for each package
get_workspace_package_deps "package.json" root_workspace_deps.txt
get_workspace_package_deps "client/package.json" client_workspace_deps.txt
get_workspace_package_deps "api/package.json" api_workspace_deps.txt
- name: Run depcheck for root package.json
id: check-root
run: |
if [[ -f "package.json" ]]; then
UNUSED=$(depcheck --json | jq -r '.dependencies | join("\n")' || echo "")
# Exclude dependencies used in scripts, code, and workspace packages
UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat root_used_deps.txt root_used_code.txt root_workspace_deps.txt | sort) || echo "")
UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat root_used_deps.txt root_used_code.txt | sort) || echo "")
echo "ROOT_UNUSED<<EOF" >> $GITHUB_ENV
echo "$UNUSED" >> $GITHUB_ENV
echo "EOF" >> $GITHUB_ENV
@@ -184,8 +97,7 @@ jobs:
chmod -R 755 client
cd client
UNUSED=$(depcheck --json | jq -r '.dependencies | join("\n")' || echo "")
# Exclude dependencies used in scripts, code, and workspace packages
UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat ../client_used_deps.txt ../client_used_code.txt ../client_workspace_deps.txt | sort) || echo "")
UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat ../client_used_deps.txt ../client_used_code.txt | sort) || echo "")
# Filter out false positives
UNUSED=$(echo "$UNUSED" | grep -v "^micromark-extension-llm-math$" || echo "")
echo "CLIENT_UNUSED<<EOF" >> $GITHUB_ENV
@@ -201,8 +113,7 @@ jobs:
chmod -R 755 api
cd api
UNUSED=$(depcheck --json | jq -r '.dependencies | join("\n")' || echo "")
# Exclude dependencies used in scripts, code, and workspace packages
UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat ../api_used_deps.txt ../api_used_code.txt ../api_workspace_deps.txt | sort) || echo "")
UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat ../api_used_deps.txt ../api_used_code.txt | sort) || echo "")
echo "API_UNUSED<<EOF" >> $GITHUB_ENV
echo "$UNUSED" >> $GITHUB_ENV
echo "EOF" >> $GITHUB_ENV

.gitignore
View File

@@ -13,9 +13,6 @@ pids
*.seed
.git
# CI/CD data
test-image*
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
@@ -137,4 +134,3 @@ helm/**/.values.yaml
/.openai/
/.tabnine/
/.codeium
*.local.md

View File

@@ -1,2 +1,5 @@
#!/usr/bin/env sh
set -e
. "$(dirname -- "$0")/_/husky.sh"
[ -n "$CI" ] && exit 0
npx lint-staged --config ./.husky/lint-staged.config.js

View File

@@ -1,4 +1,4 @@
# v0.8.0
# v0.7.9
# Base node image
FROM node:20-alpine AS node
@@ -19,31 +19,24 @@ WORKDIR /app
USER node
COPY --chown=node:node package.json package-lock.json ./
COPY --chown=node:node api/package.json ./api/package.json
COPY --chown=node:node client/package.json ./client/package.json
COPY --chown=node:node packages/data-provider/package.json ./packages/data-provider/package.json
COPY --chown=node:node packages/data-schemas/package.json ./packages/data-schemas/package.json
COPY --chown=node:node packages/api/package.json ./packages/api/package.json
COPY --chown=node:node . .
RUN \
# Allow mounting of these files, which have no default
touch .env ; \
# Create directories for the volumes to inherit the correct permissions
mkdir -p /app/client/public/images /app/api/logs /app/uploads ; \
mkdir -p /app/client/public/images /app/api/logs ; \
npm config set fetch-retry-maxtimeout 600000 ; \
npm config set fetch-retries 5 ; \
npm config set fetch-retry-mintimeout 15000 ; \
npm ci --no-audit
COPY --chown=node:node . .
RUN \
npm install --no-audit; \
# React client build
NODE_OPTIONS="--max-old-space-size=2048" npm run frontend; \
npm prune --production; \
npm cache clean --force
RUN mkdir -p /app/client/public/images /app/api/logs
# Node API setup
EXPOSE 3080
ENV HOST=0.0.0.0
@@ -54,4 +47,4 @@ CMD ["npm", "run", "backend"]
# WORKDIR /usr/share/nginx/html
# COPY --from=node /app/client/dist /usr/share/nginx/html
# COPY client/nginx.conf /etc/nginx/conf.d/default.conf
# ENTRYPOINT ["nginx", "-g", "daemon off;"]
# ENTRYPOINT ["nginx", "-g", "daemon off;"]

View File

@@ -1,5 +1,5 @@
# Dockerfile.multi
# v0.8.0
# v0.7.9
# Base for all builds
FROM node:20-alpine AS base-min
@@ -16,7 +16,6 @@ COPY package*.json ./
COPY packages/data-provider/package*.json ./packages/data-provider/
COPY packages/api/package*.json ./packages/api/
COPY packages/data-schemas/package*.json ./packages/data-schemas/
COPY packages/client/package*.json ./packages/client/
COPY client/package*.json ./client/
COPY api/package*.json ./api/
@@ -46,19 +45,11 @@ COPY --from=data-provider-build /app/packages/data-provider/dist /app/packages/d
COPY --from=data-schemas-build /app/packages/data-schemas/dist /app/packages/data-schemas/dist
RUN npm run build
# Build `client` package
FROM base AS client-package-build
WORKDIR /app/packages/client
COPY packages/client ./
RUN npm run build
# Client build
FROM base AS client-build
WORKDIR /app/client
COPY client ./
COPY --from=data-provider-build /app/packages/data-provider/dist /app/packages/data-provider/dist
COPY --from=client-package-build /app/packages/client/dist /app/packages/client/dist
COPY --from=client-package-build /app/packages/client/src /app/packages/client/src
ENV NODE_OPTIONS="--max-old-space-size=2048"
RUN npm run build

View File

@@ -65,17 +65,14 @@
- 🔦 **Agents & Tools Integration**:
- **[LibreChat Agents](https://www.librechat.ai/docs/features/agents)**:
- No-Code Custom Assistants: Build specialized, AI-driven helpers
- Agent Marketplace: Discover and deploy community-built agents
- Collaborative Sharing: Share agents with specific users and groups
- Flexible & Extensible: Use MCP Servers, tools, file search, code execution, and more
- No-Code Custom Assistants: Build specialized, AI-driven helpers without coding
- Flexible & Extensible: Use MCP Servers, tools, file search, code execution, and more
- Compatible with Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, Google, Vertex AI, Responses API, and more
- [Model Context Protocol (MCP) Support](https://modelcontextprotocol.io/clients#librechat) for Tools
- 🔍 **Web Search**:
- Search the internet and retrieve relevant information to enhance your AI context
- Combines search providers, content scrapers, and result rerankers for optimal results
- **Customizable Jina Reranking**: Configure custom Jina API URLs for reranking services
- **[Learn More →](https://www.librechat.ai/docs/features/web_search)**
- 🪄 **Generative UI with Code Artifacts**:
@@ -90,18 +87,15 @@
- Create, Save, & Share Custom Presets
- Switch between AI Endpoints and Presets mid-chat
- Edit, Resubmit, and Continue Messages with Conversation branching
- Create and share prompts with specific users and groups
- [Fork Messages & Conversations](https://www.librechat.ai/docs/features/fork) for Advanced Context control
- 💬 **Multimodal & File Interactions**:
- Upload and analyze images with Claude 3, GPT-4.5, GPT-4o, o1, Llama-Vision, and Gemini 📸
- Chat with Files using Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, & Google 🗃️
- 🌎 **Multilingual UI**:
- English, 中文 (简体), 中文 (繁體), العربية, Deutsch, Español, Français, Italiano
- Polski, Português (PT), Português (BR), Русский, 日本語, Svenska, 한국어, Tiếng Việt
- Türkçe, Nederlands, עברית, Català, Čeština, Dansk, Eesti, فارسی
- Suomi, Magyar, Հայերեն, Bahasa Indonesia, ქართული, Latviešu, ไทย, ئۇيغۇرچە
- 🌎 **Multilingual UI**:
- English, 中文, Deutsch, Español, Français, Italiano, Polski, Português Brasileiro
- Русский, 日本語, Svenska, 한국어, Tiếng Việt, 繁體中文, العربية, Türkçe, Nederlands, עברית
- 🧠 **Reasoning UI**:
- Dynamic Reasoning UI for Chain-of-Thought/Reasoning AI models like DeepSeek-R1

View File

@@ -1,5 +1,4 @@
const Anthropic = require('@anthropic-ai/sdk');
const { logger } = require('@librechat/data-schemas');
const { HttpsProxyAgent } = require('https-proxy-agent');
const {
Constants,
@@ -10,18 +9,8 @@ const {
getResponseSender,
validateVisionModel,
} = require('librechat-data-provider');
const { sleep, SplitStreamHandler: _Handler } = require('@librechat/agents');
const {
Tokenizer,
createFetch,
matchModelName,
getClaudeHeaders,
getModelMaxTokens,
configureReasoning,
checkPromptCacheSupport,
getModelMaxOutputTokens,
createStreamEventHandlers,
} = require('@librechat/api');
const { SplitStreamHandler: _Handler } = require('@librechat/agents');
const { Tokenizer, createFetch, createStreamEventHandlers } = require('@librechat/api');
const {
truncateText,
formatMessage,
@@ -30,9 +19,17 @@ const {
parseParamFromPrompt,
createContextHandlers,
} = require('./prompts');
const {
getClaudeHeaders,
configureReasoning,
checkPromptCacheSupport,
} = require('~/server/services/Endpoints/anthropic/helpers');
const { getModelMaxTokens, getModelMaxOutputTokens, matchModelName } = require('~/utils');
const { spendTokens, spendStructuredTokens } = require('~/models/spendTokens');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const { sleep } = require('~/server/utils');
const BaseClient = require('./BaseClient');
const { logger } = require('~/config');
const HUMAN_PROMPT = '\n\nHuman:';
const AI_PROMPT = '\n\nAssistant:';

View File

@@ -1,31 +1,21 @@
const crypto = require('crypto');
const fetch = require('node-fetch');
const { logger } = require('@librechat/data-schemas');
const {
getBalanceConfig,
extractFileContext,
encodeAndFormatAudios,
encodeAndFormatVideos,
encodeAndFormatDocuments,
} = require('@librechat/api');
const {
Constants,
ErrorTypes,
FileSources,
supportsBalanceCheck,
isAgentsEndpoint,
isParamEndpoint,
EModelEndpoint,
ContentTypes,
excludedKeys,
EModelEndpoint,
isParamEndpoint,
isAgentsEndpoint,
supportsBalanceCheck,
ErrorTypes,
Constants,
} = require('librechat-data-provider');
const { getMessages, saveMessage, updateMessage, saveConvo, getConvo } = require('~/models');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { checkBalance } = require('~/models/balanceMethods');
const { truncateToolCallOutputs } = require('./prompts');
const countTokens = require('~/server/utils/countTokens');
const { getFiles } = require('~/models/File');
const TextStream = require('./TextStream');
const { logger } = require('~/config');
class BaseClient {
constructor(apiKey, options = {}) {
@@ -47,8 +37,6 @@ class BaseClient {
this.conversationId;
/** @type {string} */
this.responseMessageId;
/** @type {string} */
this.parentMessageId;
/** @type {TAttachment[]} */
this.attachments;
/** The key for the usage object's input tokens
@@ -122,15 +110,13 @@ class BaseClient {
* If a correction to the token usage is needed, the method should return an object with the corrected token counts.
* Should only be used if `recordCollectedUsage` was not used instead.
* @param {string} [model]
* @param {AppConfig['balance']} [balance]
* @param {number} promptTokens
* @param {number} completionTokens
* @returns {Promise<void>}
*/
async recordTokenUsage({ model, balance, promptTokens, completionTokens }) {
async recordTokenUsage({ model, promptTokens, completionTokens }) {
logger.debug('[BaseClient] `recordTokenUsage` not implemented.', {
model,
balance,
promptTokens,
completionTokens,
});
@@ -199,8 +185,7 @@ class BaseClient {
this.user = user;
const saveOptions = this.getSaveOptions();
this.abortController = opts.abortController ?? new AbortController();
const requestConvoId = overrideConvoId ?? opts.conversationId;
const conversationId = requestConvoId ?? crypto.randomUUID();
const conversationId = overrideConvoId ?? opts.conversationId ?? crypto.randomUUID();
const parentMessageId = opts.parentMessageId ?? Constants.NO_PARENT;
const userMessageId =
overrideUserMessageId ?? opts.overrideParentMessageId ?? crypto.randomUUID();
@@ -225,12 +210,11 @@ class BaseClient {
...opts,
user,
head,
saveOptions,
userMessageId,
requestConvoId,
conversationId,
parentMessageId,
userMessageId,
responseMessageId,
saveOptions,
};
}
@@ -249,12 +233,11 @@ class BaseClient {
const {
user,
head,
saveOptions,
userMessageId,
requestConvoId,
conversationId,
parentMessageId,
userMessageId,
responseMessageId,
saveOptions,
} = await this.setMessageOptions(opts);
const userMessage = opts.isEdited
@@ -276,8 +259,7 @@ class BaseClient {
}
if (typeof opts?.onStart === 'function') {
const isNewConvo = !requestConvoId && parentMessageId === Constants.NO_PARENT;
opts.onStart(userMessage, responseMessageId, isNewConvo);
opts.onStart(userMessage, responseMessageId);
}
return {
@@ -583,7 +565,6 @@ class BaseClient {
}
async sendMessage(message, opts = {}) {
const appConfig = this.options.req?.config;
/** @type {Promise<TMessage>} */
let userMessagePromise;
const { user, head, isEdited, conversationId, responseMessageId, saveOptions, userMessage } =
@@ -633,19 +614,15 @@ class BaseClient {
this.currentMessages.push(userMessage);
}
/**
* When the userMessage is pushed to currentMessages, the parentMessage is the userMessageId.
* this only matters when buildMessages is utilizing the parentMessageId, and may vary on implementation
*/
const parentMessageId = isEdited ? head : userMessage.messageId;
this.parentMessageId = parentMessageId;
let {
prompt: payload,
tokenCountMap,
promptTokens,
} = await this.buildMessages(
this.currentMessages,
parentMessageId,
// When the userMessage is pushed to currentMessages, the parentMessage is the userMessageId.
// this only matters when buildMessages is utilizing the parentMessageId, and may vary on implementation
isEdited ? head : userMessage.messageId,
this.getBuildMessagesOptions(opts),
opts,
);
@@ -670,9 +647,9 @@ class BaseClient {
}
}
const balanceConfig = getBalanceConfig(appConfig);
const balance = this.options.req?.app?.locals?.balance;
if (
balanceConfig?.enabled &&
balance?.enabled &&
supportsBalanceCheck[this.options.endpointType ?? this.options.endpoint]
) {
await checkBalance({
@@ -771,7 +748,6 @@ class BaseClient {
usage,
promptTokens,
completionTokens,
balance: balanceConfig,
model: responseMessage.model,
});
}
@@ -1207,135 +1183,8 @@ class BaseClient {
return await this.sendCompletion(payload, opts);
}
async addDocuments(message, attachments) {
const documentResult = await encodeAndFormatDocuments(
this.options.req,
attachments,
{
provider: this.options.agent?.provider,
useResponsesApi: this.options.agent?.model_parameters?.useResponsesApi,
},
getStrategyFunctions,
);
message.documents =
documentResult.documents && documentResult.documents.length
? documentResult.documents
: undefined;
return documentResult.files;
}
async addVideos(message, attachments) {
const videoResult = await encodeAndFormatVideos(
this.options.req,
attachments,
this.options.agent.provider,
getStrategyFunctions,
);
message.videos =
videoResult.videos && videoResult.videos.length ? videoResult.videos : undefined;
return videoResult.files;
}
async addAudios(message, attachments) {
const audioResult = await encodeAndFormatAudios(
this.options.req,
attachments,
this.options.agent.provider,
getStrategyFunctions,
);
message.audios =
audioResult.audios && audioResult.audios.length ? audioResult.audios : undefined;
return audioResult.files;
}
/**
* Extracts text context from attachments and sets it on the message.
* This handles text that was already extracted from files (OCR, transcriptions, document text, etc.)
* @param {TMessage} message - The message to add context to
* @param {MongoFile[]} attachments - Array of file attachments
* @returns {Promise<void>}
*/
async addFileContextToMessage(message, attachments) {
const fileContext = await extractFileContext({
attachments,
req: this.options?.req,
tokenCountFn: (text) => countTokens(text),
});
if (fileContext) {
message.fileContext = fileContext;
}
}
async processAttachments(message, attachments) {
const categorizedAttachments = {
images: [],
videos: [],
audios: [],
documents: [],
};
const allFiles = [];
for (const file of attachments) {
/** @type {FileSources} */
const source = file.source ?? FileSources.local;
if (source === FileSources.text) {
allFiles.push(file);
continue;
}
if (file.embedded === true || file.metadata?.fileIdentifier != null) {
allFiles.push(file);
continue;
}
if (file.type.startsWith('image/')) {
categorizedAttachments.images.push(file);
} else if (file.type === 'application/pdf') {
categorizedAttachments.documents.push(file);
allFiles.push(file);
} else if (file.type.startsWith('video/')) {
categorizedAttachments.videos.push(file);
allFiles.push(file);
} else if (file.type.startsWith('audio/')) {
categorizedAttachments.audios.push(file);
allFiles.push(file);
}
}
const [imageFiles] = await Promise.all([
categorizedAttachments.images.length > 0
? this.addImageURLs(message, categorizedAttachments.images)
: Promise.resolve([]),
categorizedAttachments.documents.length > 0
? this.addDocuments(message, categorizedAttachments.documents)
: Promise.resolve([]),
categorizedAttachments.videos.length > 0
? this.addVideos(message, categorizedAttachments.videos)
: Promise.resolve([]),
categorizedAttachments.audios.length > 0
? this.addAudios(message, categorizedAttachments.audios)
: Promise.resolve([]),
]);
allFiles.push(...imageFiles);
const seenFileIds = new Set();
const uniqueFiles = [];
for (const file of allFiles) {
if (file.file_id && !seenFileIds.has(file.file_id)) {
seenFileIds.add(file.file_id);
uniqueFiles.push(file);
} else if (!file.file_id) {
uniqueFiles.push(file);
}
}
return uniqueFiles;
}
/**
*
* @param {TMessage[]} _messages
* @returns {Promise<TMessage[]>}
*/
@@ -1384,8 +1233,7 @@ class BaseClient {
{},
);
await this.addFileContextToMessage(message, files);
await this.processAttachments(message, files);
await this.addImageURLs(message, files, this.visionMode);
this.message_file_map[message.messageId] = files;
return message;

View File

@@ -1,7 +1,4 @@
const { google } = require('googleapis');
const { sleep } = require('@librechat/agents');
const { logger } = require('@librechat/data-schemas');
const { getModelMaxTokens } = require('@librechat/api');
const { concat } = require('@langchain/core/utils/stream');
const { ChatVertexAI } = require('@langchain/google-vertexai');
const { Tokenizer, getSafetySettings } = require('@librechat/api');
@@ -24,6 +21,9 @@ const {
} = require('librechat-data-provider');
const { encodeAndFormat } = require('~/server/services/Files/images');
const { spendTokens } = require('~/models/spendTokens');
const { getModelMaxTokens } = require('~/utils');
const { sleep } = require('~/server/utils');
const { logger } = require('~/config');
const {
formatMessage,
createContextHandlers,

View File

@@ -1,15 +1,13 @@
const { logger } = require('@librechat/data-schemas');
const { OllamaClient } = require('./OllamaClient');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { sleep, SplitStreamHandler, CustomOpenAIClient: OpenAI } = require('@librechat/agents');
const { SplitStreamHandler, CustomOpenAIClient: OpenAI } = require('@librechat/agents');
const {
isEnabled,
Tokenizer,
createFetch,
resolveHeaders,
constructAzureURL,
getModelMaxTokens,
genAzureChatCompletion,
getModelMaxOutputTokens,
createStreamEventHandlers,
} = require('@librechat/api');
const {
@@ -33,16 +31,17 @@ const {
titleInstruction,
createContextHandlers,
} = require('./prompts');
const { extractBaseURL, getModelMaxTokens, getModelMaxOutputTokens } = require('~/utils');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const { addSpaceIfNeeded, sleep } = require('~/server/utils');
const { spendTokens } = require('~/models/spendTokens');
const { addSpaceIfNeeded } = require('~/server/utils');
const { handleOpenAIErrors } = require('./tools/util');
const { OllamaClient } = require('./OllamaClient');
const { createLLM, RunManager } = require('./llm');
const { summaryBuffer } = require('./memory');
const { runTitleChain } = require('./chains');
const { extractBaseURL } = require('~/utils');
const { tokenSplit } = require('./document');
const BaseClient = require('./BaseClient');
const { logger } = require('~/config');
class OpenAIClient extends BaseClient {
constructor(apiKey, options = {}) {
@@ -613,8 +612,77 @@ class OpenAIClient extends BaseClient {
return (reply ?? '').trim();
}
initializeLLM() {
throw new Error('Deprecated');
initializeLLM({
model = openAISettings.model.default,
modelName,
temperature = 0.2,
max_tokens,
streaming,
context,
tokenBuffer,
initialMessageCount,
conversationId,
}) {
const modelOptions = {
modelName: modelName ?? model,
temperature,
user: this.user,
};
if (max_tokens) {
modelOptions.max_tokens = max_tokens;
}
const configOptions = {};
if (this.langchainProxy) {
configOptions.basePath = this.langchainProxy;
}
if (this.useOpenRouter) {
configOptions.basePath = 'https://openrouter.ai/api/v1';
configOptions.baseOptions = {
headers: {
'HTTP-Referer': 'https://librechat.ai',
'X-Title': 'LibreChat',
},
};
}
const { headers } = this.options;
if (headers && typeof headers === 'object' && !Array.isArray(headers)) {
configOptions.baseOptions = {
headers: resolveHeaders({
...headers,
...configOptions?.baseOptions?.headers,
}),
};
}
if (this.options.proxy) {
configOptions.httpAgent = new HttpsProxyAgent(this.options.proxy);
configOptions.httpsAgent = new HttpsProxyAgent(this.options.proxy);
}
const { req, res, debug } = this.options;
const runManager = new RunManager({ req, res, debug, abortController: this.abortController });
this.runManager = runManager;
const llm = createLLM({
modelOptions,
configOptions,
openAIApiKey: this.apiKey,
azure: this.azure,
streaming,
callbacks: runManager.createCallbacks({
context,
tokenBuffer,
conversationId: this.conversationId ?? conversationId,
initialMessageCount,
}),
});
return llm;
}
/**
@@ -632,7 +700,6 @@ class OpenAIClient extends BaseClient {
* In case of failure, it will return the default title, "New Chat".
*/
async titleConvo({ text, conversationId, responseText = '' }) {
const appConfig = this.options.req?.config;
this.conversationId = conversationId;
if (this.options.attachments) {
@@ -661,7 +728,8 @@ class OpenAIClient extends BaseClient {
max_tokens: 16,
};
const azureConfig = appConfig?.endpoints?.[EModelEndpoint.azureOpenAI];
/** @type {TAzureConfig | undefined} */
const azureConfig = this.options?.req?.app?.locals?.[EModelEndpoint.azureOpenAI];
const resetTitleOptions = !!(
(this.azure && azureConfig) ||
@@ -681,7 +749,7 @@ class OpenAIClient extends BaseClient {
groupMap,
});
this.options.headers = resolveHeaders({ headers });
this.options.headers = resolveHeaders(headers);
this.options.reverseProxyUrl = baseURL ?? null;
this.langchainProxy = extractBaseURL(this.options.reverseProxyUrl);
this.apiKey = azureOptions.azureOpenAIApiKey;
@@ -1050,7 +1118,6 @@ ${convo}
}
async chatCompletion({ payload, onProgress, abortController = null }) {
const appConfig = this.options.req?.config;
let error = null;
let intermediateReply = [];
const errorCallback = (err) => (error = err);
@@ -1096,7 +1163,8 @@ ${convo}
opts.fetchOptions.agent = new HttpsProxyAgent(this.options.proxy);
}
const azureConfig = appConfig?.endpoints?.[EModelEndpoint.azureOpenAI];
/** @type {TAzureConfig | undefined} */
const azureConfig = this.options?.req?.app?.locals?.[EModelEndpoint.azureOpenAI];
if (
(this.azure && this.isVisionModel && azureConfig) ||
@@ -1113,7 +1181,7 @@ ${convo}
modelGroupMap,
groupMap,
});
opts.defaultHeaders = resolveHeaders({ headers });
opts.defaultHeaders = resolveHeaders(headers);
this.langchainProxy = extractBaseURL(baseURL);
this.apiKey = azureOptions.azureOpenAIApiKey;
@@ -1154,9 +1222,7 @@ ${convo}
}
if (this.isOmni === true && modelOptions.max_tokens != null) {
const paramName =
modelOptions.useResponsesApi === true ? 'max_output_tokens' : 'max_completion_tokens';
modelOptions[paramName] = modelOptions.max_tokens;
modelOptions.max_completion_tokens = modelOptions.max_tokens;
delete modelOptions.max_tokens;
}
if (this.isOmni === true && modelOptions.temperature != null) {

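The truncated hunk above is the omni-model parameter shim: these models reject max_tokens, so the client renames it to max_completion_tokens, or to max_output_tokens when the Responses API is active. Distilled into a standalone helper (a sketch, not the client's exact code):

// Sketch: normalize the token-limit parameter for omni-style models.
function normalizeTokenParams(modelOptions, { isOmni, useResponsesApi }) {
  const opts = { ...modelOptions };
  if (isOmni && opts.max_tokens != null) {
    const paramName = useResponsesApi ? 'max_output_tokens' : 'max_completion_tokens';
    opts[paramName] = opts.max_tokens;
    delete opts.max_tokens;
  }
  return opts;
}

console.log(normalizeTokenParams({ max_tokens: 1024 }, { isOmni: true, useResponsesApi: false }));
// => { max_completion_tokens: 1024 }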
View File

@@ -1,5 +1,5 @@
const { Readable } = require('stream');
const { logger } = require('@librechat/data-schemas');
const { logger } = require('~/config');
class TextStream extends Readable {
constructor(text, options = {}) {

View File

@@ -1,5 +1,5 @@
const { logger } = require('@librechat/data-schemas');
const { ZeroShotAgentOutputParser } = require('langchain/agents');
const { logger } = require('~/config');
class CustomOutputParser extends ZeroShotAgentOutputParser {
constructor(fields) {

View File

@@ -0,0 +1,95 @@
const { promptTokensEstimate } = require('openai-chat-tokens');
const { EModelEndpoint, supportsBalanceCheck } = require('librechat-data-provider');
const { formatFromLangChain } = require('~/app/clients/prompts');
const { getBalanceConfig } = require('~/server/services/Config');
const { checkBalance } = require('~/models/balanceMethods');
const { logger } = require('~/config');
const createStartHandler = ({
context,
conversationId,
tokenBuffer = 0,
initialMessageCount,
manager,
}) => {
return async (_llm, _messages, runId, parentRunId, extraParams) => {
const { invocation_params } = extraParams;
const { model, functions, function_call } = invocation_params;
const messages = _messages[0].map(formatFromLangChain);
logger.debug(`[createStartHandler] handleChatModelStart: ${context}`, {
model,
function_call,
});
if (context !== 'title') {
logger.debug(`[createStartHandler] handleChatModelStart: ${context}`, {
functions,
});
}
const payload = { messages };
let prelimPromptTokens = 1;
if (functions) {
payload.functions = functions;
prelimPromptTokens += 2;
}
if (function_call) {
payload.function_call = function_call;
prelimPromptTokens -= 5;
}
prelimPromptTokens += promptTokensEstimate(payload);
logger.debug('[createStartHandler]', {
prelimPromptTokens,
tokenBuffer,
});
prelimPromptTokens += tokenBuffer;
try {
const balance = await getBalanceConfig();
if (balance?.enabled && supportsBalanceCheck[EModelEndpoint.openAI]) {
const generations =
initialMessageCount && messages.length > initialMessageCount
? messages.slice(initialMessageCount)
: null;
await checkBalance({
req: manager.req,
res: manager.res,
txData: {
user: manager.user,
tokenType: 'prompt',
amount: prelimPromptTokens,
debug: manager.debug,
generations,
model,
endpoint: EModelEndpoint.openAI,
},
});
}
} catch (err) {
logger.error(`[createStartHandler][${context}] checkBalance error`, err);
manager.abortController.abort();
if (context === 'summary' || context === 'plugins') {
manager.addRun(runId, { conversationId, error: err.message });
throw new Error(err);
}
return;
}
manager.addRun(runId, {
model,
messages,
functions,
function_call,
runId,
parentRunId,
conversationId,
prelimPromptTokens,
});
};
};
module.exports = createStartHandler;

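createStartHandler pre-estimates prompt tokens before the request fires so checkBalance can veto the run. The estimate itself comes from the openai-chat-tokens package; the +1/+2/-5 adjustments above are the handler's own padding constants. A minimal usage sketch of the estimator:

const { promptTokensEstimate } = require('openai-chat-tokens');

const estimate = promptTokensEstimate({
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Summarize our conversation so far.' },
  ],
  functions: [
    {
      name: 'submit_summary',
      parameters: { type: 'object', properties: { summary: { type: 'string' } } },
    },
  ],
});

console.log(estimate); // preliminary prompt-token count fed into the balance check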
View File

@@ -0,0 +1,5 @@
const createStartHandler = require('./createStartHandler');
module.exports = {
createStartHandler,
};

View File

@@ -1,7 +1,7 @@
const { z } = require('zod');
const { logger } = require('@librechat/data-schemas');
const { langPrompt, createTitlePrompt, escapeBraces, getSnippet } = require('../prompts');
const { createStructuredOutputChainFromZod } = require('langchain/chains/openai_functions');
const { logger } = require('~/config');
const langSchema = z.object({
language: z.string().describe('The language of the input text (full noun, no abbreviations).'),

View File

@@ -0,0 +1,105 @@
const { createStartHandler } = require('~/app/clients/callbacks');
const { spendTokens } = require('~/models/spendTokens');
const { logger } = require('~/config');
class RunManager {
constructor(fields) {
const { req, res, abortController, debug } = fields;
this.abortController = abortController;
this.user = req.user.id;
this.req = req;
this.res = res;
this.debug = debug;
this.runs = new Map();
this.convos = new Map();
}
addRun(runId, runData) {
if (!this.runs.has(runId)) {
this.runs.set(runId, runData);
if (runData.conversationId) {
this.convos.set(runData.conversationId, runId);
}
return runData;
} else {
const existingData = this.runs.get(runId);
const update = { ...existingData, ...runData };
this.runs.set(runId, update);
if (update.conversationId) {
this.convos.set(update.conversationId, runId);
}
return update;
}
}
removeRun(runId) {
if (this.runs.has(runId)) {
this.runs.delete(runId);
} else {
logger.error(`[api/app/clients/llm/RunManager] Run with ID ${runId} does not exist.`);
}
}
getAllRuns() {
return Array.from(this.runs.values());
}
getRunById(runId) {
return this.runs.get(runId);
}
getRunByConversationId(conversationId) {
const runId = this.convos.get(conversationId);
return { run: this.runs.get(runId), runId };
}
createCallbacks(metadata) {
return [
{
handleChatModelStart: createStartHandler({ ...metadata, manager: this }),
handleLLMEnd: async (output, runId, _parentRunId) => {
const { llmOutput, ..._output } = output;
logger.debug(`[RunManager] handleLLMEnd: ${JSON.stringify(metadata)}`, {
runId,
_parentRunId,
llmOutput,
});
if (metadata.context !== 'title') {
logger.debug('[RunManager] handleLLMEnd:', {
output: _output,
});
}
const { tokenUsage } = output.llmOutput;
const run = this.getRunById(runId);
this.removeRun(runId);
const txData = {
user: this.user,
model: run?.model ?? 'gpt-3.5-turbo',
...metadata,
};
await spendTokens(txData, tokenUsage);
},
handleLLMError: async (err) => {
logger.error(`[RunManager] handleLLMError: ${JSON.stringify(metadata)}`, err);
if (metadata.context === 'title') {
return;
} else if (metadata.context === 'plugins') {
throw new Error(err);
}
const { conversationId } = metadata;
const { run } = this.getRunByConversationId(conversationId);
if (run && run.error) {
const { error } = run;
throw new Error(error);
}
},
},
];
}
}
module.exports = RunManager;

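RunManager bridges LangChain callbacks and LibreChat's accounting: handleChatModelStart (via createStartHandler) registers the run and pre-checks balance, and handleLLMEnd settles it with spendTokens. A hedged wiring sketch with stubbed Express objects:

const RunManager = require('~/app/clients/llm/RunManager');

// Stubs stand in for the real Express request/response.
const req = { user: { id: 'user-1' } };
const res = {};
const abortController = new AbortController();

const runManager = new RunManager({ req, res, abortController, debug: false });

// The returned callbacks array plugs straight into a LangChain chat model.
const callbacks = runManager.createCallbacks({
  context: 'title',
  tokenBuffer: 150,
  conversationId: 'convo-123',
});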
View File

@@ -0,0 +1,81 @@
const { ChatOpenAI } = require('@langchain/openai');
const { isEnabled, sanitizeModelName, constructAzureURL } = require('@librechat/api');
/**
* Creates a new instance of a language model (LLM) for chat interactions.
*
* @param {Object} options - The options for creating the LLM.
* @param {ModelOptions} options.modelOptions - The options specific to the model, including modelName, temperature, presence_penalty, frequency_penalty, and other model-related settings.
* @param {ConfigOptions} options.configOptions - Configuration options for the API requests, including proxy settings and custom headers.
* @param {Callbacks} [options.callbacks] - Callback functions for managing the lifecycle of the LLM, including token buffers, context, and initial message count.
* @param {boolean} [options.streaming=false] - Determines if the LLM should operate in streaming mode.
* @param {string} options.openAIApiKey - The API key for OpenAI, used for authentication.
* @param {AzureOptions} [options.azure={}] - Optional Azure-specific configurations. If provided, Azure configurations take precedence over OpenAI configurations.
*
* @returns {ChatOpenAI} An instance of the ChatOpenAI class, configured with the provided options.
*
* @example
* const llm = createLLM({
* modelOptions: { modelName: 'gpt-4o-mini', temperature: 0.2 },
* configOptions: { basePath: 'https://example.api/path' },
* callbacks: { onMessage: handleMessage },
* openAIApiKey: 'your-api-key'
* });
*/
function createLLM({
modelOptions,
configOptions,
callbacks,
streaming = false,
openAIApiKey,
azure = {},
}) {
let credentials = { openAIApiKey };
let configuration = {
apiKey: openAIApiKey,
...(configOptions.basePath && { baseURL: configOptions.basePath }),
};
/** @type {AzureOptions} */
let azureOptions = {};
if (azure) {
const useModelName = isEnabled(process.env.AZURE_USE_MODEL_AS_DEPLOYMENT_NAME);
credentials = {};
configuration = {};
azureOptions = azure;
azureOptions.azureOpenAIApiDeploymentName = useModelName
? sanitizeModelName(modelOptions.modelName)
: azureOptions.azureOpenAIApiDeploymentName;
}
if (azure && process.env.AZURE_OPENAI_DEFAULT_MODEL) {
modelOptions.modelName = process.env.AZURE_OPENAI_DEFAULT_MODEL;
}
if (azure && configOptions.basePath) {
const azureURL = constructAzureURL({
baseURL: configOptions.basePath,
azureOptions,
});
azureOptions.azureOpenAIBasePath = azureURL.split(
`/${azureOptions.azureOpenAIApiDeploymentName}`,
)[0];
}
return new ChatOpenAI(
{
streaming,
credentials,
configuration,
...azureOptions,
...modelOptions,
...credentials,
callbacks,
},
configOptions,
);
}
module.exports = createLLM;

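On the Azure branch, createLLM empties the OpenAI credentials, optionally derives the deployment name from the model when AZURE_USE_MODEL_AS_DEPLOYMENT_NAME is enabled, and keeps everything before the deployment segment of the constructed URL as azureOpenAIBasePath. A rough illustration of that URL split (values are hypothetical):

// Hypothetical Azure options and a URL as constructAzureURL might produce it.
const azureOptions = {
  azureOpenAIApiInstanceName: 'my-instance',
  azureOpenAIApiDeploymentName: 'gpt-4o-mini',
  azureOpenAIApiVersion: '2024-02-15-preview',
};
const azureURL = 'https://example.azure-api.net/my-instance/deployments/gpt-4o-mini';

// createLLM keeps everything before the deployment segment:
azureOptions.azureOpenAIBasePath = azureURL.split(
  `/${azureOptions.azureOpenAIApiDeploymentName}`,
)[0];
console.log(azureOptions.azureOpenAIBasePath);
// => https://example.azure-api.net/my-instance/deployments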
View File

@@ -1,5 +1,9 @@
const createLLM = require('./createLLM');
const RunManager = require('./RunManager');
const createCoherePayload = require('./createCoherePayload');
module.exports = {
createLLM,
RunManager,
createCoherePayload,
};

View File

@@ -0,0 +1,31 @@
require('dotenv').config();
const { ChatOpenAI } = require('@langchain/openai');
const { getBufferString, ConversationSummaryBufferMemory } = require('langchain/memory');
const chatPromptMemory = new ConversationSummaryBufferMemory({
llm: new ChatOpenAI({ modelName: 'gpt-4o-mini', temperature: 0 }),
maxTokenLimit: 10,
returnMessages: true,
});
(async () => {
await chatPromptMemory.saveContext({ input: 'hi my name\'s Danny' }, { output: 'whats up' });
await chatPromptMemory.saveContext({ input: 'not much you' }, { output: 'not much' });
await chatPromptMemory.saveContext(
{ input: 'are you excited for the olympics?' },
{ output: 'not really' },
);
// We can also utilize the predictNewSummary method directly.

const messages = await chatPromptMemory.chatHistory.getMessages();
console.log('MESSAGES\n\n');
console.log(JSON.stringify(messages));
const previous_summary = '';
const predictSummary = await chatPromptMemory.predictNewSummary(messages, previous_summary);
console.log('SUMMARY\n\n');
console.log(JSON.stringify(getBufferString([{ role: 'system', content: predictSummary }])));
// const { history } = await chatPromptMemory.loadMemoryVariables({});
// console.log('HISTORY\n\n');
// console.log(JSON.stringify(history));
})();

View File

@@ -1,7 +1,7 @@
const { logger } = require('@librechat/data-schemas');
const { ConversationSummaryBufferMemory, ChatMessageHistory } = require('langchain/memory');
const { formatLangChainMessages, SUMMARY_PROMPT } = require('../prompts');
const { predictNewSummary } = require('../chains');
const { logger } = require('~/config');
const createSummaryBufferMemory = ({ llm, prompt, messages, ...rest }) => {
const chatHistory = new ChatMessageHistory(messages);

View File

@@ -1,4 +1,4 @@
const { logger } = require('@librechat/data-schemas');
const { logger } = require('~/config');
/**
* The `addImages` function corrects any erroneous image URLs in the `responseMessage.text`

View File

@@ -3,7 +3,6 @@ const { EModelEndpoint, ArtifactModes } = require('librechat-data-provider');
const { generateShadcnPrompt } = require('~/app/clients/prompts/shadcn-docs/generate');
const { components } = require('~/app/clients/prompts/shadcn-docs/components');
/** @deprecated */
// eslint-disable-next-line no-unused-vars
const artifactsPromptV1 = dedent`The assistant can create and reference artifacts during conversations.
@@ -116,7 +115,6 @@ Here are some examples of correct usage of artifacts:
</assistant_response>
</example>
</examples>`;
const artifactsPrompt = dedent`The assistant can create and reference artifacts during conversations.
Artifacts are for substantial, self-contained content that users might modify or reuse, displayed in a separate UI window for clarity.
@@ -167,10 +165,6 @@ Artifacts are for substantial, self-contained content that users might modify or
- SVG: "image/svg+xml"
- The user interface will render the Scalable Vector Graphics (SVG) image within the artifact tags.
- The assistant should specify the viewbox of the SVG rather than defining a width/height
- Markdown: "text/markdown" or "text/md"
- The user interface will render Markdown content placed within the artifact tags.
- Supports standard Markdown syntax including headers, lists, links, images, code blocks, tables, and more.
- Both "text/markdown" and "text/md" are accepted as valid MIME types for Markdown content.
- Mermaid Diagrams: "application/vnd.mermaid"
- The user interface will render Mermaid diagrams placed within the artifact tags.
- React Components: "application/vnd.react"
@@ -372,10 +366,6 @@ Artifacts are for substantial, self-contained content that users might modify or
- SVG: "image/svg+xml"
- The user interface will render the Scalable Vector Graphics (SVG) image within the artifact tags.
- The assistant should specify the viewbox of the SVG rather than defining a width/height
- Markdown: "text/markdown" or "text/md"
- The user interface will render Markdown content placed within the artifact tags.
- Supports standard Markdown syntax including headers, lists, links, images, code blocks, tables, and more.
- Both "text/markdown" and "text/md" are accepted as valid MIME types for Markdown content.
- Mermaid Diagrams: "application/vnd.mermaid"
- The user interface will render Mermaid diagrams placed within the artifact tags.
- React Components: "application/vnd.react"

View File

@@ -1,6 +1,7 @@
const axios = require('axios');
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { isEnabled, generateShortLivedToken } = require('@librechat/api');
const { generateShortLivedToken } = require('~/server/services/AuthService');
const footer = `Use the context as your learned knowledge to better answer the user.

View File

@@ -245,7 +245,7 @@ describe('AnthropicClient', () => {
});
describe('Claude 4 model headers', () => {
it('should add "prompt-caching" and "context-1m" beta headers for claude-sonnet-4 model', () => {
it('should add "prompt-caching" beta header for claude-sonnet-4 model', () => {
const client = new AnthropicClient('test-api-key');
const modelOptions = {
model: 'claude-sonnet-4-20250514',
@@ -255,30 +255,10 @@ describe('AnthropicClient', () => {
expect(anthropicClient._options.defaultHeaders).toBeDefined();
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
'prompt-caching-2024-07-31,context-1m-2025-08-07',
'prompt-caching-2024-07-31',
);
});
it('should add "prompt-caching" and "context-1m" beta headers for claude-sonnet-4 model formats', () => {
const client = new AnthropicClient('test-api-key');
const modelVariations = [
'claude-sonnet-4-20250514',
'claude-sonnet-4-latest',
'anthropic/claude-sonnet-4-20250514',
];
modelVariations.forEach((model) => {
const modelOptions = { model };
client.setOptions({ modelOptions, promptCache: true });
const anthropicClient = client.getClient(modelOptions);
expect(anthropicClient._options.defaultHeaders).toBeDefined();
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
'prompt-caching-2024-07-31,context-1m-2025-08-07',
);
});
});
it('should add "prompt-caching" beta header for claude-opus-4 model', () => {
const client = new AnthropicClient('test-api-key');
const modelOptions = {
@@ -293,6 +273,20 @@ describe('AnthropicClient', () => {
);
});
it('should add "prompt-caching" beta header for claude-4-sonnet model', () => {
const client = new AnthropicClient('test-api-key');
const modelOptions = {
model: 'claude-4-sonnet-20250514',
};
client.setOptions({ modelOptions, promptCache: true });
const anthropicClient = client.getClient(modelOptions);
expect(anthropicClient._options.defaultHeaders).toBeDefined();
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
'prompt-caching-2024-07-31',
);
});
it('should add "prompt-caching" beta header for claude-4-opus model', () => {
const client = new AnthropicClient('test-api-key');
const modelOptions = {

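The test churn above tracks which anthropic-beta headers each model family gets: one side of the compare attaches context-1m-2025-08-07 alongside prompt-caching-2024-07-31 for claude-sonnet-4 variants, while the other sends prompt caching only. A hedged sketch of that kind of header selection (not the actual client code):

// Illustrative only: pick beta headers from the model name.
function getAnthropicBetaHeader(model, { promptCache = true } = {}) {
  const betas = [];
  if (promptCache) {
    betas.push('prompt-caching-2024-07-31');
  }
  // Matches claude-sonnet-4-20250514, claude-sonnet-4-latest,
  // and provider-prefixed forms like anthropic/claude-sonnet-4-20250514.
  if (model.includes('claude-sonnet-4')) {
    betas.push('context-1m-2025-08-07');
  }
  return betas.join(',');
}

console.log(getAnthropicBetaHeader('claude-sonnet-4-latest'));
// => prompt-caching-2024-07-31,context-1m-2025-08-07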
View File

@@ -2,14 +2,6 @@ const { Constants } = require('librechat-data-provider');
const { initializeFakeClient } = require('./FakeClient');
jest.mock('~/db/connect');
jest.mock('~/server/services/Config', () => ({
getAppConfig: jest.fn().mockResolvedValue({
// Default app config for tests
paths: { uploads: '/tmp' },
fileStrategy: 'local',
memory: { disabled: false },
}),
}));
jest.mock('~/models', () => ({
User: jest.fn(),
Key: jest.fn(),
@@ -587,8 +579,6 @@ describe('BaseClient', () => {
expect(onStart).toHaveBeenCalledWith(
expect.objectContaining({ text: 'Hello, world!' }),
expect.any(String),
/** `isNewConvo` */
true,
);
});

View File

@@ -1,5 +1,5 @@
const { getModelMaxTokens } = require('@librechat/api');
const BaseClient = require('../BaseClient');
const { getModelMaxTokens } = require('../../../utils');
class FakeClient extends BaseClient {
constructor(apiKey, options = {}) {

View File

@@ -1,4 +1,4 @@
const manifest = require('./manifest');
const availableTools = require('./manifest.json');
// Structured Tools
const DALLE3 = require('./structured/DALLE3');
@@ -13,8 +13,23 @@ const TraversaalSearch = require('./structured/TraversaalSearch');
const createOpenAIImageTools = require('./structured/OpenAIImageTools');
const TavilySearchResults = require('./structured/TavilySearchResults');
/** @type {Record<string, TPlugin | undefined>} */
const manifestToolMap = {};
/** @type {Array<TPlugin>} */
const toolkits = [];
availableTools.forEach((tool) => {
manifestToolMap[tool.pluginKey] = tool;
if (tool.toolkit === true) {
toolkits.push(tool);
}
});
module.exports = {
...manifest,
toolkits,
availableTools,
manifestToolMap,
// Structured Tools
DALLE3,
FluxAPI,

View File

@@ -1,20 +0,0 @@
const availableTools = require('./manifest.json');
/** @type {Record<string, TPlugin | undefined>} */
const manifestToolMap = {};
/** @type {Array<TPlugin>} */
const toolkits = [];
availableTools.forEach((tool) => {
manifestToolMap[tool.pluginKey] = tool;
if (tool.toolkit === true) {
toolkits.push(tool);
}
});
module.exports = {
toolkits,
availableTools,
manifestToolMap,
};

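The deleted manifest.js above was folded into the tools index: manifest.json is scanned once to build a pluginKey-to-plugin map plus a list of toolkits. A usage sketch (open_weather is taken from the manifest diff below; other keys are not guaranteed):

const { manifestToolMap, toolkits } = require('~/app/clients/tools');

// Look up a plugin's manifest metadata by its key.
const plugin = manifestToolMap['open_weather'];
console.log(plugin?.name, plugin?.authConfig?.map((auth) => auth.authField));

// Toolkits are the manifest entries flagged with `toolkit: true`.
console.log(toolkits.map((toolkit) => toolkit.pluginKey));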
View File

@@ -49,7 +49,7 @@
"pluginKey": "image_gen_oai",
"toolkit": true,
"description": "Image Generation and Editing using OpenAI's latest state-of-the-art models",
"icon": "assets/image_gen_oai.png",
"icon": "/assets/image_gen_oai.png",
"authConfig": [
{
"authField": "IMAGE_GEN_OAI_API_KEY",
@@ -75,7 +75,7 @@
"name": "Browser",
"pluginKey": "web-browser",
"description": "Scrape and summarize webpage data",
"icon": "assets/web-browser.svg",
"icon": "/assets/web-browser.svg",
"authConfig": [
{
"authField": "OPENAI_API_KEY",
@@ -170,7 +170,7 @@
"name": "OpenWeather",
"pluginKey": "open_weather",
"description": "Get weather forecasts and historical data from the OpenWeather API",
"icon": "assets/openweather.png",
"icon": "/assets/openweather.png",
"authConfig": [
{
"authField": "OPENWEATHER_API_KEY",

View File

@@ -1,7 +1,7 @@
const { z } = require('zod');
const { Tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { SearchClient, AzureKeyCredential } = require('@azure/search-documents');
const { logger } = require('~/config');
class AzureAISearch extends Tool {
// Constants for default values
@@ -18,7 +18,7 @@ class AzureAISearch extends Tool {
super();
this.name = 'azure-ai-search';
this.description =
"Use the 'azure-ai-search' tool to retrieve search results relevant to your input";
'Use the \'azure-ai-search\' tool to retrieve search results relevant to your input';
/* Used to initialize the Tool without necessary variables. */
this.override = fields.override ?? false;

View File

@@ -1,13 +1,14 @@
const { z } = require('zod');
const path = require('path');
const OpenAI = require('openai');
const fetch = require('node-fetch');
const { v4: uuidv4 } = require('uuid');
const { ProxyAgent, fetch } = require('undici');
const { Tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { getImageBasename } = require('@librechat/api');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { FileContext, ContentTypes } = require('librechat-data-provider');
const { getImageBasename } = require('~/server/services/Files/images');
const extractBaseURL = require('~/utils/extractBaseURL');
const logger = require('~/config/winston');
const displayMessage =
"DALL-E displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";
@@ -45,10 +46,7 @@ class DALLE3 extends Tool {
}
if (process.env.PROXY) {
const proxyAgent = new ProxyAgent(process.env.PROXY);
config.fetchOptions = {
dispatcher: proxyAgent,
};
config.httpAgent = new HttpsProxyAgent(process.env.PROXY);
}
/** @type {OpenAI} */
@@ -165,8 +163,7 @@ Error Message: ${error.message}`);
if (this.isAgent) {
let fetchOptions = {};
if (process.env.PROXY) {
const proxyAgent = new ProxyAgent(process.env.PROXY);
fetchOptions.dispatcher = proxyAgent;
fetchOptions.agent = new HttpsProxyAgent(process.env.PROXY);
}
const imageResponse = await fetch(theImageUrl, fetchOptions);
const arrayBuffer = await imageResponse.arrayBuffer();

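The DALL·E hunks above swap proxy mechanisms: the older code handed an https-proxy-agent instance to the OpenAI client as httpAgent and to node-fetch as agent, while the newer code passes undici's ProxyAgent as a fetchOptions.dispatcher. Both patterns side by side (a sketch; PROXY is assumed to hold a valid proxy URL):

const OpenAI = require('openai');
const { ProxyAgent, fetch } = require('undici');
const { HttpsProxyAgent } = require('https-proxy-agent');

const config = { apiKey: process.env.DALLE_API_KEY };
if (process.env.PROXY) {
  // undici style: a dispatcher consumed by the fetch-based OpenAI SDK
  config.fetchOptions = { dispatcher: new ProxyAgent(process.env.PROXY) };
  // legacy style: an agent for http(s)-based clients
  // config.httpAgent = new HttpsProxyAgent(process.env.PROXY);
}
const openai = new OpenAI(config);

// Plain requests follow the same split:
// undici:     fetch(url, { dispatcher: new ProxyAgent(process.env.PROXY) })
// node-fetch: fetch(url, { agent: new HttpsProxyAgent(process.env.PROXY) })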
View File

@@ -3,12 +3,12 @@ const axios = require('axios');
const fetch = require('node-fetch');
const { v4: uuidv4 } = require('uuid');
const { Tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { FileContext, ContentTypes } = require('librechat-data-provider');
const { logger } = require('~/config');
const displayMessage =
"Flux displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";
'Flux displayed an image. All generated images are already plainly visible, so don\'t repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.';
/**
* FluxAPI - A tool for generating high-quality images from text prompts using the Flux API.

View File

@@ -1,16 +1,69 @@
const { z } = require('zod');
const axios = require('axios');
const { v4 } = require('uuid');
const OpenAI = require('openai');
const FormData = require('form-data');
const { ProxyAgent } = require('undici');
const { tool } = require('@langchain/core/tools');
const { logAxiosError } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { logAxiosError, oaiToolkit } = require('@librechat/api');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { ContentTypes, EImageOutputType } = require('librechat-data-provider');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const extractBaseURL = require('~/utils/extractBaseURL');
const { extractBaseURL } = require('~/utils');
const { getFiles } = require('~/models/File');
/** Default descriptions for image generation tool */
const DEFAULT_IMAGE_GEN_DESCRIPTION = `
Generates high-quality, original images based solely on text, not using any uploaded reference images.
When to use \`image_gen_oai\`:
- To create entirely new images from detailed text descriptions that do NOT reference any image files.
When NOT to use \`image_gen_oai\`:
- If the user has uploaded any images and requests modifications, enhancements, or remixing based on those uploads → use \`image_edit_oai\` instead.
Generated image IDs will be returned in the response, so you can refer to them in future requests made to \`image_edit_oai\`.
`.trim();
/** Default description for image editing tool */
const DEFAULT_IMAGE_EDIT_DESCRIPTION =
`Generates high-quality, original images based on text and one or more uploaded/referenced images.
When to use \`image_edit_oai\`:
- The user wants to modify, extend, or remix one **or more** uploaded images, either:
- Previously generated, or in the current request (both to be included in the \`image_ids\` array).
- Always when the user refers to uploaded images for editing, enhancement, remixing, style transfer, or combining elements.
- Any current or existing images are to be used as visual guides.
- If there are any files in the current request, they are more likely than not expected as references for image edit requests.
When NOT to use \`image_edit_oai\`:
- Brand-new generations that do not rely on an existing image → use \`image_gen_oai\` instead.
Both generated and referenced image IDs will be returned in the response, so you can refer to them in future requests made to \`image_edit_oai\`.
`.trim();
/** Default prompt descriptions */
const DEFAULT_IMAGE_GEN_PROMPT_DESCRIPTION = `Describe the image you want in detail.
Be highly specific—break your idea into layers:
(1) main concept and subject,
(2) composition and position,
(3) lighting and mood,
(4) style, medium, or camera details,
(5) important features (age, expression, clothing, etc.),
(6) background.
Use positive, descriptive language and specify what should be included, not what to avoid.
List number and characteristics of people/objects, and mention style/technical requirements (e.g., "DSLR photo, 85mm lens, golden hour").
Do not reference any uploaded images—use for new image creation from text only.`;
const DEFAULT_IMAGE_EDIT_PROMPT_DESCRIPTION = `Describe the changes, enhancements, or new ideas to apply to the uploaded image(s).
Be highly specific—break your request into layers:
(1) main concept or transformation,
(2) specific edits/replacements or composition guidance,
(3) desired style, mood, or technique,
(4) features/items to keep, change, or add (such as objects, people, clothing, lighting, etc.).
Use positive, descriptive language and clarify what should be included or changed, not what to avoid.
Always base this prompt on the most recently uploaded reference images.`;
const displayMessage =
"The tool displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";
@@ -38,6 +91,22 @@ function returnValue(value) {
return value;
}
const getImageGenDescription = () => {
return process.env.IMAGE_GEN_OAI_DESCRIPTION || DEFAULT_IMAGE_GEN_DESCRIPTION;
};
const getImageEditDescription = () => {
return process.env.IMAGE_EDIT_OAI_DESCRIPTION || DEFAULT_IMAGE_EDIT_DESCRIPTION;
};
const getImageGenPromptDescription = () => {
return process.env.IMAGE_GEN_OAI_PROMPT_DESCRIPTION || DEFAULT_IMAGE_GEN_PROMPT_DESCRIPTION;
};
const getImageEditPromptDescription = () => {
return process.env.IMAGE_EDIT_OAI_PROMPT_DESCRIPTION || DEFAULT_IMAGE_EDIT_PROMPT_DESCRIPTION;
};
function createAbortHandler() {
return function () {
logger.debug('[ImageGenOAI] Image generation aborted');
@@ -52,9 +121,7 @@ function createAbortHandler() {
* @param {string} fields.IMAGE_GEN_OAI_API_KEY - The OpenAI API key
* @param {boolean} [fields.override] - Whether to override the API key check, necessary for app initialization
* @param {MongoFile[]} [fields.imageFiles] - The images to be used for editing
* @param {string} [fields.imageOutputType] - The image output type configuration
* @param {string} [fields.fileStrategy] - The file storage strategy
* @returns {Array<ReturnType<tool>>} - Array of image tools
* @returns {Array} - Array of image tools
*/
function createOpenAIImageTools(fields = {}) {
/** @type {boolean} Used to initialize the Tool without necessary variables. */
@@ -64,8 +131,8 @@ function createOpenAIImageTools(fields = {}) {
throw new Error('This tool is only available for agents.');
}
const { req } = fields;
const imageOutputType = fields.imageOutputType || EImageOutputType.PNG;
const appFileStrategy = fields.fileStrategy;
const imageOutputType = req?.app.locals.imageOutputType || EImageOutputType.PNG;
const appFileStrategy = req?.app.locals.fileStrategy;
const getApiKey = () => {
const apiKey = process.env.IMAGE_GEN_OAI_API_KEY ?? '';
@@ -122,10 +189,7 @@ function createOpenAIImageTools(fields = {}) {
}
const clientConfig = { ...closureConfig };
if (process.env.PROXY) {
const proxyAgent = new ProxyAgent(process.env.PROXY);
clientConfig.fetchOptions = {
dispatcher: proxyAgent,
};
clientConfig.httpAgent = new HttpsProxyAgent(process.env.PROXY);
}
/** @type {OpenAI} */
@@ -218,7 +282,46 @@ Error Message: ${error.message}`);
];
return [response, { content, file_ids }];
},
oaiToolkit.image_gen_oai,
{
name: 'image_gen_oai',
description: getImageGenDescription(),
schema: z.object({
prompt: z.string().max(32000).describe(getImageGenPromptDescription()),
background: z
.enum(['transparent', 'opaque', 'auto'])
.optional()
.describe(
'Sets transparency for the background. Must be one of transparent, opaque or auto (default). When transparent, the output format should be png or webp.',
),
/*
n: z
.number()
.int()
.min(1)
.max(10)
.optional()
.describe('The number of images to generate. Must be between 1 and 10.'),
output_compression: z
.number()
.int()
.min(0)
.max(100)
.optional()
.describe('The compression level (0-100%) for webp or jpeg formats. Defaults to 100.'),
*/
quality: z
.enum(['auto', 'high', 'medium', 'low'])
.optional()
.describe('The quality of the image. One of auto (default), high, medium, or low.'),
size: z
.enum(['auto', '1024x1024', '1536x1024', '1024x1536'])
.optional()
.describe(
'The size of the generated image. One of 1024x1024, 1536x1024 (landscape), 1024x1536 (portrait), or auto (default).',
),
}),
responseFormat: 'content_and_artifact',
},
);
/**
@@ -232,10 +335,7 @@ Error Message: ${error.message}`);
const clientConfig = { ...closureConfig };
if (process.env.PROXY) {
const proxyAgent = new ProxyAgent(process.env.PROXY);
clientConfig.fetchOptions = {
dispatcher: proxyAgent,
};
clientConfig.httpAgent = new HttpsProxyAgent(process.env.PROXY);
}
const formData = new FormData();
@@ -347,19 +447,6 @@ Error Message: ${error.message}`);
baseURL,
};
if (process.env.PROXY) {
try {
const url = new URL(process.env.PROXY);
axiosConfig.proxy = {
host: url.hostname.replace(/^\[|\]$/g, ''),
port: url.port ? parseInt(url.port, 10) : undefined,
protocol: url.protocol.replace(':', ''),
};
} catch (error) {
logger.error('Error parsing proxy URL:', error);
}
}
if (process.env.IMAGE_GEN_OAI_AZURE_API_VERSION && process.env.IMAGE_GEN_OAI_BASEURL) {
axiosConfig.params = {
'api-version': process.env.IMAGE_GEN_OAI_AZURE_API_VERSION,
@@ -411,7 +498,48 @@ Error Message: ${error.message || 'Unknown error'}`);
}
}
},
oaiToolkit.image_edit_oai,
{
name: 'image_edit_oai',
description: getImageEditDescription(),
schema: z.object({
image_ids: z
.array(z.string())
.min(1)
.describe(
`
IDs (image ID strings) of previously generated or uploaded images that should guide the edit.
Guidelines:
- If the user's request depends on any prior image(s), copy their image IDs into the \`image_ids\` array (in the same order the user refers to them).
- Never invent or hallucinate IDs; only use IDs that are still visible in the conversation context.
- If no earlier image is relevant, omit the field entirely.
`.trim(),
),
prompt: z.string().max(32000).describe(getImageEditPromptDescription()),
/*
n: z
.number()
.int()
.min(1)
.max(10)
.optional()
.describe('The number of images to generate. Must be between 1 and 10. Defaults to 1.'),
*/
quality: z
.enum(['auto', 'high', 'medium', 'low'])
.optional()
.describe(
'The quality of the image. One of auto (default), high, medium, or low. High/medium/low only supported for gpt-image-1.',
),
size: z
.enum(['auto', '1024x1024', '1536x1024', '1024x1536', '256x256', '512x512'])
.optional()
.describe(
'The size of the generated images. For gpt-image-1: auto (default), 1024x1024, 1536x1024, 1024x1536. For dall-e-2: 256x256, 512x512, 1024x1024.',
),
}),
responseFormat: 'content_and_artifact',
},
);
return [imageGenTool, imageEditTool];

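One behavior worth noting from the hunk above: the tool and prompt descriptions for image_gen_oai / image_edit_oai fall back to built-in defaults but can be overridden per deployment with environment variables (on the other side of the compare, the schemas come prepackaged from oaiToolkit instead). A short sketch of the override pattern, with an abbreviated default:

// Pattern used by getImageGenDescription() and its siblings.
const DEFAULT_IMAGE_GEN_DESCRIPTION = 'Generates high-quality, original images from text.';

const getImageGenDescription = () =>
  process.env.IMAGE_GEN_OAI_DESCRIPTION || DEFAULT_IMAGE_GEN_DESCRIPTION;

// e.g. IMAGE_GEN_OAI_DESCRIPTION="Use only for marketing mockups" node app.js
console.log(getImageGenDescription());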
View File

@@ -6,19 +6,19 @@ const axios = require('axios');
const sharp = require('sharp');
const { v4: uuidv4 } = require('uuid');
const { Tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { FileContext, ContentTypes } = require('librechat-data-provider');
const paths = require('~/config/paths');
const { logger } = require('~/config');
const displayMessage =
"Stable Diffusion displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";
'Stable Diffusion displayed an image. All generated images are already plainly visible, so don\'t repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.';
class StableDiffusionAPI extends Tool {
constructor(fields) {
super();
/** @type {string} User ID */
this.userId = fields.userId;
/** @type {ServerRequest | undefined} Express Request object, only provided by ToolService */
/** @type {Express.Request | undefined} Express Request object, only provided by ToolService */
this.req = fields.req;
/** @type {boolean} Used to initialize the Tool without necessary variables. */
this.override = fields.override ?? false;
@@ -44,7 +44,7 @@ class StableDiffusionAPI extends Tool {
// "negative_prompt":"semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, out of frame, low quality, ugly, mutation, deformed"
// - Generate images only once per human query unless explicitly requested by the user`;
this.description =
"You can generate images using text with 'stable-diffusion'. This tool is exclusively for visual content.";
'You can generate images using text with \'stable-diffusion\'. This tool is exclusively for visual content.';
this.schema = z.object({
prompt: z
.string()

View File

@@ -1,7 +1,7 @@
const { z } = require('zod');
const { Tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { getEnvironmentVariable } = require('@langchain/core/utils/env');
const { logger } = require('~/config');
/**
* Tool for the Traversaal AI search API, Ares.
@@ -21,7 +21,7 @@ class TraversaalSearch extends Tool {
query: z
.string()
.describe(
"A properly written sentence to be interpreted by an AI to search the web according to the user's request.",
'A properly written sentence to be interpreted by an AI to search the web according to the user\'s request.',
),
});
@@ -38,6 +38,7 @@ class TraversaalSearch extends Tool {
return apiKey;
}
// eslint-disable-next-line no-unused-vars
async _call({ query }, _runManager) {
const body = {
query: [query],

View File

@@ -1,8 +1,8 @@
/* eslint-disable no-useless-escape */
const { z } = require('zod');
const axios = require('axios');
const { z } = require('zod');
const { Tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { logger } = require('~/config');
class WolframAlphaAPI extends Tool {
constructor(fields) {

View File

@@ -1,9 +1,9 @@
const { ytToolkit } = require('@librechat/api');
const { z } = require('zod');
const { tool } = require('@langchain/core/tools');
const { youtube } = require('@googleapis/youtube');
const { logger } = require('@librechat/data-schemas');
const { YoutubeTranscript } = require('youtube-transcript');
const { getApiKey } = require('./credentials');
const { logger } = require('~/config');
function extractVideoId(url) {
const rawIdRegex = /^[a-zA-Z0-9_-]{11}$/;
@@ -29,7 +29,7 @@ function parseTranscript(transcriptResponse) {
.map((entry) => entry.text.trim())
.filter((text) => text)
.join(' ')
.replaceAll('&amp;#39;', "'");
.replaceAll('&amp;#39;', '\'');
}
function createYouTubeTools(fields = {}) {
@@ -42,94 +42,160 @@ function createYouTubeTools(fields = {}) {
auth: apiKey,
});
const searchTool = tool(async ({ query, maxResults = 5 }) => {
const response = await youtubeClient.search.list({
part: 'snippet',
q: query,
type: 'video',
maxResults: maxResults || 5,
});
const result = response.data.items.map((item) => ({
title: item.snippet.title,
description: item.snippet.description,
url: `https://www.youtube.com/watch?v=${item.id.videoId}`,
}));
return JSON.stringify(result, null, 2);
}, ytToolkit.youtube_search);
const searchTool = tool(
async ({ query, maxResults = 5 }) => {
const response = await youtubeClient.search.list({
part: 'snippet',
q: query,
type: 'video',
maxResults: maxResults || 5,
});
const result = response.data.items.map((item) => ({
title: item.snippet.title,
description: item.snippet.description,
url: `https://www.youtube.com/watch?v=${item.id.videoId}`,
}));
return JSON.stringify(result, null, 2);
},
{
name: 'youtube_search',
description: `Search for YouTube videos by keyword or phrase.
- Required: query (search terms to find videos)
- Optional: maxResults (number of videos to return, 1-50, default: 5)
- Returns: List of videos with titles, descriptions, and URLs
- Use for: Finding specific videos, exploring content, research
Example: query="cooking pasta tutorials" maxResults=3`,
schema: z.object({
query: z.string().describe('Search query terms'),
maxResults: z.number().int().min(1).max(50).optional().describe('Number of results (1-50)'),
}),
},
);
const infoTool = tool(async ({ url }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
const infoTool = tool(
async ({ url }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
const response = await youtubeClient.videos.list({
part: 'snippet,statistics',
id: videoId,
});
const response = await youtubeClient.videos.list({
part: 'snippet,statistics',
id: videoId,
});
if (!response.data.items?.length) {
throw new Error('Video not found');
}
const video = response.data.items[0];
if (!response.data.items?.length) {
throw new Error('Video not found');
}
const video = response.data.items[0];
const result = {
title: video.snippet.title,
description: video.snippet.description,
views: video.statistics.viewCount,
likes: video.statistics.likeCount,
comments: video.statistics.commentCount,
};
return JSON.stringify(result, null, 2);
}, ytToolkit.youtube_info);
const result = {
title: video.snippet.title,
description: video.snippet.description,
views: video.statistics.viewCount,
likes: video.statistics.likeCount,
comments: video.statistics.commentCount,
};
return JSON.stringify(result, null, 2);
},
{
name: 'youtube_info',
description: `Get detailed metadata and statistics for a specific YouTube video.
- Required: url (full YouTube URL or video ID)
- Returns: Video title, description, view count, like count, comment count
- Use for: Getting video metrics and basic metadata
- DO NOT USE FOR VIDEO SUMMARIES, USE TRANSCRIPTS FOR COMPREHENSIVE ANALYSIS
- Accepts both full URLs and video IDs
Example: url="https://youtube.com/watch?v=abc123" or url="abc123"`,
schema: z.object({
url: z.string().describe('YouTube video URL or ID'),
}),
},
);
const commentsTool = tool(async ({ url, maxResults = 10 }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
const commentsTool = tool(
async ({ url, maxResults = 10 }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
const response = await youtubeClient.commentThreads.list({
part: 'snippet',
videoId,
maxResults: maxResults || 10,
});
const response = await youtubeClient.commentThreads.list({
part: 'snippet',
videoId,
maxResults: maxResults || 10,
});
const result = response.data.items.map((item) => ({
author: item.snippet.topLevelComment.snippet.authorDisplayName,
text: item.snippet.topLevelComment.snippet.textDisplay,
likes: item.snippet.topLevelComment.snippet.likeCount,
}));
return JSON.stringify(result, null, 2);
}, ytToolkit.youtube_comments);
const result = response.data.items.map((item) => ({
author: item.snippet.topLevelComment.snippet.authorDisplayName,
text: item.snippet.topLevelComment.snippet.textDisplay,
likes: item.snippet.topLevelComment.snippet.likeCount,
}));
return JSON.stringify(result, null, 2);
},
{
name: 'youtube_comments',
description: `Retrieve top-level comments from a YouTube video.
- Required: url (full YouTube URL or video ID)
- Optional: maxResults (number of comments, 1-50, default: 10)
- Returns: Comment text, author names, like counts
- Use for: Sentiment analysis, audience feedback, engagement review
Example: url="abc123" maxResults=20`,
schema: z.object({
url: z.string().describe('YouTube video URL or ID'),
maxResults: z
.number()
.int()
.min(1)
.max(50)
.optional()
.describe('Number of comments to retrieve'),
}),
},
);
const transcriptTool = tool(async ({ url }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
try {
try {
const transcript = await YoutubeTranscript.fetchTranscript(videoId, { lang: 'en' });
return parseTranscript(transcript);
} catch (e) {
logger.error(e);
const transcriptTool = tool(
async ({ url }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
try {
const transcript = await YoutubeTranscript.fetchTranscript(videoId, { lang: 'de' });
return parseTranscript(transcript);
} catch (e) {
logger.error(e);
}
try {
const transcript = await YoutubeTranscript.fetchTranscript(videoId, { lang: 'en' });
return parseTranscript(transcript);
} catch (e) {
logger.error(e);
}
const transcript = await YoutubeTranscript.fetchTranscript(videoId);
return parseTranscript(transcript);
} catch (error) {
throw new Error(`Failed to fetch transcript: ${error.message}`);
}
}, ytToolkit.youtube_transcript);
try {
const transcript = await YoutubeTranscript.fetchTranscript(videoId, { lang: 'de' });
return parseTranscript(transcript);
} catch (e) {
logger.error(e);
}
const transcript = await YoutubeTranscript.fetchTranscript(videoId);
return parseTranscript(transcript);
} catch (error) {
throw new Error(`Failed to fetch transcript: ${error.message}`);
}
},
{
name: 'youtube_transcript',
description: `Fetch and parse the transcript/captions of a YouTube video.
- Required: url (full YouTube URL or video ID)
- Returns: Full video transcript as plain text
- Use for: Content analysis, summarization, translation reference
- This is the "Go-to" tool for analyzing actual video content
- Attempts to fetch English first, then German, then any available language
Example: url="https://youtube.com/watch?v=abc123"`,
schema: z.object({
url: z.string().describe('YouTube video URL or ID'),
}),
},
);
return [searchTool, infoTool, commentsTool, transcriptTool];
}

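Both versions of the transcript tool above implement a cascading language fallback with YoutubeTranscript: try one locale, log the failure and fall through, then fetch without a language hint as a last resort. Distilled (order shown as en, then de, then default, per the tool description; the two sides of the compare order the first two differently):

const { YoutubeTranscript } = require('youtube-transcript');

async function fetchTranscriptWithFallback(videoId, logger = console) {
  for (const lang of ['en', 'de']) {
    try {
      return await YoutubeTranscript.fetchTranscript(videoId, { lang });
    } catch (e) {
      logger.error(e); // fall through to the next language
    }
  }
  // Last resort: whatever caption track YouTube offers.
  return YoutubeTranscript.fetchTranscript(videoId);
}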
View File

@@ -1,60 +0,0 @@
const DALLE3 = require('../DALLE3');
const { ProxyAgent } = require('undici');
jest.mock('tiktoken');
const processFileURL = jest.fn();
describe('DALLE3 Proxy Configuration', () => {
let originalEnv;
beforeAll(() => {
originalEnv = { ...process.env };
});
beforeEach(() => {
jest.resetModules();
process.env = { ...originalEnv };
});
afterEach(() => {
process.env = originalEnv;
});
it('should configure ProxyAgent in fetchOptions.dispatcher when PROXY env is set', () => {
// Set proxy environment variable
process.env.PROXY = 'http://proxy.example.com:8080';
process.env.DALLE_API_KEY = 'test-api-key';
// Create instance
const dalleWithProxy = new DALLE3({ processFileURL });
// Check that the openai client exists
expect(dalleWithProxy.openai).toBeDefined();
// Check that _options exists and has fetchOptions with a dispatcher
expect(dalleWithProxy.openai._options).toBeDefined();
expect(dalleWithProxy.openai._options.fetchOptions).toBeDefined();
expect(dalleWithProxy.openai._options.fetchOptions.dispatcher).toBeDefined();
expect(dalleWithProxy.openai._options.fetchOptions.dispatcher).toBeInstanceOf(ProxyAgent);
});
it('should not configure ProxyAgent when PROXY env is not set', () => {
// Ensure PROXY is not set
delete process.env.PROXY;
process.env.DALLE_API_KEY = 'test-api-key';
// Create instance
const dalleWithoutProxy = new DALLE3({ processFileURL });
// Check that the openai client exists
expect(dalleWithoutProxy.openai).toBeDefined();
// Check that _options exists but fetchOptions either doesn't exist or doesn't have a dispatcher
expect(dalleWithoutProxy.openai._options).toBeDefined();
// fetchOptions should either not exist or not have a dispatcher
if (dalleWithoutProxy.openai._options.fetchOptions) {
expect(dalleWithoutProxy.openai._options.fetchOptions.dispatcher).toBeUndefined();
}
});
});

View File

@@ -1,8 +1,9 @@
const OpenAI = require('openai');
const { logger } = require('@librechat/data-schemas');
const DALLE3 = require('../DALLE3');
const logger = require('~/config/winston');
jest.mock('openai');
jest.mock('@librechat/data-schemas', () => {
return {
logger: {
@@ -25,6 +26,25 @@ jest.mock('tiktoken', () => {
const processFileURL = jest.fn();
jest.mock('~/server/services/Files/images', () => ({
getImageBasename: jest.fn().mockImplementation((url) => {
// Split the URL by '/'
const parts = url.split('/');
// Get the last part of the URL
const lastPart = parts.pop();
// Check if the last part of the URL matches the image extension regex
const imageExtensionRegex = /\.(jpg|jpeg|png|gif|bmp|tiff|svg)$/i;
if (imageExtensionRegex.test(lastPart)) {
return lastPart;
}
// If the regex test fails, return an empty string
return '';
}),
}));
const generate = jest.fn();
OpenAI.mockImplementation(() => ({
images: {

View File

@@ -2,9 +2,9 @@ const { z } = require('zod');
const axios = require('axios');
const { tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { generateShortLivedToken } = require('@librechat/api');
const { Tools, EToolResources } = require('librechat-data-provider');
const { filterFilesByAgentAccess } = require('~/server/services/Files/permissions');
const { generateShortLivedToken } = require('~/server/services/AuthService');
const { getFiles } = require('~/models/File');
/**
@@ -30,12 +30,7 @@ const primeFiles = async (options) => {
// Filter by access if user and agent are provided
let dbFiles;
if (req?.user?.id && agentId) {
dbFiles = await filterFilesByAgentAccess({
files: allFiles,
userId: req.user.id,
role: req.user.role,
agentId,
});
dbFiles = await filterFilesByAgentAccess(allFiles, req.user.id, agentId);
} else {
dbFiles = allFiles;
}
@@ -68,19 +63,18 @@ const primeFiles = async (options) => {
/**
*
* @param {Object} options
* @param {string} options.userId
* @param {ServerRequest} options.req
* @param {Array<{ file_id: string; filename: string }>} options.files
* @param {string} [options.entity_id]
* @param {boolean} [options.fileCitations=false] - Whether to include citation instructions
* @returns
*/
const createFileSearchTool = async ({ userId, files, entity_id, fileCitations = false }) => {
const createFileSearchTool = async ({ req, files, entity_id }) => {
return tool(
async ({ query }) => {
if (files.length === 0) {
return 'No files to search. Instruct the user to add files for the search.';
}
const jwtToken = generateShortLivedToken(userId);
const jwtToken = generateShortLivedToken(req.user.id);
if (!jwtToken) {
return 'There was an error authenticating the file search request.';
}
@@ -126,13 +120,11 @@ const createFileSearchTool = async ({ userId, files, entity_id, fileCitations =
}
const formattedResults = validResults
.flatMap((result, fileIndex) =>
.flatMap((result) =>
result.data.map(([docInfo, distance]) => ({
filename: docInfo.metadata.source.split('/').pop(),
content: docInfo.page_content,
distance,
file_id: files[fileIndex]?.file_id,
page: docInfo.metadata.page || null,
})),
)
// TODO: results should be sorted by relevance, not distance
@@ -142,41 +134,18 @@ const createFileSearchTool = async ({ userId, files, entity_id, fileCitations =
const formattedString = formattedResults
.map(
(result, index) =>
`File: ${result.filename}${
fileCitations ? `\nAnchor: \\ue202turn0file${index} (${result.filename})` : ''
}\nRelevance: ${(1.0 - result.distance).toFixed(4)}\nContent: ${result.content}\n`,
(result) =>
`File: ${result.filename}\nRelevance: ${1.0 - result.distance.toFixed(4)}\nContent: ${
result.content
}\n`,
)
.join('\n---\n');
const sources = formattedResults.map((result) => ({
type: 'file',
fileId: result.file_id,
content: result.content,
fileName: result.filename,
relevance: 1.0 - result.distance,
pages: result.page ? [result.page] : [],
pageRelevance: result.page ? { [result.page]: 1.0 - result.distance } : {},
}));
return [formattedString, { [Tools.file_search]: { sources, fileCitations } }];
return formattedString;
},
{
name: Tools.file_search,
responseFormat: 'content_and_artifact',
description: `Performs semantic search across attached "${Tools.file_search}" documents using natural language queries. This tool analyzes the content of uploaded files to find relevant information, quotes, and passages that best match your query. Use this to extract specific information or find relevant sections within the available documents.${
fileCitations
? `
**CITE FILE SEARCH RESULTS:**
Use anchor markers immediately after statements derived from file content. Reference the filename in your text:
- File citation: "The document.pdf states that... \\ue202turn0file0"
- Page reference: "According to report.docx... \\ue202turn0file1"
- Multi-file: "Multiple sources confirm... \\ue200\\ue202turn0file0\\ue202turn0file1\\ue201"
**ALWAYS mention the filename in your text before the citation marker. NEVER use markdown links or footnotes.**`
: ''
}`,
description: `Performs semantic search across attached "${Tools.file_search}" documents using natural language queries. This tool analyzes the content of uploaded files to find relevant information, quotes, and passages that best match your query. Use this to extract specific information or find relevant sections within the available documents.`,
schema: z.object({
query: z
.string()

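A subtle fix visible in the fileSearch hunk: the old template computed `1.0 - result.distance.toFixed(4)`, which formats the distance to a string first and relies on numeric coercion, rounding the distance before subtracting. The new code parenthesizes first, `(1.0 - result.distance).toFixed(4)`, so the subtraction happens on the raw number and only the final relevance is formatted. A two-line demonstration:

const distance = 0.1234567;
console.log(1.0 - distance.toFixed(4));   // rounds the distance first, then subtracts (number result)
console.log((1.0 - distance).toFixed(4)); // subtracts first, then formats (fixed-width string)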
View File

@@ -1,5 +1,5 @@
const OpenAI = require('openai');
const { logger } = require('@librechat/data-schemas');
const { logger } = require('~/config');
/**
* Handles errors that may occur when making requests to OpenAI's API.

View File

@@ -1,21 +1,9 @@
const { logger } = require('@librechat/data-schemas');
const { SerpAPI } = require('@langchain/community/tools/serpapi');
const { Calculator } = require('@langchain/community/tools/calculator');
const { mcpToolPattern, loadWebSearchAuth } = require('@librechat/api');
const { EnvVar, createCodeExecutionTool, createSearchTool } = require('@librechat/agents');
const {
checkAccess,
createSafeUser,
mcpToolPattern,
loadWebSearchAuth,
} = require('@librechat/api');
const {
Tools,
Constants,
Permissions,
EToolResources,
PermissionTypes,
replaceSpecialVars,
} = require('librechat-data-provider');
const { Tools, EToolResources, replaceSpecialVars } = require('librechat-data-provider');
const {
availableTools,
manifestToolMap,
@@ -36,10 +24,9 @@ const {
const { primeFiles: primeCodeFiles } = require('~/server/services/Files/Code/process');
const { createFileSearchTool, primeFiles: primeSearchFiles } = require('./fileSearch');
const { getUserPluginAuthValue } = require('~/server/services/PluginService');
const { createMCPTool, createMCPTools } = require('~/server/services/MCP');
const { loadAuthValues } = require('~/server/services/Tools/credentials');
const { getMCPServerTools } = require('~/server/services/Config');
const { getRoleByName } = require('~/models/Role');
const { getCachedTools } = require('~/server/services/Config');
const { createMCPTool } = require('~/server/services/MCP');
/**
* Validates the availability and authentication of tools for a user based on environment variables or user-specific plugin authentication values.
@@ -134,37 +121,27 @@ const getAuthFields = (toolKey) => {
/**
*
* @param {object} params
* @param {string} params.user
* @param {Record<string, Record<string, string>>} [params.userMCPAuthMap]
* @param {AbortSignal} [params.signal]
* @param {Pick<Agent, 'id' | 'provider' | 'model'>} [params.agent]
* @param {string} [params.model]
* @param {EModelEndpoint} [params.endpoint]
* @param {LoadToolOptions} [params.options]
* @param {boolean} [params.useSpecs]
* @param {Array<string>} params.tools
* @param {boolean} [params.functions]
* @param {boolean} [params.returnMap]
* @param {AppConfig['webSearch']} [params.webSearch]
* @param {AppConfig['fileStrategy']} [params.fileStrategy]
* @param {AppConfig['imageOutputType']} [params.imageOutputType]
* @param {object} object
* @param {string} object.user
* @param {Pick<Agent, 'id' | 'provider' | 'model'>} [object.agent]
* @param {string} [object.model]
* @param {EModelEndpoint} [object.endpoint]
* @param {LoadToolOptions} [object.options]
* @param {boolean} [object.useSpecs]
* @param {Array<string>} object.tools
* @param {boolean} [object.functions]
* @param {boolean} [object.returnMap]
* @returns {Promise<{ loadedTools: Tool[], toolContextMap: Object<string, any> } | Record<string,Tool>>}
*/
const loadTools = async ({
user,
agent,
model,
signal,
endpoint,
userMCPAuthMap,
tools = [],
options = {},
functions = true,
returnMap = false,
webSearch,
fileStrategy,
imageOutputType,
}) => {
const toolConstructors = {
flux: FluxAPI,
@@ -223,8 +200,6 @@ const loadTools = async ({
...authValues,
isAgent: !!agent,
req: options.req,
imageOutputType,
fileStrategy,
imageFiles,
});
},
@@ -240,7 +215,7 @@ const loadTools = async ({
const imageGenOptions = {
isAgent: !!agent,
req: options.req,
fileStrategy,
fileStrategy: options.fileStrategy,
processFileURL: options.processFileURL,
returnMetadata: options.returnMetadata,
uploadImageBuffer: options.uploadImageBuffer,
@@ -255,7 +230,7 @@ const loadTools = async ({
/** @type {Record<string, string>} */
const toolContextMap = {};
const requestedMCPTools = {};
const cachedTools = (await getCachedTools({ userId: user, includeGlobal: true })) ?? {};
for (const tool of tools) {
if (tool === Tools.execute_code) {
@@ -293,36 +268,15 @@ const loadTools = async ({
if (toolContext) {
toolContextMap[tool] = toolContext;
}
/** @type {boolean | undefined} Check if user has FILE_CITATIONS permission */
let fileCitations;
if (fileCitations == null && options.req?.user != null) {
try {
fileCitations = await checkAccess({
user: options.req.user,
permissionType: PermissionTypes.FILE_CITATIONS,
permissions: [Permissions.USE],
getRoleByName,
});
} catch (error) {
logger.error('[handleTools] FILE_CITATIONS permission check failed:', error);
fileCitations = false;
}
}
return createFileSearchTool({
userId: user,
files,
entity_id: agent?.id,
fileCitations,
});
return createFileSearchTool({ req: options.req, files, entity_id: agent?.id });
};
continue;
} else if (tool === Tools.web_search) {
const webSearchConfig = options?.req?.app?.locals?.webSearch;
const result = await loadWebSearchAuth({
userId: user,
loadAuthValues,
webSearchConfig: webSearch,
webSearchConfig,
});
const { onSearchResults, onGetHighlights } = options?.[Tools.web_search] ?? {};
requestedTools[tool] = async () => {
@@ -344,34 +298,15 @@ Current Date & Time: ${replaceSpecialVars({ text: '{{iso_datetime}}' })}
});
};
continue;
} else if (tool && mcpToolPattern.test(tool)) {
const [toolName, serverName] = tool.split(Constants.mcp_delimiter);
if (toolName === Constants.mcp_server) {
/** Placeholder used for UI purposes */
continue;
}
if (serverName && options.req?.config?.mcpConfig?.[serverName] == null) {
logger.warn(
`MCP server "${serverName}" for "${toolName}" tool is not configured${agent?.id != null && agent.id ? ` but attached to "${agent.id}"` : ''}`,
);
continue;
}
if (toolName === Constants.mcp_all) {
requestedMCPTools[serverName] = [
{
type: 'all',
serverName,
},
];
continue;
}
requestedMCPTools[serverName] = requestedMCPTools[serverName] || [];
requestedMCPTools[serverName].push({
type: 'single',
toolKey: tool,
serverName,
});
} else if (tool && cachedTools && mcpToolPattern.test(tool)) {
requestedTools[tool] = async () =>
createMCPTool({
req: options.req,
res: options.res,
toolKey: tool,
model: agent?.model ?? model,
provider: agent?.provider ?? endpoint,
});
continue;
}
@@ -411,75 +346,6 @@ Current Date & Time: ${replaceSpecialVars({ text: '{{iso_datetime}}' })}
}
const loadedTools = (await Promise.all(toolPromises)).flatMap((plugin) => plugin || []);
const mcpToolPromises = [];
/** MCP server tools are initialized sequentially by server */
let index = -1;
const failedMCPServers = new Set();
const safeUser = createSafeUser(options.req?.user);
for (const [serverName, toolConfigs] of Object.entries(requestedMCPTools)) {
index++;
/** @type {LCAvailableTools} */
let availableTools;
for (const config of toolConfigs) {
try {
if (failedMCPServers.has(serverName)) {
continue;
}
const mcpParams = {
index,
signal,
user: safeUser,
userMCPAuthMap,
res: options.res,
model: agent?.model ?? model,
serverName: config.serverName,
provider: agent?.provider ?? endpoint,
};
if (config.type === 'all' && toolConfigs.length === 1) {
/** Handle async loading for single 'all' tool config */
mcpToolPromises.push(
createMCPTools(mcpParams).catch((error) => {
logger.error(`Error loading ${serverName} tools:`, error);
return null;
}),
);
continue;
}
if (!availableTools) {
try {
availableTools = await getMCPServerTools(serverName);
} catch (error) {
logger.error(`Error fetching available tools for MCP server ${serverName}:`, error);
}
}
/** Handle synchronous loading */
const mcpTool =
config.type === 'all'
? await createMCPTools(mcpParams)
: await createMCPTool({
...mcpParams,
availableTools,
toolKey: config.toolKey,
});
if (Array.isArray(mcpTool)) {
loadedTools.push(...mcpTool);
} else if (mcpTool) {
loadedTools.push(mcpTool);
} else {
failedMCPServers.add(serverName);
logger.warn(
`MCP tool creation failed for "${config.toolKey}"; the server may be unavailable or unauthenticated.`,
);
}
} catch (error) {
logger.error(`Error loading MCP tool for server ${serverName}:`, error);
}
}
}
loadedTools.push(...(await Promise.all(mcpToolPromises)).flatMap((plugin) => plugin || []));
return { loadedTools, toolContextMap };
};
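
Note: a minimal sketch of calling the updated loadTools signature documented above; the require path, the wrapper function, and the appConfig source are assumptions inferred from this diff, not verified repo code.

// Sketch only — wiring for the new loadTools params ('~/app/clients/tools/util/handleTools' assumed).
const { loadTools } = require('~/app/clients/tools/util/handleTools');

async function buildAgentTools({ req, res, agent, appConfig, signal }) {
  const { loadedTools, toolContextMap } = await loadTools({
    user: req.user.id,
    agent,
    signal,
    tools: agent.tools ?? [],
    options: { req, res },
    webSearch: appConfig?.webSearch, // AppConfig['webSearch']
    fileStrategy: appConfig?.fileStrategy, // AppConfig['fileStrategy']
    imageOutputType: appConfig?.imageOutputType, // AppConfig['imageOutputType']
  });
  return { loadedTools, toolContextMap };
}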

View File

@@ -9,27 +9,7 @@ const mockPluginService = {
jest.mock('~/server/services/PluginService', () => mockPluginService);
jest.mock('~/server/services/Config', () => ({
getAppConfig: jest.fn().mockResolvedValue({
// Default app config for tool tests
paths: { uploads: '/tmp' },
fileStrategy: 'local',
filteredTools: [],
includedTools: [],
}),
getCachedTools: jest.fn().mockResolvedValue({
// Default cached tools for tests
dalle: {
type: 'function',
function: {
name: 'dalle',
description: 'DALL-E image generation',
parameters: {},
},
},
}),
}));
const { BaseLLM } = require('@langchain/openai');
const { Calculator } = require('@langchain/community/tools/calculator');
const { User } = require('~/db/models');
@@ -171,6 +151,7 @@ describe('Tool Handlers', () => {
beforeAll(async () => {
const toolMap = await loadTools({
user: fakeUser._id,
model: BaseLLM,
tools: sampleTools,
returnMap: true,
useSpecs: true,
@@ -264,6 +245,7 @@ describe('Tool Handlers', () => {
it('returns an empty object when no tools are requested', async () => {
toolFunctions = await loadTools({
user: fakeUser._id,
model: BaseLLM,
returnMap: true,
useSpecs: true,
});
@@ -273,6 +255,7 @@ describe('Tool Handlers', () => {
process.env.SD_WEBUI_URL = mockCredential;
toolFunctions = await loadTools({
user: fakeUser._id,
model: BaseLLM,
tools: ['stable-diffusion'],
functions: true,
returnMap: true,

33
api/cache/cacheConfig.js vendored Normal file
View File

@@ -0,0 +1,33 @@
const fs = require('fs');
const { math, isEnabled } = require('@librechat/api');
// To ensure that different deployments do not interfere with each other's cache, we use a prefix for the Redis keys.
// This prefix is usually the deployment ID, which is often passed to the container or pod as an env var.
// Set REDIS_KEY_PREFIX_VAR to the env var that contains the deployment ID.
const REDIS_KEY_PREFIX_VAR = process.env.REDIS_KEY_PREFIX_VAR;
const REDIS_KEY_PREFIX = process.env.REDIS_KEY_PREFIX;
if (REDIS_KEY_PREFIX_VAR && REDIS_KEY_PREFIX) {
throw new Error('Only one of REDIS_KEY_PREFIX_VAR or REDIS_KEY_PREFIX can be set.');
}
const USE_REDIS = isEnabled(process.env.USE_REDIS);
if (USE_REDIS && !process.env.REDIS_URI) {
throw new Error('USE_REDIS is enabled but REDIS_URI is not set.');
}
const cacheConfig = {
USE_REDIS,
REDIS_URI: process.env.REDIS_URI,
REDIS_USERNAME: process.env.REDIS_USERNAME,
REDIS_PASSWORD: process.env.REDIS_PASSWORD,
REDIS_CA: process.env.REDIS_CA ? fs.readFileSync(process.env.REDIS_CA, 'utf8') : null,
REDIS_KEY_PREFIX: process.env[REDIS_KEY_PREFIX_VAR] || REDIS_KEY_PREFIX || '',
REDIS_MAX_LISTENERS: math(process.env.REDIS_MAX_LISTENERS, 40),
CI: isEnabled(process.env.CI),
DEBUG_MEMORY_CACHE: isEnabled(process.env.DEBUG_MEMORY_CACHE),
BAN_DURATION: math(process.env.BAN_DURATION, 7200000), // 2 hours
};
module.exports = { cacheConfig };
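
Note: a minimal sketch of the prefix resolution above, assuming a hypothetical DEPLOYMENT_ID env var injected by the orchestrator (the spec below exercises the same behavior):

// DEPLOYMENT_ID is hypothetical; any env var name works.
process.env.REDIS_KEY_PREFIX_VAR = 'DEPLOYMENT_ID';
process.env.DEPLOYMENT_ID = 'pod-1234';

const { cacheConfig } = require('./cacheConfig');
console.log(cacheConfig.REDIS_KEY_PREFIX); // 'pod-1234'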

108
api/cache/cacheConfig.spec.js vendored Normal file
View File

@@ -0,0 +1,108 @@
const fs = require('fs');
describe('cacheConfig', () => {
let originalEnv;
let originalReadFileSync;
beforeEach(() => {
originalEnv = { ...process.env };
originalReadFileSync = fs.readFileSync;
// Clear all related env vars first
delete process.env.REDIS_URI;
delete process.env.REDIS_CA;
delete process.env.REDIS_KEY_PREFIX_VAR;
delete process.env.REDIS_KEY_PREFIX;
delete process.env.USE_REDIS;
// Clear require cache
jest.resetModules();
});
afterEach(() => {
process.env = originalEnv;
fs.readFileSync = originalReadFileSync;
jest.resetModules();
});
describe('REDIS_KEY_PREFIX validation and resolution', () => {
test('should throw error when both REDIS_KEY_PREFIX_VAR and REDIS_KEY_PREFIX are set', () => {
process.env.REDIS_KEY_PREFIX_VAR = 'DEPLOYMENT_ID';
process.env.REDIS_KEY_PREFIX = 'manual-prefix';
expect(() => {
require('./cacheConfig');
}).toThrow('Only one of REDIS_KEY_PREFIX_VAR or REDIS_KEY_PREFIX can be set.');
});
test('should resolve REDIS_KEY_PREFIX from variable reference', () => {
process.env.REDIS_KEY_PREFIX_VAR = 'DEPLOYMENT_ID';
process.env.DEPLOYMENT_ID = 'test-deployment-123';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_KEY_PREFIX).toBe('test-deployment-123');
});
test('should use direct REDIS_KEY_PREFIX value', () => {
process.env.REDIS_KEY_PREFIX = 'direct-prefix';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_KEY_PREFIX).toBe('direct-prefix');
});
test('should default to empty string when no prefix is configured', () => {
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_KEY_PREFIX).toBe('');
});
test('should handle empty variable reference', () => {
process.env.REDIS_KEY_PREFIX_VAR = 'EMPTY_VAR';
process.env.EMPTY_VAR = '';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_KEY_PREFIX).toBe('');
});
test('should handle undefined variable reference', () => {
process.env.REDIS_KEY_PREFIX_VAR = 'UNDEFINED_VAR';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_KEY_PREFIX).toBe('');
});
});
describe('USE_REDIS and REDIS_URI validation', () => {
test('should throw error when USE_REDIS is enabled but REDIS_URI is not set', () => {
process.env.USE_REDIS = 'true';
expect(() => {
require('./cacheConfig');
}).toThrow('USE_REDIS is enabled but REDIS_URI is not set.');
});
test('should not throw error when USE_REDIS is enabled and REDIS_URI is set', () => {
process.env.USE_REDIS = 'true';
process.env.REDIS_URI = 'redis://localhost:6379';
expect(() => {
require('./cacheConfig');
}).not.toThrow();
});
test('should handle empty REDIS_URI when USE_REDIS is enabled', () => {
process.env.USE_REDIS = 'true';
process.env.REDIS_URI = '';
expect(() => {
require('./cacheConfig');
}).toThrow('USE_REDIS is enabled but REDIS_URI is not set.');
});
});
describe('REDIS_CA file reading', () => {
test('should be null when REDIS_CA is not set', () => {
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.REDIS_CA).toBeNull();
});
});
});

66
api/cache/cacheFactory.js vendored Normal file
View File

@@ -0,0 +1,66 @@
const KeyvRedis = require('@keyv/redis').default;
const { Keyv } = require('keyv');
const { cacheConfig } = require('./cacheConfig');
const { keyvRedisClient, ioredisClient, GLOBAL_PREFIX_SEPARATOR } = require('./redisClients');
const { Time } = require('librechat-data-provider');
const { RedisStore: ConnectRedis } = require('connect-redis');
const MemoryStore = require('memorystore')(require('express-session'));
const { violationFile } = require('./keyvFiles');
const { RedisStore } = require('rate-limit-redis');
/**
* Creates a cache instance using Redis or a fallback store. Suitable for general caching needs.
* @param {string} namespace - The cache namespace.
* @param {number} [ttl] - Time to live for cache entries.
* @param {object} [fallbackStore] - Optional fallback store if Redis is not used.
* @returns {Keyv} Cache instance.
*/
const standardCache = (namespace, ttl = undefined, fallbackStore = undefined) => {
if (cacheConfig.USE_REDIS) {
const keyvRedis = new KeyvRedis(keyvRedisClient);
const cache = new Keyv(keyvRedis, { namespace, ttl });
keyvRedis.namespace = cacheConfig.REDIS_KEY_PREFIX;
keyvRedis.keyPrefixSeparator = GLOBAL_PREFIX_SEPARATOR;
return cache;
}
if (fallbackStore) return new Keyv({ store: fallbackStore, namespace, ttl });
return new Keyv({ namespace, ttl });
};
/**
* Creates a cache instance for storing violation data.
* Uses a file-based fallback store if Redis is not enabled.
* @param {string} namespace - The cache namespace for violations.
* @param {number} [ttl] - Time to live for cache entries.
* @returns {Keyv} Cache instance for violations.
*/
const violationCache = (namespace, ttl = undefined) => {
return standardCache(`violations:${namespace}`, ttl, violationFile);
};
/**
* Creates a session cache instance using Redis or in-memory store.
* @param {string} namespace - The session namespace.
* @param {number} [ttl] - Time to live for session entries.
* @returns {MemoryStore | ConnectRedis} Session store instance.
*/
const sessionCache = (namespace, ttl = undefined) => {
namespace = namespace.endsWith(':') ? namespace : `${namespace}:`;
if (!cacheConfig.USE_REDIS) return new MemoryStore({ ttl, checkPeriod: Time.ONE_DAY });
return new ConnectRedis({ client: ioredisClient, ttl, prefix: namespace });
};
/**
* Creates a rate limiter cache using Redis.
* @param {string} prefix - The key prefix for rate limiting.
* @returns {RedisStore|undefined} RedisStore instance or undefined if Redis is not used.
*/
const limiterCache = (prefix) => {
if (!prefix) throw new Error('prefix is required');
if (!cacheConfig.USE_REDIS) return undefined;
prefix = prefix.endsWith(':') ? prefix : `${prefix}:`;
return new RedisStore({ sendCommand, prefix });
};
const sendCommand = (...args) => ioredisClient?.call(...args);
module.exports = { standardCache, sessionCache, violationCache, limiterCache };
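
Note: a usage sketch for the four factories above; the namespaces, TTLs, and stored value are illustrative:

const { Time } = require('librechat-data-provider');
const { standardCache, sessionCache, violationCache, limiterCache } = require('./cacheFactory');

const roles = standardCache('roles', Time.ONE_DAY); // Keyv over Redis, or in-memory
const bans = violationCache('bans', Time.ONE_DAY); // file-backed fallback without Redis
const sessions = sessionCache('sessions', Time.ONE_DAY); // MemoryStore unless USE_REDIS
const loginLimiter = limiterCache('login_limit'); // undefined unless USE_REDIS

roles.set('admin', { permBits: 15 }).then(() => roles.get('admin'));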

270
api/cache/cacheFactory.spec.js vendored Normal file
View File

@@ -0,0 +1,270 @@
const { Time } = require('librechat-data-provider');
// Mock dependencies first
const mockKeyvRedis = {
namespace: '',
keyPrefixSeparator: '',
};
const mockKeyv = jest.fn().mockReturnValue({ mock: 'keyv' });
const mockConnectRedis = jest.fn().mockReturnValue({ mock: 'connectRedis' });
const mockMemoryStore = jest.fn().mockReturnValue({ mock: 'memoryStore' });
const mockRedisStore = jest.fn().mockReturnValue({ mock: 'redisStore' });
const mockIoredisClient = {
call: jest.fn(),
};
const mockKeyvRedisClient = {};
const mockViolationFile = {};
// Mock modules before requiring the main module
jest.mock('@keyv/redis', () => ({
default: jest.fn().mockImplementation(() => mockKeyvRedis),
}));
jest.mock('keyv', () => ({
Keyv: mockKeyv,
}));
jest.mock('./cacheConfig', () => ({
cacheConfig: {
USE_REDIS: false,
REDIS_KEY_PREFIX: 'test',
},
}));
jest.mock('./redisClients', () => ({
keyvRedisClient: mockKeyvRedisClient,
ioredisClient: mockIoredisClient,
GLOBAL_PREFIX_SEPARATOR: '::',
}));
jest.mock('./keyvFiles', () => ({
violationFile: mockViolationFile,
}));
jest.mock('connect-redis', () => ({ RedisStore: mockConnectRedis }));
jest.mock('memorystore', () => jest.fn(() => mockMemoryStore));
jest.mock('rate-limit-redis', () => ({
RedisStore: mockRedisStore,
}));
// Import after mocking
const { standardCache, sessionCache, violationCache, limiterCache } = require('./cacheFactory');
const { cacheConfig } = require('./cacheConfig');
describe('cacheFactory', () => {
beforeEach(() => {
jest.clearAllMocks();
// Reset cache config mock
cacheConfig.USE_REDIS = false;
cacheConfig.REDIS_KEY_PREFIX = 'test';
});
describe('standardCache', () => {
it('should create Redis cache when USE_REDIS is true', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'test-namespace';
const ttl = 3600;
standardCache(namespace, ttl);
expect(require('@keyv/redis').default).toHaveBeenCalledWith(mockKeyvRedisClient);
expect(mockKeyv).toHaveBeenCalledWith(mockKeyvRedis, { namespace, ttl });
expect(mockKeyvRedis.namespace).toBe(cacheConfig.REDIS_KEY_PREFIX);
expect(mockKeyvRedis.keyPrefixSeparator).toBe('::');
});
it('should create Redis cache with undefined ttl when not provided', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'test-namespace';
standardCache(namespace);
expect(mockKeyv).toHaveBeenCalledWith(mockKeyvRedis, { namespace, ttl: undefined });
});
it('should use fallback store when USE_REDIS is false and fallbackStore is provided', () => {
cacheConfig.USE_REDIS = false;
const namespace = 'test-namespace';
const ttl = 3600;
const fallbackStore = { some: 'store' };
standardCache(namespace, ttl, fallbackStore);
expect(mockKeyv).toHaveBeenCalledWith({ store: fallbackStore, namespace, ttl });
});
it('should create default Keyv instance when USE_REDIS is false and no fallbackStore', () => {
cacheConfig.USE_REDIS = false;
const namespace = 'test-namespace';
const ttl = 3600;
standardCache(namespace, ttl);
expect(mockKeyv).toHaveBeenCalledWith({ namespace, ttl });
});
it('should handle namespace and ttl as undefined', () => {
cacheConfig.USE_REDIS = false;
standardCache();
expect(mockKeyv).toHaveBeenCalledWith({ namespace: undefined, ttl: undefined });
});
});
describe('violationCache', () => {
it('should create violation cache with prefixed namespace', () => {
const namespace = 'test-violations';
const ttl = 7200;
// We can't easily mock the internal standardCache call since it's in the same module,
// but we can test that the function executes without throwing
expect(() => violationCache(namespace, ttl)).not.toThrow();
});
it('should create violation cache with undefined ttl', () => {
const namespace = 'test-violations';
violationCache(namespace);
// The function should call standardCache with a violations: prefixed namespace
// Since we can't easily mock the internal standardCache call, we test the behavior
expect(() => violationCache(namespace)).not.toThrow();
});
it('should handle undefined namespace', () => {
expect(() => violationCache(undefined)).not.toThrow();
});
});
describe('sessionCache', () => {
it('should return MemoryStore when USE_REDIS is false', () => {
cacheConfig.USE_REDIS = false;
const namespace = 'sessions';
const ttl = 86400;
const result = sessionCache(namespace, ttl);
expect(mockMemoryStore).toHaveBeenCalledWith({ ttl, checkPeriod: Time.ONE_DAY });
expect(result).toBe(mockMemoryStore());
});
it('should return ConnectRedis when USE_REDIS is true', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
const ttl = 86400;
const result = sessionCache(namespace, ttl);
expect(mockConnectRedis).toHaveBeenCalledWith({
client: mockIoredisClient,
ttl,
prefix: `${namespace}:`,
});
expect(result).toBe(mockConnectRedis());
});
it('should add colon to namespace if not present', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions';
sessionCache(namespace);
expect(mockConnectRedis).toHaveBeenCalledWith({
client: mockIoredisClient,
ttl: undefined,
prefix: 'sessions:',
});
});
it('should not add colon to namespace if already present', () => {
cacheConfig.USE_REDIS = true;
const namespace = 'sessions:';
sessionCache(namespace);
expect(mockConnectRedis).toHaveBeenCalledWith({
client: mockIoredisClient,
ttl: undefined,
prefix: 'sessions:',
});
});
it('should handle undefined ttl', () => {
cacheConfig.USE_REDIS = false;
const namespace = 'sessions';
sessionCache(namespace);
expect(mockMemoryStore).toHaveBeenCalledWith({
ttl: undefined,
checkPeriod: Time.ONE_DAY,
});
});
});
describe('limiterCache', () => {
it('should return undefined when USE_REDIS is false', () => {
cacheConfig.USE_REDIS = false;
const result = limiterCache('prefix');
expect(result).toBeUndefined();
});
it('should return RedisStore when USE_REDIS is true', () => {
cacheConfig.USE_REDIS = true;
const result = limiterCache('rate-limit');
expect(mockRedisStore).toHaveBeenCalledWith({
sendCommand: expect.any(Function),
prefix: `rate-limit:`,
});
expect(result).toBe(mockRedisStore());
});
it('should add colon to prefix if not present', () => {
cacheConfig.USE_REDIS = true;
limiterCache('rate-limit');
expect(mockRedisStore).toHaveBeenCalledWith({
sendCommand: expect.any(Function),
prefix: 'rate-limit:',
});
});
it('should not add colon to prefix if already present', () => {
cacheConfig.USE_REDIS = true;
limiterCache('rate-limit:');
expect(mockRedisStore).toHaveBeenCalledWith({
sendCommand: expect.any(Function),
prefix: 'rate-limit:',
});
});
it('should pass sendCommand function that calls ioredisClient.call', () => {
cacheConfig.USE_REDIS = true;
limiterCache('rate-limit');
const sendCommandCall = mockRedisStore.mock.calls[0][0];
const sendCommand = sendCommandCall.sendCommand;
// Test that sendCommand properly delegates to ioredisClient.call
const args = ['GET', 'test-key'];
sendCommand(...args);
expect(mockIoredisClient.call).toHaveBeenCalledWith(...args);
});
it('should handle undefined prefix', () => {
cacheConfig.USE_REDIS = true;
expect(() => limiterCache()).toThrow('prefix is required');
});
});
});

View File

@@ -1,5 +1,5 @@
const { isEnabled } = require('@librechat/api');
const { Time, CacheKeys } = require('librechat-data-provider');
const { isEnabled } = require('~/server/utils');
const getLogStores = require('./getLogStores');
const { USE_REDIS, LIMIT_CONCURRENT_MESSAGES } = process.env ?? {};

View File

@@ -1,13 +1,9 @@
const { cacheConfig } = require('./cacheConfig');
const { Keyv } = require('keyv');
const { Time, CacheKeys, ViolationTypes } = require('librechat-data-provider');
const {
logFile,
keyvMongo,
cacheConfig,
sessionCache,
standardCache,
violationCache,
} = require('@librechat/api');
const { CacheKeys, ViolationTypes, Time } = require('librechat-data-provider');
const { logFile } = require('./keyvFiles');
const keyvMongo = require('./keyvMongo');
const { standardCache, sessionCache, violationCache } = require('./cacheFactory');
const namespaces = {
[ViolationTypes.GENERAL]: new Keyv({ store: logFile, namespace: 'violations' }),
@@ -35,7 +31,7 @@ const namespaces = {
[CacheKeys.SAML_SESSION]: sessionCache(CacheKeys.SAML_SESSION),
[CacheKeys.ROLES]: standardCache(CacheKeys.ROLES),
[CacheKeys.APP_CONFIG]: standardCache(CacheKeys.APP_CONFIG),
[CacheKeys.MCP_TOOLS]: standardCache(CacheKeys.MCP_TOOLS),
[CacheKeys.CONFIG_STORE]: standardCache(CacheKeys.CONFIG_STORE),
[CacheKeys.PENDING_REQ]: standardCache(CacheKeys.PENDING_REQ),
[CacheKeys.ENCODED_DOMAINS]: new Keyv({ store: keyvMongo, namespace: CacheKeys.ENCODED_DOMAINS }),

3
api/cache/index.js vendored
View File

@@ -1,4 +1,5 @@
const keyvFiles = require('./keyvFiles');
const getLogStores = require('./getLogStores');
const logViolation = require('./logViolation');
module.exports = { getLogStores, logViolation };
module.exports = { ...keyvFiles, getLogStores, logViolation };

9
api/cache/keyvFiles.js vendored Normal file
View File

@@ -0,0 +1,9 @@
const { KeyvFile } = require('keyv-file');
const logFile = new KeyvFile({ filename: './data/logs.json' }).setMaxListeners(20);
const violationFile = new KeyvFile({ filename: './data/violations.json' }).setMaxListeners(20);
module.exports = {
logFile,
violationFile,
};

View File

@@ -1,69 +1,65 @@
import mongoose from 'mongoose';
import { EventEmitter } from 'events';
import { GridFSBucket } from 'mongodb';
import { logger } from '@librechat/data-schemas';
import type { Db, ReadPreference, Collection } from 'mongodb';
// api/cache/keyvMongo.js
const mongoose = require('mongoose');
const EventEmitter = require('events');
const { GridFSBucket } = require('mongodb');
const { logger } = require('~/config');
interface KeyvMongoOptions {
url?: string;
collection?: string;
useGridFS?: boolean;
readPreference?: ReadPreference;
}
interface GridFSClient {
bucket: GridFSBucket;
store: Collection;
db: Db;
}
interface CollectionClient {
store: Collection;
db: Db;
}
type Client = GridFSClient | CollectionClient;
const storeMap = new Map<string, Client>();
const storeMap = new Map();
class KeyvMongoCustom extends EventEmitter {
private opts: KeyvMongoOptions;
public ttlSupport: boolean;
public namespace?: string;
constructor(options: KeyvMongoOptions = {}) {
constructor(url, options = {}) {
super();
url = url || {};
if (typeof url === 'string') {
url = { url };
}
if (url.uri) {
url = { url: url.uri, ...url };
}
this.opts = {
url: 'mongodb://127.0.0.1:27017',
collection: 'keyv',
...url,
...options,
};
this.ttlSupport = false;
// Filter valid options
const keyvMongoKeys = new Set([
'url',
'collection',
'namespace',
'serialize',
'deserialize',
'uri',
'useGridFS',
'dialect',
]);
this.opts = Object.fromEntries(Object.entries(this.opts).filter(([k]) => keyvMongoKeys.has(k)));
}
// Helper to access the store WITHOUT storing a promise on the instance
private async _getClient(): Promise<Client> {
_getClient() {
const storeKey = `${this.opts.collection}:${this.opts.useGridFS ? 'gridfs' : 'collection'}`;
// If we already have the store initialized, return it directly
if (storeMap.has(storeKey)) {
return storeMap.get(storeKey)!;
return Promise.resolve(storeMap.get(storeKey));
}
// Check mongoose connection state
if (mongoose.connection.readyState !== 1) {
throw new Error('Mongoose connection not ready. Ensure connectDb() is called first.');
return Promise.reject(
new Error('Mongoose connection not ready. Ensure connectDb() is called first.'),
);
}
try {
const db = mongoose.connection.db as unknown as Db | undefined;
if (!db) {
throw new Error('MongoDB database not available');
}
let client: Client;
const db = mongoose.connection.db;
let client;
if (this.opts.useGridFS) {
const bucket = new GridFSBucket(db, {
@@ -79,17 +75,17 @@ class KeyvMongoCustom extends EventEmitter {
}
storeMap.set(storeKey, client);
return client;
return Promise.resolve(client);
} catch (error) {
this.emit('error', error);
throw error;
return Promise.reject(error);
}
}
async get(key: string): Promise<unknown> {
async get(key) {
const client = await this._getClient();
if (this.opts.useGridFS && this.isGridFSClient(client)) {
if (this.opts.useGridFS) {
await client.store.updateOne(
{
filename: key,
@@ -104,7 +100,7 @@ class KeyvMongoCustom extends EventEmitter {
const stream = client.bucket.openDownloadStreamByName(key);
return new Promise((resolve) => {
const resp: Uint8Array[] = [];
const resp = [];
stream.on('error', () => {
resolve(undefined);
});
@@ -114,7 +110,7 @@ class KeyvMongoCustom extends EventEmitter {
resolve(data);
});
stream.on('data', (chunk: Uint8Array) => {
stream.on('data', (chunk) => {
resp.push(chunk);
});
});
@@ -129,7 +125,7 @@ class KeyvMongoCustom extends EventEmitter {
return document.value;
}
async getMany(keys: string[]): Promise<unknown[]> {
async getMany(keys) {
const client = await this._getClient();
if (this.opts.useGridFS) {
@@ -139,9 +135,9 @@ class KeyvMongoCustom extends EventEmitter {
}
const values = await Promise.allSettled(promises);
const data: unknown[] = [];
const data = [];
for (const value of values) {
data.push(value.status === 'fulfilled' ? value.value : undefined);
data.push(value.value);
}
return data;
@@ -152,7 +148,7 @@ class KeyvMongoCustom extends EventEmitter {
.project({ _id: 0, value: 1, key: 1 })
.toArray();
const results: unknown[] = [...keys];
const results = [...keys];
let i = 0;
for (const key of keys) {
const rowIndex = values.findIndex((row) => row.key === key);
@@ -163,11 +159,11 @@ class KeyvMongoCustom extends EventEmitter {
return results;
}
async set(key: string, value: string, ttl?: number): Promise<unknown> {
async set(key, value, ttl) {
const client = await this._getClient();
const expiresAt = typeof ttl === 'number' ? new Date(Date.now() + ttl) : null;
if (this.opts.useGridFS && this.isGridFSClient(client)) {
if (this.opts.useGridFS) {
const stream = client.bucket.openUploadStream(key, {
metadata: {
expiresAt,
@@ -190,18 +186,20 @@ class KeyvMongoCustom extends EventEmitter {
);
}
async delete(key: string): Promise<boolean> {
async delete(key) {
if (typeof key !== 'string') {
return false;
}
const client = await this._getClient();
if (this.opts.useGridFS && this.isGridFSClient(client)) {
if (this.opts.useGridFS) {
try {
const bucket = new GridFSBucket(client.db, {
bucketName: this.opts.collection,
});
const files = await bucket.find({ filename: key }).toArray();
if (files.length > 0) {
await client.bucket.delete(files[0]._id);
}
await client.bucket.delete(files[0]._id);
return true;
} catch {
return false;
@@ -212,10 +210,10 @@ class KeyvMongoCustom extends EventEmitter {
return object.deletedCount > 0;
}
async deleteMany(keys: string[]): Promise<boolean> {
async deleteMany(keys) {
const client = await this._getClient();
if (this.opts.useGridFS && this.isGridFSClient(client)) {
if (this.opts.useGridFS) {
const bucket = new GridFSBucket(client.db, {
bucketName: this.opts.collection,
});
@@ -232,17 +230,15 @@ class KeyvMongoCustom extends EventEmitter {
return object.deletedCount > 0;
}
async clear(): Promise<void> {
async clear() {
const client = await this._getClient();
if (this.opts.useGridFS && this.isGridFSClient(client)) {
if (this.opts.useGridFS) {
try {
await client.bucket.drop();
} catch (error: unknown) {
} catch (error) {
// Throw error if not "namespace not found" error
const errorCode =
error instanceof Error && 'code' in error ? (error as { code?: number }).code : undefined;
if (errorCode !== 26) {
if (!(error.code === 26)) {
throw error;
}
}
@@ -253,7 +249,7 @@ class KeyvMongoCustom extends EventEmitter {
});
}
async has(key: string): Promise<boolean> {
async has(key) {
const client = await this._getClient();
const filter = { [this.opts.useGridFS ? 'filename' : 'key']: { $eq: key } };
const document = await client.store.countDocuments(filter, { limit: 1 });
@@ -261,14 +257,10 @@ class KeyvMongoCustom extends EventEmitter {
}
// No-op disconnect
async disconnect(): Promise<boolean> {
async disconnect() {
// This is a no-op since we don't want to close the shared mongoose connection
return true;
}
private isGridFSClient(client: Client): client is GridFSClient {
return (client as GridFSClient).bucket != null;
}
}
const keyvMongo = new KeyvMongoCustom({
@@ -277,4 +269,4 @@ const keyvMongo = new KeyvMongoCustom({
keyvMongo.on('error', (err) => logger.error('KeyvMongo connection error:', err));
export default keyvMongo;
module.exports = keyvMongo;
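
Note: a usage sketch for keyvMongo, which rides the shared mongoose connection; the connectDb require path and the key shown are assumptions:

const connectDb = require('~/lib/db/connectDb'); // assumed path
const keyvMongo = require('./keyvMongo');

(async () => {
  await connectDb(); // keyvMongo throws if mongoose is not connected yet
  await keyvMongo.set('encoded_domains:example.com', 'value', 60000); // ttl in ms
  console.log(await keyvMongo.get('encoded_domains:example.com'));
})();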

View File

@@ -1,4 +1,4 @@
const { isEnabled } = require('@librechat/api');
const { isEnabled } = require('~/server/utils');
const { ViolationTypes } = require('librechat-data-provider');
const getLogStores = require('./getLogStores');
const banViolation = require('./banViolation');

57
api/cache/redisClients.js vendored Normal file
View File

@@ -0,0 +1,57 @@
const IoRedis = require('ioredis');
const { cacheConfig } = require('./cacheConfig');
const { createClient, createCluster } = require('@keyv/redis');
const GLOBAL_PREFIX_SEPARATOR = '::';
const urls = cacheConfig.REDIS_URI?.split(',').map((uri) => new URL(uri));
const username = urls?.[0].username || cacheConfig.REDIS_USERNAME;
const password = urls?.[0].password || cacheConfig.REDIS_PASSWORD;
const ca = cacheConfig.REDIS_CA;
/** @type {import('ioredis').Redis | import('ioredis').Cluster | null} */
let ioredisClient = null;
if (cacheConfig.USE_REDIS) {
const redisOptions = {
username: username,
password: password,
tls: ca ? { ca } : undefined,
keyPrefix: `${cacheConfig.REDIS_KEY_PREFIX}${GLOBAL_PREFIX_SEPARATOR}`,
maxListeners: cacheConfig.REDIS_MAX_LISTENERS,
};
ioredisClient =
urls.length === 1
? new IoRedis(cacheConfig.REDIS_URI, redisOptions)
: new IoRedis.Cluster(cacheConfig.REDIS_URI, { redisOptions });
// Pinging the Redis server every 5 minutes to keep the connection alive
const pingInterval = setInterval(() => ioredisClient.ping(), 5 * 60 * 1000);
ioredisClient.on('close', () => clearInterval(pingInterval));
ioredisClient.on('end', () => clearInterval(pingInterval));
}
/** @type {import('@keyv/redis').RedisClient | import('@keyv/redis').RedisCluster | null} */
let keyvRedisClient = null;
if (cacheConfig.USE_REDIS) {
// ** WARNING ** The Keyv Redis client does not support a key prefix like ioredis above.
// Prefixing is instead handled by the Keyv-Redis store in cacheFactory.js.
const redisOptions = { username, password, socket: { tls: ca != null, ca } };
keyvRedisClient =
urls.length === 1
? createClient({ url: cacheConfig.REDIS_URI, ...redisOptions })
: createCluster({
rootNodes: cacheConfig.REDIS_URI.split(',').map((url) => ({ url })),
defaults: redisOptions,
});
keyvRedisClient.setMaxListeners(cacheConfig.REDIS_MAX_LISTENERS);
// Pinging the Redis server every 5 minutes to keep the connection alive
const keyvPingInterval = setInterval(() => keyvRedisClient.ping(), 5 * 60 * 1000);
keyvRedisClient.on('disconnect', () => clearInterval(keyvPingInterval));
keyvRedisClient.on('end', () => clearInterval(keyvPingInterval));
}
module.exports = { ioredisClient, keyvRedisClient, GLOBAL_PREFIX_SEPARATOR };
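
Note: combined with cacheFactory.js, the prefixing scheme yields keys shaped like prefix::namespace:key; a small sketch reconstructing that shape from the exported pieces:

// Illustrative only — mirrors the ioredis keyPrefix above and the Keyv
// namespace/keyPrefixSeparator set in cacheFactory.js.
const { cacheConfig } = require('./cacheConfig');
const { GLOBAL_PREFIX_SEPARATOR } = require('./redisClients');

const effectiveKey = (namespace, key) =>
  `${cacheConfig.REDIS_KEY_PREFIX}${GLOBAL_PREFIX_SEPARATOR}${namespace}:${key}`;

console.log(effectiveKey('sessions', 'abc')); // e.g. 'pod-1234::sessions:abc'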

View File

@@ -1,13 +1,27 @@
const { EventSource } = require('eventsource');
const { Time } = require('librechat-data-provider');
const { MCPManager, FlowStateManager, OAuthReconnectionManager } = require('@librechat/api');
const { MCPManager, FlowStateManager } = require('@librechat/api');
const logger = require('./winston');
global.EventSource = EventSource;
/** @type {MCPManager} */
let mcpManager = null;
let flowManager = null;
/**
* @param {string} [userId] - Optional user ID, to avoid disconnecting the current user.
* @returns {MCPManager}
*/
function getMCPManager(userId) {
if (!mcpManager) {
mcpManager = MCPManager.getInstance();
} else {
mcpManager.checkIdleConnections(userId);
}
return mcpManager;
}
/**
* @param {Keyv} flowsCache
* @returns {FlowStateManager}
@@ -23,9 +37,6 @@ function getFlowStateManager(flowsCache) {
module.exports = {
logger,
createMCPManager: MCPManager.createInstance,
getMCPManager: MCPManager.getInstance,
getMCPManager,
getFlowStateManager,
createOAuthReconnectionManager: OAuthReconnectionManager.createInstance,
getOAuthReconnectionManager: OAuthReconnectionManager.getInstance,
};
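
Note: a sketch of the wrapper variant of getMCPManager shown above, with an illustrative user ID:

const { getMCPManager } = require('~/config');

const manager = getMCPManager('user-123'); // first call: creates or fetches the singleton
const sameManager = getMCPManager('user-123'); // later calls: also run checkIdleConnections
console.log(manager === sameManager); // true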

View File

@@ -1,34 +1,11 @@
require('dotenv').config();
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const mongoose = require('mongoose');
const MONGO_URI = process.env.MONGO_URI;
if (!MONGO_URI) {
throw new Error('Please define the MONGO_URI environment variable');
}
/** The maximum number of connections in the connection pool. */
const maxPoolSize = parseInt(process.env.MONGO_MAX_POOL_SIZE) || undefined;
/** The minimum number of connections in the connection pool. */
const minPoolSize = parseInt(process.env.MONGO_MIN_POOL_SIZE) || undefined;
/** The maximum number of connections that may be in the process of being established concurrently by the connection pool. */
const maxConnecting = parseInt(process.env.MONGO_MAX_CONNECTING) || undefined;
/** The maximum number of milliseconds that a connection can remain idle in the pool before being removed and closed. */
const maxIdleTimeMS = parseInt(process.env.MONGO_MAX_IDLE_TIME_MS) || undefined;
/** The maximum time in milliseconds that a thread can wait for a connection to become available. */
const waitQueueTimeoutMS = parseInt(process.env.MONGO_WAIT_QUEUE_TIMEOUT_MS) || undefined;
/** Set to false to disable automatic index creation for all models associated with this connection. */
const autoIndex =
process.env.MONGO_AUTO_INDEX != undefined
? isEnabled(process.env.MONGO_AUTO_INDEX) || false
: undefined;
/** Set to `false` to disable Mongoose automatically calling `createCollection()` on every model created on this connection. */
const autoCreate =
process.env.MONGO_AUTO_CREATE != undefined
? isEnabled(process.env.MONGO_AUTO_CREATE) || false
: undefined;
/**
* Global is used here to maintain a cached connection across hot reloads
* in development. This prevents connections growing exponentially
@@ -49,21 +26,13 @@ async function connectDb() {
if (!cached.promise || disconnected) {
const opts = {
bufferCommands: false,
...(maxPoolSize ? { maxPoolSize } : {}),
...(minPoolSize ? { minPoolSize } : {}),
...(maxConnecting ? { maxConnecting } : {}),
...(maxIdleTimeMS ? { maxIdleTimeMS } : {}),
...(waitQueueTimeoutMS ? { waitQueueTimeoutMS } : {}),
...(autoIndex != undefined ? { autoIndex } : {}),
...(autoCreate != undefined ? { autoCreate } : {}),
// useNewUrlParser: true,
// useUnifiedTopology: true,
// bufferMaxEntries: 0,
// useFindAndModify: true,
// useCreateIndex: true
};
logger.info('Mongo Connection options');
logger.info(JSON.stringify(opts, null, 2));
mongoose.set('strictQuery', true);
cached.promise = mongoose.connect(MONGO_URI, opts).then((mongoose) => {
return mongoose;
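
Note: a sketch of the cached-connection pattern this module implements; the require path and export shape of connectDb are assumptions:

const connectDb = require('./connect'); // assumed

async function main() {
  const a = await connectDb();
  const b = await connectDb(); // served from the cached promise, no new connection
  console.log(a === b); // true
}

main();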

View File

@@ -1,8 +1,10 @@
const mongoose = require('mongoose');
const { MeiliSearch } = require('meilisearch');
const { logger } = require('@librechat/data-schemas');
const { FlowStateManager } = require('@librechat/api');
const { CacheKeys } = require('librechat-data-provider');
const { isEnabled, FlowStateManager } = require('@librechat/api');
const { isEnabled } = require('~/server/utils');
const { getLogStores } = require('~/cache');
const Conversation = mongoose.models.Conversation;
@@ -29,265 +31,79 @@ class MeiliSearchClient {
}
}
/**
* Deletes documents from MeiliSearch index that are missing the user field
* @param {import('meilisearch').Index} index - MeiliSearch index instance
* @param {string} indexName - Name of the index for logging
* @returns {Promise<number>} - Number of documents deleted
*/
async function deleteDocumentsWithoutUserField(index, indexName) {
let deletedCount = 0;
let offset = 0;
const batchSize = 1000;
try {
while (true) {
const searchResult = await index.search('', {
limit: batchSize,
offset: offset,
});
if (searchResult.hits.length === 0) {
break;
}
const idsToDelete = searchResult.hits.filter((hit) => !hit.user).map((hit) => hit.id);
if (idsToDelete.length > 0) {
logger.info(
`[indexSync] Deleting ${idsToDelete.length} documents without user field from ${indexName} index`,
);
await index.deleteDocuments(idsToDelete);
deletedCount += idsToDelete.length;
}
if (searchResult.hits.length < batchSize) {
break;
}
offset += batchSize;
}
if (deletedCount > 0) {
logger.info(`[indexSync] Deleted ${deletedCount} orphaned documents from ${indexName} index`);
}
} catch (error) {
logger.error(`[indexSync] Error deleting documents from ${indexName}:`, error);
}
return deletedCount;
}
/**
* Ensures indexes have proper filterable attributes configured and checks if documents have user field
* @param {MeiliSearch} client - MeiliSearch client instance
* @returns {Promise<{settingsUpdated: boolean, orphanedDocsFound: boolean}>} - Status of what was done
*/
async function ensureFilterableAttributes(client) {
let settingsUpdated = false;
let hasOrphanedDocs = false;
try {
// Check and update messages index
try {
const messagesIndex = client.index('messages');
const settings = await messagesIndex.getSettings();
if (!settings.filterableAttributes || !settings.filterableAttributes.includes('user')) {
logger.info('[indexSync] Configuring messages index to filter by user...');
await messagesIndex.updateSettings({
filterableAttributes: ['user'],
});
logger.info('[indexSync] Messages index configured for user filtering');
settingsUpdated = true;
}
// Check if existing documents have user field indexed
try {
const searchResult = await messagesIndex.search('', { limit: 1 });
if (searchResult.hits.length > 0 && !searchResult.hits[0].user) {
logger.info(
'[indexSync] Existing messages missing user field, will clean up orphaned documents...',
);
hasOrphanedDocs = true;
}
} catch (searchError) {
logger.debug('[indexSync] Could not check message documents:', searchError.message);
}
} catch (error) {
if (error.code !== 'index_not_found') {
logger.warn('[indexSync] Could not check/update messages index settings:', error.message);
}
}
// Check and update conversations index
try {
const convosIndex = client.index('convos');
const settings = await convosIndex.getSettings();
if (!settings.filterableAttributes || !settings.filterableAttributes.includes('user')) {
logger.info('[indexSync] Configuring convos index to filter by user...');
await convosIndex.updateSettings({
filterableAttributes: ['user'],
});
logger.info('[indexSync] Convos index configured for user filtering');
settingsUpdated = true;
}
// Check if existing documents have user field indexed
try {
const searchResult = await convosIndex.search('', { limit: 1 });
if (searchResult.hits.length > 0 && !searchResult.hits[0].user) {
logger.info(
'[indexSync] Existing conversations missing user field, will clean up orphaned documents...',
);
hasOrphanedDocs = true;
}
} catch (searchError) {
logger.debug('[indexSync] Could not check conversation documents:', searchError.message);
}
} catch (error) {
if (error.code !== 'index_not_found') {
logger.warn('[indexSync] Could not check/update convos index settings:', error.message);
}
}
// If either index has orphaned documents, clean them up (but don't force resync)
if (hasOrphanedDocs) {
try {
const messagesIndex = client.index('messages');
await deleteDocumentsWithoutUserField(messagesIndex, 'messages');
} catch (error) {
logger.debug('[indexSync] Could not clean up messages:', error.message);
}
try {
const convosIndex = client.index('convos');
await deleteDocumentsWithoutUserField(convosIndex, 'convos');
} catch (error) {
logger.debug('[indexSync] Could not clean up convos:', error.message);
}
logger.info('[indexSync] Orphaned documents cleaned up without forcing resync.');
}
if (settingsUpdated) {
logger.info('[indexSync] Index settings updated. Full re-sync will be triggered.');
}
} catch (error) {
logger.error('[indexSync] Error ensuring filterable attributes:', error);
}
return { settingsUpdated, orphanedDocsFound: hasOrphanedDocs };
}
/**
* Performs the actual sync operations for messages and conversations
* @param {FlowStateManager} flowManager - Flow state manager instance
* @param {string} flowId - Flow identifier
* @param {string} flowType - Flow type
*/
async function performSync(flowManager, flowId, flowType) {
try {
const client = MeiliSearchClient.getInstance();
async function performSync() {
const client = MeiliSearchClient.getInstance();
const { status } = await client.health();
if (status !== 'available') {
throw new Error('Meilisearch not available');
}
if (indexingDisabled === true) {
logger.info('[indexSync] Indexing is disabled, skipping...');
return { messagesSync: false, convosSync: false };
}
/** Ensures indexes have proper filterable attributes configured */
const { settingsUpdated, orphanedDocsFound: _orphanedDocsFound } =
await ensureFilterableAttributes(client);
let messagesSync = false;
let convosSync = false;
// Only reset flags if settings were actually updated (not just for orphaned doc cleanup)
if (settingsUpdated) {
logger.info(
'[indexSync] Settings updated. Forcing full re-sync to reindex with new configuration...',
);
// Reset sync flags to force full re-sync
await Message.collection.updateMany({ _meiliIndex: true }, { $set: { _meiliIndex: false } });
await Conversation.collection.updateMany(
{ _meiliIndex: true },
{ $set: { _meiliIndex: false } },
);
}
// Check if we need to sync messages
const messageProgress = await Message.getSyncProgress();
if (!messageProgress.isComplete || settingsUpdated) {
logger.info(
`[indexSync] Messages need syncing: ${messageProgress.totalProcessed}/${messageProgress.totalDocuments} indexed`,
);
// Check if we should do a full sync or incremental
const messageCount = await Message.countDocuments();
const messagesIndexed = messageProgress.totalProcessed;
const syncThreshold = parseInt(process.env.MEILI_SYNC_THRESHOLD || '1000', 10);
if (messageCount - messagesIndexed > syncThreshold) {
logger.info('[indexSync] Starting full message sync due to large difference');
await Message.syncWithMeili();
messagesSync = true;
} else if (messageCount !== messagesIndexed) {
logger.warn('[indexSync] Messages out of sync, performing incremental sync');
await Message.syncWithMeili();
messagesSync = true;
}
} else {
logger.info(
`[indexSync] Messages are fully synced: ${messageProgress.totalProcessed}/${messageProgress.totalDocuments}`,
);
}
// Check if we need to sync conversations
const convoProgress = await Conversation.getSyncProgress();
if (!convoProgress.isComplete || settingsUpdated) {
logger.info(
`[indexSync] Conversations need syncing: ${convoProgress.totalProcessed}/${convoProgress.totalDocuments} indexed`,
);
const convoCount = await Conversation.countDocuments();
const convosIndexed = convoProgress.totalProcessed;
const syncThreshold = parseInt(process.env.MEILI_SYNC_THRESHOLD || '1000', 10);
if (convoCount - convosIndexed > syncThreshold) {
logger.info('[indexSync] Starting full conversation sync due to large difference');
await Conversation.syncWithMeili();
convosSync = true;
} else if (convoCount !== convosIndexed) {
logger.warn('[indexSync] Convos out of sync, performing incremental sync');
await Conversation.syncWithMeili();
convosSync = true;
}
} else {
logger.info(
`[indexSync] Conversations are fully synced: ${convoProgress.totalProcessed}/${convoProgress.totalDocuments}`,
);
}
return { messagesSync, convosSync };
} finally {
if (indexingDisabled === true) {
logger.info('[indexSync] Indexing is disabled, skipping cleanup...');
} else if (flowManager && flowId && flowType) {
try {
await flowManager.deleteFlow(flowId, flowType);
logger.debug('[indexSync] Flow state cleaned up');
} catch (cleanupErr) {
logger.debug('[indexSync] Could not clean up flow state:', cleanupErr.message);
}
}
const { status } = await client.health();
if (status !== 'available') {
throw new Error('Meilisearch not available');
}
if (indexingDisabled === true) {
logger.info('[indexSync] Indexing is disabled, skipping...');
return { messagesSync: false, convosSync: false };
}
let messagesSync = false;
let convosSync = false;
// Check if we need to sync messages
const messageProgress = await Message.getSyncProgress();
if (!messageProgress.isComplete) {
logger.info(
`[indexSync] Messages need syncing: ${messageProgress.totalProcessed}/${messageProgress.totalDocuments} indexed`,
);
// Check if we should do a full sync or incremental
const messageCount = await Message.countDocuments();
const messagesIndexed = messageProgress.totalProcessed;
const syncThreshold = parseInt(process.env.MEILI_SYNC_THRESHOLD || '1000', 10);
if (messageCount - messagesIndexed > syncThreshold) {
logger.info('[indexSync] Starting full message sync due to large difference');
await Message.syncWithMeili();
messagesSync = true;
} else if (messageCount !== messagesIndexed) {
logger.warn('[indexSync] Messages out of sync, performing incremental sync');
await Message.syncWithMeili();
messagesSync = true;
}
} else {
logger.info(
`[indexSync] Messages are fully synced: ${messageProgress.totalProcessed}/${messageProgress.totalDocuments}`,
);
}
// Check if we need to sync conversations
const convoProgress = await Conversation.getSyncProgress();
if (!convoProgress.isComplete) {
logger.info(
`[indexSync] Conversations need syncing: ${convoProgress.totalProcessed}/${convoProgress.totalDocuments} indexed`,
);
const convoCount = await Conversation.countDocuments();
const convosIndexed = convoProgress.totalProcessed;
const syncThreshold = parseInt(process.env.MEILI_SYNC_THRESHOLD || '1000', 10);
if (convoCount - convosIndexed > syncThreshold) {
logger.info('[indexSync] Starting full conversation sync due to large difference');
await Conversation.syncWithMeili();
convosSync = true;
} else if (convoCount !== convosIndexed) {
logger.warn('[indexSync] Convos out of sync, performing incremental sync');
await Conversation.syncWithMeili();
convosSync = true;
}
} else {
logger.info(
`[indexSync] Conversations are fully synced: ${convoProgress.totalProcessed}/${convoProgress.totalDocuments}`,
);
}
return { messagesSync, convosSync };
}
/**
@@ -300,26 +116,24 @@ async function indexSync() {
logger.info('[indexSync] Starting index synchronization check...');
// Get or create FlowStateManager instance
const flowsCache = getLogStores(CacheKeys.FLOWS);
if (!flowsCache) {
logger.warn('[indexSync] Flows cache not available, falling back to direct sync');
return await performSync(null, null, null);
}
const flowManager = new FlowStateManager(flowsCache, {
ttl: 60000 * 10, // 10 minutes TTL for sync operations
});
// Use a unique flow ID for the sync operation
const flowId = 'meili-index-sync';
const flowType = 'MEILI_SYNC';
try {
// Get or create FlowStateManager instance
const flowsCache = getLogStores(CacheKeys.FLOWS);
if (!flowsCache) {
logger.warn('[indexSync] Flows cache not available, falling back to direct sync');
return await performSync();
}
const flowManager = new FlowStateManager(flowsCache, {
ttl: 60000 * 10, // 10 minutes TTL for sync operations
});
// Use a unique flow ID for the sync operation
const flowId = 'meili-index-sync';
const flowType = 'MEILI_SYNC';
// This will only execute the handler if no other instance is running the sync
const result = await flowManager.createFlowWithHandler(flowId, flowType, () =>
performSync(flowManager, flowId, flowType),
);
const result = await flowManager.createFlowWithHandler(flowId, flowType, performSync);
if (result.messagesSync || result.convosSync) {
logger.info('[indexSync] Sync completed successfully');
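
Note: a hedged sketch of the run-once pattern used above, with illustrative flow ID, type, and TTL:

// createFlowWithHandler executes the handler only when no other instance
// holds the flow; concurrent callers await the shared result.
const { FlowStateManager } = require('@librechat/api');
const { CacheKeys } = require('librechat-data-provider');
const { getLogStores } = require('~/cache');

async function runOnceAcrossInstances() {
  const flowManager = new FlowStateManager(getLogStores(CacheKeys.FLOWS), { ttl: 60000 * 10 });
  return flowManager.createFlowWithHandler('nightly-maintenance', 'MAINTENANCE', async () => {
    // expensive work guarded against concurrent duplicate runs
    return { done: true };
  });
}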

View File

@@ -1,17 +1,20 @@
const mongoose = require('mongoose');
const crypto = require('node:crypto');
const { logger } = require('@librechat/data-schemas');
const { ResourceType, SystemRoles, Tools, actionDelimiter } = require('librechat-data-provider');
const { GLOBAL_PROJECT_NAME, EPHEMERAL_AGENT_ID, mcp_all, mcp_delimiter } =
const { SystemRoles, Tools, actionDelimiter } = require('librechat-data-provider');
const { GLOBAL_PROJECT_NAME, EPHEMERAL_AGENT_ID, mcp_delimiter } =
require('librechat-data-provider').Constants;
// Default category value for new agents
const {
removeAgentFromAllProjects,
removeAgentIdsFromProject,
addAgentIdsToProject,
getProjectByName,
addAgentIdsToProject,
removeAgentIdsFromProject,
removeAgentFromAllProjects,
} = require('./Project');
const { removeAllPermissions } = require('~/server/services/PermissionService');
const { getMCPServerTools } = require('~/server/services/Config');
const { getCachedTools } = require('~/server/services/Config');
// Category values are now imported from shared constants
// Schema fields (category, support_contact, is_promoted) are defined in @librechat/data-schemas
const { getActions } = require('./Action');
const { Agent } = require('~/db/models');
@@ -49,14 +52,6 @@ const createAgent = async (agentData) => {
*/
const getAgent = async (searchParameter) => await Agent.findOne(searchParameter).lean();
/**
* Get multiple agent documents based on the provided search parameters.
*
* @param {Object} searchParameter - The search parameters to find agents.
* @returns {Promise<Agent[]>} Array of agent documents as plain objects.
*/
const getAgents = async (searchParameter) => await Agent.find(searchParameter).lean();
/**
* Load an agent based on the provided ID
*
@@ -69,6 +64,8 @@ const getAgents = async (searchParameter) => await Agent.find(searchParameter).l
*/
const loadEphemeralAgent = async ({ req, agent_id, endpoint, model_parameters: _m }) => {
const { model, ...model_parameters } = _m;
/** @type {Record<string, FunctionTool>} */
const availableTools = await getCachedTools({ userId: req.user.id, includeGlobal: true });
/** @type {TEphemeralAgent | null} */
const ephemeralAgent = req.body.ephemeralAgent;
const mcpServers = new Set(ephemeralAgent?.mcp);
@@ -84,20 +81,15 @@ const loadEphemeralAgent = async ({ req, agent_id, endpoint, model_parameters: _
tools.push(Tools.web_search);
}
const addedServers = new Set();
if (mcpServers.size > 0) {
for (const mcpServer of mcpServers) {
if (addedServers.has(mcpServer)) {
for (const toolName of Object.keys(availableTools)) {
if (!toolName.includes(mcp_delimiter)) {
continue;
}
const serverTools = await getMCPServerTools(mcpServer);
if (!serverTools) {
tools.push(`${mcp_all}${mcp_delimiter}${mcpServer}`);
addedServers.add(mcpServer);
continue;
const mcpServer = toolName.split(mcp_delimiter)?.[1];
if (mcpServer && mcpServers.has(mcpServer)) {
tools.push(toolName);
}
tools.push(...Object.keys(serverTools));
addedServers.add(mcpServer);
}
}
@@ -375,10 +367,17 @@ const updateAgent = async (searchParameter, updateData, options = {}) => {
if (shouldCreateVersion) {
const duplicateVersion = isDuplicateVersion(updateData, versionData, versions, actionsHash);
if (duplicateVersion && !forceVersion) {
// No changes detected, return the current agent without creating a new version
const agentObj = currentAgent.toObject();
agentObj.version = versions.length;
return agentObj;
const error = new Error(
'Duplicate version: This would create a version identical to an existing one',
);
error.statusCode = 409;
error.details = {
duplicateVersion,
versionIndex: versions.findIndex(
(v) => JSON.stringify(duplicateVersion) === JSON.stringify(v),
),
};
throw error;
}
}
@@ -517,10 +516,6 @@ const deleteAgent = async (searchParameter) => {
const agent = await Agent.findOneAndDelete(searchParameter);
if (agent) {
await removeAgentFromAllProjects(agent.id);
await removeAllPermissions({
resourceType: ResourceType.AGENT,
resourceId: agent._id,
});
}
return agent;
};
@@ -544,7 +539,11 @@ const getListAgentsByAccess = async ({
const normalizedLimit = isPaginated ? Math.min(Math.max(1, parseInt(limit) || 20), 100) : null;
// Build base query combining ACL accessible agents with other filters
const baseQuery = { ...otherParams, _id: { $in: accessibleIds } };
const baseQuery = { ...otherParams };
if (accessibleIds.length > 0) {
baseQuery._id = { $in: accessibleIds };
}
// Add cursor condition
if (after) {
@@ -683,7 +682,7 @@ const getListAgents = async (searchParameter) => {
* This function also updates the corresponding projects to include or exclude the agent ID.
*
* @param {Object} params - Parameters for updating the agent's projects.
* @param {IUser} params.user - The user performing the update.
* @param {MongoUser} params.user - The user performing the update.
* @param {string} params.agentId - The ID of the agent to update.
* @param {string[]} [params.projectIds] - Array of project IDs to add to the agent.
* @param {string[]} [params.removeProjectIds] - Array of project IDs to remove from the agent.
@@ -837,7 +836,6 @@ const countPromotedAgents = async () => {
module.exports = {
getAgent,
getAgents,
loadAgent,
createAgent,
updateAgent,

View File

@@ -8,14 +8,12 @@ process.env.CREDS_IV = '0123456789abcdef';
jest.mock('~/server/services/Config', () => ({
getCachedTools: jest.fn(),
getMCPServerTools: jest.fn(),
}));
const mongoose = require('mongoose');
const { v4: uuidv4 } = require('uuid');
const { agentSchema } = require('@librechat/data-schemas');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { AccessRoleIds, ResourceType, PrincipalType } = require('librechat-data-provider');
const {
getAgent,
loadAgent,
@@ -23,16 +21,13 @@ const {
updateAgent,
deleteAgent,
getListAgents,
getListAgentsByAccess,
revertAgentVersion,
updateAgentProjects,
addAgentResourceFile,
removeAgentResourceFiles,
generateActionMetadataHash,
revertAgentVersion,
} = require('./Agent');
const permissionService = require('~/server/services/PermissionService');
const { getCachedTools, getMCPServerTools } = require('~/server/services/Config');
const { AclEntry } = require('~/db/models');
const { getCachedTools } = require('~/server/services/Config');
/**
* @type {import('mongoose').Model<import('@librechat/data-schemas').IAgent>}
@@ -412,26 +407,12 @@ describe('models/Agent', () => {
describe('Agent CRUD Operations', () => {
let mongoServer;
let AccessRole;
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
Agent = mongoose.models.Agent || mongoose.model('Agent', agentSchema);
await mongoose.connect(mongoUri);
// Initialize models
const dbModels = require('~/db/models');
AccessRole = dbModels.AccessRole;
// Create necessary access roles for agents
await AccessRole.create({
accessRoleId: AccessRoleIds.AGENT_OWNER,
name: 'Owner',
description: 'Full control over agents',
resourceType: ResourceType.AGENT,
permBits: 15, // VIEW | EDIT | DELETE | SHARE
});
}, 20000);
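permBits: 15 packs the four capabilities into one bitmask. The individual bit values are not spelled out in this diff, so the constants below are assumptions consistent with the VIEW | EDIT | DELETE | SHARE comment above:

// Assumed bit assignments; only the combined value 15 appears in this file.
const PERMISSIONS = { VIEW: 1, EDIT: 2, DELETE: 4, SHARE: 8 };
const ownerBits =
  PERMISSIONS.VIEW | PERMISSIONS.EDIT | PERMISSIONS.DELETE | PERMISSIONS.SHARE; // 15
const canEdit = (ownerBits & PERMISSIONS.EDIT) !== 0; // single-capability check: true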
afterAll(async () => {
@@ -487,51 +468,6 @@ describe('models/Agent', () => {
expect(agentAfterDelete).toBeNull();
});
test('should remove ACL entries when deleting an agent', async () => {
const agentId = `agent_${uuidv4()}`;
const authorId = new mongoose.Types.ObjectId();
// Create agent
const agent = await createAgent({
id: agentId,
name: 'Agent With Permissions',
provider: 'test',
model: 'test-model',
author: authorId,
});
// Grant permissions (simulating sharing)
await permissionService.grantPermission({
principalType: PrincipalType.USER,
principalId: authorId,
resourceType: ResourceType.AGENT,
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_OWNER,
grantedBy: authorId,
});
// Verify ACL entry exists
const aclEntriesBefore = await AclEntry.find({
resourceType: ResourceType.AGENT,
resourceId: agent._id,
});
expect(aclEntriesBefore).toHaveLength(1);
// Delete the agent
await deleteAgent({ id: agentId });
// Verify agent is deleted
const agentAfterDelete = await getAgent({ id: agentId });
expect(agentAfterDelete).toBeNull();
// Verify ACL entries are removed
const aclEntriesAfter = await AclEntry.find({
resourceType: ResourceType.AGENT,
resourceId: agent._id,
});
expect(aclEntriesAfter).toHaveLength(0);
});
test('should list agents by author', async () => {
const authorId = new mongoose.Types.ObjectId();
const otherAuthorId = new mongoose.Types.ObjectId();
@@ -943,31 +879,45 @@ describe('models/Agent', () => {
expect(emptyParamsAgent.model_parameters).toEqual({});
});
test('should not create new version for duplicate updates', async () => {
const authorId = new mongoose.Types.ObjectId();
const testCases = generateVersionTestCases();
test('should detect duplicate versions and reject updates', async () => {
const originalConsoleError = console.error;
console.error = jest.fn();
for (const testCase of testCases) {
const testAgentId = `agent_${uuidv4()}`;
try {
const authorId = new mongoose.Types.ObjectId();
const testCases = generateVersionTestCases();
await createAgent({
id: testAgentId,
provider: 'test',
model: 'test-model',
author: authorId,
...testCase.initial,
});
for (const testCase of testCases) {
const testAgentId = `agent_${uuidv4()}`;
const updatedAgent = await updateAgent({ id: testAgentId }, testCase.update);
expect(updatedAgent.versions).toHaveLength(2); // Initial update creates a second version
await createAgent({
id: testAgentId,
provider: 'test',
model: 'test-model',
author: authorId,
...testCase.initial,
});
// Update with duplicate data should succeed but not create a new version
const duplicateUpdate = await updateAgent({ id: testAgentId }, testCase.duplicate);
await updateAgent({ id: testAgentId }, testCase.update);
expect(duplicateUpdate.versions).toHaveLength(2); // No new version created
let error;
try {
await updateAgent({ id: testAgentId }, testCase.duplicate);
} catch (e) {
error = e;
}
const agent = await getAgent({ id: testAgentId });
expect(agent.versions).toHaveLength(2);
expect(error).toBeDefined();
expect(error.message).toContain('Duplicate version');
expect(error.statusCode).toBe(409);
expect(error.details).toBeDefined();
expect(error.details.duplicateVersion).toBeDefined();
const agent = await getAgent({ id: testAgentId });
expect(agent.versions).toHaveLength(2);
}
} finally {
console.error = originalConsoleError;
}
});
@@ -1143,13 +1093,20 @@ describe('models/Agent', () => {
expect(secondUpdate.versions).toHaveLength(3);
// Update without forceVersion and no changes should not create a version
const duplicateUpdate = await updateAgent(
{ id: agentId },
{ tools: ['listEvents_action_test.com', 'createEvent_action_test.com'] },
{ updatingUserId: authorId.toString(), forceVersion: false },
);
let error;
try {
await updateAgent(
{ id: agentId },
{ tools: ['listEvents_action_test.com', 'createEvent_action_test.com'] },
{ updatingUserId: authorId.toString(), forceVersion: false },
);
} catch (e) {
error = e;
}
expect(duplicateUpdate.versions).toHaveLength(3); // No new version created
expect(error).toBeDefined();
expect(error.message).toContain('Duplicate version');
expect(error.statusCode).toBe(409);
});
test('should handle isDuplicateVersion with arrays containing null/undefined values', async () => {
@@ -1347,21 +1304,18 @@ describe('models/Agent', () => {
expect(secondUpdate.versions).toHaveLength(3);
expect(secondUpdate.support_contact.email).toBe('updated@support.com');
// Try to update with same support_contact - should be detected as duplicate but return successfully
const duplicateUpdate = await updateAgent(
{ id: agentId },
{
support_contact: {
name: 'Updated Support',
email: 'updated@support.com',
// Try to update with same support_contact - should be detected as duplicate
await expect(
updateAgent(
{ id: agentId },
{
support_contact: {
name: 'Updated Support',
email: 'updated@support.com',
},
},
},
);
// Should not create a new version
expect(duplicateUpdate.versions).toHaveLength(3);
expect(duplicateUpdate.version).toBe(3);
expect(duplicateUpdate.support_contact.email).toBe('updated@support.com');
),
).rejects.toThrow('Duplicate version');
});
test('should handle support_contact from empty to populated', async () => {
@@ -1545,22 +1499,18 @@ describe('models/Agent', () => {
expect(updated.support_contact.name).toBe('New Name');
expect(updated.support_contact.email).toBe('');
// Verify isDuplicateVersion works with partial changes - should return successfully without creating new version
const duplicateUpdate = await updateAgent(
{ id: agentId },
{
support_contact: {
name: 'New Name',
email: '',
// Verify isDuplicateVersion works with partial changes
await expect(
updateAgent(
{ id: agentId },
{
support_contact: {
name: 'New Name',
email: '',
},
},
},
);
// Should not create a new version since content is the same
expect(duplicateUpdate.versions).toHaveLength(2);
expect(duplicateUpdate.version).toBe(2);
expect(duplicateUpdate.support_contact.name).toBe('New Name');
expect(duplicateUpdate.support_contact.email).toBe('');
),
).rejects.toThrow('Duplicate version');
});
// Edge Cases
@@ -1930,16 +1880,6 @@ describe('models/Agent', () => {
another_tool: {},
});
// Mock getMCPServerTools to return tools for each server
getMCPServerTools.mockImplementation(async (server) => {
if (server === 'server1') {
return { tool1_mcp_server1: {} };
} else if (server === 'server2') {
return { tool2_mcp_server2: {} };
}
return null;
});
const mockReq = {
user: { id: 'user123' },
body: {
@@ -2124,14 +2064,6 @@ describe('models/Agent', () => {
getCachedTools.mockResolvedValue(availableTools);
// Mock getMCPServerTools to return all tools for server1
getMCPServerTools.mockImplementation(async (server) => {
if (server === 'server1') {
return availableTools; // All 100 tools belong to server1
}
return null;
});
const mockReq = {
user: { id: 'user123' },
body: {
@@ -2673,17 +2605,6 @@ describe('models/Agent', () => {
tool_mcp_server2: {}, // Different server
});
// Mock getMCPServerTools to return only tools matching the server
getMCPServerTools.mockImplementation(async (server) => {
if (server === 'server1') {
// Only return tool that correctly matches server1 format
return { tool_mcp_server1: {} };
} else if (server === 'server2') {
return { tool_mcp_server2: {} };
}
return null;
});
const mockReq = {
user: { id: 'user123' },
body: {
@@ -2809,18 +2730,11 @@ describe('models/Agent', () => {
agent_ids: ['agent1', 'agent2'],
});
const updatedAgent = await updateAgent(
{ id: agentId },
{ agent_ids: ['agent1', 'agent2', 'agent3'] },
);
expect(updatedAgent.versions).toHaveLength(2);
await updateAgent({ id: agentId }, { agent_ids: ['agent1', 'agent2', 'agent3'] });
// Update with same agent_ids should succeed but not create a new version
const duplicateUpdate = await updateAgent(
{ id: agentId },
{ agent_ids: ['agent1', 'agent2', 'agent3'] },
);
expect(duplicateUpdate.versions).toHaveLength(2); // No new version created
await expect(
updateAgent({ id: agentId }, { agent_ids: ['agent1', 'agent2', 'agent3'] }),
).rejects.toThrow('Duplicate version');
});
test('should handle agent_ids field alongside other fields', async () => {
@@ -2959,10 +2873,9 @@ describe('models/Agent', () => {
expect(updated.versions).toHaveLength(2);
expect(updated.agent_ids).toEqual([]);
// Update with same empty agent_ids should succeed but not create a new version
const duplicateUpdate = await updateAgent({ id: agentId }, { agent_ids: [] });
expect(duplicateUpdate.versions).toHaveLength(2); // No new version created
expect(duplicateUpdate.agent_ids).toEqual([]);
await expect(updateAgent({ id: agentId }, { agent_ids: [] })).rejects.toThrow(
'Duplicate version',
);
});
test('should handle agent without agent_ids field', async () => {
@@ -3072,212 +2985,6 @@ describe('Support Contact Field', () => {
// Verify support_contact is undefined when not provided
expect(agent.support_contact).toBeUndefined();
});
describe('getListAgentsByAccess - Security Tests', () => {
let userA, userB;
let agentA1, agentA2, agentA3;
beforeEach(async () => {
Agent = mongoose.models.Agent || mongoose.model('Agent', agentSchema);
await Agent.deleteMany({});
await AclEntry.deleteMany({});
// Create two users
userA = new mongoose.Types.ObjectId();
userB = new mongoose.Types.ObjectId();
// Create agents for user A
agentA1 = await createAgent({
id: `agent_${uuidv4().slice(0, 12)}`,
name: 'Agent A1',
description: 'User A agent 1',
provider: 'openai',
model: 'gpt-4',
author: userA,
});
agentA2 = await createAgent({
id: `agent_${uuidv4().slice(0, 12)}`,
name: 'Agent A2',
description: 'User A agent 2',
provider: 'openai',
model: 'gpt-4',
author: userA,
});
agentA3 = await createAgent({
id: `agent_${uuidv4().slice(0, 12)}`,
name: 'Agent A3',
description: 'User A agent 3',
provider: 'openai',
model: 'gpt-4',
author: userA,
});
});
test('should return empty list when user has no accessible agents (empty accessibleIds)', async () => {
// User B has no agents and no shared agents
const result = await getListAgentsByAccess({
accessibleIds: [],
otherParams: {},
});
expect(result.data).toHaveLength(0);
expect(result.has_more).toBe(false);
expect(result.first_id).toBeNull();
expect(result.last_id).toBeNull();
});
test('should not return other users agents when accessibleIds is empty', async () => {
// User B trying to list agents with empty accessibleIds should not see User A's agents
const result = await getListAgentsByAccess({
accessibleIds: [],
otherParams: { author: userB },
});
expect(result.data).toHaveLength(0);
expect(result.has_more).toBe(false);
});
test('should only return agents in accessibleIds list', async () => {
// Give User B access to only one of User A's agents
const accessibleIds = [agentA1._id];
const result = await getListAgentsByAccess({
accessibleIds,
otherParams: {},
});
expect(result.data).toHaveLength(1);
expect(result.data[0].id).toBe(agentA1.id);
expect(result.data[0].name).toBe('Agent A1');
});
test('should return multiple accessible agents when provided', async () => {
// Give User B access to two of User A's agents
const accessibleIds = [agentA1._id, agentA3._id];
const result = await getListAgentsByAccess({
accessibleIds,
otherParams: {},
});
expect(result.data).toHaveLength(2);
const returnedIds = result.data.map((agent) => agent.id);
expect(returnedIds).toContain(agentA1.id);
expect(returnedIds).toContain(agentA3.id);
expect(returnedIds).not.toContain(agentA2.id);
});
test('should respect other query parameters while enforcing accessibleIds', async () => {
// Give access to all agents but filter by name
const accessibleIds = [agentA1._id, agentA2._id, agentA3._id];
const result = await getListAgentsByAccess({
accessibleIds,
otherParams: { name: 'Agent A2' },
});
expect(result.data).toHaveLength(1);
expect(result.data[0].id).toBe(agentA2.id);
});
test('should handle pagination correctly with accessibleIds filter', async () => {
// Create more agents
const moreAgents = [];
for (let i = 4; i <= 10; i++) {
const agent = await createAgent({
id: `agent_${uuidv4().slice(0, 12)}`,
name: `Agent A${i}`,
description: `User A agent ${i}`,
provider: 'openai',
model: 'gpt-4',
author: userA,
});
moreAgents.push(agent);
}
// Give access to all agents
const allAgentIds = [agentA1, agentA2, agentA3, ...moreAgents].map((a) => a._id);
// First page
const page1 = await getListAgentsByAccess({
accessibleIds: allAgentIds,
otherParams: {},
limit: 5,
});
expect(page1.data).toHaveLength(5);
expect(page1.has_more).toBe(true);
expect(page1.after).toBeTruthy();
// Second page
const page2 = await getListAgentsByAccess({
accessibleIds: allAgentIds,
otherParams: {},
limit: 5,
after: page1.after,
});
expect(page2.data).toHaveLength(5);
expect(page2.has_more).toBe(false);
// Verify no overlap between pages
const page1Ids = page1.data.map((a) => a.id);
const page2Ids = page2.data.map((a) => a.id);
const intersection = page1Ids.filter((id) => page2Ids.includes(id));
expect(intersection).toHaveLength(0);
});
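The after cursor in these pagination tests is opaque here, but the prompt-group listing later in this diff decodes the same style of cursor as base64-encoded JSON holding updatedAt and _id. A round-trip sketch under that assumption:

// Encode the position of the last item on a page:
const makeCursor = (doc) =>
  Buffer.from(JSON.stringify({ updatedAt: doc.updatedAt, _id: doc._id })).toString('base64');

// Decode it when the next page is requested:
const readCursor = (after) => JSON.parse(Buffer.from(after, 'base64').toString('utf8'));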
test('should return empty list when accessibleIds contains non-existent IDs', async () => {
// Try with non-existent agent IDs
const fakeIds = [new mongoose.Types.ObjectId(), new mongoose.Types.ObjectId()];
const result = await getListAgentsByAccess({
accessibleIds: fakeIds,
otherParams: {},
});
expect(result.data).toHaveLength(0);
expect(result.has_more).toBe(false);
});
test('should handle undefined accessibleIds as empty array', async () => {
// When accessibleIds is undefined, it should be treated as empty array
const result = await getListAgentsByAccess({
accessibleIds: undefined,
otherParams: {},
});
expect(result.data).toHaveLength(0);
expect(result.has_more).toBe(false);
});
test('should combine accessibleIds with author filter correctly', async () => {
// Create an agent for User B
const agentB1 = await createAgent({
id: `agent_${uuidv4().slice(0, 12)}`,
name: 'Agent B1',
description: 'User B agent 1',
provider: 'openai',
model: 'gpt-4',
author: userB,
});
// Give User B access to one of User A's agents
const accessibleIds = [agentA1._id, agentB1._id];
// Filter by author should further restrict the results
const result = await getListAgentsByAccess({
accessibleIds,
otherParams: { author: userB },
});
expect(result.data).toHaveLength(1);
expect(result.data[0].id).toBe(agentB1.id);
expect(result.data[0].author).toBe(userB.toString());
});
});
});
function createBasicAgent(overrides = {}) {

View File

@@ -1,4 +1,4 @@
const { logger } = require('@librechat/data-schemas');
const { logger } = require('~/config');
const options = [
{

View File

@@ -1,5 +1,6 @@
const { logger } = require('@librechat/data-schemas');
const { createTempChatExpirationDate } = require('@librechat/api');
const getCustomConfig = require('~/server/services/Config/getCustomConfig');
const { getMessages, deleteMessages } = require('./Message');
const { Conversation } = require('~/db/models');
@@ -101,8 +102,8 @@ module.exports = {
if (req?.body?.isTemporary) {
try {
const appConfig = req.config;
update.expiredAt = createTempChatExpirationDate(appConfig?.interfaceConfig);
const customConfig = await getCustomConfig();
update.expiredAt = createTempChatExpirationDate(customConfig);
} catch (err) {
logger.error('Error creating temporary chat expiration date:', err);
logger.info(`---\`saveConvo\` context: ${metadata?.context}`);
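createTempChatExpirationDate comes from @librechat/api and is not shown in this diff, but the retention tests below pin its behavior: hours are clamped to the range [1, 8760] and default to 720 (30 days) when no config is available. A sketch consistent with those tests, not the actual implementation:

// Behavior inferred from the expiration tests in this diff.
function createTempChatExpirationDate(config) {
  const DEFAULT_HOURS = 720; // 30 days
  const hours = Math.min(Math.max(config?.temporaryChatRetention ?? DEFAULT_HOURS, 1), 8760);
  return new Date(Date.now() + hours * 60 * 60 * 1000);
}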
@@ -174,7 +175,7 @@ module.exports = {
if (search) {
try {
const meiliResults = await Conversation.meiliSearch(search, { filter: `user = "${user}"` });
const meiliResults = await Conversation.meiliSearch(search);
const matchingIds = Array.isArray(meiliResults.hits)
? meiliResults.hits.map((result) => result.conversationId)
: [];
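One side of this hunk scopes the Meilisearch query to the current user with a filter expression; the other drops the filter, so the returned hit IDs still need an ownership check before use. A sketch of that downstream scoping (assumed; the follow-up query is outside this hunk):

// Re-scope unfiltered search hits to the requesting user.
const convos = await Conversation.find({
  user,
  conversationId: { $in: matchingIds },
}).lean();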

View File

@@ -1,570 +0,0 @@
const mongoose = require('mongoose');
const { v4: uuidv4 } = require('uuid');
const { EModelEndpoint } = require('librechat-data-provider');
const { MongoMemoryServer } = require('mongodb-memory-server');
const {
deleteNullOrEmptyConversations,
searchConversation,
getConvosByCursor,
getConvosQueried,
getConvoFiles,
getConvoTitle,
deleteConvos,
saveConvo,
getConvo,
} = require('./Conversation');
jest.mock('~/server/services/Config/app');
jest.mock('./Message');
const { getMessages, deleteMessages } = require('./Message');
const { Conversation } = require('~/db/models');
describe('Conversation Operations', () => {
let mongoServer;
let mockReq;
let mockConversationData;
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
await mongoose.connect(mongoUri);
});
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
// Clear database
await Conversation.deleteMany({});
// Reset mocks
jest.clearAllMocks();
// Default mock implementations
getMessages.mockResolvedValue([]);
deleteMessages.mockResolvedValue({ deletedCount: 0 });
mockReq = {
user: { id: 'user123' },
body: {},
config: {
interfaceConfig: {
temporaryChatRetention: 24, // Default 24 hours
},
},
};
mockConversationData = {
conversationId: uuidv4(),
title: 'Test Conversation',
endpoint: EModelEndpoint.openAI,
};
});
describe('saveConvo', () => {
it('should save a conversation for an authenticated user', async () => {
const result = await saveConvo(mockReq, mockConversationData);
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.user).toBe('user123');
expect(result.title).toBe('Test Conversation');
expect(result.endpoint).toBe(EModelEndpoint.openAI);
// Verify the conversation was actually saved to the database
const savedConvo = await Conversation.findOne({
conversationId: mockConversationData.conversationId,
user: 'user123',
});
expect(savedConvo).toBeTruthy();
expect(savedConvo.title).toBe('Test Conversation');
});
it('should query messages when saving a conversation', async () => {
// Mock messages as ObjectIds
const mongoose = require('mongoose');
const mockMessages = [new mongoose.Types.ObjectId(), new mongoose.Types.ObjectId()];
getMessages.mockResolvedValue(mockMessages);
await saveConvo(mockReq, mockConversationData);
// Verify that getMessages was called with correct parameters
expect(getMessages).toHaveBeenCalledWith(
{ conversationId: mockConversationData.conversationId },
'_id',
);
});
it('should handle newConversationId when provided', async () => {
const newConversationId = uuidv4();
const result = await saveConvo(mockReq, {
...mockConversationData,
newConversationId,
});
expect(result.conversationId).toBe(newConversationId);
});
it('should handle unsetFields metadata', async () => {
const metadata = {
unsetFields: { someField: 1 },
};
await saveConvo(mockReq, mockConversationData, metadata);
const savedConvo = await Conversation.findOne({
conversationId: mockConversationData.conversationId,
});
expect(savedConvo.someField).toBeUndefined();
});
});
describe('isTemporary conversation handling', () => {
it('should save a conversation with expiredAt when isTemporary is true', async () => {
// Mock app config with 24 hour retention
mockReq.config.interfaceConfig.temporaryChatRetention = 24;
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveConvo(mockReq, mockConversationData);
const afterSave = new Date();
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.expiredAt).toBeDefined();
expect(result.expiredAt).toBeInstanceOf(Date);
// Verify expiredAt is approximately 24 hours in the future
const expectedExpirationTime = new Date(beforeSave.getTime() + 24 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
new Date(afterSave.getTime() + 24 * 60 * 60 * 1000 + 1000).getTime(),
);
});
it('should save a conversation without expiredAt when isTemporary is false', async () => {
mockReq.body = { isTemporary: false };
const result = await saveConvo(mockReq, mockConversationData);
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.expiredAt).toBeNull();
});
it('should save a conversation without expiredAt when isTemporary is not provided', async () => {
// No isTemporary in body
mockReq.body = {};
const result = await saveConvo(mockReq, mockConversationData);
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.expiredAt).toBeNull();
});
it('should use custom retention period from config', async () => {
// Mock app config with 48 hour retention
mockReq.config.interfaceConfig.temporaryChatRetention = 48;
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveConvo(mockReq, mockConversationData);
expect(result.expiredAt).toBeDefined();
// Verify expiredAt is approximately 48 hours in the future
const expectedExpirationTime = new Date(beforeSave.getTime() + 48 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should handle minimum retention period (1 hour)', async () => {
// Mock app config with less than minimum retention
mockReq.config.interfaceConfig.temporaryChatRetention = 0.5; // Half hour - should be clamped to 1 hour
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveConvo(mockReq, mockConversationData);
expect(result.expiredAt).toBeDefined();
// Verify expiredAt is approximately 1 hour in the future (minimum)
const expectedExpirationTime = new Date(beforeSave.getTime() + 1 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should handle maximum retention period (8760 hours)', async () => {
// Mock app config with more than maximum retention
mockReq.config.interfaceConfig.temporaryChatRetention = 10000; // Should be clamped to 8760 hours
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveConvo(mockReq, mockConversationData);
expect(result.expiredAt).toBeDefined();
// Verify expiredAt is approximately 8760 hours (1 year) in the future
const expectedExpirationTime = new Date(beforeSave.getTime() + 8760 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should handle missing config gracefully', async () => {
// Simulate missing config - should use default retention period
delete mockReq.config;
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveConvo(mockReq, mockConversationData);
const afterSave = new Date();
// Should still save the conversation with default retention period (30 days)
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.expiredAt).toBeDefined();
expect(result.expiredAt).toBeInstanceOf(Date);
// Verify expiredAt is approximately 30 days in the future (720 hours)
const expectedExpirationTime = new Date(beforeSave.getTime() + 720 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
new Date(afterSave.getTime() + 720 * 60 * 60 * 1000 + 1000).getTime(),
);
});
it('should use default retention when config is not provided', async () => {
// Use an empty config object; should fall back to the default retention
mockReq.config = {}; // Empty config
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveConvo(mockReq, mockConversationData);
expect(result.expiredAt).toBeDefined();
// Default retention is 30 days (720 hours)
const expectedExpirationTime = new Date(beforeSave.getTime() + 30 * 24 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should update expiredAt when saving existing temporary conversation', async () => {
// First save a temporary conversation
mockReq.config.interfaceConfig.temporaryChatRetention = 24;
mockReq.body = { isTemporary: true };
const firstSave = await saveConvo(mockReq, mockConversationData);
const originalExpiredAt = firstSave.expiredAt;
// Wait a bit to ensure time difference
await new Promise((resolve) => setTimeout(resolve, 100));
// Save again with same conversationId but different title
const updatedData = { ...mockConversationData, title: 'Updated Title' };
const secondSave = await saveConvo(mockReq, updatedData);
// Should update title and create new expiredAt
expect(secondSave.title).toBe('Updated Title');
expect(secondSave.expiredAt).toBeDefined();
expect(new Date(secondSave.expiredAt).getTime()).toBeGreaterThan(
new Date(originalExpiredAt).getTime(),
);
});
it('should not set expiredAt when updating non-temporary conversation', async () => {
// First save a non-temporary conversation
mockReq.body = { isTemporary: false };
const firstSave = await saveConvo(mockReq, mockConversationData);
expect(firstSave.expiredAt).toBeNull();
// Update without isTemporary flag
mockReq.body = {};
const updatedData = { ...mockConversationData, title: 'Updated Title' };
const secondSave = await saveConvo(mockReq, updatedData);
expect(secondSave.title).toBe('Updated Title');
expect(secondSave.expiredAt).toBeNull();
});
it('should filter out expired conversations in getConvosByCursor', async () => {
// Create some test conversations
const nonExpiredConvo = await Conversation.create({
conversationId: uuidv4(),
user: 'user123',
title: 'Non-expired',
endpoint: EModelEndpoint.openAI,
expiredAt: null,
updatedAt: new Date(),
});
await Conversation.create({
conversationId: uuidv4(),
user: 'user123',
title: 'Future expired',
endpoint: EModelEndpoint.openAI,
expiredAt: new Date(Date.now() + 24 * 60 * 60 * 1000), // 24 hours from now
updatedAt: new Date(),
});
// Mock Meili search
Conversation.meiliSearch = jest.fn().mockResolvedValue({ hits: [] });
const result = await getConvosByCursor('user123');
// Should only return conversations with null or non-existent expiredAt
expect(result.conversations).toHaveLength(1);
expect(result.conversations[0].conversationId).toBe(nonExpiredConvo.conversationId);
});
it('should filter out expired conversations in getConvosQueried', async () => {
// Create test conversations
const nonExpiredConvo = await Conversation.create({
conversationId: uuidv4(),
user: 'user123',
title: 'Non-expired',
endpoint: EModelEndpoint.openAI,
expiredAt: null,
});
const expiredConvo = await Conversation.create({
conversationId: uuidv4(),
user: 'user123',
title: 'Expired',
endpoint: EModelEndpoint.openAI,
expiredAt: new Date(Date.now() + 24 * 60 * 60 * 1000),
});
const convoIds = [
{ conversationId: nonExpiredConvo.conversationId },
{ conversationId: expiredConvo.conversationId },
];
const result = await getConvosQueried('user123', convoIds);
// Should only return the non-expired conversation
expect(result.conversations).toHaveLength(1);
expect(result.conversations[0].conversationId).toBe(nonExpiredConvo.conversationId);
expect(result.convoMap[nonExpiredConvo.conversationId]).toBeDefined();
expect(result.convoMap[expiredConvo.conversationId]).toBeUndefined();
});
});
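These tests only filter expired rows out at query time; physically removing them is a separate mechanism. If expiredAt is paired with a MongoDB TTL index, an assumption since no index definition appears in this diff, the setup would look like:

// Assumed TTL setup; not shown anywhere in this diff.
conversationSchema.index({ expiredAt: 1 }, { expireAfterSeconds: 0 });
// expireAfterSeconds: 0 deletes each document once its expiredAt has passed;
// documents where expiredAt is null or unset are never expired.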
describe('searchConversation', () => {
it('should find a conversation by conversationId', async () => {
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
title: 'Test',
endpoint: EModelEndpoint.openAI,
});
const result = await searchConversation(mockConversationData.conversationId);
expect(result).toBeTruthy();
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.user).toBe('user123');
expect(result.title).toBeUndefined(); // Only returns conversationId and user
});
it('should return null if conversation not found', async () => {
const result = await searchConversation('non-existent-id');
expect(result).toBeNull();
});
});
describe('getConvo', () => {
it('should retrieve a conversation for a user', async () => {
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
title: 'Test Conversation',
endpoint: EModelEndpoint.openAI,
});
const result = await getConvo('user123', mockConversationData.conversationId);
expect(result.conversationId).toBe(mockConversationData.conversationId);
expect(result.user).toBe('user123');
expect(result.title).toBe('Test Conversation');
});
it('should return null if conversation not found', async () => {
const result = await getConvo('user123', 'non-existent-id');
expect(result).toBeNull();
});
});
describe('getConvoTitle', () => {
it('should return the conversation title', async () => {
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
title: 'Test Title',
endpoint: EModelEndpoint.openAI,
});
const result = await getConvoTitle('user123', mockConversationData.conversationId);
expect(result).toBe('Test Title');
});
it('should return null if conversation has no title', async () => {
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
title: null,
endpoint: EModelEndpoint.openAI,
});
const result = await getConvoTitle('user123', mockConversationData.conversationId);
expect(result).toBeNull();
});
it('should return "New Chat" if conversation not found', async () => {
const result = await getConvoTitle('user123', 'non-existent-id');
expect(result).toBe('New Chat');
});
});
describe('getConvoFiles', () => {
it('should return conversation files', async () => {
const files = ['file1', 'file2'];
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
endpoint: EModelEndpoint.openAI,
files,
});
const result = await getConvoFiles(mockConversationData.conversationId);
expect(result).toEqual(files);
});
it('should return empty array if no files', async () => {
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
endpoint: EModelEndpoint.openAI,
});
const result = await getConvoFiles(mockConversationData.conversationId);
expect(result).toEqual([]);
});
it('should return empty array if conversation not found', async () => {
const result = await getConvoFiles('non-existent-id');
expect(result).toEqual([]);
});
});
describe('deleteConvos', () => {
it('should delete conversations and associated messages', async () => {
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user123',
title: 'To Delete',
endpoint: EModelEndpoint.openAI,
});
deleteMessages.mockResolvedValue({ deletedCount: 5 });
const result = await deleteConvos('user123', {
conversationId: mockConversationData.conversationId,
});
expect(result.deletedCount).toBe(1);
expect(result.messages.deletedCount).toBe(5);
expect(deleteMessages).toHaveBeenCalledWith({
conversationId: { $in: [mockConversationData.conversationId] },
});
// Verify conversation was deleted
const deletedConvo = await Conversation.findOne({
conversationId: mockConversationData.conversationId,
});
expect(deletedConvo).toBeNull();
});
it('should throw error if no conversations found', async () => {
await expect(deleteConvos('user123', { conversationId: 'non-existent' })).rejects.toThrow(
'Conversation not found or already deleted.',
);
});
});
describe('deleteNullOrEmptyConversations', () => {
it('should delete conversations with null, empty, or missing conversationIds', async () => {
// Since conversationId is required by the schema, we can't create documents with null/missing IDs
// This test should verify the function works when such documents exist (e.g., from data corruption)
// For this test, let's create a valid conversation and verify the function doesn't delete it
await Conversation.create({
conversationId: mockConversationData.conversationId,
user: 'user4',
endpoint: EModelEndpoint.openAI,
});
deleteMessages.mockResolvedValue({ deletedCount: 0 });
const result = await deleteNullOrEmptyConversations();
expect(result.conversations.deletedCount).toBe(0); // No invalid conversations to delete
expect(result.messages.deletedCount).toBe(0);
// Verify valid conversation remains
const remainingConvos = await Conversation.find({});
expect(remainingConvos).toHaveLength(1);
expect(remainingConvos[0].conversationId).toBe(mockConversationData.conversationId);
});
});
describe('Error Handling', () => {
it('should handle database errors in saveConvo', async () => {
// Force a database error by disconnecting
await mongoose.disconnect();
const result = await saveConvo(mockReq, mockConversationData);
expect(result).toEqual({ message: 'Error saving conversation' });
// Reconnect for other tests
await mongoose.connect(mongoServer.getUri());
});
});
});

View File

@@ -239,46 +239,10 @@ const updateTagsForConversation = async (user, conversationId, tags) => {
}
};
/**
* Increments tag counts for existing tags only.
* @param {string} user - The user ID.
* @param {string[]} tags - Array of tag names to increment
* @returns {Promise<void>}
*/
const bulkIncrementTagCounts = async (user, tags) => {
if (!tags || tags.length === 0) {
return;
}
try {
const uniqueTags = [...new Set(tags.filter(Boolean))];
if (uniqueTags.length === 0) {
return;
}
const bulkOps = uniqueTags.map((tag) => ({
updateOne: {
filter: { user, tag },
update: { $inc: { count: 1 } },
},
}));
const result = await ConversationTag.bulkWrite(bulkOps);
if (result && result.modifiedCount > 0) {
logger.debug(
`user: ${user} | Incremented tag counts - modified ${result.modifiedCount} tags`,
);
}
} catch (error) {
logger.error('[bulkIncrementTagCounts] Error incrementing tag counts', error);
}
};
module.exports = {
getConversationTags,
createConversationTag,
updateConversationTag,
deleteConversationTag,
bulkIncrementTagCounts,
updateTagsForConversation,
};
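As its docstring says, bulkIncrementTagCounts only touches existing tags: the updateOne operations carry no upsert flag, so a tag name with no matching document is silently skipped. Usage is a single call; duplicates are de-duped by the Set above:

// Increments count on 'work' and 'ideas' if those tags exist for the user;
// 'work' is counted once despite appearing twice, and unknown tags are ignored.
await bulkIncrementTagCounts('user123', ['work', 'ideas', 'work']);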

View File

@@ -42,7 +42,7 @@ const getToolFilesByIds = async (fileIds, toolResourceSet) => {
$or: [],
};
if (toolResourceSet.has(EToolResources.context)) {
if (toolResourceSet.has(EToolResources.ocr)) {
filter.$or.push({ text: { $exists: true, $ne: null }, context: FileContext.agents });
}
if (toolResourceSet.has(EToolResources.file_search)) {

View File

@@ -1,17 +1,11 @@
const mongoose = require('mongoose');
const { v4: uuidv4 } = require('uuid');
const { createModels } = require('@librechat/data-schemas');
const { MongoMemoryServer } = require('mongodb-memory-server');
const {
SystemRoles,
ResourceType,
AccessRoleIds,
PrincipalType,
} = require('librechat-data-provider');
const { grantPermission } = require('~/server/services/PermissionService');
const { createModels } = require('@librechat/data-schemas');
const { getFiles, createFile } = require('./File');
const { seedDefaultRoles } = require('~/models');
const { createAgent } = require('./Agent');
const { grantPermission } = require('~/server/services/PermissionService');
const { seedDefaultRoles } = require('~/models');
let File;
let Agent;
@@ -120,22 +114,17 @@ describe('File Access Control', () => {
// Grant EDIT permission to user on the agent
await grantPermission({
principalType: PrincipalType.USER,
principalType: 'user',
principalId: userId,
resourceType: ResourceType.AGENT,
resourceType: 'agent',
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_EDITOR,
accessRoleId: 'agent_editor',
grantedBy: authorId,
});
// Check access for all files
const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
const accessMap = await hasAccessToFilesViaAgent({
userId: userId,
role: SystemRoles.USER,
fileIds,
agentId: agent.id, // Use agent.id which is the custom UUID
});
const accessMap = await hasAccessToFilesViaAgent(userId.toString(), fileIds, agentId);
// Should have access only to the first two files
expect(accessMap.get(fileIds[0])).toBe(true);
@@ -173,12 +162,7 @@ describe('File Access Control', () => {
// Check access as the author
const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
const accessMap = await hasAccessToFilesViaAgent({
userId: authorId,
role: SystemRoles.USER,
fileIds,
agentId,
});
const accessMap = await hasAccessToFilesViaAgent(authorId.toString(), fileIds, agentId);
// Author should have access to all files
expect(accessMap.get(fileIds[0])).toBe(true);
@@ -199,19 +183,18 @@ describe('File Access Control', () => {
});
const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
const accessMap = await hasAccessToFilesViaAgent({
userId: userId,
role: SystemRoles.USER,
const accessMap = await hasAccessToFilesViaAgent(
userId.toString(),
fileIds,
agentId: 'non-existent-agent',
});
'non-existent-agent',
);
// Should have no access to any files
expect(accessMap.get(fileIds[0])).toBe(false);
expect(accessMap.get(fileIds[1])).toBe(false);
});
it('should deny access when user only has VIEW permission and needs access for deletion', async () => {
it('should deny access when user only has VIEW permission', async () => {
const userId = new mongoose.Types.ObjectId();
const authorId = new mongoose.Types.ObjectId();
const agentId = uuidv4();
@@ -248,86 +231,22 @@ describe('File Access Control', () => {
// Grant only VIEW permission to user on the agent
await grantPermission({
principalType: PrincipalType.USER,
principalType: 'user',
principalId: userId,
resourceType: ResourceType.AGENT,
resourceType: 'agent',
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_VIEWER,
accessRoleId: 'agent_viewer',
grantedBy: authorId,
});
// Check access for files
const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
const accessMap = await hasAccessToFilesViaAgent({
userId: userId,
role: SystemRoles.USER,
fileIds,
agentId,
isDelete: true,
});
const accessMap = await hasAccessToFilesViaAgent(userId.toString(), fileIds, agentId);
// Should have no access to any files when only VIEW permission
expect(accessMap.get(fileIds[0])).toBe(false);
expect(accessMap.get(fileIds[1])).toBe(false);
});
it('should grant access when user has VIEW permission', async () => {
const userId = new mongoose.Types.ObjectId();
const authorId = new mongoose.Types.ObjectId();
const agentId = uuidv4();
const fileIds = [uuidv4(), uuidv4()];
// Create users
await User.create({
_id: userId,
email: 'user@example.com',
emailVerified: true,
provider: 'local',
});
await User.create({
_id: authorId,
email: 'author@example.com',
emailVerified: true,
provider: 'local',
});
// Create agent with files
const agent = await createAgent({
id: agentId,
name: 'View-Only Agent',
author: authorId,
model: 'gpt-4',
provider: 'openai',
tool_resources: {
file_search: {
file_ids: fileIds,
},
},
});
// Grant only VIEW permission to user on the agent
await grantPermission({
principalType: PrincipalType.USER,
principalId: userId,
resourceType: ResourceType.AGENT,
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_VIEWER,
grantedBy: authorId,
});
// Check access for files
const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
const accessMap = await hasAccessToFilesViaAgent({
userId: userId,
role: SystemRoles.USER,
fileIds,
agentId,
});
expect(accessMap.get(fileIds[0])).toBe(true);
expect(accessMap.get(fileIds[1])).toBe(true);
});
});
describe('getFiles with agent access control', () => {
@@ -370,11 +289,11 @@ describe('File Access Control', () => {
// Grant EDIT permission to user on the agent
await grantPermission({
principalType: PrincipalType.USER,
principalType: 'user',
principalId: userId,
resourceType: ResourceType.AGENT,
resourceType: 'agent',
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_EDITOR,
accessRoleId: 'agent_editor',
grantedBy: authorId,
});
@@ -416,12 +335,7 @@ describe('File Access Control', () => {
// Then filter by access control
const { filterFilesByAgentAccess } = require('~/server/services/Files/permissions');
const files = await filterFilesByAgentAccess({
files: allFiles,
userId: userId,
role: SystemRoles.USER,
agentId,
});
const files = await filterFilesByAgentAccess(allFiles, userId.toString(), agentId);
expect(files).toHaveLength(2);
expect(files.map((f) => f.file_id)).toContain(ownedFileId);
@@ -456,166 +370,4 @@ describe('File Access Control', () => {
expect(files).toHaveLength(2);
});
});
describe('Role-based file permissions', () => {
it('should optimize permission checks when role is provided', async () => {
const userId = new mongoose.Types.ObjectId();
const authorId = new mongoose.Types.ObjectId();
const agentId = uuidv4();
const fileIds = [uuidv4(), uuidv4()];
// Create users
await User.create({
_id: userId,
email: 'user@example.com',
emailVerified: true,
provider: 'local',
role: 'ADMIN', // User has ADMIN role
});
await User.create({
_id: authorId,
email: 'author@example.com',
emailVerified: true,
provider: 'local',
});
// Create files
for (const fileId of fileIds) {
await createFile({
file_id: fileId,
user: authorId,
filename: `${fileId}.txt`,
filepath: `/uploads/${fileId}.txt`,
type: 'text/plain',
bytes: 100,
});
}
// Create agent with files
const agent = await createAgent({
id: agentId,
name: 'Test Agent',
author: authorId,
model: 'gpt-4',
provider: 'openai',
tool_resources: {
file_search: {
file_ids: fileIds,
},
},
});
// Grant permission to ADMIN role
await grantPermission({
principalType: PrincipalType.ROLE,
principalId: 'ADMIN',
resourceType: ResourceType.AGENT,
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_EDITOR,
grantedBy: authorId,
});
// Check access with role provided (should avoid DB query)
const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
const accessMapWithRole = await hasAccessToFilesViaAgent({
userId: userId,
role: 'ADMIN',
fileIds,
agentId: agent.id,
});
// User should have access through their ADMIN role
expect(accessMapWithRole.get(fileIds[0])).toBe(true);
expect(accessMapWithRole.get(fileIds[1])).toBe(true);
// Check access without role (will query DB to get user's role)
const accessMapWithoutRole = await hasAccessToFilesViaAgent({
userId: userId,
fileIds,
agentId: agent.id,
});
// Should have same result
expect(accessMapWithoutRole.get(fileIds[0])).toBe(true);
expect(accessMapWithoutRole.get(fileIds[1])).toBe(true);
});
it('should deny access when user role changes', async () => {
const userId = new mongoose.Types.ObjectId();
const authorId = new mongoose.Types.ObjectId();
const agentId = uuidv4();
const fileId = uuidv4();
// Create users
await User.create({
_id: userId,
email: 'user@example.com',
emailVerified: true,
provider: 'local',
role: 'EDITOR',
});
await User.create({
_id: authorId,
email: 'author@example.com',
emailVerified: true,
provider: 'local',
});
// Create file
await createFile({
file_id: fileId,
user: authorId,
filename: 'test.txt',
filepath: '/uploads/test.txt',
type: 'text/plain',
bytes: 100,
});
// Create agent
const agent = await createAgent({
id: agentId,
name: 'Test Agent',
author: authorId,
model: 'gpt-4',
provider: 'openai',
tool_resources: {
file_search: {
file_ids: [fileId],
},
},
});
// Grant permission to EDITOR role only
await grantPermission({
principalType: PrincipalType.ROLE,
principalId: 'EDITOR',
resourceType: ResourceType.AGENT,
resourceId: agent._id,
accessRoleId: AccessRoleIds.AGENT_EDITOR,
grantedBy: authorId,
});
const { hasAccessToFilesViaAgent } = require('~/server/services/Files/permissions');
// Check with EDITOR role - should have access
const accessAsEditor = await hasAccessToFilesViaAgent({
userId: userId,
role: 'EDITOR',
fileIds: [fileId],
agentId: agent.id,
});
expect(accessAsEditor.get(fileId)).toBe(true);
// Simulate role change to USER - should lose access
const accessAsUser = await hasAccessToFilesViaAgent({
userId: userId,
role: SystemRoles.USER,
fileIds: [fileId],
agentId: agent.id,
});
expect(accessAsUser.get(fileId)).toBe(false);
});
});
});
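The removed tests exercise two principal types, a direct user grant and a role grant ('ADMIN', 'EDITOR'), so a check that honors both presumably matches ACL entries against either principal. A rough sketch (assumed; the real PermissionService query is not part of this diff):

// Assumed lookup behind the role handling in hasAccessToFilesViaAgent.
const EDIT = 2; // assumed bit value, consistent with permBits: 15 earlier in this diff
const entries = await AclEntry.find({
  resourceType: 'agent',
  resourceId: agent._id,
  $or: [
    { principalType: 'user', principalId: userId },
    { principalType: 'role', principalId: role }, // e.g. 'ADMIN'
  ],
});
const canEdit = entries.some((e) => (e.permBits & EDIT) !== 0);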

View File

@@ -1,6 +1,7 @@
const { z } = require('zod');
const { logger } = require('@librechat/data-schemas');
const { createTempChatExpirationDate } = require('@librechat/api');
const getCustomConfig = require('~/server/services/Config/getCustomConfig');
const { Message } = require('~/db/models');
const idSchema = z.string().uuid();
@@ -10,7 +11,7 @@ const idSchema = z.string().uuid();
*
* @async
* @function saveMessage
* @param {ServerRequest} req - The request object containing user information.
* @param {Express.Request} req - The request object containing user information.
* @param {Object} params - The message data object.
* @param {string} params.endpoint - The endpoint where the message originated.
* @param {string} params.iconURL - The URL of the sender's icon.
@@ -56,8 +57,8 @@ async function saveMessage(req, params, metadata) {
if (req?.body?.isTemporary) {
try {
const appConfig = req.config;
update.expiredAt = createTempChatExpirationDate(appConfig?.interfaceConfig);
const customConfig = await getCustomConfig();
update.expiredAt = createTempChatExpirationDate(customConfig);
} catch (err) {
logger.error('Error creating temporary chat expiration date:', err);
logger.info(`---\`saveMessage\` context: ${metadata?.context}`);

View File

@@ -1,20 +1,17 @@
const mongoose = require('mongoose');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { v4: uuidv4 } = require('uuid');
const { messageSchema } = require('@librechat/data-schemas');
const { MongoMemoryServer } = require('mongodb-memory-server');
const {
saveMessage,
getMessages,
updateMessage,
deleteMessages,
bulkSaveMessages,
updateMessageText,
deleteMessagesSince,
} = require('./Message');
jest.mock('~/server/services/Config/app');
/**
* @type {import('mongoose').Model<import('@librechat/data-schemas').IMessage>}
*/
@@ -43,11 +40,6 @@ describe('Message Operations', () => {
mockReq = {
user: { id: 'user123' },
config: {
interfaceConfig: {
temporaryChatRetention: 24, // Default 24 hours
},
},
};
mockMessageData = {
@@ -125,21 +117,21 @@ describe('Message Operations', () => {
const conversationId = uuidv4();
// Create multiple messages in the same conversation
await saveMessage(mockReq, {
const message1 = await saveMessage(mockReq, {
messageId: 'msg1',
conversationId,
text: 'First message',
user: 'user123',
});
await saveMessage(mockReq, {
const message2 = await saveMessage(mockReq, {
messageId: 'msg2',
conversationId,
text: 'Second message',
user: 'user123',
});
await saveMessage(mockReq, {
const message3 = await saveMessage(mockReq, {
messageId: 'msg3',
conversationId,
text: 'Third message',
@@ -322,255 +314,4 @@ describe('Message Operations', () => {
expect(messages[0].text).toBe('Victim message');
});
});
describe('isTemporary message handling', () => {
beforeEach(() => {
// Reset mocks before each test
jest.clearAllMocks();
});
it('should save a message with expiredAt when isTemporary is true', async () => {
// Mock app config with 24 hour retention
mockReq.config.interfaceConfig.temporaryChatRetention = 24;
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveMessage(mockReq, mockMessageData);
const afterSave = new Date();
expect(result.messageId).toBe('msg123');
expect(result.expiredAt).toBeDefined();
expect(result.expiredAt).toBeInstanceOf(Date);
// Verify expiredAt is approximately 24 hours in the future
const expectedExpirationTime = new Date(beforeSave.getTime() + 24 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
new Date(afterSave.getTime() + 24 * 60 * 60 * 1000 + 1000).getTime(),
);
});
it('should save a message without expiredAt when isTemporary is false', async () => {
mockReq.body = { isTemporary: false };
const result = await saveMessage(mockReq, mockMessageData);
expect(result.messageId).toBe('msg123');
expect(result.expiredAt).toBeNull();
});
it('should save a message without expiredAt when isTemporary is not provided', async () => {
// No isTemporary in body
mockReq.body = {};
const result = await saveMessage(mockReq, mockMessageData);
expect(result.messageId).toBe('msg123');
expect(result.expiredAt).toBeNull();
});
it('should use custom retention period from config', async () => {
// Mock app config with 48 hour retention
mockReq.config.interfaceConfig.temporaryChatRetention = 48;
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveMessage(mockReq, mockMessageData);
expect(result.expiredAt).toBeDefined();
// Verify expiredAt is approximately 48 hours in the future
const expectedExpirationTime = new Date(beforeSave.getTime() + 48 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should handle minimum retention period (1 hour)', async () => {
// Mock app config with less than minimum retention
mockReq.config.interfaceConfig.temporaryChatRetention = 0.5; // Half hour - should be clamped to 1 hour
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveMessage(mockReq, mockMessageData);
expect(result.expiredAt).toBeDefined();
// Verify expiredAt is approximately 1 hour in the future (minimum)
const expectedExpirationTime = new Date(beforeSave.getTime() + 1 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should handle maximum retention period (8760 hours)', async () => {
// Mock app config with more than maximum retention
mockReq.config.interfaceConfig.temporaryChatRetention = 10000; // Should be clamped to 8760 hours
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveMessage(mockReq, mockMessageData);
expect(result.expiredAt).toBeDefined();
// Verify expiredAt is approximately 8760 hours (1 year) in the future
const expectedExpirationTime = new Date(beforeSave.getTime() + 8760 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should handle missing config gracefully', async () => {
// Simulate missing config - should use default retention period
delete mockReq.config;
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveMessage(mockReq, mockMessageData);
const afterSave = new Date();
// Should still save the message with default retention period (30 days)
expect(result.messageId).toBe('msg123');
expect(result.expiredAt).toBeDefined();
expect(result.expiredAt).toBeInstanceOf(Date);
// Verify expiredAt is approximately 30 days in the future (720 hours)
const expectedExpirationTime = new Date(beforeSave.getTime() + 720 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
new Date(afterSave.getTime() + 720 * 60 * 60 * 1000 + 1000).getTime(),
);
});
it('should use default retention when config is not provided', async () => {
// Use an empty config object; should fall back to the default retention
mockReq.config = {}; // Empty config
mockReq.body = { isTemporary: true };
const beforeSave = new Date();
const result = await saveMessage(mockReq, mockMessageData);
expect(result.expiredAt).toBeDefined();
// Default retention is 30 days (720 hours)
const expectedExpirationTime = new Date(beforeSave.getTime() + 30 * 24 * 60 * 60 * 1000);
const actualExpirationTime = new Date(result.expiredAt);
expect(actualExpirationTime.getTime()).toBeGreaterThanOrEqual(
expectedExpirationTime.getTime() - 1000,
);
expect(actualExpirationTime.getTime()).toBeLessThanOrEqual(
expectedExpirationTime.getTime() + 1000,
);
});
it('should not update expiredAt on message update', async () => {
// First save a temporary message
mockReq.config.interfaceConfig.temporaryChatRetention = 24;
mockReq.body = { isTemporary: true };
const savedMessage = await saveMessage(mockReq, mockMessageData);
const originalExpiredAt = savedMessage.expiredAt;
// Now update the message without isTemporary flag
mockReq.body = {};
const updatedMessage = await updateMessage(mockReq, {
messageId: 'msg123',
text: 'Updated text',
});
// expiredAt should not be in the returned updated message object
expect(updatedMessage.expiredAt).toBeUndefined();
// Verify in database that expiredAt wasn't changed
const dbMessage = await Message.findOne({ messageId: 'msg123', user: 'user123' });
expect(dbMessage.expiredAt).toEqual(originalExpiredAt);
});
it('should refresh expiredAt when saving existing temporary message', async () => {
// First save a temporary message
mockReq.config.interfaceConfig.temporaryChatRetention = 24;
mockReq.body = { isTemporary: true };
const firstSave = await saveMessage(mockReq, mockMessageData);
const originalExpiredAt = firstSave.expiredAt;
// Wait a bit to ensure time difference
await new Promise((resolve) => setTimeout(resolve, 100));
// Save again with same messageId but different text
const updatedData = { ...mockMessageData, text: 'Updated text' };
const secondSave = await saveMessage(mockReq, updatedData);
// Should update text and set a fresh expiredAt
expect(secondSave.text).toBe('Updated text');
expect(secondSave.expiredAt).toBeDefined();
expect(new Date(secondSave.expiredAt).getTime()).toBeGreaterThan(
new Date(originalExpiredAt).getTime(),
);
});
it('should handle bulk operations with temporary messages', async () => {
// This test verifies bulkSaveMessages doesn't interfere with expiredAt
const messages = [
{
messageId: 'bulk1',
conversationId: uuidv4(),
text: 'Bulk message 1',
user: 'user123',
expiredAt: new Date(Date.now() + 24 * 60 * 60 * 1000),
},
{
messageId: 'bulk2',
conversationId: uuidv4(),
text: 'Bulk message 2',
user: 'user123',
expiredAt: null,
},
];
await bulkSaveMessages(messages);
const savedMessages = await Message.find({
messageId: { $in: ['bulk1', 'bulk2'] },
}).lean();
expect(savedMessages).toHaveLength(2);
const bulk1 = savedMessages.find((m) => m.messageId === 'bulk1');
const bulk2 = savedMessages.find((m) => m.messageId === 'bulk2');
expect(bulk1.expiredAt).toBeDefined();
expect(bulk2.expiredAt).toBeNull();
});
});
});

View File

@@ -1,18 +1,12 @@
const { ObjectId } = require('mongodb');
const { logger } = require('@librechat/data-schemas');
const { SystemRoles, SystemCategories, Constants } = require('librechat-data-provider');
const {
Constants,
SystemRoles,
ResourceType,
SystemCategories,
} = require('librechat-data-provider');
const {
removeGroupFromAllProjects,
removeGroupIdsFromProject,
addGroupIdsToProject,
getProjectByName,
addGroupIdsToProject,
removeGroupIdsFromProject,
removeGroupFromAllProjects,
} = require('./Project');
const { removeAllPermissions } = require('~/server/services/PermissionService');
const { PromptGroup, Prompt } = require('~/db/models');
const { escapeRegExp } = require('~/server/utils');
@@ -106,6 +100,10 @@ const getAllPromptGroups = async (req, filter) => {
try {
const { name, ...query } = filter;
if (!query.author) {
throw new Error('Author is required');
}
let searchShared = true;
let searchSharedOnly = false;
if (name) {
@@ -155,6 +153,10 @@ const getPromptGroups = async (req, filter) => {
const validatedPageNumber = Math.max(parseInt(pageNumber, 10), 1);
const validatedPageSize = Math.max(parseInt(pageSize, 10), 1);
if (!query.author) {
throw new Error('Author is required');
}
let searchShared = true;
let searchSharedOnly = false;
if (name) {
@@ -219,16 +221,12 @@ const getPromptGroups = async (req, filter) => {
* @returns {Promise<TDeletePromptGroupResponse>}
*/
const deletePromptGroup = async ({ _id, author, role }) => {
// Build query - with ACL, author is optional
const query = { _id };
const groupQuery = { groupId: new ObjectId(_id) };
// Legacy: Add author filter if provided (backward compatibility)
if (author && role !== SystemRoles.ADMIN) {
query.author = author;
groupQuery.author = author;
const query = { _id, author };
const groupQuery = { groupId: new ObjectId(_id), author };
if (role === SystemRoles.ADMIN) {
delete query.author;
delete groupQuery.author;
}
const response = await PromptGroup.deleteOne(query);
if (!response || response.deletedCount === 0) {
@@ -237,140 +235,13 @@ const deletePromptGroup = async ({ _id, author, role }) => {
await Prompt.deleteMany(groupQuery);
await removeGroupFromAllProjects(_id);
try {
await removeAllPermissions({ resourceType: ResourceType.PROMPTGROUP, resourceId: _id });
} catch (error) {
logger.error('Error removing promptGroup permissions:', error);
}
return { message: 'Prompt group deleted successfully' };
};
/**
* Get prompt groups by accessible IDs with optional cursor-based pagination.
* @param {Object} params - The parameters for getting accessible prompt groups.
* @param {Array} [params.accessibleIds] - Array of prompt group ObjectIds the user has ACL access to.
* @param {Object} [params.otherParams] - Additional query parameters (including author filter).
* @param {number} [params.limit] - Number of prompt groups to return (max 100). If not provided, returns all prompt groups.
* @param {string} [params.after] - Cursor for pagination - get prompt groups after this cursor. // base64 encoded JSON string with updatedAt and _id.
* @returns {Promise<Object>} A promise that resolves to an object containing the prompt groups data and pagination info.
*/
async function getListPromptGroupsByAccess({
accessibleIds = [],
otherParams = {},
limit = null,
after = null,
}) {
const isPaginated = limit !== null && limit !== undefined;
const normalizedLimit = isPaginated ? Math.min(Math.max(1, parseInt(limit, 10) || 20), 100) : null;
// Build base query combining ACL accessible prompt groups with other filters
const baseQuery = { ...otherParams, _id: { $in: accessibleIds } };
// Add cursor condition
if (after && typeof after === 'string' && after !== 'undefined' && after !== 'null') {
try {
const cursor = JSON.parse(Buffer.from(after, 'base64').toString('utf8'));
const { updatedAt, _id } = cursor;
const cursorCondition = {
$or: [
{ updatedAt: { $lt: new Date(updatedAt) } },
{ updatedAt: new Date(updatedAt), _id: { $gt: new ObjectId(_id) } },
],
};
// Merge cursor condition with base query
if (Object.keys(baseQuery).length > 0) {
baseQuery.$and = [{ ...baseQuery }, cursorCondition];
// Remove the original conditions from baseQuery to avoid duplication
Object.keys(baseQuery).forEach((key) => {
if (key !== '$and') delete baseQuery[key];
});
} else {
Object.assign(baseQuery, cursorCondition);
}
} catch (error) {
logger.warn('Invalid cursor:', error.message);
}
}
// Build aggregation pipeline
const pipeline = [{ $match: baseQuery }, { $sort: { updatedAt: -1, _id: 1 } }];
// Only apply limit if pagination is requested
if (isPaginated) {
pipeline.push({ $limit: normalizedLimit + 1 });
}
// Add lookup for production prompt
pipeline.push(
{
$lookup: {
from: 'prompts',
localField: 'productionId',
foreignField: '_id',
as: 'productionPrompt',
},
},
{ $unwind: { path: '$productionPrompt', preserveNullAndEmptyArrays: true } },
{
$project: {
name: 1,
numberOfGenerations: 1,
oneliner: 1,
category: 1,
projectIds: 1,
productionId: 1,
author: 1,
authorName: 1,
createdAt: 1,
updatedAt: 1,
'productionPrompt.prompt': 1,
},
},
);
const promptGroups = await PromptGroup.aggregate(pipeline).exec();
const hasMore = isPaginated ? promptGroups.length > normalizedLimit : false;
const data = (isPaginated ? promptGroups.slice(0, normalizedLimit) : promptGroups).map(
(group) => {
if (group.author) {
group.author = group.author.toString();
}
return group;
},
);
// Generate next cursor only if paginated
let nextCursor = null;
if (isPaginated && hasMore && data.length > 0) {
const lastGroup = promptGroups[normalizedLimit - 1];
nextCursor = Buffer.from(
JSON.stringify({
updatedAt: lastGroup.updatedAt.toISOString(),
_id: lastGroup._id.toString(),
}),
).toString('base64');
}
return {
object: 'list',
data,
first_id: data.length > 0 ? data[0]._id.toString() : null,
last_id: data.length > 0 ? data[data.length - 1]._id.toString() : null,
has_more: hasMore,
after: nextCursor,
};
}
module.exports = {
getPromptGroups,
deletePromptGroup,
getAllPromptGroups,
getListPromptGroupsByAccess,
/**
* Create a prompt and its respective group
* @param {TCreatePromptRecord} saveData
@@ -559,16 +430,6 @@ module.exports = {
.lean();
if (remainingPrompts.length === 0) {
// Remove all ACL entries for the promptGroup when deleting the last prompt
try {
await removeAllPermissions({
resourceType: ResourceType.PROMPTGROUP,
resourceId: groupId,
});
} catch (error) {
logger.error('Error removing promptGroup permissions:', error);
}
await PromptGroup.deleteOne({ _id: groupId });
await removeGroupFromAllProjects(groupId);
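
For context on the removed helper: `getListPromptGroupsByAccess` implements keyset pagination, where the `after` cursor is a base64-encoded JSON payload of the last row's `updatedAt` and `_id`, matching the `{ updatedAt: -1, _id: 1 }` sort. A sketch of a caller draining all pages (values are illustrative):

// Illustrative paging loop over getListPromptGroupsByAccess.
async function listAllAccessibleGroups(accessibleIds) {
  const groups = [];
  let after = null;
  do {
    const page = await getListPromptGroupsByAccess({ accessibleIds, limit: 20, after });
    groups.push(...page.data);
    after = page.after; // base64-encoded { updatedAt, _id } of the last returned group
  } while (after); // null once has_more is false
  return groups;
}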

View File

@@ -1,564 +0,0 @@
const mongoose = require('mongoose');
const { ObjectId } = require('mongodb');
const { logger } = require('@librechat/data-schemas');
const { MongoMemoryServer } = require('mongodb-memory-server');
const {
SystemRoles,
ResourceType,
AccessRoleIds,
PrincipalType,
PermissionBits,
} = require('librechat-data-provider');
// Mock the config/connect module to prevent connection attempts during tests
jest.mock('../../config/connect', () => jest.fn().mockResolvedValue(true));
const dbModels = require('~/db/models');
// Disable console for tests
logger.silent = true;
let mongoServer;
let Prompt, PromptGroup, AclEntry, AccessRole, User, Group, Project;
let promptFns, permissionService;
let testUsers, testGroups, testRoles;
beforeAll(async () => {
// Set up MongoDB memory server
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
await mongoose.connect(mongoUri);
// Initialize models
Prompt = dbModels.Prompt;
PromptGroup = dbModels.PromptGroup;
AclEntry = dbModels.AclEntry;
AccessRole = dbModels.AccessRole;
User = dbModels.User;
Group = dbModels.Group;
Project = dbModels.Project;
promptFns = require('~/models/Prompt');
permissionService = require('~/server/services/PermissionService');
// Create test data
await setupTestData();
});
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
jest.clearAllMocks();
});
async function setupTestData() {
// Create access roles for promptGroups
testRoles = {
viewer: await AccessRole.create({
accessRoleId: AccessRoleIds.PROMPTGROUP_VIEWER,
name: 'Viewer',
description: 'Can view promptGroups',
resourceType: ResourceType.PROMPTGROUP,
permBits: PermissionBits.VIEW,
}),
editor: await AccessRole.create({
accessRoleId: AccessRoleIds.PROMPTGROUP_EDITOR,
name: 'Editor',
description: 'Can view and edit promptGroups',
resourceType: ResourceType.PROMPTGROUP,
permBits: PermissionBits.VIEW | PermissionBits.EDIT,
}),
owner: await AccessRole.create({
accessRoleId: AccessRoleIds.PROMPTGROUP_OWNER,
name: 'Owner',
description: 'Full control over promptGroups',
resourceType: ResourceType.PROMPTGROUP,
permBits:
PermissionBits.VIEW | PermissionBits.EDIT | PermissionBits.DELETE | PermissionBits.SHARE,
}),
};
// Create test users
testUsers = {
owner: await User.create({
name: 'Prompt Owner',
email: 'owner@example.com',
role: SystemRoles.USER,
}),
editor: await User.create({
name: 'Prompt Editor',
email: 'editor@example.com',
role: SystemRoles.USER,
}),
viewer: await User.create({
name: 'Prompt Viewer',
email: 'viewer@example.com',
role: SystemRoles.USER,
}),
admin: await User.create({
name: 'Admin User',
email: 'admin@example.com',
role: SystemRoles.ADMIN,
}),
noAccess: await User.create({
name: 'No Access User',
email: 'noaccess@example.com',
role: SystemRoles.USER,
}),
};
// Create test groups
testGroups = {
editors: await Group.create({
name: 'Prompt Editors',
description: 'Group with editor access',
}),
viewers: await Group.create({
name: 'Prompt Viewers',
description: 'Group with viewer access',
}),
};
await Project.create({
name: 'Global',
description: 'Global project',
promptGroupIds: [],
});
}
describe('Prompt ACL Permissions', () => {
describe('Creating Prompts with Permissions', () => {
it('should grant owner permissions when creating a prompt', async () => {
// First create a group
const testGroup = await PromptGroup.create({
name: 'Test Group',
category: 'testing',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new mongoose.Types.ObjectId(),
});
const promptData = {
prompt: {
prompt: 'Test prompt content',
name: 'Test Prompt',
type: 'text',
groupId: testGroup._id,
},
author: testUsers.owner._id,
};
await promptFns.savePrompt(promptData);
// Manually grant permissions as would happen in the route
await permissionService.grantPermission({
principalType: PrincipalType.USER,
principalId: testUsers.owner._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testGroup._id,
accessRoleId: AccessRoleIds.PROMPTGROUP_OWNER,
grantedBy: testUsers.owner._id,
});
// Check ACL entry
const aclEntry = await AclEntry.findOne({
resourceType: ResourceType.PROMPTGROUP,
resourceId: testGroup._id,
principalType: PrincipalType.USER,
principalId: testUsers.owner._id,
});
expect(aclEntry).toBeTruthy();
expect(aclEntry.permBits).toBe(testRoles.owner.permBits);
});
});
describe('Accessing Prompts', () => {
let testPromptGroup;
beforeEach(async () => {
// Create a prompt group
testPromptGroup = await PromptGroup.create({
name: 'Test Group',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new ObjectId(),
});
// Create a prompt
await Prompt.create({
prompt: 'Test prompt for access control',
name: 'Access Test Prompt',
author: testUsers.owner._id,
groupId: testPromptGroup._id,
type: 'text',
});
// Grant owner permissions
await permissionService.grantPermission({
principalType: PrincipalType.USER,
principalId: testUsers.owner._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
accessRoleId: AccessRoleIds.PROMPTGROUP_OWNER,
grantedBy: testUsers.owner._id,
});
});
afterEach(async () => {
await Prompt.deleteMany({});
await PromptGroup.deleteMany({});
await AclEntry.deleteMany({});
});
it('owner should have full access to their prompt', async () => {
const hasAccess = await permissionService.checkPermission({
userId: testUsers.owner._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
requiredPermission: PermissionBits.VIEW,
});
expect(hasAccess).toBe(true);
const canEdit = await permissionService.checkPermission({
userId: testUsers.owner._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
requiredPermission: PermissionBits.EDIT,
});
expect(canEdit).toBe(true);
});
it('user with viewer role should only have view access', async () => {
// Grant viewer permissions
await permissionService.grantPermission({
principalType: PrincipalType.USER,
principalId: testUsers.viewer._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
accessRoleId: AccessRoleIds.PROMPTGROUP_VIEWER,
grantedBy: testUsers.owner._id,
});
const canView = await permissionService.checkPermission({
userId: testUsers.viewer._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
requiredPermission: PermissionBits.VIEW,
});
const canEdit = await permissionService.checkPermission({
userId: testUsers.viewer._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
requiredPermission: PermissionBits.EDIT,
});
expect(canView).toBe(true);
expect(canEdit).toBe(false);
});
it('user without permissions should have no access', async () => {
const hasAccess = await permissionService.checkPermission({
userId: testUsers.noAccess._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
requiredPermission: PermissionBits.VIEW,
});
expect(hasAccess).toBe(false);
});
it('admin bypass is handled at the middleware layer, not the permission service', async () => {
// Admin users should work through normal permission system
// The middleware layer handles admin bypass, not the permission service
const hasAccess = await permissionService.checkPermission({
userId: testUsers.admin._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
requiredPermission: PermissionBits.VIEW,
});
// Without explicit permissions, even admin won't have access at this layer
expect(hasAccess).toBe(false);
// The actual admin bypass happens in the middleware layer (`canAccessPromptViaGroup`/`canAccessPromptGroupResource`)
// which checks req.user.role === SystemRoles.ADMIN
});
});
describe('Group-based Access', () => {
let testPromptGroup;
beforeEach(async () => {
// Create a prompt group first
testPromptGroup = await PromptGroup.create({
name: 'Group Access Test Group',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new ObjectId(),
});
await Prompt.create({
prompt: 'Group access test prompt',
name: 'Group Test',
author: testUsers.owner._id,
groupId: testPromptGroup._id,
type: 'text',
});
// Add users to groups
await User.findByIdAndUpdate(testUsers.editor._id, {
$push: { groups: testGroups.editors._id },
});
await User.findByIdAndUpdate(testUsers.viewer._id, {
$push: { groups: testGroups.viewers._id },
});
});
afterEach(async () => {
await Prompt.deleteMany({});
await AclEntry.deleteMany({});
await User.updateMany({}, { $set: { groups: [] } });
});
it('group members should inherit group permissions', async () => {
// Create a prompt group
const testPromptGroup = await PromptGroup.create({
name: 'Group Test Group',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new ObjectId(),
});
const { addUserToGroup } = require('~/models');
await addUserToGroup(testUsers.editor._id, testGroups.editors._id);
const prompt = await promptFns.savePrompt({
author: testUsers.owner._id,
prompt: {
prompt: 'Group test prompt',
name: 'Group Test',
groupId: testPromptGroup._id,
type: 'text',
},
});
// Check if savePrompt returned an error
if (!prompt || !prompt.prompt) {
throw new Error(`Failed to save prompt: ${prompt?.message || 'Unknown error'}`);
}
// Grant edit permissions to the group
await permissionService.grantPermission({
principalType: PrincipalType.GROUP,
principalId: testGroups.editors._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
accessRoleId: AccessRoleIds.PROMPTGROUP_EDITOR,
grantedBy: testUsers.owner._id,
});
// Check if group member has access
const hasAccess = await permissionService.checkPermission({
userId: testUsers.editor._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
requiredPermission: PermissionBits.EDIT,
});
expect(hasAccess).toBe(true);
// Check that non-member doesn't have access
const nonMemberAccess = await permissionService.checkPermission({
userId: testUsers.viewer._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
requiredPermission: PermissionBits.EDIT,
});
expect(nonMemberAccess).toBe(false);
});
});
describe('Public Access', () => {
let publicPromptGroup, privatePromptGroup;
beforeEach(async () => {
// Create separate prompt groups for public and private access
publicPromptGroup = await PromptGroup.create({
name: 'Public Access Test Group',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new ObjectId(),
});
privatePromptGroup = await PromptGroup.create({
name: 'Private Access Test Group',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new ObjectId(),
});
// Create prompts in their respective groups
await Prompt.create({
prompt: 'Public prompt',
name: 'Public',
author: testUsers.owner._id,
groupId: publicPromptGroup._id,
type: 'text',
});
await Prompt.create({
prompt: 'Private prompt',
name: 'Private',
author: testUsers.owner._id,
groupId: privatePromptGroup._id,
type: 'text',
});
// Grant public view access to publicPromptGroup
await permissionService.grantPermission({
principalType: PrincipalType.PUBLIC,
principalId: null,
resourceType: ResourceType.PROMPTGROUP,
resourceId: publicPromptGroup._id,
accessRoleId: AccessRoleIds.PROMPTGROUP_VIEWER,
grantedBy: testUsers.owner._id,
});
// Grant only owner access to privatePromptGroup
await permissionService.grantPermission({
principalType: PrincipalType.USER,
principalId: testUsers.owner._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: privatePromptGroup._id,
accessRoleId: AccessRoleIds.PROMPTGROUP_OWNER,
grantedBy: testUsers.owner._id,
});
});
afterEach(async () => {
await Prompt.deleteMany({});
await PromptGroup.deleteMany({});
await AclEntry.deleteMany({});
});
it('public prompt should be accessible to any user', async () => {
const hasAccess = await permissionService.checkPermission({
userId: testUsers.noAccess._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: publicPromptGroup._id,
requiredPermission: PermissionBits.VIEW,
includePublic: true,
});
expect(hasAccess).toBe(true);
});
it('private prompt should not be accessible to unauthorized users', async () => {
const hasAccess = await permissionService.checkPermission({
userId: testUsers.noAccess._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: privatePromptGroup._id,
requiredPermission: PermissionBits.VIEW,
includePublic: true,
});
expect(hasAccess).toBe(false);
});
});
describe('Prompt Deletion', () => {
let testPromptGroup;
it('should remove ACL entries when prompt is deleted', async () => {
testPromptGroup = await PromptGroup.create({
name: 'Deletion Test Group',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new ObjectId(),
});
const prompt = await promptFns.savePrompt({
author: testUsers.owner._id,
prompt: {
prompt: 'To be deleted',
name: 'Delete Test',
groupId: testPromptGroup._id,
type: 'text',
},
});
// Check if savePrompt returned an error
if (!prompt || !prompt.prompt) {
throw new Error(`Failed to save prompt: ${prompt?.message || 'Unknown error'}`);
}
const testPromptId = prompt.prompt._id;
const promptGroupId = testPromptGroup._id;
// Grant permission
await permissionService.grantPermission({
principalType: PrincipalType.USER,
principalId: testUsers.owner._id,
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
accessRoleId: AccessRoleIds.PROMPTGROUP_OWNER,
grantedBy: testUsers.owner._id,
});
// Verify ACL entry exists
const beforeDelete = await AclEntry.find({
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
});
expect(beforeDelete).toHaveLength(1);
// Delete the prompt
await promptFns.deletePrompt({
promptId: testPromptId,
groupId: promptGroupId,
author: testUsers.owner._id,
role: SystemRoles.USER,
});
// Verify ACL entries are removed
const aclEntries = await AclEntry.find({
resourceType: ResourceType.PROMPTGROUP,
resourceId: testPromptGroup._id,
});
expect(aclEntries).toHaveLength(0);
});
});
describe('Backwards Compatibility', () => {
it('should handle prompts without ACL entries gracefully', async () => {
// Create a prompt group first
const promptGroup = await PromptGroup.create({
name: 'Legacy Test Group',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new ObjectId(),
});
// Create a prompt without ACL entries (legacy prompt)
const legacyPrompt = await Prompt.create({
prompt: 'Legacy prompt without ACL',
name: 'Legacy',
author: testUsers.owner._id,
groupId: promptGroup._id,
type: 'text',
});
// The system should handle this gracefully
const prompt = await promptFns.getPrompt({ _id: legacyPrompt._id });
expect(prompt).toBeTruthy();
expect(prompt._id.toString()).toBe(legacyPrompt._id.toString());
});
});
});
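
A note on the mechanics the assertions above rely on: `permBits` is a bitmask, roles OR their grants together, and a check passes only when every required bit is set. A self-contained sketch of that bitwise test (the literal bit values here are illustrative, not the library's actual constants):

// Bitmask permission check, mirroring how permBits compose above.
const VIEW = 1, EDIT = 2, DELETE = 4, SHARE = 8; // illustrative values
const ownerBits = VIEW | EDIT | DELETE | SHARE; // 15
const viewerBits = VIEW; // 1
function hasPermission(permBits, required) {
  return (permBits & required) === required;
}
console.log(hasPermission(ownerBits, EDIT)); // true
console.log(hasPermission(viewerBits, EDIT)); // false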

View File

@@ -1,280 +0,0 @@
const mongoose = require('mongoose');
const { ObjectId } = require('mongodb');
const { logger } = require('@librechat/data-schemas');
const { MongoMemoryServer } = require('mongodb-memory-server');
const {
Constants,
ResourceType,
AccessRoleIds,
PrincipalType,
PrincipalModel,
PermissionBits,
} = require('librechat-data-provider');
// Mock the config/connect module to prevent connection attempts during tests
jest.mock('../../config/connect', () => jest.fn().mockResolvedValue(true));
// Disable console for tests
logger.silent = true;
describe('PromptGroup Migration Script', () => {
let mongoServer;
let Prompt, PromptGroup, AclEntry, AccessRole, User, Project;
let migrateToPromptGroupPermissions;
let testOwner, testProject;
let ownerRole, viewerRole;
beforeAll(async () => {
// Set up MongoDB memory server
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
await mongoose.connect(mongoUri);
// Initialize models
const dbModels = require('~/db/models');
Prompt = dbModels.Prompt;
PromptGroup = dbModels.PromptGroup;
AclEntry = dbModels.AclEntry;
AccessRole = dbModels.AccessRole;
User = dbModels.User;
Project = dbModels.Project;
// Create test user
testOwner = await User.create({
name: 'Test Owner',
email: 'owner@test.com',
role: 'USER',
});
// Create test project with the proper name
const projectName = Constants.GLOBAL_PROJECT_NAME || 'instance';
testProject = await Project.create({
name: projectName,
description: 'Global project',
promptGroupIds: [],
});
// Create promptGroup access roles
ownerRole = await AccessRole.create({
accessRoleId: AccessRoleIds.PROMPTGROUP_OWNER,
name: 'Owner',
description: 'Full control over promptGroups',
resourceType: ResourceType.PROMPTGROUP,
permBits:
PermissionBits.VIEW | PermissionBits.EDIT | PermissionBits.DELETE | PermissionBits.SHARE,
});
viewerRole = await AccessRole.create({
accessRoleId: AccessRoleIds.PROMPTGROUP_VIEWER,
name: 'Viewer',
description: 'Can view promptGroups',
resourceType: ResourceType.PROMPTGROUP,
permBits: PermissionBits.VIEW,
});
await AccessRole.create({
accessRoleId: AccessRoleIds.PROMPTGROUP_EDITOR,
name: 'Editor',
description: 'Can view and edit promptGroups',
resourceType: ResourceType.PROMPTGROUP,
permBits: PermissionBits.VIEW | PermissionBits.EDIT,
});
// Import migration function
const migration = require('../../config/migrate-prompt-permissions');
migrateToPromptGroupPermissions = migration.migrateToPromptGroupPermissions;
});
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
// Clean up before each test
await Prompt.deleteMany({});
await PromptGroup.deleteMany({});
await AclEntry.deleteMany({});
// Reset the project's promptGroupIds array
testProject.promptGroupIds = [];
await testProject.save();
});
it('should categorize promptGroups correctly in dry run', async () => {
// Create global prompt group (in Global project)
const globalPromptGroup = await PromptGroup.create({
name: 'Global Group',
author: testOwner._id,
authorName: testOwner.name,
productionId: new ObjectId(),
});
// Create private prompt group (not in any project)
await PromptGroup.create({
name: 'Private Group',
author: testOwner._id,
authorName: testOwner.name,
productionId: new ObjectId(),
});
// Add global group to project's promptGroupIds array
testProject.promptGroupIds = [globalPromptGroup._id];
await testProject.save();
const result = await migrateToPromptGroupPermissions({ dryRun: true });
expect(result.dryRun).toBe(true);
expect(result.summary.total).toBe(2);
expect(result.summary.globalViewAccess).toBe(1);
expect(result.summary.privateGroups).toBe(1);
});
it('should grant appropriate permissions during migration', async () => {
// Create prompt groups
const globalPromptGroup = await PromptGroup.create({
name: 'Global Group',
author: testOwner._id,
authorName: testOwner.name,
productionId: new ObjectId(),
});
const privatePromptGroup = await PromptGroup.create({
name: 'Private Group',
author: testOwner._id,
authorName: testOwner.name,
productionId: new ObjectId(),
});
// Add global group to project's promptGroupIds array
testProject.promptGroupIds = [globalPromptGroup._id];
await testProject.save();
const result = await migrateToPromptGroupPermissions({ dryRun: false });
expect(result.migrated).toBe(2);
expect(result.errors).toBe(0);
expect(result.ownerGrants).toBe(2);
expect(result.publicViewGrants).toBe(1);
// Check global promptGroup permissions
const globalOwnerEntry = await AclEntry.findOne({
resourceType: ResourceType.PROMPTGROUP,
resourceId: globalPromptGroup._id,
principalType: PrincipalType.USER,
principalId: testOwner._id,
});
expect(globalOwnerEntry).toBeTruthy();
expect(globalOwnerEntry.permBits).toBe(ownerRole.permBits);
const globalPublicEntry = await AclEntry.findOne({
resourceType: ResourceType.PROMPTGROUP,
resourceId: globalPromptGroup._id,
principalType: PrincipalType.PUBLIC,
});
expect(globalPublicEntry).toBeTruthy();
expect(globalPublicEntry.permBits).toBe(viewerRole.permBits);
// Check private promptGroup permissions
const privateOwnerEntry = await AclEntry.findOne({
resourceType: ResourceType.PROMPTGROUP,
resourceId: privatePromptGroup._id,
principalType: PrincipalType.USER,
principalId: testOwner._id,
});
expect(privateOwnerEntry).toBeTruthy();
expect(privateOwnerEntry.permBits).toBe(ownerRole.permBits);
const privatePublicEntry = await AclEntry.findOne({
resourceType: ResourceType.PROMPTGROUP,
resourceId: privatePromptGroup._id,
principalType: PrincipalType.PUBLIC,
});
expect(privatePublicEntry).toBeNull();
});
it('should skip promptGroups that already have ACL entries', async () => {
// Create prompt groups
const promptGroup1 = await PromptGroup.create({
name: 'Group 1',
author: testOwner._id,
authorName: testOwner.name,
productionId: new ObjectId(),
});
const promptGroup2 = await PromptGroup.create({
name: 'Group 2',
author: testOwner._id,
authorName: testOwner.name,
productionId: new ObjectId(),
});
// Grant permission to one promptGroup manually (simulating it already has ACL)
await AclEntry.create({
principalType: PrincipalType.USER,
principalId: testOwner._id,
principalModel: PrincipalModel.USER,
resourceType: ResourceType.PROMPTGROUP,
resourceId: promptGroup1._id,
permBits: ownerRole.permBits,
roleId: ownerRole._id,
grantedBy: testOwner._id,
grantedAt: new Date(),
});
const result = await migrateToPromptGroupPermissions({ dryRun: false });
// Should only migrate promptGroup2, skip promptGroup1
expect(result.migrated).toBe(1);
expect(result.errors).toBe(0);
// Verify promptGroup2 now has permissions
const group2Entry = await AclEntry.findOne({
resourceType: ResourceType.PROMPTGROUP,
resourceId: promptGroup2._id,
});
expect(group2Entry).toBeTruthy();
});
it('should handle promptGroups with prompts correctly', async () => {
// Create a promptGroup with some prompts
const promptGroup = await PromptGroup.create({
name: 'Group with Prompts',
author: testOwner._id,
authorName: testOwner.name,
productionId: new ObjectId(),
});
// Create some prompts in this group
await Prompt.create({
prompt: 'First prompt',
author: testOwner._id,
groupId: promptGroup._id,
type: 'text',
});
await Prompt.create({
prompt: 'Second prompt',
author: testOwner._id,
groupId: promptGroup._id,
type: 'text',
});
const result = await migrateToPromptGroupPermissions({ dryRun: false });
expect(result.migrated).toBe(1);
expect(result.errors).toBe(0);
// Verify the promptGroup has permissions
const groupEntry = await AclEntry.findOne({
resourceType: ResourceType.PROMPTGROUP,
resourceId: promptGroup._id,
});
expect(groupEntry).toBeTruthy();
// Verify no prompt-level permissions were created
const promptEntries = await AclEntry.find({
resourceType: 'prompt',
});
expect(promptEntries).toHaveLength(0);
});
});
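
The migration these tests cover is idempotent (groups with existing ACL entries are skipped) and exposes a dry-run mode, so a safe rollout is preview-then-apply. A sketch of that invocation, using the option and result fields exercised above (the require path is relative and will vary by caller):

// Preview the promptGroup permission migration, then apply it.
const { migrateToPromptGroupPermissions } = require('./config/migrate-prompt-permissions');
async function runMigration() {
  const preview = await migrateToPromptGroupPermissions({ dryRun: true });
  console.log(preview.summary); // { total, globalViewAccess, privateGroups, ... }
  const result = await migrateToPromptGroupPermissions({ dryRun: false });
  console.log(result.migrated, result.errors, result.ownerGrants, result.publicViewGrants);
}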

View File

@@ -16,7 +16,7 @@ const { Role } = require('~/db/models');
*
* @param {string} roleName - The name of the role to find or create.
* @param {string|string[]} [fieldsToSelect] - The fields to include or exclude in the returned document.
* @returns {Promise<IRole>} Role document.
* @returns {Promise<Object>} A plain object representing the role document.
*/
const getRoleByName = async function (roleName, fieldsToSelect = null) {
const cache = getLogStores(CacheKeys.ROLES);
@@ -72,9 +72,8 @@ const updateRoleByName = async function (roleName, updates) {
* Updates access permissions for a specific role and multiple permission types.
* @param {string} roleName - The role to update.
* @param {Object.<PermissionTypes, Object.<Permissions, boolean>>} permissionsUpdate - Permissions to update and their values.
* @param {IRole} [roleData] - Optional role data to use instead of fetching from the database.
*/
async function updateAccessPermissions(roleName, permissionsUpdate, roleData) {
async function updateAccessPermissions(roleName, permissionsUpdate) {
// Filter and clean the permission updates based on our schema definition.
const updates = {};
for (const [permissionType, permissions] of Object.entries(permissionsUpdate)) {
@@ -87,7 +86,7 @@ async function updateAccessPermissions(roleName, permissionsUpdate, roleData) {
}
try {
const role = roleData ?? (await getRoleByName(roleName));
const role = await getRoleByName(roleName);
if (!role) {
return;
}
@@ -114,6 +113,7 @@ async function updateAccessPermissions(roleName, permissionsUpdate, roleData) {
}
}
// Process the current updates
for (const [permissionType, permissions] of Object.entries(updates)) {
const currentTypePermissions = currentPermissions[permissionType] || {};
updatedPermissions[permissionType] = { ...currentTypePermissions };
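
Per the JSDoc above, `permissionsUpdate` is a nested map of permission types to boolean flags, merged over the role's current permissions. A minimal call sketch (the type and permission names below are illustrative placeholders, not a definitive list):

// Illustrative shape for updateAccessPermissions.
async function grantPromptUse() {
  await updateAccessPermissions('USER', {
    PROMPTS: { USE: true, CREATE: false }, // placeholder type/permission names
    BOOKMARKS: { USE: true },
  });
}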

View File

@@ -1,4 +1,5 @@
const { logger } = require('@librechat/data-schemas');
const { getBalanceConfig } = require('~/server/services/Config');
const { getMultiplier, getCacheMultiplier } = require('./tx');
const { Transaction, Balance } = require('~/db/models');
@@ -186,23 +187,20 @@ async function createAutoRefillTransaction(txData) {
/**
* Static method to create a transaction and update the balance
* @param {txData} _txData - Transaction data.
* @param {txData} txData - Transaction data.
*/
async function createTransaction(_txData) {
const { balance, transactions, ...txData } = _txData;
async function createTransaction(txData) {
if (txData.rawAmount != null && isNaN(txData.rawAmount)) {
return;
}
if (transactions?.enabled === false) {
return;
}
const transaction = new Transaction(txData);
transaction.endpointTokenConfig = txData.endpointTokenConfig;
calculateTokenValue(transaction);
await transaction.save();
const balance = await getBalanceConfig();
if (!balance?.enabled) {
return;
}
@@ -223,14 +221,9 @@ async function createTransaction(_txData) {
/**
* Static method to create a structured transaction and update the balance
* @param {txData} _txData - Transaction data.
* @param {txData} txData - Transaction data.
*/
async function createStructuredTransaction(_txData) {
const { balance, transactions, ...txData } = _txData;
if (transactions?.enabled === false) {
return;
}
async function createStructuredTransaction(txData) {
const transaction = new Transaction({
...txData,
endpointTokenConfig: txData.endpointTokenConfig,
@@ -240,6 +233,7 @@ async function createStructuredTransaction(_txData) {
await transaction.save();
const balance = await getBalanceConfig();
if (!balance?.enabled) {
return;
}

View File

@@ -1,9 +1,13 @@
const mongoose = require('mongoose');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { spendTokens, spendStructuredTokens } = require('./spendTokens');
const { getBalanceConfig } = require('~/server/services/Config');
const { getMultiplier, getCacheMultiplier } = require('./tx');
const { createTransaction, createStructuredTransaction } = require('./Transaction');
const { Balance, Transaction } = require('~/db/models');
const { createTransaction } = require('./Transaction');
const { Balance } = require('~/db/models');
// Mock the custom config module so we can control the balance flag.
jest.mock('~/server/services/Config');
let mongoServer;
beforeAll(async () => {
@@ -19,6 +23,8 @@ afterAll(async () => {
beforeEach(async () => {
await mongoose.connection.dropDatabase();
// Default: enable balance updates in tests.
getBalanceConfig.mockResolvedValue({ enabled: true });
});
describe('Regular Token Spending Tests', () => {
@@ -35,7 +41,6 @@ describe('Regular Token Spending Tests', () => {
model,
context: 'test',
endpointTokenConfig: null,
balance: { enabled: true },
};
const tokenUsage = {
@@ -69,7 +74,6 @@ describe('Regular Token Spending Tests', () => {
model,
context: 'test',
endpointTokenConfig: null,
balance: { enabled: true },
};
const tokenUsage = {
@@ -100,7 +104,6 @@ describe('Regular Token Spending Tests', () => {
model,
context: 'test',
endpointTokenConfig: null,
balance: { enabled: true },
};
const tokenUsage = {};
@@ -125,7 +128,6 @@ describe('Regular Token Spending Tests', () => {
model,
context: 'test',
endpointTokenConfig: null,
balance: { enabled: true },
};
const tokenUsage = { promptTokens: 100 };
@@ -141,7 +143,8 @@ describe('Regular Token Spending Tests', () => {
});
test('spendTokens should not update balance when balance feature is disabled', async () => {
// Arrange: Balance config is now passed directly in txData
// Arrange: Override the config to disable balance updates.
getBalanceConfig.mockResolvedValue({ enabled: false });
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000;
await Balance.create({ user: userId, tokenCredits: initialBalance });
@@ -153,7 +156,6 @@ describe('Regular Token Spending Tests', () => {
model,
context: 'test',
endpointTokenConfig: null,
balance: { enabled: false },
};
const tokenUsage = {
@@ -184,7 +186,6 @@ describe('Structured Token Spending Tests', () => {
model,
context: 'message',
endpointTokenConfig: null,
balance: { enabled: true },
};
const tokenUsage = {
@@ -238,7 +239,6 @@ describe('Structured Token Spending Tests', () => {
conversationId: 'test-convo',
model,
context: 'message',
balance: { enabled: true },
};
const tokenUsage = {
@@ -271,7 +271,6 @@ describe('Structured Token Spending Tests', () => {
conversationId: 'test-convo',
model,
context: 'message',
balance: { enabled: true },
};
const tokenUsage = {
@@ -303,7 +302,6 @@ describe('Structured Token Spending Tests', () => {
conversationId: 'test-convo',
model,
context: 'message',
balance: { enabled: true },
};
const tokenUsage = {};
@@ -330,7 +328,6 @@ describe('Structured Token Spending Tests', () => {
conversationId: 'test-convo',
model,
context: 'incomplete',
balance: { enabled: true },
};
const tokenUsage = {
@@ -367,7 +364,6 @@ describe('NaN Handling Tests', () => {
endpointTokenConfig: null,
rawAmount: NaN,
tokenType: 'prompt',
balance: { enabled: true },
};
// Act
@@ -379,188 +375,3 @@ describe('NaN Handling Tests', () => {
expect(balance.tokenCredits).toBe(initialBalance);
});
});
describe('Transactions Config Tests', () => {
test('createTransaction should not save when transactions.enabled is false', async () => {
// Arrange
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000;
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'gpt-3.5-turbo';
const txData = {
user: userId,
conversationId: 'test-conversation-id',
model,
context: 'test',
endpointTokenConfig: null,
rawAmount: -100,
tokenType: 'prompt',
transactions: { enabled: false },
};
// Act
const result = await createTransaction(txData);
// Assert: No transaction should be created
expect(result).toBeUndefined();
const transactions = await Transaction.find({ user: userId });
expect(transactions).toHaveLength(0);
const balance = await Balance.findOne({ user: userId });
expect(balance.tokenCredits).toBe(initialBalance);
});
test('createTransaction should save when transactions.enabled is true', async () => {
// Arrange
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000;
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'gpt-3.5-turbo';
const txData = {
user: userId,
conversationId: 'test-conversation-id',
model,
context: 'test',
endpointTokenConfig: null,
rawAmount: -100,
tokenType: 'prompt',
transactions: { enabled: true },
balance: { enabled: true },
};
// Act
const result = await createTransaction(txData);
// Assert: Transaction should be created
expect(result).toBeDefined();
expect(result.balance).toBeLessThan(initialBalance);
const transactions = await Transaction.find({ user: userId });
expect(transactions).toHaveLength(1);
expect(transactions[0].rawAmount).toBe(-100);
});
test('createTransaction should save when balance.enabled is true even if transactions config is missing', async () => {
// Arrange
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000;
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'gpt-3.5-turbo';
const txData = {
user: userId,
conversationId: 'test-conversation-id',
model,
context: 'test',
endpointTokenConfig: null,
rawAmount: -100,
tokenType: 'prompt',
balance: { enabled: true },
// No transactions config provided
};
// Act
const result = await createTransaction(txData);
// Assert: Transaction should be created (backward compatibility)
expect(result).toBeDefined();
expect(result.balance).toBeLessThan(initialBalance);
const transactions = await Transaction.find({ user: userId });
expect(transactions).toHaveLength(1);
});
test('createTransaction should save transaction but not update balance when balance is disabled but transactions enabled', async () => {
// Arrange
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000;
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'gpt-3.5-turbo';
const txData = {
user: userId,
conversationId: 'test-conversation-id',
model,
context: 'test',
endpointTokenConfig: null,
rawAmount: -100,
tokenType: 'prompt',
transactions: { enabled: true },
balance: { enabled: false },
};
// Act
const result = await createTransaction(txData);
// Assert: Transaction should be created but balance unchanged
expect(result).toBeUndefined();
const transactions = await Transaction.find({ user: userId });
expect(transactions).toHaveLength(1);
expect(transactions[0].rawAmount).toBe(-100);
const balance = await Balance.findOne({ user: userId });
expect(balance.tokenCredits).toBe(initialBalance);
});
test('createStructuredTransaction should not save when transactions.enabled is false', async () => {
// Arrange
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000;
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'claude-3-5-sonnet';
const txData = {
user: userId,
conversationId: 'test-conversation-id',
model,
context: 'message',
tokenType: 'prompt',
inputTokens: -10,
writeTokens: -100,
readTokens: -5,
transactions: { enabled: false },
};
// Act
const result = await createStructuredTransaction(txData);
// Assert: No transaction should be created
expect(result).toBeUndefined();
const transactions = await Transaction.find({ user: userId });
expect(transactions).toHaveLength(0);
const balance = await Balance.findOne({ user: userId });
expect(balance.tokenCredits).toBe(initialBalance);
});
test('createStructuredTransaction should save transaction but not update balance when balance is disabled but transactions enabled', async () => {
// Arrange
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000;
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'claude-3-5-sonnet';
const txData = {
user: userId,
conversationId: 'test-conversation-id',
model,
context: 'message',
tokenType: 'prompt',
inputTokens: -10,
writeTokens: -100,
readTokens: -5,
transactions: { enabled: true },
balance: { enabled: false },
};
// Act
const result = await createStructuredTransaction(txData);
// Assert: Transaction should be created but balance unchanged
expect(result).toBeUndefined();
const transactions = await Transaction.find({ user: userId });
expect(transactions).toHaveLength(1);
expect(transactions[0].inputTokens).toBe(-10);
expect(transactions[0].writeTokens).toBe(-100);
expect(transactions[0].readTokens).toBe(-5);
const balance = await Balance.findOne({ user: userId });
expect(balance.tokenCredits).toBe(initialBalance);
});
});
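
Taken together, the removed tests pin down the old gating: `transactions.enabled === false` suppressed the write entirely, while `balance.enabled === false` still saved the transaction but returned undefined and left `tokenCredits` untouched. A condensed sketch of that decision logic, reconstructed from the assertions rather than the implementation:

// Reconstructed from the removed tests: old createTransaction gating.
function shouldSaveTransaction(txData) {
  return txData.transactions?.enabled !== false; // absent config still saves
}
function shouldUpdateBalance(txData) {
  return txData.balance?.enabled === true;
}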

View File

@@ -118,7 +118,7 @@ const addIntervalToDate = (date, value, unit) => {
* @async
* @function
* @param {Object} params - The function parameters.
* @param {ServerRequest} params.req - The Express request object.
* @param {Express.Request} params.req - The Express request object.
* @param {Express.Response} params.res - The Express response object.
* @param {Object} params.txData - The transaction data.
* @param {string} params.txData.user - The user ID or identifier.

View File

@@ -1,9 +1,47 @@
const mongoose = require('mongoose');
const { buildTree } = require('librechat-data-provider');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { getMessages, bulkSaveMessages } = require('./Message');
const { Message } = require('~/db/models');
// Original version of buildTree function
function buildTree({ messages, fileMap }) {
if (messages === null) {
return null;
}
const messageMap = {};
const rootMessages = [];
const childrenCount = {};
messages.forEach((message) => {
const parentId = message.parentMessageId ?? '';
childrenCount[parentId] = (childrenCount[parentId] || 0) + 1;
const extendedMessage = {
...message,
children: [],
depth: 0,
siblingIndex: childrenCount[parentId] - 1,
};
if (message.files && fileMap) {
extendedMessage.files = message.files.map((file) => fileMap[file.file_id ?? ''] ?? file);
}
messageMap[message.messageId] = extendedMessage;
const parentMessage = messageMap[parentId];
if (parentMessage) {
parentMessage.children.push(extendedMessage);
extendedMessage.depth = parentMessage.depth + 1;
} else {
rootMessages.push(extendedMessage);
}
});
return rootMessages;
}
let mongod;
beforeAll(async () => {
mongod = await MongoMemoryServer.create();
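
The inlined `buildTree` above folds a flat message array into a forest keyed by `parentMessageId`, tracking `depth` and `siblingIndex` as it goes (parents must precede children in the input order). A small usage sketch with trimmed message objects:

// Minimal buildTree usage: two roots, one child (illustrative data).
const sample = [
  { messageId: 'a', parentMessageId: null, text: 'root A' },
  { messageId: 'b', parentMessageId: 'a', text: 'child of A' },
  { messageId: 'c', parentMessageId: null, text: 'root B' },
];
const roots = buildTree({ messages: sample });
console.log(roots.length); // 2
console.log(roots[0].children[0].depth); // 1
console.log(roots[0].children[0].siblingIndex); // 0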

View File

@@ -22,17 +22,9 @@ const {
} = require('./Message');
const { getConvoTitle, getConvo, saveConvo, deleteConvos } = require('./Conversation');
const { getPreset, getPresets, savePreset, deletePresets } = require('./Preset');
const { File } = require('~/db/models');
const seedDatabase = async () => {
await methods.initializeRoles();
await methods.seedDefaultRoles();
await methods.ensureDefaultCategories();
};
module.exports = {
...methods,
seedDatabase,
comparePassword,
findFileById,
createFile,
@@ -59,6 +51,4 @@ module.exports = {
getPresets,
savePreset,
deletePresets,
Files: File,
};

View File

@@ -1,24 +0,0 @@
const { logger } = require('@librechat/data-schemas');
const { updateInterfacePermissions: updateInterfacePerms } = require('@librechat/api');
const { getRoleByName, updateAccessPermissions } = require('./Role');
/**
* Update interface permissions based on app configuration.
* Must be done independently from loading the app config.
* @param {AppConfig} appConfig
*/
async function updateInterfacePermissions(appConfig) {
try {
await updateInterfacePerms({
appConfig,
getRoleByName,
updateAccessPermissions,
});
} catch (error) {
logger.error('Error updating interface permissions:', error);
}
}
module.exports = {
updateInterfacePermissions,
};

View File

@@ -1,11 +1,17 @@
const { logger } = require('@librechat/data-schemas');
const { logger } = require('~/config');
const { createTransaction, createStructuredTransaction } = require('./Transaction');
/**
* Creates up to two transactions to record the spending of tokens.
*
* @function
* @async
* @param {txData} txData - Transaction data.
* @param {Object} txData - Transaction data.
* @param {mongoose.Schema.Types.ObjectId} txData.user - The user ID.
* @param {String} txData.conversationId - The ID of the conversation.
* @param {String} txData.model - The model name.
* @param {String} txData.context - The context in which the transaction is made.
* @param {EndpointTokenConfig} [txData.endpointTokenConfig] - The current endpoint token config.
* @param {String} [txData.valueKey] - The value key (optional).
* @param {Object} tokenUsage - The number of tokens used.
* @param {Number} tokenUsage.promptTokens - The number of prompt tokens used.
* @param {Number} tokenUsage.completionTokens - The number of completion tokens used.
@@ -63,7 +69,13 @@ const spendTokens = async (txData, tokenUsage) => {
*
* @function
* @async
* @param {txData} txData - Transaction data.
* @param {Object} txData - Transaction data.
* @param {mongoose.Schema.Types.ObjectId} txData.user - The user ID.
* @param {String} txData.conversationId - The ID of the conversation.
* @param {String} txData.model - The model name.
* @param {String} txData.context - The context in which the transaction is made.
* @param {EndpointTokenConfig} [txData.endpointTokenConfig] - The current endpoint token config.
* @param {String} [txData.valueKey] - The value key (optional).
* @param {Object} tokenUsage - The number of tokens used.
* @param {Object} tokenUsage.promptTokens - The number of prompt tokens used.
* @param {Number} tokenUsage.promptTokens.input - The number of input tokens.
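
Putting the two documented signatures together: the caller passes transaction metadata plus a token-usage object, and the structured variant nests prompt tokens by kind. A call sketch with illustrative IDs and counts:

// Illustrative calls matching the JSDoc shapes above.
async function example(userId) {
  await spendTokens(
    { user: userId, conversationId: 'convo-1', model: 'gpt-4o', context: 'message' },
    { promptTokens: 100, completionTokens: 50 },
  );
  await spendStructuredTokens(
    { user: userId, conversationId: 'convo-1', model: 'claude-3-5-sonnet', context: 'message' },
    { promptTokens: { input: 10, write: 100, read: 5 }, completionTokens: 50 },
  );
}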

View File

@@ -5,6 +5,7 @@ const { createTransaction, createAutoRefillTransaction } = require('./Transactio
require('~/db/models');
// Mock the logger to prevent console output during tests
jest.mock('~/config', () => ({
logger: {
debug: jest.fn(),
@@ -12,6 +13,10 @@ jest.mock('~/config', () => ({
},
}));
// Mock the Config service
const { getBalanceConfig } = require('~/server/services/Config');
jest.mock('~/server/services/Config');
describe('spendTokens', () => {
let mongoServer;
let userId;
@@ -39,7 +44,8 @@ describe('spendTokens', () => {
// Create a new user ID for each test
userId = new mongoose.Types.ObjectId();
// Balance config is now passed directly in txData
// Mock the balance config to be enabled by default
getBalanceConfig.mockResolvedValue({ enabled: true });
});
it('should create transactions for both prompt and completion tokens', async () => {
@@ -54,7 +60,6 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'gpt-3.5-turbo',
context: 'test',
balance: { enabled: true },
};
const tokenUsage = {
promptTokens: 100,
@@ -93,7 +98,6 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'gpt-3.5-turbo',
context: 'test',
balance: { enabled: true },
};
const tokenUsage = {
promptTokens: 100,
@@ -123,7 +127,6 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'gpt-3.5-turbo',
context: 'test',
balance: { enabled: true },
};
const tokenUsage = {};
@@ -135,7 +138,8 @@ describe('spendTokens', () => {
});
it('should not update balance when the balance feature is disabled', async () => {
// Balance is now passed directly in txData
// Override configuration: disable balance updates
getBalanceConfig.mockResolvedValue({ enabled: false });
// Create a balance for the user
await Balance.create({
user: userId,
@@ -147,7 +151,6 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'gpt-3.5-turbo',
context: 'test',
balance: { enabled: false },
};
const tokenUsage = {
promptTokens: 100,
@@ -177,7 +180,6 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'gpt-4', // Using a more expensive model
context: 'test',
balance: { enabled: true },
};
// Spending more tokens than the user has balance for
@@ -231,7 +233,6 @@ describe('spendTokens', () => {
conversationId: 'test-convo-1',
model: 'gpt-4',
context: 'test',
balance: { enabled: true },
};
const tokenUsage1 = {
@@ -251,7 +252,6 @@ describe('spendTokens', () => {
conversationId: 'test-convo-2',
model: 'gpt-4',
context: 'test',
balance: { enabled: true },
};
const tokenUsage2 = {
@@ -292,7 +292,6 @@ describe('spendTokens', () => {
tokenType: 'completion',
rawAmount: -100,
context: 'test',
balance: { enabled: true },
});
console.log('Direct Transaction.create result:', directResult);
@@ -317,7 +316,6 @@ describe('spendTokens', () => {
conversationId: `test-convo-${model}`,
model,
context: 'test',
balance: { enabled: true },
};
const tokenUsage = {
@@ -354,7 +352,6 @@ describe('spendTokens', () => {
conversationId: 'test-convo-1',
model: 'claude-3-5-sonnet',
context: 'test',
balance: { enabled: true },
};
const tokenUsage1 = {
@@ -378,7 +375,6 @@ describe('spendTokens', () => {
conversationId: 'test-convo-2',
model: 'claude-3-5-sonnet',
context: 'test',
balance: { enabled: true },
};
const tokenUsage2 = {
@@ -430,7 +426,6 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'claude-3-5-sonnet', // Using a model that supports structured tokens
context: 'test',
balance: { enabled: true },
};
// Spending more tokens than the user has balance for
@@ -510,7 +505,6 @@ describe('spendTokens', () => {
conversationId,
user: userId,
model: usage.model,
balance: { enabled: true },
};
// Calculate expected spend for this transaction
@@ -623,7 +617,6 @@ describe('spendTokens', () => {
tokenType: 'credits',
context: 'concurrent-refill-test',
rawAmount: refillAmount,
balance: { enabled: true },
}),
);
}
@@ -690,7 +683,6 @@ describe('spendTokens', () => {
conversationId: 'test-convo',
model: 'claude-3-5-sonnet',
context: 'test',
balance: { enabled: true },
};
const tokenUsage = {
promptTokens: {

View File

@@ -1,4 +1,4 @@
const { matchModelName } = require('@librechat/api');
const { matchModelName } = require('../utils');
const defaultRate = 6;
/**
@@ -87,9 +87,6 @@ const tokenValues = Object.assign(
'gpt-4.1': { prompt: 2, completion: 8 },
'gpt-4.5': { prompt: 75, completion: 150 },
'gpt-4o-mini': { prompt: 0.15, completion: 0.6 },
'gpt-5': { prompt: 1.25, completion: 10 },
'gpt-5-mini': { prompt: 0.25, completion: 2 },
'gpt-5-nano': { prompt: 0.05, completion: 0.4 },
'gpt-4o': { prompt: 2.5, completion: 10 },
'gpt-4o-2024-05-13': { prompt: 5, completion: 15 },
'gpt-4-1106': { prompt: 10, completion: 30 },
@@ -111,8 +108,8 @@ const tokenValues = Object.assign(
'claude-': { prompt: 0.8, completion: 2.4 },
'command-r-plus': { prompt: 3, completion: 15 },
'command-r': { prompt: 0.5, completion: 1.5 },
'deepseek-reasoner': { prompt: 0.28, completion: 0.42 },
deepseek: { prompt: 0.28, completion: 0.42 },
'deepseek-reasoner': { prompt: 0.55, completion: 2.19 },
deepseek: { prompt: 0.14, completion: 0.28 },
/* cohere doesn't have rates for the older command models,
so this was from https://artificialanalysis.ai/models/command-light/providers */
command: { prompt: 0.38, completion: 0.38 },
@@ -124,8 +121,7 @@ const tokenValues = Object.assign(
'gemini-2.0-flash': { prompt: 0.1, completion: 0.4 },
'gemini-2.0': { prompt: 0, completion: 0 }, // https://ai.google.dev/pricing
'gemini-2.5-pro': { prompt: 1.25, completion: 10 },
'gemini-2.5-flash': { prompt: 0.3, completion: 2.5 },
'gemini-2.5-flash-lite': { prompt: 0.1, completion: 0.4 },
'gemini-2.5-flash': { prompt: 0.15, completion: 3.5 },
'gemini-2.5': { prompt: 0, completion: 0 }, // Free for a period of time
'gemini-1.5-flash-8b': { prompt: 0.075, completion: 0.3 },
'gemini-1.5-flash': { prompt: 0.15, completion: 0.6 },
@@ -151,20 +147,6 @@ const tokenValues = Object.assign(
codestral: { prompt: 0.3, completion: 0.9 },
'ministral-8b': { prompt: 0.1, completion: 0.1 },
'ministral-3b': { prompt: 0.04, completion: 0.04 },
// GPT-OSS models
'gpt-oss': { prompt: 0.05, completion: 0.2 },
'gpt-oss:20b': { prompt: 0.05, completion: 0.2 },
'gpt-oss-20b': { prompt: 0.05, completion: 0.2 },
'gpt-oss:120b': { prompt: 0.15, completion: 0.6 },
'gpt-oss-120b': { prompt: 0.15, completion: 0.6 },
// GLM models (Zhipu AI)
glm4: { prompt: 0.1, completion: 0.1 },
'glm-4': { prompt: 0.1, completion: 0.1 },
'glm-4-32b': { prompt: 0.1, completion: 0.1 },
'glm-4.5': { prompt: 0.35, completion: 1.55 },
'glm-4.5v': { prompt: 0.6, completion: 1.8 },
'glm-4.5-air': { prompt: 0.14, completion: 0.86 },
'glm-4.6': { prompt: 0.5, completion: 1.75 },
},
bedrockValues,
);
@@ -232,12 +214,6 @@ const getValueKey = (model, endpoint) => {
return 'gpt-4.1';
} else if (modelName.includes('gpt-4o-2024-05-13')) {
return 'gpt-4o-2024-05-13';
} else if (modelName.includes('gpt-5-nano')) {
return 'gpt-5-nano';
} else if (modelName.includes('gpt-5-mini')) {
return 'gpt-5-mini';
} else if (modelName.includes('gpt-5')) {
return 'gpt-5';
} else if (modelName.includes('gpt-4o-mini')) {
return 'gpt-4o-mini';
} else if (modelName.includes('gpt-4o')) {
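
`getValueKey` resolves a raw model name to a pricing key by substring matching, and `getMultiplier` then returns the per-token rate for that key. A cost sketch, under the assumption (consistent with the magnitudes above) that multipliers are USD per one million tokens:

// Estimated request cost, assuming multipliers are USD per 1M tokens.
function estimateCostUSD({ model, promptTokens, completionTokens }) {
  const promptRate = getMultiplier({ model, tokenType: 'prompt' });
  const completionRate = getMultiplier({ model, tokenType: 'completion' });
  return (promptTokens * promptRate + completionTokens * completionRate) / 1e6;
}
// estimateCostUSD({ model: 'gpt-4o', promptTokens: 1000, completionTokens: 500 })
// -> (1000 * 2.5 + 500 * 10) / 1e6 = 0.0075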

View File

@@ -25,14 +25,8 @@ describe('getValueKey', () => {
expect(getValueKey('gpt-4-some-other-info')).toBe('8k');
});
it('should return "gpt-5" for model name containing "gpt-5"', () => {
expect(getValueKey('gpt-5-some-other-info')).toBe('gpt-5');
expect(getValueKey('gpt-5-2025-01-30')).toBe('gpt-5');
expect(getValueKey('gpt-5-2025-01-30-0130')).toBe('gpt-5');
expect(getValueKey('openai/gpt-5')).toBe('gpt-5');
expect(getValueKey('openai/gpt-5-2025-01-30')).toBe('gpt-5');
expect(getValueKey('gpt-5-turbo')).toBe('gpt-5');
expect(getValueKey('gpt-5-0130')).toBe('gpt-5');
it('should return undefined for model names that do not match any known patterns', () => {
expect(getValueKey('gpt-5-some-other-info')).toBeUndefined();
});
it('should return "gpt-3.5-turbo-1106" for model name containing "gpt-3.5-turbo-1106"', () => {
@@ -90,29 +84,6 @@ describe('getValueKey', () => {
expect(getValueKey('gpt-4.1-nano-0125')).toBe('gpt-4.1-nano');
});
it('should return "gpt-5" for model type of "gpt-5"', () => {
expect(getValueKey('gpt-5-2025-01-30')).toBe('gpt-5');
expect(getValueKey('gpt-5-2025-01-30-0130')).toBe('gpt-5');
expect(getValueKey('openai/gpt-5')).toBe('gpt-5');
expect(getValueKey('openai/gpt-5-2025-01-30')).toBe('gpt-5');
expect(getValueKey('gpt-5-turbo')).toBe('gpt-5');
expect(getValueKey('gpt-5-0130')).toBe('gpt-5');
});
it('should return "gpt-5-mini" for model type of "gpt-5-mini"', () => {
expect(getValueKey('gpt-5-mini-2025-01-30')).toBe('gpt-5-mini');
expect(getValueKey('openai/gpt-5-mini')).toBe('gpt-5-mini');
expect(getValueKey('gpt-5-mini-0130')).toBe('gpt-5-mini');
expect(getValueKey('gpt-5-mini-2025-01-30-0130')).toBe('gpt-5-mini');
});
it('should return "gpt-5-nano" for model type of "gpt-5-nano"', () => {
expect(getValueKey('gpt-5-nano-2025-01-30')).toBe('gpt-5-nano');
expect(getValueKey('openai/gpt-5-nano')).toBe('gpt-5-nano');
expect(getValueKey('gpt-5-nano-0130')).toBe('gpt-5-nano');
expect(getValueKey('gpt-5-nano-2025-01-30-0130')).toBe('gpt-5-nano');
});
it('should return "gpt-4o" for model type of "gpt-4o"', () => {
expect(getValueKey('gpt-4o-2024-08-06')).toBe('gpt-4o');
expect(getValueKey('gpt-4o-2024-08-06-0718')).toBe('gpt-4o');
@@ -184,16 +155,6 @@ describe('getValueKey', () => {
expect(getValueKey('claude-3.5-haiku-turbo')).toBe('claude-3.5-haiku');
expect(getValueKey('claude-3.5-haiku-0125')).toBe('claude-3.5-haiku');
});
it('should return expected value keys for "gpt-oss" models', () => {
expect(getValueKey('openai/gpt-oss-120b')).toBe('gpt-oss-120b');
expect(getValueKey('openai/gpt-oss:120b')).toBe('gpt-oss:120b');
expect(getValueKey('openai/gpt-oss-570b')).toBe('gpt-oss');
expect(getValueKey('gpt-oss-570b')).toBe('gpt-oss');
expect(getValueKey('groq/gpt-oss-1080b')).toBe('gpt-oss');
expect(getValueKey('gpt-oss-20b')).toBe('gpt-oss-20b');
expect(getValueKey('oai/gpt-oss:20b')).toBe('gpt-oss:20b');
});
});
describe('getMultiplier', () => {
@@ -246,48 +207,6 @@ describe('getMultiplier', () => {
);
});
it('should return the correct multiplier for gpt-5', () => {
const valueKey = getValueKey('gpt-5-2025-01-30');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-5'].prompt);
expect(getMultiplier({ valueKey, tokenType: 'completion' })).toBe(
tokenValues['gpt-5'].completion,
);
expect(getMultiplier({ model: 'gpt-5-preview', tokenType: 'prompt' })).toBe(
tokenValues['gpt-5'].prompt,
);
expect(getMultiplier({ model: 'openai/gpt-5', tokenType: 'completion' })).toBe(
tokenValues['gpt-5'].completion,
);
});
it('should return the correct multiplier for gpt-5-mini', () => {
const valueKey = getValueKey('gpt-5-mini-2025-01-30');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-5-mini'].prompt);
expect(getMultiplier({ valueKey, tokenType: 'completion' })).toBe(
tokenValues['gpt-5-mini'].completion,
);
expect(getMultiplier({ model: 'gpt-5-mini-preview', tokenType: 'prompt' })).toBe(
tokenValues['gpt-5-mini'].prompt,
);
expect(getMultiplier({ model: 'openai/gpt-5-mini', tokenType: 'completion' })).toBe(
tokenValues['gpt-5-mini'].completion,
);
});
it('should return the correct multiplier for gpt-5-nano', () => {
const valueKey = getValueKey('gpt-5-nano-2025-01-30');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-5-nano'].prompt);
expect(getMultiplier({ valueKey, tokenType: 'completion' })).toBe(
tokenValues['gpt-5-nano'].completion,
);
expect(getMultiplier({ model: 'gpt-5-nano-preview', tokenType: 'prompt' })).toBe(
tokenValues['gpt-5-nano'].prompt,
);
expect(getMultiplier({ model: 'openai/gpt-5-nano', tokenType: 'completion' })).toBe(
tokenValues['gpt-5-nano'].completion,
);
});
it('should return the correct multiplier for gpt-4o', () => {
const valueKey = getValueKey('gpt-4o-2024-08-06');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-4o'].prompt);
@@ -388,34 +307,10 @@ describe('getMultiplier', () => {
});
it('should return defaultRate if derived valueKey does not match any known patterns', () => {
expect(getMultiplier({ tokenType: 'prompt', model: 'gpt-10-some-other-info' })).toBe(
expect(getMultiplier({ tokenType: 'prompt', model: 'gpt-5-some-other-info' })).toBe(
defaultRate,
);
});
it('should return correct multipliers for GPT-OSS models', () => {
const models = ['gpt-oss-20b', 'gpt-oss-120b'];
models.forEach((key) => {
const expectedPrompt = tokenValues[key].prompt;
const expectedCompletion = tokenValues[key].completion;
expect(getMultiplier({ valueKey: key, tokenType: 'prompt' })).toBe(expectedPrompt);
expect(getMultiplier({ valueKey: key, tokenType: 'completion' })).toBe(expectedCompletion);
expect(getMultiplier({ model: key, tokenType: 'prompt' })).toBe(expectedPrompt);
expect(getMultiplier({ model: key, tokenType: 'completion' })).toBe(expectedCompletion);
});
});
it('should return correct multipliers for GLM models', () => {
const models = ['glm-4.6', 'glm-4.5v', 'glm-4.5-air', 'glm-4.5', 'glm-4-32b', 'glm-4', 'glm4'];
models.forEach((key) => {
const expectedPrompt = tokenValues[key].prompt;
const expectedCompletion = tokenValues[key].completion;
expect(getMultiplier({ valueKey: key, tokenType: 'prompt' })).toBe(expectedPrompt);
expect(getMultiplier({ valueKey: key, tokenType: 'completion' })).toBe(expectedCompletion);
expect(getMultiplier({ model: key, tokenType: 'prompt' })).toBe(expectedPrompt);
expect(getMultiplier({ model: key, tokenType: 'completion' })).toBe(expectedCompletion);
});
});
});
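These getMultiplier cases assume a lookup that accepts either a precomputed valueKey or a raw model name, and that falls back to defaultRate when nothing matches. A hedged sketch of that shape; the key names mirror the tests, but the rates are placeholders:

// Sketch of a multiplier lookup with a default-rate fallback.
// tokenValues and defaultRate mirror the test names; rates are placeholders.
const tokenValues = {
  'gpt-5-mini': { prompt: 0.25, completion: 2 },
  'gpt-5': { prompt: 1.25, completion: 10 },
};
const defaultRate = 6;

function getMultiplierSketch({ valueKey, model, tokenType }) {
  // Derive the key from the model name when no valueKey is given;
  // the table is ordered specific-first so gpt-5-mini wins over gpt-5.
  const key =
    valueKey ?? Object.keys(tokenValues).find((k) => (model ?? '').includes(k));
  return tokenValues[key]?.[tokenType] ?? defaultRate;
}

// getMultiplierSketch({ model: 'openai/gpt-5', tokenType: 'prompt' })            -> 1.25
// getMultiplierSketch({ model: 'gpt-10-some-other-info', tokenType: 'prompt' })  -> 6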
describe('AWS Bedrock Model Tests', () => {
@@ -593,9 +488,6 @@ describe('getCacheMultiplier', () => {
describe('Google Model Tests', () => {
const googleModels = [
'gemini-2.5-pro',
'gemini-2.5-flash',
'gemini-2.5-flash-lite',
'gemini-2.5-pro-preview-05-06',
'gemini-2.5-flash-preview-04-17',
'gemini-2.5-exp',
@@ -636,9 +528,6 @@ describe('Google Model Tests', () => {
it('should map to the correct model keys', () => {
const expected = {
'gemini-2.5-pro': 'gemini-2.5-pro',
'gemini-2.5-flash': 'gemini-2.5-flash',
'gemini-2.5-flash-lite': 'gemini-2.5-flash-lite',
'gemini-2.5-pro-preview-05-06': 'gemini-2.5-pro',
'gemini-2.5-flash-preview-04-17': 'gemini-2.5-flash',
'gemini-2.5-exp': 'gemini-2.5',
@@ -794,110 +683,6 @@ describe('Grok Model Tests - Pricing', () => {
});
});
describe('GLM Model Tests', () => {
it('should return expected value keys for GLM models', () => {
expect(getValueKey('glm-4.6')).toBe('glm-4.6');
expect(getValueKey('glm-4.5')).toBe('glm-4.5');
expect(getValueKey('glm-4.5v')).toBe('glm-4.5v');
expect(getValueKey('glm-4.5-air')).toBe('glm-4.5-air');
expect(getValueKey('glm-4-32b')).toBe('glm-4-32b');
expect(getValueKey('glm-4')).toBe('glm-4');
expect(getValueKey('glm4')).toBe('glm4');
});
it('should match GLM model variations with provider prefixes', () => {
expect(getValueKey('z-ai/glm-4.6')).toBe('glm-4.6');
expect(getValueKey('z-ai/glm-4.5')).toBe('glm-4.5');
expect(getValueKey('z-ai/glm-4.5-air')).toBe('glm-4.5-air');
expect(getValueKey('z-ai/glm-4.5v')).toBe('glm-4.5v');
expect(getValueKey('z-ai/glm-4-32b')).toBe('glm-4-32b');
expect(getValueKey('zai/glm-4.6')).toBe('glm-4.6');
expect(getValueKey('zai/glm-4.5')).toBe('glm-4.5');
expect(getValueKey('zai/glm-4.5-air')).toBe('glm-4.5-air');
expect(getValueKey('zai/glm-4.5v')).toBe('glm-4.5v');
expect(getValueKey('zai-org/GLM-4.6')).toBe('glm-4.6');
expect(getValueKey('zai-org/GLM-4.5')).toBe('glm-4.5');
expect(getValueKey('zai-org/GLM-4.5-Air')).toBe('glm-4.5-air');
expect(getValueKey('zai-org/GLM-4.5V')).toBe('glm-4.5v');
expect(getValueKey('zai-org/GLM-4-32B-0414')).toBe('glm-4-32b');
});
it('should match GLM model variations with suffixes', () => {
expect(getValueKey('glm-4.6-fp8')).toBe('glm-4.6');
expect(getValueKey('zai-org/GLM-4.6-FP8')).toBe('glm-4.6');
expect(getValueKey('zai-org/GLM-4.5-Air-FP8')).toBe('glm-4.5-air');
});
it('should prioritize more specific GLM model patterns', () => {
expect(getValueKey('glm-4.5-air-something')).toBe('glm-4.5-air');
expect(getValueKey('glm-4.5-something')).toBe('glm-4.5');
expect(getValueKey('glm-4.5v-something')).toBe('glm-4.5v');
});
it('should return correct multipliers for all GLM models', () => {
expect(getMultiplier({ model: 'glm-4.6', tokenType: 'prompt' })).toBe(
tokenValues['glm-4.6'].prompt,
);
expect(getMultiplier({ model: 'glm-4.6', tokenType: 'completion' })).toBe(
tokenValues['glm-4.6'].completion,
);
expect(getMultiplier({ model: 'glm-4.5v', tokenType: 'prompt' })).toBe(
tokenValues['glm-4.5v'].prompt,
);
expect(getMultiplier({ model: 'glm-4.5v', tokenType: 'completion' })).toBe(
tokenValues['glm-4.5v'].completion,
);
expect(getMultiplier({ model: 'glm-4.5-air', tokenType: 'prompt' })).toBe(
tokenValues['glm-4.5-air'].prompt,
);
expect(getMultiplier({ model: 'glm-4.5-air', tokenType: 'completion' })).toBe(
tokenValues['glm-4.5-air'].completion,
);
expect(getMultiplier({ model: 'glm-4.5', tokenType: 'prompt' })).toBe(
tokenValues['glm-4.5'].prompt,
);
expect(getMultiplier({ model: 'glm-4.5', tokenType: 'completion' })).toBe(
tokenValues['glm-4.5'].completion,
);
expect(getMultiplier({ model: 'glm-4-32b', tokenType: 'prompt' })).toBe(
tokenValues['glm-4-32b'].prompt,
);
expect(getMultiplier({ model: 'glm-4-32b', tokenType: 'completion' })).toBe(
tokenValues['glm-4-32b'].completion,
);
expect(getMultiplier({ model: 'glm-4', tokenType: 'prompt' })).toBe(
tokenValues['glm-4'].prompt,
);
expect(getMultiplier({ model: 'glm-4', tokenType: 'completion' })).toBe(
tokenValues['glm-4'].completion,
);
expect(getMultiplier({ model: 'glm4', tokenType: 'prompt' })).toBe(tokenValues['glm4'].prompt);
expect(getMultiplier({ model: 'glm4', tokenType: 'completion' })).toBe(
tokenValues['glm4'].completion,
);
});
it('should return correct multipliers for GLM models with provider prefixes', () => {
expect(getMultiplier({ model: 'z-ai/glm-4.6', tokenType: 'prompt' })).toBe(
tokenValues['glm-4.6'].prompt,
);
expect(getMultiplier({ model: 'zai/glm-4.5-air', tokenType: 'completion' })).toBe(
tokenValues['glm-4.5-air'].completion,
);
expect(getMultiplier({ model: 'zai-org/GLM-4.5V', tokenType: 'prompt' })).toBe(
tokenValues['glm-4.5v'].prompt,
);
});
});
describe('Claude Model Tests', () => {
it('should return correct prompt and completion rates for Claude 4 models', () => {
expect(getMultiplier({ model: 'claude-sonnet-4', tokenType: 'prompt' })).toBe(

View File

@@ -3,7 +3,7 @@ const bcrypt = require('bcryptjs');
/**
* Compares the provided password with the user's password.
*
* @param {IUser} user - The user to compare the password for.
* @param {MongoUser} user - The user to compare the password for.
* @param {string} candidatePassword - The password to test against the user's password.
* @returns {Promise<boolean>} A promise that resolves to a boolean indicating if the password matches.
*/
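As a usage sketch of the documented contract (comparePassword stands for the function this JSDoc annotates; bcrypt.compare is the usual mechanism and is an assumption here):

const bcrypt = require('bcryptjs');

// Sketch of the documented contract: resolve true only when the candidate
// matches the user's stored bcrypt hash; never throw on a missing hash.
async function comparePassword(user, candidatePassword) {
  if (!user?.password) {
    return false;
  }
  return bcrypt.compare(candidatePassword, user.password);
}

// const ok = await comparePassword(user, 'hunter2');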

View File

@@ -1,6 +1,6 @@
{
"name": "@librechat/backend",
"version": "v0.8.0",
"version": "v0.7.9",
"description": "",
"scripts": {
"start": "echo 'please run this from the root directory'",
@@ -47,15 +47,15 @@
"@langchain/core": "^0.3.62",
"@langchain/google-genai": "^0.2.13",
"@langchain/google-vertexai": "^0.2.13",
"@langchain/openai": "^0.5.18",
"@langchain/textsplitters": "^0.1.0",
"@librechat/agents": "^2.4.85",
"@librechat/agents": "^2.4.68",
"@librechat/api": "*",
"@librechat/data-schemas": "*",
"@node-saml/passport-saml": "^5.0.0",
"@microsoft/microsoft-graph-client": "^3.0.7",
"@modelcontextprotocol/sdk": "^1.17.1",
"@node-saml/passport-saml": "^5.1.0",
"@waylaidwanderer/fetch-event-source": "^3.0.1",
"axios": "^1.12.1",
"axios": "^1.8.2",
"bcryptjs": "^2.4.3",
"compression": "^1.8.1",
"connect-redis": "^8.1.0",
@@ -93,9 +93,10 @@
"multer": "^2.0.2",
"nanoid": "^3.3.7",
"node-fetch": "^2.7.0",
"nodemailer": "^7.0.9",
"nodemailer": "^6.9.15",
"ollama": "^0.5.0",
"openai": "^5.10.1",
"openai-chat-tokens": "^0.2.8",
"openid-client": "^6.5.0",
"passport": "^0.6.0",
"passport-apple": "^2.0.2",
@@ -119,7 +120,7 @@
},
"devDependencies": {
"jest": "^29.7.0",
"mongodb-memory-server": "^10.1.4",
"mongodb-memory-server": "^10.1.3",
"nodemon": "^3.0.3",
"supertest": "^7.1.0"
}

View File

@@ -1,6 +1,6 @@
const { logger } = require('@librechat/data-schemas');
const { logger } = require('~/config');
/** WeakMap to hold temporary data associated with requests */
// WeakMap to hold temporary data associated with requests
const requestDataMap = new WeakMap();
const FinalizationRegistry = global.FinalizationRegistry || null;
@@ -23,7 +23,7 @@ const clientRegistry = FinalizationRegistry
} else {
logger.debug('[FinalizationRegistry] Cleaning up client');
}
} catch {
} catch (e) {
// Ignore errors
}
})
@@ -55,9 +55,6 @@ function disposeClient(client) {
if (client.responseMessageId) {
client.responseMessageId = null;
}
if (client.parentMessageId) {
client.parentMessageId = null;
}
if (client.message_file_map) {
client.message_file_map = null;
}
@@ -337,7 +334,7 @@ function disposeClient(client) {
}
}
client.options = null;
} catch {
} catch (e) {
// Ignore errors during disposal
}
}
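The pattern in this file pairs a WeakMap keyed by request with an optional FinalizationRegistry, so per-request client data can be garbage-collected without explicit teardown. A condensed sketch of the same technique, with illustrative names:

// Hold per-request data without preventing garbage collection.
const requestDataMap = new WeakMap();

// FinalizationRegistry may be absent on very old Node versions.
const registry = global.FinalizationRegistry
  ? new FinalizationRegistry((heldValue) => {
      // Runs some time after the registered client becomes unreachable.
      console.debug('[FinalizationRegistry] Cleaning up', heldValue);
    })
  : null;

function trackClient(req, client) {
  requestDataMap.set(req, { client });
  registry?.register(client, client.constructor?.name ?? 'client');
}

function disposeClientSketch(client) {
  // Null out heavy references so they can be collected promptly.
  if (!client) {
    return;
  }
  client.options = null;
  client.message_file_map = null;
}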

View File

@@ -1,8 +1,8 @@
const cookies = require('cookie');
const jwt = require('jsonwebtoken');
const openIdClient = require('openid-client');
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { isEnabled, findOpenIDUser } = require('@librechat/api');
const {
requestPasswordReset,
setOpenIDAuthTokens,
@@ -11,8 +11,6 @@ const {
registerUser,
} = require('~/server/services/AuthService');
const { findUser, getUserById, deleteAllUserSessions, findSession } = require('~/models');
const { getGraphApiToken } = require('~/server/services/GraphTokenService');
const { getOAuthReconnectionManager } = require('~/config');
const { getOpenIdConfig } = require('~/strategies');
const registrationController = async (req, res) => {
@@ -72,17 +70,11 @@ const refreshController = async (req, res) => {
const openIdConfig = getOpenIdConfig();
const tokenset = await openIdClient.refreshTokenGrant(openIdConfig, refreshToken);
const claims = tokenset.claims();
const { user, error } = await findOpenIDUser({
findUser,
email: claims.email,
openidId: claims.sub,
idOnTheSource: claims.oid,
strategyName: 'refreshController',
});
if (error || !user) {
const user = await findUser({ email: claims.email });
if (!user) {
return res.status(401).redirect('/login');
}
const token = setOpenIDAuthTokens(tokenset, res, user._id.toString());
const token = setOpenIDAuthTokens(tokenset, res);
return res.status(200).send({ token, user });
} catch (error) {
logger.error('[refreshController] OpenID token refresh error', error);
@@ -91,7 +83,7 @@ const refreshController = async (req, res) => {
}
try {
const payload = jwt.verify(refreshToken, process.env.JWT_REFRESH_SECRET);
const user = await getUserById(payload.id, '-password -__v -totpSecret -backupCodes');
const user = await getUserById(payload.id, '-password -__v -totpSecret');
if (!user) {
return res.status(401).redirect('/login');
}
@@ -103,25 +95,14 @@ const refreshController = async (req, res) => {
return res.status(200).send({ token, user });
}
/** Session with the hashed refresh token */
const session = await findSession(
{
userId: userId,
refreshToken: refreshToken,
},
{ lean: false },
);
// Find the session with the hashed refresh token
const session = await findSession({
userId: userId,
refreshToken: refreshToken,
});
if (session && session.expiration > new Date()) {
const token = await setAuthTokens(userId, res, session);
// trigger OAuth MCP server reconnection asynchronously (best effort)
void getOAuthReconnectionManager()
.reconnectServers(userId)
.catch((err) => {
logger.error('Error reconnecting OAuth MCP servers:', err);
});
const token = await setAuthTokens(userId, res, session._id);
res.status(200).send({ token, user });
} else if (req?.query?.retry) {
// Retrying from a refresh token request that failed (401)
@@ -132,59 +113,14 @@ const refreshController = async (req, res) => {
res.status(401).send('Refresh token expired or not found for this user');
}
} catch (err) {
logger.error(`[refreshController] Invalid refresh token:`, err);
logger.error(`[refreshController] Refresh token: ${refreshToken}`, err);
res.status(403).send('Invalid refresh token');
}
};
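In outline, the non-OpenID branch above verifies the refresh JWT, looks up the session stored alongside a hashed copy of the token, and mints new auth tokens only while that session is unexpired. A stripped-down sketch, where findSession and setAuthTokens stand in for the imported helpers:

const jwt = require('jsonwebtoken');

// Stripped-down refresh flow; findSession and setAuthTokens stand in for
// the helpers imported from ~/models and AuthService.
async function refreshSketch(refreshToken, res, { findSession, setAuthTokens }) {
  // Throws if the token is invalid; the real controller catches and sends 403.
  const payload = jwt.verify(refreshToken, process.env.JWT_REFRESH_SECRET);
  const session = await findSession(
    { userId: payload.id, refreshToken },
    { lean: false },
  );
  if (!session || session.expiration <= new Date()) {
    return res.status(401).send('Refresh token expired or not found for this user');
  }
  const token = await setAuthTokens(payload.id, res, session);
  return res.status(200).send({ token });
}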
const graphTokenController = async (req, res) => {
try {
// Validate user is authenticated via Entra ID
if (!req.user.openidId || req.user.provider !== 'openid') {
return res.status(403).json({
message: 'Microsoft Graph access requires Entra ID authentication',
});
}
// Check if OpenID token reuse is active (required for on-behalf-of flow)
if (!isEnabled(process.env.OPENID_REUSE_TOKENS)) {
return res.status(403).json({
message: 'SharePoint integration requires OpenID token reuse to be enabled',
});
}
// Extract access token from Authorization header
const authHeader = req.headers.authorization;
if (!authHeader || !authHeader.startsWith('Bearer ')) {
return res.status(401).json({
message: 'Valid authorization token required',
});
}
// Get scopes from query parameters
const scopes = req.query.scopes;
if (!scopes) {
return res.status(400).json({
message: 'Graph API scopes are required as query parameter',
});
}
const accessToken = authHeader.substring(7); // Remove 'Bearer ' prefix
const tokenResponse = await getGraphApiToken(req.user, accessToken, scopes);
res.json(tokenResponse);
} catch (error) {
logger.error('[graphTokenController] Failed to obtain Graph API token:', error);
res.status(500).json({
message: 'Failed to obtain Microsoft Graph token',
});
}
};
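A hypothetical client-side call against this controller; the mount path below is an assumption for illustration, with the requested Graph scopes in the query string and the user's OpenID access token in the Authorization header:

// Hypothetical usage; the route path is assumed, not taken from this diff.
async function fetchGraphToken(accessToken) {
  const scopes = encodeURIComponent('https://graph.microsoft.com/.default');
  const res = await fetch(`/api/auth/graph?scopes=${scopes}`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) {
    throw new Error(`Graph token request failed: ${res.status}`);
  }
  return res.json(); // the token response produced by getGraphApiToken
}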
module.exports = {
refreshController,
registrationController,
resetPasswordController,
resetPasswordRequestController,
graphTokenController,
};

View File

@@ -0,0 +1,46 @@
const { logger } = require('~/config');
// Handle duplicate key errors
const handleDuplicateKeyError = (err, res) => {
logger.error('Duplicate key error:', err.keyValue);
const field = `${JSON.stringify(Object.keys(err.keyValue))}`;
const code = 409;
res
.status(code)
.send({ messages: `A document with that ${field} already exists.`, fields: field });
};
// Handle validation errors
const handleValidationError = (err, res) => {
logger.error('Validation error:', err.errors);
let errors = Object.values(err.errors).map((el) => el.message);
let fields = `${JSON.stringify(Object.values(err.errors).map((el) => el.path))}`;
let code = 400;
if (errors.length > 1) {
errors = errors.join(' ');
res.status(code).send({ messages: `${JSON.stringify(errors)}`, fields: fields });
} else {
res.status(code).send({ messages: `${JSON.stringify(errors)}`, fields: fields });
}
};
module.exports = (err, _req, res, _next) => {
try {
if (err.name === 'ValidationError') {
return handleValidationError(err, res);
}
if (err.code && err.code == 11000) {
return handleDuplicateKeyError(err, res);
}
// Special handling for errors like SyntaxError
if (err.statusCode && err.body) {
return res.status(err.statusCode).send(err.body);
}
logger.error('ErrorController => error', err);
return res.status(500).send('An unknown error occurred.');
} catch (err) {
logger.error('ErrorController => processing error', err);
return res.status(500).send('Processing error in ErrorController.');
}
};
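Because the export is a four-argument handler, Express treats it as error middleware; a minimal wiring sketch (route and error values are illustrative):

const express = require('express');
const errorController = require('./ErrorController'); // this module

const app = express();
app.get('/boom', () => {
  // Simulate a Mongo duplicate-key error; values are illustrative.
  const err = new Error('duplicate');
  err.code = 11000;
  err.keyValue = { email: 'test@example.com' };
  throw err; // Express forwards this to the error middleware below
});
app.use(errorController); // registered last so all errors land here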

View File

@@ -1,45 +1,36 @@
import { logger } from '@librechat/data-schemas';
import { ErrorController } from './error';
import type { Request, Response } from 'express';
import type { ValidationError, MongoServerError, CustomError } from '~/types';
const errorController = require('./ErrorController');
const { logger } = require('~/config');
// Mock the logger
jest.mock('@librechat/data-schemas', () => ({
...jest.requireActual('@librechat/data-schemas'),
jest.mock('~/config', () => ({
logger: {
error: jest.fn(),
warn: jest.fn(),
},
}));
describe('ErrorController', () => {
let mockReq: Request;
let mockRes: Response;
let mockNext: jest.Mock;
let mockReq, mockRes, mockNext;
beforeEach(() => {
mockReq = {
originalUrl: '',
} as Request;
mockReq = {};
mockRes = {
status: jest.fn().mockReturnThis(),
send: jest.fn(),
} as unknown as Response;
(logger.error as jest.Mock).mockClear();
};
mockNext = jest.fn();
logger.error.mockClear();
});
describe('ValidationError handling', () => {
it('should handle ValidationError with single error', () => {
const validationError = {
name: 'ValidationError',
message: 'Validation error',
errors: {
email: { message: 'Email is required', path: 'email' },
},
} as ValidationError;
};
ErrorController(validationError, mockReq, mockRes, mockNext);
errorController(validationError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(400);
expect(mockRes.send).toHaveBeenCalledWith({
@@ -52,14 +43,13 @@ describe('ErrorController', () => {
it('should handle ValidationError with multiple errors', () => {
const validationError = {
name: 'ValidationError',
message: 'Validation error',
errors: {
email: { message: 'Email is required', path: 'email' },
password: { message: 'Password is required', path: 'password' },
},
} as ValidationError;
};
ErrorController(validationError, mockReq, mockRes, mockNext);
errorController(validationError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(400);
expect(mockRes.send).toHaveBeenCalledWith({
@@ -73,9 +63,9 @@ describe('ErrorController', () => {
const validationError = {
name: 'ValidationError',
errors: {},
} as ValidationError;
};
ErrorController(validationError, mockReq, mockRes, mockNext);
errorController(validationError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(400);
expect(mockRes.send).toHaveBeenCalledWith({
@@ -88,59 +78,43 @@ describe('ErrorController', () => {
describe('Duplicate key error handling', () => {
it('should handle duplicate key error (code 11000)', () => {
const duplicateKeyError = {
name: 'MongoServerError',
message: 'Duplicate key error',
code: 11000,
keyValue: { email: 'test@example.com' },
errmsg:
'E11000 duplicate key error collection: test.users index: email_1 dup key: { email: "test@example.com" }',
} as MongoServerError;
};
ErrorController(duplicateKeyError, mockReq, mockRes, mockNext);
errorController(duplicateKeyError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(409);
expect(mockRes.send).toHaveBeenCalledWith({
messages: 'A document with that ["email"] already exists.',
fields: '["email"]',
});
expect(logger.warn).toHaveBeenCalledWith(
'Duplicate key error: E11000 duplicate key error collection: test.users index: email_1 dup key: { email: "test@example.com" }',
);
expect(logger.error).toHaveBeenCalledWith('Duplicate key error:', duplicateKeyError.keyValue);
});
it('should handle duplicate key error with multiple fields', () => {
const duplicateKeyError = {
name: 'MongoServerError',
message: 'Duplicate key error',
code: 11000,
keyValue: { email: 'test@example.com', username: 'testuser' },
errmsg:
'E11000 duplicate key error collection: test.users index: email_1 dup key: { email: "test@example.com" }',
} as MongoServerError;
};
ErrorController(duplicateKeyError, mockReq, mockRes, mockNext);
errorController(duplicateKeyError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(409);
expect(mockRes.send).toHaveBeenCalledWith({
messages: 'A document with that ["email","username"] already exists.',
fields: '["email","username"]',
});
expect(logger.warn).toHaveBeenCalledWith(
'Duplicate key error: E11000 duplicate key error collection: test.users index: email_1 dup key: { email: "test@example.com" }',
);
expect(logger.error).toHaveBeenCalledWith('Duplicate key error:', duplicateKeyError.keyValue);
});
it('should handle error with code 11000 as string', () => {
const duplicateKeyError = {
name: 'MongoServerError',
message: 'Duplicate key error',
code: 11000,
code: '11000',
keyValue: { email: 'test@example.com' },
errmsg:
'E11000 duplicate key error collection: test.users index: email_1 dup key: { email: "test@example.com" }',
} as MongoServerError;
};
ErrorController(duplicateKeyError, mockReq, mockRes, mockNext);
errorController(duplicateKeyError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(409);
expect(mockRes.send).toHaveBeenCalledWith({
@@ -155,9 +129,9 @@ describe('ErrorController', () => {
const syntaxError = {
statusCode: 400,
body: 'Invalid JSON syntax',
} as CustomError;
};
ErrorController(syntaxError, mockReq, mockRes, mockNext);
errorController(syntaxError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(400);
expect(mockRes.send).toHaveBeenCalledWith('Invalid JSON syntax');
@@ -167,9 +141,9 @@ describe('ErrorController', () => {
const customError = {
statusCode: 422,
body: { error: 'Unprocessable entity' },
} as CustomError;
};
ErrorController(customError, mockReq, mockRes, mockNext);
errorController(customError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(422);
expect(mockRes.send).toHaveBeenCalledWith({ error: 'Unprocessable entity' });
@@ -178,9 +152,9 @@ describe('ErrorController', () => {
it('should handle error with statusCode but no body', () => {
const partialError = {
statusCode: 400,
} as CustomError;
};
ErrorController(partialError, mockReq, mockRes, mockNext);
errorController(partialError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(500);
expect(mockRes.send).toHaveBeenCalledWith('An unknown error occurred.');
@@ -189,9 +163,9 @@ describe('ErrorController', () => {
it('should handle error with body but no statusCode', () => {
const partialError = {
body: 'Some error message',
} as CustomError;
};
ErrorController(partialError, mockReq, mockRes, mockNext);
errorController(partialError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(500);
expect(mockRes.send).toHaveBeenCalledWith('An unknown error occurred.');
@@ -202,7 +176,7 @@ describe('ErrorController', () => {
it('should handle unknown errors', () => {
const unknownError = new Error('Some unknown error');
ErrorController(unknownError, mockReq, mockRes, mockNext);
errorController(unknownError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(500);
expect(mockRes.send).toHaveBeenCalledWith('An unknown error occurred.');
@@ -213,31 +187,32 @@ describe('ErrorController', () => {
const mongoError = {
code: 11100,
message: 'Some MongoDB error',
} as MongoServerError;
};
ErrorController(mongoError, mockReq, mockRes, mockNext);
errorController(mongoError, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(500);
expect(mockRes.send).toHaveBeenCalledWith('An unknown error occurred.');
expect(logger.error).toHaveBeenCalledWith('ErrorController => error', mongoError);
});
it('should handle generic errors', () => {
const genericError = new Error('Test error');
ErrorController(genericError, mockReq, mockRes, mockNext);
it('should handle null/undefined errors', () => {
errorController(null, mockReq, mockRes, mockNext);
expect(mockRes.status).toHaveBeenCalledWith(500);
expect(mockRes.send).toHaveBeenCalledWith('An unknown error occurred.');
expect(logger.error).toHaveBeenCalledWith('ErrorController => error', genericError);
expect(mockRes.send).toHaveBeenCalledWith('Processing error in ErrorController.');
expect(logger.error).toHaveBeenCalledWith(
'ErrorController => processing error',
expect.any(Error),
);
});
});
describe('Catch block handling', () => {
beforeEach(() => {
// Restore logger mock to normal behavior for these tests
(logger.error as jest.Mock).mockRestore();
(logger.error as jest.Mock) = jest.fn();
logger.error.mockRestore();
logger.error = jest.fn();
});
it('should handle errors when logger.error throws', () => {
@@ -245,10 +220,10 @@ describe('ErrorController', () => {
const freshMockRes = {
status: jest.fn().mockReturnThis(),
send: jest.fn(),
} as unknown as Response;
};
// Mock logger to throw on the first call, succeed on the second
(logger.error as jest.Mock)
logger.error
.mockImplementationOnce(() => {
throw new Error('Logger error');
})
@@ -256,7 +231,7 @@ describe('ErrorController', () => {
const testError = new Error('Test error');
ErrorController(testError, mockReq, freshMockRes, mockNext);
errorController(testError, mockReq, freshMockRes, mockNext);
expect(freshMockRes.status).toHaveBeenCalledWith(500);
expect(freshMockRes.send).toHaveBeenCalledWith('Processing error in ErrorController.');

View File

@@ -1,11 +1,10 @@
const { logger } = require('@librechat/data-schemas');
const { CacheKeys } = require('librechat-data-provider');
const { loadDefaultModels, loadConfigModels } = require('~/server/services/Config');
const { getLogStores } = require('~/cache');
const { logger } = require('~/config');
/**
* @param {ServerRequest} req
* @returns {Promise<TModelsConfig>} The models config.
*/
const getModelsConfig = async (req) => {
const cache = getLogStores(CacheKeys.CONFIG_STORE);
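The excerpt cuts off here, but the shape it follows is a standard cache-first load: return the cached models config when present, otherwise build and cache it. A hedged sketch of that pattern using the same cache keys (loadModels stands in for the imported loaders):

const { CacheKeys } = require('librechat-data-provider');

// Cache-first sketch; loadModels stands in for loadDefaultModels/loadConfigModels.
async function getModelsConfigSketch(req, { cache, loadModels }) {
  const cached = await cache.get(CacheKeys.MODELS_CONFIG);
  if (cached) {
    return cached;
  }
  const modelsConfig = await loadModels(req);
  await cache.set(CacheKeys.MODELS_CONFIG, modelsConfig);
  return modelsConfig;
}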

View File

@@ -0,0 +1,27 @@
const { CacheKeys } = require('librechat-data-provider');
const { loadOverrideConfig } = require('~/server/services/Config');
const { getLogStores } = require('~/cache');
async function overrideController(req, res) {
const cache = getLogStores(CacheKeys.CONFIG_STORE);
let overrideConfig = await cache.get(CacheKeys.OVERRIDE_CONFIG);
if (overrideConfig) {
res.send(overrideConfig);
return;
} else if (overrideConfig === false) {
res.send(false);
return;
}
overrideConfig = await loadOverrideConfig();
const { endpointsConfig, modelsConfig } = overrideConfig;
if (endpointsConfig) {
await cache.set(CacheKeys.ENDPOINT_CONFIG, endpointsConfig);
}
if (modelsConfig) {
await cache.set(CacheKeys.MODELS_CONFIG, modelsConfig);
}
await cache.set(CacheKeys.OVERRIDE_CONFIG, overrideConfig);
res.send(JSON.stringify(overrideConfig));
}
module.exports = overrideController;
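Note the overrideConfig === false branch above: a cached false is a deliberate negative result, so a missing override config is probed once rather than reloaded on every request. Mounting the controller is a single route (path illustrative):

// Hypothetical mounting; the actual route path is not shown in this diff.
const router = require('express').Router();
router.get('/', overrideController);
module.exports = router;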

Some files were not shown because too many files have changed in this diff.