Compare commits

...

63 Commits

Author SHA1 Message Date
Dustin Healy
3fd84216cf ci: add unit tests for backend prompt file attachment code and for uploaded file processing 2025-09-11 00:53:00 -07:00
Dustin Healy
d8997fdf0e feat: capabilities filtering on AttachFileButton 2025-09-11 00:53:00 -07:00
Dustin Healy
bb6ee0dc58 feat: fix mismatched sizes for icons 2025-09-11 00:53:00 -07:00
Dustin Healy
c4e86539c6 chore: remove debugging logs and tidy up misc stuff 2025-09-11 00:53:00 -07:00
Dustin Healy
f1bc15b3d5 chore: remove unnecessary comments 2025-09-11 00:53:00 -07:00
Dustin Healy
58678be0f8 chore: clean up comments and remove debugging log statements 2025-09-11 00:53:00 -07:00
Dustin Healy
441e69181c chore: import order 2025-09-11 00:53:00 -07:00
Dustin Healy
2c1d7a6b71 refactor: remove unnecessary onFileChange, just handle onSave stuff in onFilesChange 2025-09-11 00:53:00 -07:00
Dustin Healy
384d6c870b chore: remove unused translation string 2025-09-11 00:53:00 -07:00
Dustin Healy
013c002cbb fix: bring back proper deletion handling we lost with refactor for onRemoveHandler 2025-09-11 00:53:00 -07:00
Dustin Healy
e062ed5832 fix: refactor to ifs rather than switch case to maintain codebase style 2025-09-11 00:53:00 -07:00
Dustin Healy
d07d05a8d0 feat: add localization strings for tool resource types in file preview 2025-09-11 00:53:00 -07:00
Dustin Healy
5b38ce8fd9 fix: type guard for compiler 2025-09-11 00:53:00 -07:00
Dustin Healy
0a61e3cb39 feat: remove propdrilling for custom onFileRemove handler and just make it default behavior for PromptFile rather than working around old deletion handlers 2025-09-11 00:53:00 -07:00
Dustin Healy
a52c37faad chore: revert unnecessary change to message file handling 2025-09-11 00:53:00 -07:00
Dustin Healy
1f49c569c3 chore: remove debugging logs 2025-09-11 00:53:00 -07:00
Dustin Healy
479ce5df48 fix: use proper enum for promptGroup in useResourcePermissions arg and remove console.logs
chore: remove debugging logs

chore: remove debugging logs

chore: remove unused component
2025-09-11 00:53:00 -07:00
Dustin Healy
c37e368d98 chore: remove unused component and translation strings 2025-09-11 00:53:00 -07:00
Dustin Healy
fd29cbed4f chore: remove debug logs 2025-09-11 00:53:00 -07:00
Dustin Healy
277a321155 fix: attachments go in new prompt so that sidenav bar updates without refresh 2025-09-11 00:53:00 -07:00
Dustin Healy
0dba5c6450 fix: paperclip was getting larger as title got longer 2025-09-11 00:53:00 -07:00
Dustin Healy
93490764e6 refactor: move attach button to bottom of div when no attachments present 2025-09-11 00:53:00 -07:00
Dustin Healy
094320fcd9 feat: auto send working (still needs clean up) 2025-09-11 00:53:00 -07:00
Dustin Healy
cee11d3353 chore: address ESLint comments 2025-09-11 00:53:00 -07:00
Dustin Healy
69772317b2 chore: clean up usePromptFileHandling 2025-09-11 00:53:00 -07:00
Dustin Healy
607a5a2fcf feat: chat ui and functionality for prompts (auto-send not working) 2025-09-11 00:53:00 -07:00
Dustin Healy
7c3356e10b fix: deletion doesn't cause reference loss in versioning anymore - file reference maintained in db 2025-09-11 00:53:00 -07:00
Dustin Healy
d4fd0047cb fix: deletion + version updates not working properly 2025-09-11 00:53:00 -07:00
Dustin Healy
797fdf4286 feat: add attach section to PromptForm 2025-09-11 00:53:00 -07:00
Dustin Healy
623dfa5b63 feat: add file attachment section PromptFiles, new file display: PromptFile (needed for deletion to work properly), and usePromptFileHandling hook 2025-09-11 00:53:00 -07:00
Dustin Healy
600641d02f feat: add SharePoint picker support 2025-09-11 00:53:00 -07:00
Dustin Healy
d65accddc1 feat: add AttachFileButton for uploading files from a prompt context rather than chat
This is essentially a stripped-down version of AttachFileMenu, so of course there is duplication between this new component and AttachFileMenu, but I believe that is preferable to the increased complexity that would come from trying to handle both contexts within AttachFileMenu alone, with respect to ephemeral agents and the file-handling hooks. We could probably refactor this without much hassle later on, in the file upload unification push, once things are more settled.
2025-09-11 00:53:00 -07:00
Dustin Healy
195d2e2014 feat: add tool_resources to the productionPrompt for making and getting groups 2025-09-11 00:53:00 -07:00
Dustin Healy
c0ae6f277f feat: add schemas and types 2025-09-11 00:53:00 -07:00
Danny Avila
d91f34dd42 🔒 refactor: Optimize Email Domain Validation in OpenID, SAML, and Social Logins (#9567)
* refactor: Optimize Email Domain Validation in OpenID, SAML, and Social Login Strategies

    - Implemented email domain validation for user authentication in OpenID and SAML strategies, ensuring only allowed domains are processed.
    - Adjusted error messages for clarity and consistency across authentication methods.
    - Refactored social login to validate email domains before checking for existing users, improving registration flow.

* refactor: Email Domain Validation in LDAP and Social Login Strategies
2025-09-11 01:01:58 -04:00
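
The important part of this change is the ordering: the domain check now runs before any user lookup or registration write. A minimal, self-contained sketch of that flow in plain JavaScript (the helper below is a simplified stand-in for the project's isEmailDomainAllowed, and the findUser/createUser parameters are illustrative):

// Simplified stand-in for the project's isEmailDomainAllowed helper.
function isEmailDomainAllowed(email, allowedDomains) {
  if (!email) return false;
  if (!Array.isArray(allowedDomains) || allowedDomains.length === 0) return true;
  const domain = email.split('@').pop().toLowerCase();
  return allowedDomains.some((d) => d.toLowerCase() === domain);
}

// Validate the domain first, then look up or create the user.
async function resolveSocialUser({ email, allowedDomains, findUser, createUser }) {
  if (!isEmailDomainAllowed(email, allowedDomains)) {
    // Reject before touching the database so disallowed domains are never registered.
    throw new Error(`Email domain not allowed: ${email}`);
  }
  const existing = await findUser({ email });
  return existing ?? createUser({ email });
}
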
Danny Avila
5676976564 🔒 fix: Email Domain Validation Order and Coverage (#9566) 2025-09-10 23:13:39 -04:00
Danny Avila
85aa3e7d9c 🔧 refactor: Centralize Collection Checks for Permissions Migration (#9565)
* 🔧 refactor: Centralize Collection Existence Checks for Permissions Migration

* Replace individual collection existence checks with a unified function `ensureRequiredCollectionsExist` in the database utility module.
* Update migration scripts for agents and prompts to utilize the new function, ensuring all required collections are verified for existence in a single call.
* Remove redundant collection existence logic from migration files, improving code maintainability and clarity.

* chore: import order in migration scripts

* 🔧 test: Update Token Test Cases for Realistic Scenarios

* Changed email in test data to 'user1-alt@example.com' for a more realistic scenario.
* Clarified expectation comment for token retrieval to indicate it finds the only matching token based on criteria.
2025-09-10 20:40:58 -04:00
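
A rough sketch of what a unified check like `ensureRequiredCollectionsExist` can look like with the MongoDB Node.js driver; the actual signature and collection names in the repository may differ:

// Hypothetical shape of a unified collection-existence check (signature assumed).
async function ensureRequiredCollectionsExist(db, requiredCollections) {
  const existing = await db.listCollections({}, { nameOnly: true }).toArray();
  const existingNames = new Set(existing.map((c) => c.name));
  for (const name of requiredCollections) {
    if (!existingNames.has(name)) {
      // Create explicitly so the migration never relies on implicit collection creation.
      await db.createCollection(name);
    }
  }
}

// Illustrative usage in a migration script:
// await ensureRequiredCollectionsExist(mongoose.connection.db, ['aclentries', 'groups']);
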
Dustin Healy
a2ff6613c5 🪄 fix: MCP UI Renders for OAuth and Custom User Vars Servers (#9559) 2025-09-10 19:02:30 -04:00
Theo N. Truong
8d6cb5eee0 🧹 chore: Remove Unused Cache Configuration Keys (#9551)
* Remove unused STATIC_CONFIG and LIBRECHAT_YAML_CONFIG cache keys.

These cache keys were identified as dead code - they were being written to but never read from anywhere in the codebase after a recent refactor:

- STATIC_CONFIG was used as a cache namespace that stored configuration data
- LIBRECHAT_YAML_CONFIG was the key used within that namespace to store parsed YAML config
- The cache.set() operation in loadCustomConfig.js stored the config but no cache.get() operations retrieved it
- Configuration data is already handled through other mechanisms without caching

* # removed tests regarding cache
2025-09-10 19:01:44 -04:00
Federico Ruggi
31445e391a 🔖 fix: Agent Marketplace Bookmark and New Chat buttons (#9549)
* don't require conversation for bookmark button

* wrap marketplace component so it can correctly use context hooks

* chore: re-order import statement for MarketplaceProvider

---------

Co-authored-by: Danny Avila <danacordially@gmail.com>
2025-09-10 19:01:34 -04:00
Federico Ruggi
04c3a5a861 🔌 feat: Revoke MCP OAuth Credentials (#9464)
* revocation metadata fields

* store metadata

* get client info and meta

* revoke oauth tokens

* delete flow

* uninstall oauth mcp

* revoke button

* revoke oauth refactor, add comments, test

* adjust for clarity

* test deleteFlow

* handle metadata type

* no mutation

* adjust for clarity

* styling

* restructure for clarity

* move token-specific stuff

* use mcpmanager's oauth servers

* fix typo

* fix addressing of oauth prop

* log prefix

* remove debug log
2025-09-10 18:53:34 -04:00
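
The revocation step itself follows RFC 7009: the client POSTs the token with a token_type_hint to the server's revocation endpoint and authenticates with its client credentials. A generic sketch of that request, independent of the project's MCPOAuthHandler API:

// Generic RFC 7009 token revocation request (not the project's MCPOAuthHandler signature).
async function revokeToken({ revocationEndpoint, token, tokenTypeHint, clientId, clientSecret }) {
  const body = new URLSearchParams({ token, token_type_hint: tokenTypeHint });
  const headers = { 'Content-Type': 'application/x-www-form-urlencoded' };
  if (clientSecret) {
    // client_secret_basic; other methods may be advertised via
    // revocation_endpoint_auth_methods_supported in the server metadata.
    headers.Authorization = 'Basic ' + Buffer.from(`${clientId}:${clientSecret}`).toString('base64');
  } else {
    body.set('client_id', clientId);
  }
  const res = await fetch(revocationEndpoint, { method: 'POST', headers, body });
  // Per RFC 7009, a 200 response means the token is revoked or was already invalid.
  return res.ok;
}
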
Federico Ruggi
5667cc9702 🏪 fix: Show Agent Builder in Marketplace (#9537)
* don't require conversation endpoint

* bump up render time a bit

* a little less
2025-09-10 18:48:17 -04:00
Theo N. Truong
c0f95f971a 🗄️ refactor: Make APP_CONFIG a Dedicated Cache Store (#9558)
- This allows using APP_CONFIG in FORCED_IN_MEMORY_CACHE_NAMESPACES
- Remove the complexity of nested namespaces (e.g. we no longer have to worry about the prefix of every role key)
2025-09-10 18:46:54 -04:00
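
With APP_CONFIG as its own store, role-specific configs become plain keys inside that store rather than prefixed keys in the shared CONFIG_STORE namespace. A small sketch of the lookup pattern (the Map stands in for the dedicated cache store; the _BASE_ sentinel mirrors the getAppConfig diff further down):

// Illustrative only: a dedicated store for app config, keyed by role, with no key prefixes.
const BASE_CONFIG_KEY = '_BASE_';
const appConfigCache = new Map(); // stand-in for the dedicated APP_CONFIG cache store

async function getAppConfigSketch(role, buildBaseConfig) {
  const cacheKey = role ? role : BASE_CONFIG_KEY; // plain role key; no 'APP_CONFIG:role' prefix
  if (appConfigCache.has(cacheKey)) {
    return appConfigCache.get(cacheKey);
  }
  const baseConfig = await buildBaseConfig();
  appConfigCache.set(BASE_CONFIG_KEY, baseConfig);
  return baseConfig;
}
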
Danny Avila
f125f5bd32 🤖 refactor: Auto-validate IDs in Agent Query (#9555)
* 🤖 refactor: Auto-validate IDs in Agent Query

* chore: remove comments in useAgentToolPermissions
2025-09-10 18:38:33 -04:00
Danny Avila
f3eca8c7a7 📦 chore: bump vite to address low severity vulns (#9553)
* 📦 chore: bump `vite` to address low severity vulns

* chore: update bun.lockb to reflect dependency changes
2025-09-10 14:56:46 -04:00
github-actions[bot]
f22e5f965e 🌍 i18n: Update translation.json with latest translations (#9533)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-10 14:29:33 -04:00
Danny Avila
749f539dfc 📬 refactor: Improved Rendering and Localization for Drag & Drop Files (#9547)
* 📬 refactor: Improved Rendering and Localization for Drag & Drop Files

- Refactored DragDropOverlay to use memoization and props for active state management.
- Updated the overlay to always render, reducing mount/unmount overhead.
- Improved user experience with localized text for drag-and-drop instructions.
- Enhanced file handling logic in useDragHelpers for better performance and clarity.

* fix: agent data retrieval in drag helper
2025-09-10 14:27:57 -04:00
Danny Avila
1247207afe 🔒 fix: Memory Disabled Config UI Permissions (2/2) 2025-09-09 22:00:01 -04:00
Danny Avila
5c0e9d8fbb 📂 refactor: Show File Search and Code File Upload Options Based on Agent Tools (#9532) 2025-09-09 20:48:29 -04:00
Dustin Healy
957fa7a994 😶‍🌫️ refactor: Conditionally Hide Tools Dropdown (#9530) 2025-09-09 19:57:50 -04:00
Danny Avila
751c2e1d17 👻 refactor: LocalStorage Cleanup and MCP State Optimization (#9528)
* 👻 refactor: MCP Select State with Jotai Atoms

* refactor: Implement timestamp management for ChatArea localStorage entries

* refactor: Integrate MCP Server Manager into BadgeRow context and components to avoid double-calling within BadgeRow

* refactor: add try/catch

* chore: remove comment
2025-09-09 17:32:10 -04:00
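
One common way to implement the timestamp management mentioned above is to wrap each localStorage value with a written-at time and sweep stale entries on startup. A browser-side sketch; the key prefix and TTL below are assumptions, not the app's actual values:

// Assumed prefix and TTL; the real keys and lifetime in the app may differ.
const PREFIX = 'chatArea:';
const TTL_MS = 7 * 24 * 60 * 60 * 1000; // one week

function setWithTimestamp(key, value) {
  localStorage.setItem(PREFIX + key, JSON.stringify({ value, ts: Date.now() }));
}

function cleanupStaleEntries() {
  const now = Date.now();
  for (let i = localStorage.length - 1; i >= 0; i--) {
    const key = localStorage.key(i);
    if (!key || !key.startsWith(PREFIX)) continue;
    try {
      const { ts } = JSON.parse(localStorage.getItem(key));
      if (!ts || now - ts > TTL_MS) localStorage.removeItem(key);
    } catch {
      localStorage.removeItem(key); // unparsable entries are treated as stale
    }
  }
}
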
Danny Avila
519645c0b0 🔻 fix: Role and System Message Handling for ChatGPT Imports (#9524)
* fix: ChatGPT import logic breaks message graph when it encounters a system message

- Implemented `findNonSystemParent` to maintain parent-child relationships by skipping system messages.
- Added a test case to ensure system messages do not disrupt the conversation flow during import.

* fix: ChatGPT import, correct sender for user messages with GPT-4 model

* fix: Enhance model name extraction for assistant messages in import process

- Updated sender assignment logic to dynamically extract model names from model slugs, improving accuracy for various GPT models.
- Added comprehensive tests to validate the extraction and formatting of model names from different model slugs, ensuring robustness in the import functionality.
2025-09-09 13:51:26 -04:00
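
The graph fix boils down to walking up the parent chain until a non-system message is found, so children of a system message reattach to the nearest non-system ancestor. A simplified sketch of that traversal (the message shape is an assumption based on ChatGPT export files, not the importer's exact types):

// messages: map of id -> { id, parent, author: { role } } (simplified export shape).
function findNonSystemParent(messages, parentId) {
  let current = messages[parentId];
  while (current && current.author?.role === 'system') {
    current = messages[current.parent];
  }
  return current ? current.id : null; // null => treat the child as a root message
}
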
Danny Avila
0d0a318c3c 📦 chore: Update caniuse-lite to v1.0.30001741 (#9523) 2025-09-09 09:26:15 -04:00
Danny Avila
588e0c4611 🔒 fix: Memory Disabled Config UI Permissions (#9522) 2025-09-09 09:14:40 -04:00
github-actions[bot]
79144a6365 🌍 i18n: Update translation.json with latest translations (#9515)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-09-09 09:08:28 -04:00
Danny Avila
ca53c20370 🚃 refactor: Normalize paths for Vite Config Chunking (#9513) 2025-09-08 21:53:15 -04:00
Danny Avila
d635503f49 🔐 ci: Add MCP Environment Processing tests 2025-09-08 15:38:44 -04:00
Dev
920966f895 🔐 fix: Resolve Env. Variables for MCP OAuth Manual Config (#9501)
* Added functionality to process OAuth configuration within the MCP environment.
* Implemented handling for string values in OAuth settings, ensuring proper processing of environment variables.
* Maintained original structure for non-string values to preserve existing configurations.
2025-09-08 15:29:10 -04:00
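
The processing described here resolves ${VAR} placeholders in string-valued OAuth fields while leaving non-string values untouched. A hedged sketch of that behavior (function and field names are illustrative, not the project's exact helpers):

// Replace ${VAR} placeholders in strings with process.env values; leave other types as-is.
function resolveEnvPlaceholders(value) {
  if (typeof value !== 'string') return value;
  return value.replace(/\$\{([A-Z0-9_]+)\}/gi, (match, name) => process.env[name] ?? match);
}

function processOAuthConfig(oauth = {}) {
  const processed = {};
  for (const [key, value] of Object.entries(oauth)) {
    processed[key] = resolveEnvPlaceholders(value); // arrays/objects pass through unchanged
  }
  return processed;
}

// e.g. { client_id: '${MCP_CLIENT_ID}', scopes: ['read'] } resolves the id and keeps the array.
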
Danny Avila
c46e0d3ecc 🔒 fix: href Attribute in Email Microsoft Template 2025-09-08 14:39:00 -04:00
Dustin Healy
c6ecf0095b 🎚️ feat: Anthropic Parameter Set Support via Custom Endpoints (#9415)
* refactor: modularize openai llm config logic into new getOpenAILLMConfig function (#9412)

* ✈️ refactor: Migrate Anthropic's getLLMConfig to TypeScript (#9413)

* refactor: move tokens.js over to packages/api and update imports

* refactor: port tokens.js to typescript

* refactor: move helpers.js over to packages/api and update imports

* refactor: port helpers.js to typescript

* refactor: move anthropic/llm.js over to packages/api and update imports

* refactor: port anthropic/llm.js to typescript with supporting types in types/anthropic.ts and updated tests in llm.spec.js

* refactor: move llm.spec.js over to packages/api and update import

* refactor: port llm.spec.js over to typescript

* 📝  Add Prompt Parameter Support for Anthropic Custom Endpoints (#9414)

feat: add anthropic llm config support for openai-like (custom) endpoints

* fix: missed compiler / type issues from addition of getAnthropicLLMConfig

* refactor: update tokens.ts to export constants and functions, enhance type definitions, and adjust default values

* WIP: first pass, decouple `llmConfig` from `configOptions`

* chore: update import path for OpenAI configuration from 'llm' to 'config'

* refactor: enhance type definitions for ThinkingConfig and update modelOptions in AnthropicConfigOptions

* refactor: cleanup type, introduce openai transform from alt provider

* chore: integrate removeNullishValues in Google llmConfig and update OpenAI exports

* chore: bump version of @librechat/api to 1.3.5 in package.json and package-lock.json

* refactor: update customParams type in OpenAIConfigOptions to use TConfig['customParams']

* refactor: enhance transformToOpenAIConfig to include fromEndpoint and improve config extraction

* refactor: conform userId field for anthropic/openai, cleanup anthropic typing

* ci: add backward compatibility tests for getOpenAIConfig with various endpoints and configurations

* ci: replace userId with user in clientOptions for getLLMConfig

* test: add Azure OpenAI endpoint tests for various configurations in getOpenAIConfig

* refactor: defaultHeaders retrieval for prompt caching for anthropic-based custom endpoint (litellm)

* test: add unit tests for getOpenAIConfig with various Anthropic model configurations

* test: enhance Anthropic compatibility tests with addParams and dropParams handling

* chore: update @librechat/agents dependency to version 2.4.78 in package.json and package-lock.json

* chore: update @librechat/agents dependency to version 2.4.79 in package.json and package-lock.json

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-09-08 14:35:29 -04:00
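
At a high level, the custom-endpoint path takes an Anthropic-style parameter set, applies the endpoint's addParams/dropParams overrides from librechat.yaml, and hands the result to the OpenAI-compatible client. A rough sketch of the addParams/dropParams step only; the real getAnthropicLLMConfig and transformToOpenAIConfig logic is considerably more involved:

// Apply custom-endpoint parameter overrides: merge in extra params, then drop excluded ones.
function applyCustomParams(modelOptions, { addParams, dropParams } = {}) {
  const merged = { ...modelOptions, ...(addParams ?? {}) };
  for (const key of dropParams ?? []) {
    delete merged[key];
  }
  return merged;
}

// Illustrative call:
// applyCustomParams(
//   { model: 'claude-3-7-sonnet', temperature: 1, top_p: 0.9 },
//   { addParams: { thinking: { type: 'enabled', budget_tokens: 2000 } }, dropParams: ['top_p'] },
// );
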
Danny Avila
7de6f6e44c ⚙️ chore: Update Build Config due to Windows Tests (#9511)
* chore: remove `rollup-plugin-generate-package-json`

* chore: increase maximum file size to cache in Vite configuration for windows builds
2025-09-08 14:16:49 -04:00
Danny Avila
035f85c3ba 🧪 ci: Tests for Anthropic and OpenAI LLM Configuration (#9484)
* fix: freq. and pres. penalty use camelcase

* ci: OpenAI Configuration Tests

* ci: Enhance OpenAI Configuration Tests with Azure and Custom Endpoint Scenarios

* Added integration tests for OpenAI and Azure configurations simulating various initialization scenarios.
* Updated OpenAIConfigOptions to allow null values for reverseProxyUrl and proxy.
* Improved handling of reasoning parameters in tests for both OpenAI and Azure setups.
* Ensured robust error handling for missing API keys and malformed configurations.
* Optimized performance for large parameter sets in configuration.

* test: Add comprehensive integration tests for Anthropic LLM configuration

* Introduced real usage integration tests for various Anthropic endpoint configurations, including handling of proxy and reverse proxy setups.
* Implemented model-specific scenarios for Claude-3.7 and web search functionality.
* Enhanced error handling for missing user IDs and large parameter sets.
* Validated parameter logic, including default values, boundary conditions, and type handling for numeric and array parameters.
* Ensured proper exclusion of system options from model options and maintained expected behavior across different model variations.
2025-09-06 09:42:12 -04:00
Daniel Andersen
6f6a34d126 🔗 feat: Custom Jina API URL for Web Search Reranking (#9236)
* feat: added support for custom JINA_API_URL

* fixed tests

* chore: Update @librechat/agents dependency to version 2.4.77 in package-lock.json and package.json files

* fix: Update Jina API URL to use environment variable in configuration files

* Refactor AppService, web.ts, and config.ts to replace hardcoded Jina API URL with an environment variable placeholder.
* Ensure consistency across tests and configuration for Jina API URL.

* chore: alphabetical order translation.json

* fix: alphabetical order

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-09-06 08:39:20 -04:00
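
The reranker URL is now read from the environment instead of being hardcoded. A minimal sketch of that resolution; the default URL below is an assumption about Jina's hosted reranker, not something taken from this diff, and the ${JINA_API_URL} placeholder mirrors the configuration changes further down:

// Resolve the Jina reranker base URL from the environment, falling back to a default.
// The default value here is an assumption, not the project's verified constant.
const DEFAULT_JINA_API_URL = 'https://api.jina.ai/v1/rerank';

function getJinaApiUrl(env = process.env) {
  const url = env.JINA_API_URL?.trim();
  return url && url.length > 0 ? url : DEFAULT_JINA_API_URL;
}
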
161 changed files with 10269 additions and 2497 deletions

View File

@@ -690,8 +690,8 @@ HELP_AND_FAQ_URL=https://librechat.ai
# REDIS_PING_INTERVAL=300
# Force specific cache namespaces to use in-memory storage even when Redis is enabled
# Comma-separated list of CacheKeys (e.g., STATIC_CONFIG,ROLES,MESSAGES)
# FORCED_IN_MEMORY_CACHE_NAMESPACES=STATIC_CONFIG,ROLES
# Comma-separated list of CacheKeys (e.g., ROLES,MESSAGES)
# FORCED_IN_MEMORY_CACHE_NAMESPACES=ROLES,MESSAGES
#==================================================#
# Others #

View File

@@ -75,6 +75,7 @@
- 🔍 **Web Search**:
- Search the internet and retrieve relevant information to enhance your AI context
- Combines search providers, content scrapers, and result rerankers for optimal results
- **Customizable Jina Reranking**: Configure custom Jina API URLs for reranking services
- **[Learn More →](https://www.librechat.ai/docs/features/web_search)**
- 🪄 **Generative UI with Code Artifacts**:

View File

@@ -10,7 +10,17 @@ const {
validateVisionModel,
} = require('librechat-data-provider');
const { SplitStreamHandler: _Handler } = require('@librechat/agents');
const { Tokenizer, createFetch, createStreamEventHandlers } = require('@librechat/api');
const {
Tokenizer,
createFetch,
matchModelName,
getClaudeHeaders,
getModelMaxTokens,
configureReasoning,
checkPromptCacheSupport,
getModelMaxOutputTokens,
createStreamEventHandlers,
} = require('@librechat/api');
const {
truncateText,
formatMessage,
@@ -19,12 +29,6 @@ const {
parseParamFromPrompt,
createContextHandlers,
} = require('./prompts');
const {
getClaudeHeaders,
configureReasoning,
checkPromptCacheSupport,
} = require('~/server/services/Endpoints/anthropic/helpers');
const { getModelMaxTokens, getModelMaxOutputTokens, matchModelName } = require('~/utils');
const { spendTokens, spendStructuredTokens } = require('~/models/spendTokens');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const { sleep } = require('~/server/utils');

View File

@@ -1,4 +1,5 @@
const { google } = require('googleapis');
const { getModelMaxTokens } = require('@librechat/api');
const { concat } = require('@langchain/core/utils/stream');
const { ChatVertexAI } = require('@langchain/google-vertexai');
const { Tokenizer, getSafetySettings } = require('@librechat/api');
@@ -21,7 +22,6 @@ const {
} = require('librechat-data-provider');
const { encodeAndFormat } = require('~/server/services/Files/images');
const { spendTokens } = require('~/models/spendTokens');
const { getModelMaxTokens } = require('~/utils');
const { sleep } = require('~/server/utils');
const { logger } = require('~/config');
const {

View File

@@ -7,7 +7,9 @@ const {
createFetch,
resolveHeaders,
constructAzureURL,
getModelMaxTokens,
genAzureChatCompletion,
getModelMaxOutputTokens,
createStreamEventHandlers,
} = require('@librechat/api');
const {
@@ -31,13 +33,13 @@ const {
titleInstruction,
createContextHandlers,
} = require('./prompts');
const { extractBaseURL, getModelMaxTokens, getModelMaxOutputTokens } = require('~/utils');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const { addSpaceIfNeeded, sleep } = require('~/server/utils');
const { spendTokens } = require('~/models/spendTokens');
const { handleOpenAIErrors } = require('./tools/util');
const { summaryBuffer } = require('./memory');
const { runTitleChain } = require('./chains');
const { extractBaseURL } = require('~/utils');
const { tokenSplit } = require('./document');
const BaseClient = require('./BaseClient');
const { createLLM } = require('./llm');

View File

@@ -1,5 +1,5 @@
const { getModelMaxTokens } = require('@librechat/api');
const BaseClient = require('../BaseClient');
const { getModelMaxTokens } = require('../../../utils');
class FakeClient extends BaseClient {
constructor(apiKey, options = {}) {

View File

@@ -157,12 +157,11 @@ describe('cacheConfig', () => {
describe('FORCED_IN_MEMORY_CACHE_NAMESPACES validation', () => {
test('should parse comma-separated cache keys correctly', () => {
process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES = ' ROLES, STATIC_CONFIG ,MESSAGES ';
process.env.FORCED_IN_MEMORY_CACHE_NAMESPACES = ' ROLES, MESSAGES ';
const { cacheConfig } = require('./cacheConfig');
expect(cacheConfig.FORCED_IN_MEMORY_CACHE_NAMESPACES).toEqual([
'ROLES',
'STATIC_CONFIG',
'MESSAGES',
]);
});

View File

@@ -31,8 +31,8 @@ const namespaces = {
[CacheKeys.SAML_SESSION]: sessionCache(CacheKeys.SAML_SESSION),
[CacheKeys.ROLES]: standardCache(CacheKeys.ROLES),
[CacheKeys.APP_CONFIG]: standardCache(CacheKeys.APP_CONFIG),
[CacheKeys.CONFIG_STORE]: standardCache(CacheKeys.CONFIG_STORE),
[CacheKeys.STATIC_CONFIG]: standardCache(CacheKeys.STATIC_CONFIG),
[CacheKeys.PENDING_REQ]: standardCache(CacheKeys.PENDING_REQ),
[CacheKeys.ENCODED_DOMAINS]: new Keyv({ store: keyvMongo, namespace: CacheKeys.ENCODED_DOMAINS }),
[CacheKeys.ABORT_KEYS]: standardCache(CacheKeys.ABORT_KEYS, Time.TEN_MINUTES),

View File

@@ -51,6 +51,7 @@ const createGroupPipeline = (query, skip, limit) => {
createdAt: 1,
updatedAt: 1,
'productionPrompt.prompt': 1,
'productionPrompt.tool_resources': 1,
// 'productionPrompt._id': 1,
// 'productionPrompt.type': 1,
},
@@ -328,6 +329,7 @@ async function getListPromptGroupsByAccess({
createdAt: 1,
updatedAt: 1,
'productionPrompt.prompt': 1,
'productionPrompt.tool_resources': 1,
},
},
);
@@ -411,7 +413,10 @@ module.exports = {
prompt: newPrompt,
group: {
...newPromptGroup,
productionPrompt: { prompt: newPrompt.prompt },
productionPrompt: {
prompt: newPrompt.prompt,
tool_resources: newPrompt.tool_resources,
},
},
};
} catch (error) {

View File

@@ -562,3 +562,884 @@ describe('Prompt ACL Permissions', () => {
});
});
});
describe('Prompt Model - File Attachments', () => {
describe('Creating Prompts with tool_resources', () => {
it('should create a prompt with file attachments in tool_resources', async () => {
const testGroup = await PromptGroup.create({
name: 'Attachment Test Group',
category: 'testing',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new mongoose.Types.ObjectId(),
});
const promptData = {
prompt: {
prompt: 'Test prompt with file attachments',
type: 'text',
groupId: testGroup._id,
tool_resources: {
file_search: {
file_ids: ['file-1', 'file-2'],
},
execute_code: {
file_ids: ['file-3'],
},
image_edit: {
file_ids: ['file-4'],
},
},
},
author: testUsers.owner._id,
};
const result = await promptFns.savePrompt(promptData);
expect(result.prompt).toBeTruthy();
expect(result.prompt.tool_resources).toEqual({
file_search: {
file_ids: ['file-1', 'file-2'],
},
execute_code: {
file_ids: ['file-3'],
},
image_edit: {
file_ids: ['file-4'],
},
});
const savedPrompt = await Prompt.findById(result.prompt._id);
expect(savedPrompt.tool_resources).toEqual(promptData.prompt.tool_resources);
});
it('should create a prompt without tool_resources when none provided', async () => {
const testGroup = await PromptGroup.create({
name: 'No Attachment Test Group',
category: 'testing',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new mongoose.Types.ObjectId(),
});
const promptData = {
prompt: {
prompt: 'Test prompt without attachments',
type: 'text',
groupId: testGroup._id,
},
author: testUsers.owner._id,
};
const result = await promptFns.savePrompt(promptData);
expect(result.prompt).toBeTruthy();
expect(result.prompt.tool_resources).toEqual({});
const savedPrompt = await Prompt.findById(result.prompt._id);
expect(savedPrompt.tool_resources).toEqual({});
});
it('should create a prompt group with tool_resources', async () => {
const saveData = {
prompt: {
type: 'text',
prompt: 'Test prompt with file attachments',
tool_resources: {
file_search: {
file_ids: ['file-1', 'file-2'],
},
ocr: {
file_ids: ['file-3'],
},
},
},
group: {
name: 'Test Prompt Group with Attachments',
category: 'test-category',
oneliner: 'Test description',
},
author: testUsers.owner._id,
authorName: testUsers.owner.name,
};
const result = await promptFns.createPromptGroup(saveData);
expect(result.prompt).toBeTruthy();
expect(result.group).toBeTruthy();
expect(result.prompt.tool_resources).toEqual({
file_search: {
file_ids: ['file-1', 'file-2'],
},
ocr: {
file_ids: ['file-3'],
},
});
expect(result.group.productionPrompt.tool_resources).toEqual(result.prompt.tool_resources);
});
});
describe('Retrieving Prompts with tool_resources', () => {
let testGroup;
let testPrompt;
beforeEach(async () => {
testGroup = await PromptGroup.create({
name: 'Retrieval Test Group',
category: 'testing',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new mongoose.Types.ObjectId(),
});
testPrompt = await Prompt.create({
prompt: 'Test prompt with attachments for retrieval',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
tool_resources: {
file_search: {
file_ids: ['file-1', 'file-2'],
},
execute_code: {
file_ids: ['file-3'],
},
},
});
});
afterEach(async () => {
await Prompt.deleteMany({});
await PromptGroup.deleteMany({});
});
it('should retrieve a prompt with tool_resources', async () => {
const result = await promptFns.getPrompt({ _id: testPrompt._id });
expect(result).toBeTruthy();
expect(result.tool_resources).toEqual({
file_search: {
file_ids: ['file-1', 'file-2'],
},
execute_code: {
file_ids: ['file-3'],
},
});
});
it('should retrieve prompts with tool_resources by groupId', async () => {
const result = await promptFns.getPrompts({ groupId: testGroup._id });
expect(result).toBeTruthy();
expect(Array.isArray(result)).toBe(true);
expect(result.length).toBe(1);
expect(result[0].tool_resources).toEqual({
file_search: {
file_ids: ['file-1', 'file-2'],
},
execute_code: {
file_ids: ['file-3'],
},
});
});
it('should handle prompts without tool_resources', async () => {
const promptWithoutAttachments = await Prompt.create({
prompt: 'Test prompt without attachments',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
});
const result = await promptFns.getPrompt({ _id: promptWithoutAttachments._id });
expect(result).toBeTruthy();
expect(result.tool_resources).toBeUndefined();
});
});
describe('Updating Prompts with tool_resources', () => {
let testGroup;
beforeEach(async () => {
testGroup = await PromptGroup.create({
name: 'Update Test Group',
category: 'testing',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new mongoose.Types.ObjectId(),
});
await Prompt.create({
prompt: 'Original prompt',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
tool_resources: {
file_search: {
file_ids: ['file-1'],
},
},
});
});
afterEach(async () => {
await Prompt.deleteMany({});
await PromptGroup.deleteMany({});
});
it('should update prompt with new tool_resources', async () => {
const updatedPromptData = {
prompt: {
prompt: 'Updated prompt with new attachments',
type: 'text',
groupId: testGroup._id,
tool_resources: {
file_search: {
file_ids: ['file-1', 'file-2'],
},
execute_code: {
file_ids: ['file-3'],
},
},
},
author: testUsers.owner._id,
};
const result = await promptFns.savePrompt(updatedPromptData);
expect(result.prompt).toBeTruthy();
expect(result.prompt.tool_resources).toEqual({
file_search: {
file_ids: ['file-1', 'file-2'],
},
execute_code: {
file_ids: ['file-3'],
},
});
});
it('should update prompt to remove tool_resources', async () => {
const updatedPromptData = {
prompt: {
prompt: 'Updated prompt without attachments',
type: 'text',
groupId: testGroup._id,
// No tool_resources field
},
author: testUsers.owner._id,
};
const result = await promptFns.savePrompt(updatedPromptData);
expect(result.prompt).toBeTruthy();
expect(result.prompt.tool_resources).toEqual({});
});
});
describe('Deleting Prompts with tool_resources', () => {
let testGroup;
let testPrompt;
beforeEach(async () => {
testGroup = await PromptGroup.create({
name: 'Deletion Test Group',
category: 'testing',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new mongoose.Types.ObjectId(),
});
testPrompt = await Prompt.create({
prompt: 'Prompt to be deleted',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
tool_resources: {
file_search: {
file_ids: ['file-1', 'file-2'],
},
execute_code: {
file_ids: ['file-3'],
},
},
});
});
afterEach(async () => {
await Prompt.deleteMany({});
await PromptGroup.deleteMany({});
});
it('should delete a prompt with tool_resources', async () => {
const result = await promptFns.deletePrompt({
promptId: testPrompt._id,
groupId: testGroup._id,
author: testUsers.owner._id,
role: SystemRoles.USER,
});
expect(result.prompt).toBe('Prompt deleted successfully');
const deletedPrompt = await Prompt.findById(testPrompt._id);
expect(deletedPrompt).toBeNull();
});
it('should delete prompt group when last prompt with tool_resources is deleted', async () => {
const result = await promptFns.deletePrompt({
promptId: testPrompt._id,
groupId: testGroup._id,
author: testUsers.owner._id,
role: SystemRoles.USER,
});
expect(result.prompt).toBe('Prompt deleted successfully');
expect(result.promptGroup).toBeTruthy();
expect(result.promptGroup.message).toBe('Prompt group deleted successfully');
const deletedPrompt = await Prompt.findById(testPrompt._id);
const deletedGroup = await PromptGroup.findById(testGroup._id);
expect(deletedPrompt).toBeNull();
expect(deletedGroup).toBeNull();
});
});
describe('Making Prompts Production with tool_resources', () => {
let testGroup;
let testPrompt;
beforeEach(async () => {
testGroup = await PromptGroup.create({
name: 'Production Test Group',
category: 'testing',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new mongoose.Types.ObjectId(),
});
testPrompt = await Prompt.create({
prompt: 'Prompt to be made production',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
tool_resources: {
file_search: {
file_ids: ['file-1', 'file-2'],
},
image_edit: {
file_ids: ['file-3'],
},
},
});
});
afterEach(async () => {
await Prompt.deleteMany({});
await PromptGroup.deleteMany({});
});
it('should make a prompt with tool_resources production', async () => {
const result = await promptFns.makePromptProduction(testPrompt._id.toString());
expect(result.message).toBe('Prompt production made successfully');
const updatedGroup = await PromptGroup.findById(testGroup._id);
expect(updatedGroup.productionId.toString()).toBe(testPrompt._id.toString());
});
it('should return error message when prompt not found', async () => {
const nonExistentId = new mongoose.Types.ObjectId().toString();
const result = await promptFns.makePromptProduction(nonExistentId);
expect(result.message).toBe('Error making prompt production');
});
});
describe('Prompt Groups with tool_resources projection', () => {
let testGroup;
let testPrompt;
beforeEach(async () => {
testGroup = await PromptGroup.create({
name: 'Projection Test Group',
category: 'testing',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new mongoose.Types.ObjectId(),
});
testPrompt = await Prompt.create({
prompt: 'Test prompt for projection',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
tool_resources: {
file_search: {
file_ids: ['file-1'],
},
execute_code: {
file_ids: ['file-2', 'file-3'],
},
},
});
await PromptGroup.findByIdAndUpdate(testGroup._id, {
productionId: testPrompt._id,
});
});
afterEach(async () => {
await Prompt.deleteMany({});
await PromptGroup.deleteMany({});
});
it('should include tool_resources in prompt group projection', async () => {
const mockReq = { user: { id: testUsers.owner._id } };
const filter = {
pageNumber: 1,
pageSize: 10,
category: 'testing',
};
const result = await promptFns.getPromptGroups(mockReq, filter);
expect(result.promptGroups).toBeTruthy();
expect(Array.isArray(result.promptGroups)).toBe(true);
expect(result.promptGroups.length).toBeGreaterThan(0);
const foundGroup = result.promptGroups.find(
(group) => group._id.toString() === testGroup._id.toString(),
);
expect(foundGroup).toBeTruthy();
expect(foundGroup.productionPrompt.tool_resources).toEqual({
file_search: {
file_ids: ['file-1'],
},
execute_code: {
file_ids: ['file-2', 'file-3'],
},
});
});
});
describe('Error handling with tool_resources', () => {
it('should handle errors when creating prompt with tool_resources', async () => {
const invalidPromptData = {
prompt: {
prompt: 'Test prompt',
type: 'text',
groupId: 'invalid-id',
tool_resources: {
file_search: {
file_ids: ['file-1'],
},
},
},
author: testUsers.owner._id,
};
const result = await promptFns.savePrompt(invalidPromptData);
expect(result.message).toBe('Error saving prompt');
});
it('should handle errors when retrieving prompt with tool_resources', async () => {
const result = await promptFns.getPrompt({ _id: 'invalid-id' });
expect(result.message).toBe('Error getting prompt');
});
});
describe('Edge Cases - File Attachment Scenarios', () => {
let testGroup;
let testPrompt;
beforeEach(async () => {
testGroup = await PromptGroup.create({
name: 'Edge Case Test Group',
category: 'testing',
author: testUsers.owner._id,
authorName: testUsers.owner.name,
productionId: new mongoose.Types.ObjectId(),
});
testPrompt = await Prompt.create({
prompt: 'Test prompt with file attachments for edge cases',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
tool_resources: {
file_search: {
file_ids: ['file-1', 'file-2', 'file-3'],
},
execute_code: {
file_ids: ['file-4'],
},
image_edit: {
file_ids: ['file-5', 'file-6'],
},
},
});
});
afterEach(async () => {
await Prompt.deleteMany({});
await PromptGroup.deleteMany({});
});
describe('Orphaned File References', () => {
it('should maintain prompt functionality when referenced files are deleted', async () => {
const result = await promptFns.getPrompt({ _id: testPrompt._id });
expect(result).toBeTruthy();
expect(result.tool_resources).toEqual({
file_search: {
file_ids: ['file-1', 'file-2', 'file-3'],
},
execute_code: {
file_ids: ['file-4'],
},
image_edit: {
file_ids: ['file-5', 'file-6'],
},
});
expect(result.prompt).toBe('Test prompt with file attachments for edge cases');
expect(result.type).toBe('text');
});
it('should handle prompts with empty file_ids arrays', async () => {
const promptWithEmptyFileIds = await Prompt.create({
prompt: 'Prompt with empty file_ids',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
tool_resources: {
file_search: {
file_ids: [],
},
execute_code: {
file_ids: [],
},
},
});
const result = await promptFns.getPrompt({ _id: promptWithEmptyFileIds._id });
expect(result).toBeTruthy();
expect(result.tool_resources).toEqual({
file_search: {
file_ids: [],
},
execute_code: {
file_ids: [],
},
});
});
it('should handle prompts with null/undefined file_ids', async () => {
const promptWithNullFileIds = await Prompt.create({
prompt: 'Prompt with null file_ids',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
tool_resources: {
file_search: {
file_ids: null,
},
execute_code: {
file_ids: undefined,
},
},
});
const result = await promptFns.getPrompt({ _id: promptWithNullFileIds._id });
expect(result).toBeTruthy();
expect(result.tool_resources).toEqual({
file_search: {
file_ids: null,
},
});
});
});
describe('Invalid File References', () => {
it('should handle prompts with malformed file_ids', async () => {
const promptWithMalformedIds = await Prompt.create({
prompt: 'Prompt with malformed file_ids',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
tool_resources: {
file_search: {
file_ids: ['', null, undefined, 'invalid-id', 'file-valid'],
},
execute_code: {
file_ids: [123, {}, []],
},
},
});
const result = await promptFns.getPrompt({ _id: promptWithMalformedIds._id });
expect(result).toBeTruthy();
expect(result.tool_resources).toEqual({
file_search: {
file_ids: ['', null, null, 'invalid-id', 'file-valid'],
},
execute_code: {
file_ids: [123, {}, []],
},
});
});
it('should handle prompts with duplicate file_ids', async () => {
const promptWithDuplicates = await Prompt.create({
prompt: 'Prompt with duplicate file_ids',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
tool_resources: {
file_search: {
file_ids: ['file-1', 'file-2', 'file-1', 'file-3', 'file-2'],
},
},
});
const result = await promptFns.getPrompt({ _id: promptWithDuplicates._id });
expect(result).toBeTruthy();
expect(result.tool_resources).toEqual({
file_search: {
file_ids: ['file-1', 'file-2', 'file-1', 'file-3', 'file-2'],
},
});
});
});
describe('Tool Resource Edge Cases', () => {
it('should handle prompts with unknown tool resource types', async () => {
const promptWithUnknownTools = await Prompt.create({
prompt: 'Prompt with unknown tool resources',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
tool_resources: {
unknown_tool: {
file_ids: ['file-1'],
},
another_unknown: {
file_ids: ['file-2', 'file-3'],
},
file_search: {
file_ids: ['file-4'],
},
},
});
const result = await promptFns.getPrompt({ _id: promptWithUnknownTools._id });
expect(result).toBeTruthy();
expect(result.tool_resources).toEqual({
unknown_tool: {
file_ids: ['file-1'],
},
another_unknown: {
file_ids: ['file-2', 'file-3'],
},
file_search: {
file_ids: ['file-4'],
},
});
});
it('should handle prompts with malformed tool_resources structure', async () => {
const promptWithMalformedTools = await Prompt.create({
prompt: 'Prompt with malformed tool_resources',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
tool_resources: {
file_search: 'not-an-object',
execute_code: {
file_ids: 'not-an-array',
},
image_edit: {
wrong_property: ['file-1'],
},
},
});
const result = await promptFns.getPrompt({ _id: promptWithMalformedTools._id });
expect(result).toBeTruthy();
expect(result.tool_resources).toEqual({
file_search: 'not-an-object',
execute_code: {
file_ids: 'not-an-array',
},
image_edit: {
wrong_property: ['file-1'],
},
});
});
});
describe('Prompt Deletion vs File Persistence', () => {
it('should delete prompt but preserve file references in tool_resources', async () => {
const beforeDelete = await promptFns.getPrompt({ _id: testPrompt._id });
expect(beforeDelete.tool_resources).toEqual({
file_search: {
file_ids: ['file-1', 'file-2', 'file-3'],
},
execute_code: {
file_ids: ['file-4'],
},
image_edit: {
file_ids: ['file-5', 'file-6'],
},
});
const result = await promptFns.deletePrompt({
promptId: testPrompt._id,
groupId: testGroup._id,
author: testUsers.owner._id,
role: SystemRoles.USER,
});
expect(result.prompt).toBe('Prompt deleted successfully');
const deletedPrompt = await Prompt.findById(testPrompt._id);
expect(deletedPrompt).toBeNull();
});
it('should handle prompt deletion when tool_resources contain non-existent files', async () => {
const promptWithNonExistentFiles = await Prompt.create({
prompt: 'Prompt with non-existent file references',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
tool_resources: {
file_search: {
file_ids: ['non-existent-file-1', 'non-existent-file-2'],
},
},
});
const result = await promptFns.deletePrompt({
promptId: promptWithNonExistentFiles._id,
groupId: testGroup._id,
author: testUsers.owner._id,
role: SystemRoles.USER,
});
expect(result.prompt).toBe('Prompt deleted successfully');
const deletedPrompt = await Prompt.findById(promptWithNonExistentFiles._id);
expect(deletedPrompt).toBeNull();
});
});
describe('Large File Collections', () => {
it('should handle prompts with many file attachments', async () => {
const manyFileIds = Array.from({ length: 100 }, (_, i) => `file-${i + 1}`);
const promptWithManyFiles = await Prompt.create({
prompt: 'Prompt with many file attachments',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
tool_resources: {
file_search: {
file_ids: manyFileIds.slice(0, 50),
},
execute_code: {
file_ids: manyFileIds.slice(50, 100),
},
},
});
const result = await promptFns.getPrompt({ _id: promptWithManyFiles._id });
expect(result).toBeTruthy();
expect(result.tool_resources.file_search.file_ids).toHaveLength(50);
expect(result.tool_resources.execute_code.file_ids).toHaveLength(50);
expect(result.tool_resources.file_search.file_ids[0]).toBe('file-1');
expect(result.tool_resources.execute_code.file_ids[49]).toBe('file-100');
});
it('should handle prompts with very long file_ids', async () => {
const longFileId = 'a'.repeat(1000);
const promptWithLongFileId = await Prompt.create({
prompt: 'Prompt with very long file ID',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
tool_resources: {
file_search: {
file_ids: [longFileId],
},
},
});
const result = await promptFns.getPrompt({ _id: promptWithLongFileId._id });
expect(result).toBeTruthy();
expect(result.tool_resources.file_search.file_ids[0]).toBe(longFileId);
expect(result.tool_resources.file_search.file_ids[0].length).toBe(1000);
});
});
describe('Concurrent Operations', () => {
it('should handle concurrent updates to prompts with tool_resources', async () => {
const concurrentPrompts = await Promise.all([
Prompt.create({
prompt: 'Concurrent prompt 1',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
tool_resources: {
file_search: {
file_ids: ['shared-file-1', 'unique-file-1'],
},
},
}),
Prompt.create({
prompt: 'Concurrent prompt 2',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
tool_resources: {
file_search: {
file_ids: ['shared-file-1', 'unique-file-2'],
},
},
}),
Prompt.create({
prompt: 'Concurrent prompt 3',
type: 'text',
author: testUsers.owner._id,
groupId: testGroup._id,
tool_resources: {
file_search: {
file_ids: ['shared-file-1', 'unique-file-3'],
},
},
}),
]);
expect(concurrentPrompts).toHaveLength(3);
concurrentPrompts.forEach((prompt, index) => {
expect(prompt.tool_resources.file_search.file_ids).toContain('shared-file-1');
expect(prompt.tool_resources.file_search.file_ids).toContain(`unique-file-${index + 1}`);
});
const retrievedPrompts = await promptFns.getPrompts({ groupId: testGroup._id });
expect(retrievedPrompts.length).toBeGreaterThanOrEqual(3);
});
});
});
});

View File

@@ -1,4 +1,4 @@
const { matchModelName } = require('../utils/tokens');
const { matchModelName } = require('@librechat/api');
const defaultRate = 6;
/**

View File

@@ -49,7 +49,7 @@
"@langchain/google-vertexai": "^0.2.13",
"@langchain/openai": "^0.5.18",
"@langchain/textsplitters": "^0.1.0",
"@librechat/agents": "^2.4.76",
"@librechat/agents": "^2.4.79",
"@librechat/api": "*",
"@librechat/data-schemas": "*",
"@microsoft/microsoft-graph-client": "^3.0.7",

View File

@@ -1,5 +1,10 @@
const { logger } = require('@librechat/data-schemas');
const { webSearchKeys, extractWebSearchEnvVars, normalizeHttpError } = require('@librechat/api');
const {
webSearchKeys,
extractWebSearchEnvVars,
normalizeHttpError,
MCPTokenStorage,
} = require('@librechat/api');
const {
getFiles,
updateUser,
@@ -16,11 +21,17 @@ const { verifyEmail, resendVerificationEmail } = require('~/server/services/Auth
const { needsRefresh, getNewS3URL } = require('~/server/services/Files/S3/crud');
const { Tools, Constants, FileSources } = require('librechat-data-provider');
const { processDeleteRequest } = require('~/server/services/Files/process');
const { Transaction, Balance, User } = require('~/db/models');
const { Transaction, Balance, User, Token } = require('~/db/models');
const { getAppConfig } = require('~/server/services/Config');
const { deleteToolCalls } = require('~/models/ToolCall');
const { deleteAllSharedLinks } = require('~/models');
const { getMCPManager } = require('~/config');
const { MCPOAuthHandler } = require('@librechat/api');
const { getFlowStateManager } = require('~/config');
const { CacheKeys } = require('librechat-data-provider');
const { getLogStores } = require('~/cache');
const { clearMCPServerTools } = require('~/server/services/Config/mcpToolsCache');
const { findToken } = require('~/models');
const getUserController = async (req, res) => {
const appConfig = await getAppConfig({ role: req.user?.role });
@@ -162,6 +173,15 @@ const updateUserPluginsController = async (req, res) => {
);
({ status, message } = normalizeHttpError(authService));
}
try {
// if the MCP server uses OAuth, perform a full cleanup and token revocation
await maybeUninstallOAuthMCP(user.id, pluginKey, appConfig);
} catch (error) {
logger.error(
`[updateUserPluginsController] Error uninstalling OAuth MCP for ${pluginKey}:`,
error,
);
}
} else {
// This handles:
// 1. Web_search uninstall (keys will be populated with all webSearchKeys if auth was {}).
@@ -269,6 +289,97 @@ const resendVerificationController = async (req, res) => {
}
};
/**
* OAuth MCP specific uninstall logic
*/
const maybeUninstallOAuthMCP = async (userId, pluginKey, appConfig) => {
if (!pluginKey.startsWith(Constants.mcp_prefix)) {
// this is not an MCP server, so nothing to do here
return;
}
const serverName = pluginKey.replace(Constants.mcp_prefix, '');
const mcpManager = getMCPManager(userId);
const serverConfig = mcpManager.getRawConfig(serverName) ?? appConfig?.mcpServers?.[serverName];
if (!mcpManager.getOAuthServers().has(serverName)) {
// this server does not use OAuth, so nothing to do here as well
return;
}
// 1. get client info used for revocation (client id, secret)
const clientTokenData = await MCPTokenStorage.getClientInfoAndMetadata({
userId,
serverName,
findToken,
});
if (clientTokenData == null) {
return;
}
const { clientInfo, clientMetadata } = clientTokenData;
// 2. get decrypted tokens before deletion
const tokens = await MCPTokenStorage.getTokens({
userId,
serverName,
findToken,
});
// 3. revoke OAuth tokens at the provider
const revocationEndpoint =
serverConfig.oauth?.revocation_endpoint ?? clientMetadata.revocation_endpoint;
const revocationEndpointAuthMethodsSupported =
serverConfig.oauth?.revocation_endpoint_auth_methods_supported ??
clientMetadata.revocation_endpoint_auth_methods_supported;
if (tokens?.access_token) {
try {
await MCPOAuthHandler.revokeOAuthToken(serverName, tokens.access_token, 'access', {
serverUrl: serverConfig.url,
clientId: clientInfo.client_id,
clientSecret: clientInfo.client_secret ?? '',
revocationEndpoint,
revocationEndpointAuthMethodsSupported,
});
} catch (error) {
logger.error(`Error revoking OAuth access token for ${serverName}:`, error);
}
}
if (tokens?.refresh_token) {
try {
await MCPOAuthHandler.revokeOAuthToken(serverName, tokens.refresh_token, 'refresh', {
serverUrl: serverConfig.url,
clientId: clientInfo.client_id,
clientSecret: clientInfo.client_secret ?? '',
revocationEndpoint,
revocationEndpointAuthMethodsSupported,
});
} catch (error) {
logger.error(`Error revoking OAuth refresh token for ${serverName}:`, error);
}
}
// 4. delete tokens from the DB after revocation attempts
await MCPTokenStorage.deleteUserTokens({
userId,
serverName,
deleteToken: async (filter) => {
await Token.deleteOne(filter);
},
});
// 5. clear the flow state for the OAuth tokens
const flowsCache = getLogStores(CacheKeys.FLOWS);
const flowManager = getFlowStateManager(flowsCache);
const flowId = MCPOAuthHandler.generateFlowId(userId, serverName);
await flowManager.deleteFlow(flowId, 'mcp_get_tokens');
await flowManager.deleteFlow(flowId, 'mcp_oauth');
// 6. clear the tools cache for the server
await clearMCPServerTools({ userId, serverName });
};
module.exports = {
getUserController,
getTermsStatusController,

View File

@@ -872,11 +872,10 @@ class AgentClient extends BaseClient {
if (agent.useLegacyContent === true) {
messages = formatContentStrings(messages);
}
if (
agent.model_parameters?.clientOptions?.defaultHeaders?.['anthropic-beta']?.includes(
'prompt-caching',
)
) {
const defaultHeaders =
agent.model_parameters?.clientOptions?.defaultHeaders ??
agent.model_parameters?.configuration?.defaultHeaders;
if (defaultHeaders?.['anthropic-beta']?.includes('prompt-caching')) {
messages = addCacheControl(messages);
}

View File

@@ -1,7 +1,7 @@
const { v4 } = require('uuid');
const { sleep } = require('@librechat/agents');
const { logger } = require('@librechat/data-schemas');
const { sendEvent, getBalanceConfig } = require('@librechat/api');
const { sendEvent, getBalanceConfig, getModelMaxTokens } = require('@librechat/api');
const {
Time,
Constants,
@@ -34,7 +34,6 @@ const { checkBalance } = require('~/models/balanceMethods');
const { getConvo } = require('~/models/Conversation');
const getLogStores = require('~/cache/getLogStores');
const { countTokens } = require('~/server/utils');
const { getModelMaxTokens } = require('~/utils');
const { getOpenAIClient } = require('./helpers');
/**

View File

@@ -1,7 +1,7 @@
const { v4 } = require('uuid');
const { sleep } = require('@librechat/agents');
const { logger } = require('@librechat/data-schemas');
const { sendEvent, getBalanceConfig } = require('@librechat/api');
const { sendEvent, getBalanceConfig, getModelMaxTokens } = require('@librechat/api');
const {
Time,
Constants,
@@ -31,7 +31,6 @@ const { checkBalance } = require('~/models/balanceMethods');
const { getConvo } = require('~/models/Conversation');
const getLogStores = require('~/cache/getLogStores');
const { countTokens } = require('~/server/utils');
const { getModelMaxTokens } = require('~/utils');
const { getOpenAIClient } = require('./helpers');
/**

View File

@@ -11,18 +11,25 @@ const { getAppConfig } = require('~/server/services/Config');
* @param {Object} res - Express response object.
* @param {Function} next - Next middleware function.
*
* @returns {Promise<function|Object>} - Returns a Promise which when resolved calls next middleware if the domain's email is allowed
* @returns {Promise<void>} - Calls next middleware if the domain's email is allowed, otherwise redirects to login
*/
const checkDomainAllowed = async (req, res, next = () => {}) => {
const email = req?.user?.email;
const appConfig = await getAppConfig({
role: req?.user?.role,
});
if (email && !isEmailDomainAllowed(email, appConfig?.registration?.allowedDomains)) {
logger.error(`[Social Login] [Social Login not allowed] [Email: ${email}]`);
return res.redirect('/login');
} else {
return next();
const checkDomainAllowed = async (req, res, next) => {
try {
const email = req?.user?.email;
const appConfig = await getAppConfig({
role: req?.user?.role,
});
if (email && !isEmailDomainAllowed(email, appConfig?.registration?.allowedDomains)) {
logger.error(`[Social Login] [Social Login not allowed] [Email: ${email}]`);
res.redirect('/login');
return;
}
next();
} catch (error) {
logger.error('[checkDomainAllowed] Error checking domain:', error);
res.redirect('/login');
}
};

View File

@@ -26,9 +26,12 @@ const domains = {
router.use(logHeaders);
router.use(loginLimiter);
const oauthHandler = async (req, res) => {
const oauthHandler = async (req, res, next) => {
try {
await checkDomainAllowed(req, res);
if (res.headersSent) {
return;
}
await checkBan(req, res);
if (req.banned) {
return;
@@ -46,6 +49,7 @@ const oauthHandler = async (req, res) => {
res.redirect(domains.client);
} catch (err) {
logger.error('Error in setting authentication tokens:', err);
next(err);
}
};
@@ -79,6 +83,7 @@ router.get(
scope: ['openid', 'profile', 'email'],
}),
setBalanceConfig,
checkDomainAllowed,
oauthHandler,
);
@@ -104,6 +109,7 @@ router.get(
profileFields: ['id', 'email', 'name'],
}),
setBalanceConfig,
checkDomainAllowed,
oauthHandler,
);
@@ -125,6 +131,7 @@ router.get(
session: false,
}),
setBalanceConfig,
checkDomainAllowed,
oauthHandler,
);
@@ -148,6 +155,7 @@ router.get(
scope: ['user:email', 'read:user'],
}),
setBalanceConfig,
checkDomainAllowed,
oauthHandler,
);
@@ -171,6 +179,7 @@ router.get(
scope: ['identify', 'email'],
}),
setBalanceConfig,
checkDomainAllowed,
oauthHandler,
);
@@ -192,6 +201,7 @@ router.post(
session: false,
}),
setBalanceConfig,
checkDomainAllowed,
oauthHandler,
);

View File

@@ -152,6 +152,7 @@ describe('AppService', () => {
webSearch: expect.objectContaining({
safeSearch: 1,
jinaApiKey: '${JINA_API_KEY}',
jinaApiUrl: '${JINA_API_URL}',
cohereApiKey: '${COHERE_API_KEY}',
serperApiKey: '${SERPER_API_KEY}',
searxngApiKey: '${SEARXNG_API_KEY}',

View File

@@ -3,12 +3,12 @@ const jwt = require('jsonwebtoken');
const { webcrypto } = require('node:crypto');
const { logger } = require('@librechat/data-schemas');
const { isEnabled, checkEmailConfig } = require('@librechat/api');
const { SystemRoles, errorsToString } = require('librechat-data-provider');
const { ErrorTypes, SystemRoles, errorsToString } = require('librechat-data-provider');
const {
findUser,
findToken,
createUser,
updateUser,
findToken,
countUsers,
getUserById,
findSession,
@@ -181,6 +181,14 @@ const registerUser = async (user, additionalData = {}) => {
let newUserId;
try {
const appConfig = await getAppConfig();
if (!isEmailDomainAllowed(email, appConfig?.registration?.allowedDomains)) {
const errorMessage =
'The email address provided cannot be used. Please use a different email address.';
logger.error(`[registerUser] [Registration not allowed] [Email: ${user.email}]`);
return { status: 403, message: errorMessage };
}
const existingUser = await findUser({ email }, 'email _id');
if (existingUser) {
@@ -195,14 +203,6 @@ const registerUser = async (user, additionalData = {}) => {
return { status: 200, message: genericVerificationMessage };
}
const appConfig = await getAppConfig({ role: user.role });
if (!isEmailDomainAllowed(email, appConfig?.registration?.allowedDomains)) {
const errorMessage =
'The email address provided cannot be used. Please use a different email address.';
logger.error(`[registerUser] [Registration not allowed] [Email: ${user.email}]`);
return { status: 403, message: errorMessage };
}
//determine if this is the first registered user (not counting anonymous_user)
const isFirstRegisteredUser = (await countUsers()) === 0;
@@ -252,6 +252,13 @@ const registerUser = async (user, additionalData = {}) => {
*/
const requestPasswordReset = async (req) => {
const { email } = req.body;
const appConfig = await getAppConfig();
if (!isEmailDomainAllowed(email, appConfig?.registration?.allowedDomains)) {
const error = new Error(ErrorTypes.AUTH_FAILED);
error.code = ErrorTypes.AUTH_FAILED;
error.message = 'Email domain not allowed';
return error;
}
const user = await findUser({ email }, 'email _id');
const emailEnabled = checkEmailConfig();

View File

@@ -4,6 +4,8 @@ const AppService = require('~/server/services/AppService');
const { setCachedTools } = require('./getCachedTools');
const getLogStores = require('~/cache/getLogStores');
const BASE_CONFIG_KEY = '_BASE_';
/**
* Get the app configuration based on user context
* @param {Object} [options]
@@ -14,8 +16,8 @@ const getLogStores = require('~/cache/getLogStores');
async function getAppConfig(options = {}) {
const { role, refresh } = options;
const cache = getLogStores(CacheKeys.CONFIG_STORE);
const cacheKey = role ? `${CacheKeys.APP_CONFIG}:${role}` : CacheKeys.APP_CONFIG;
const cache = getLogStores(CacheKeys.APP_CONFIG);
const cacheKey = role ? role : BASE_CONFIG_KEY;
if (!refresh) {
const cached = await cache.get(cacheKey);
@@ -24,7 +26,7 @@ async function getAppConfig(options = {}) {
}
}
let baseConfig = await cache.get(CacheKeys.APP_CONFIG);
let baseConfig = await cache.get(BASE_CONFIG_KEY);
if (!baseConfig) {
logger.info('[getAppConfig] App configuration not initialized. Initializing AppService...');
baseConfig = await AppService();
@@ -37,7 +39,7 @@ async function getAppConfig(options = {}) {
await setCachedTools(baseConfig.availableTools, { isGlobal: true });
}
await cache.set(CacheKeys.APP_CONFIG, baseConfig);
await cache.set(BASE_CONFIG_KEY, baseConfig);
}
// For now, return the base config

View File

@@ -119,10 +119,6 @@ https://www.librechat.ai/docs/configuration/stt_tts`);
.filter((endpoint) => endpoint.customParams)
.forEach((endpoint) => parseCustomParams(endpoint.name, endpoint.customParams));
if (customConfig.cache) {
const cache = getLogStores(CacheKeys.STATIC_CONFIG);
await cache.set(CacheKeys.LIBRECHAT_YAML_CONFIG, customConfig);
}
if (result.data.modelSpecs) {
customConfig.modelSpecs = result.data.modelSpecs;

View File

@@ -48,16 +48,11 @@ const axios = require('axios');
const { loadYaml } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const loadCustomConfig = require('./loadCustomConfig');
const getLogStores = require('~/cache/getLogStores');
describe('loadCustomConfig', () => {
const mockSet = jest.fn();
const mockCache = { set: mockSet };
beforeEach(() => {
jest.resetAllMocks();
delete process.env.CONFIG_PATH;
getLogStores.mockReturnValue(mockCache);
});
it('should return null and log error if remote config fetch fails', async () => {
@@ -94,7 +89,6 @@ describe('loadCustomConfig', () => {
const result = await loadCustomConfig();
expect(result).toEqual(mockConfig);
expect(mockSet).toHaveBeenCalledWith(expect.anything(), mockConfig);
});
it('should return null and log if config schema validation fails', async () => {
@@ -134,7 +128,6 @@ describe('loadCustomConfig', () => {
axios.get.mockResolvedValue({ data: mockConfig });
const result = await loadCustomConfig();
expect(result).toEqual(mockConfig);
expect(mockSet).toHaveBeenCalledWith(expect.anything(), mockConfig);
});
it('should return null if the remote config file is not found', async () => {
@@ -168,7 +161,6 @@ describe('loadCustomConfig', () => {
process.env.CONFIG_PATH = 'validConfig.yaml';
loadYaml.mockReturnValueOnce(mockConfig);
await loadCustomConfig();
expect(mockSet).not.toHaveBeenCalled();
});
it('should log the loaded custom config', async () => {

View File

@@ -1,6 +1,7 @@
const { Providers } = require('@librechat/agents');
const {
primeResources,
getModelMaxTokens,
extractLibreChatParams,
optionalChainWithEmptyCheck,
} = require('@librechat/api');
@@ -17,7 +18,6 @@ const { getProviderConfig } = require('~/server/services/Endpoints');
const { processFiles } = require('~/server/services/Files/process');
const { getFiles, getToolFilesByIds } = require('~/models/File');
const { getConvoFiles } = require('~/models/Conversation');
const { getModelMaxTokens } = require('~/utils');
/**
* @param {object} params

View File

@@ -1,6 +1,6 @@
const { getLLMConfig } = require('@librechat/api');
const { EModelEndpoint } = require('librechat-data-provider');
const { getUserKey, checkUserKeyExpiry } = require('~/server/services/UserService');
const { getLLMConfig } = require('~/server/services/Endpoints/anthropic/llm');
const AnthropicClient = require('~/app/clients/AnthropicClient');
const initializeClient = async ({ req, res, endpointOption, overrideModel, optionsOnly }) => {
@@ -40,7 +40,6 @@ const initializeClient = async ({ req, res, endpointOption, overrideModel, optio
clientOptions = Object.assign(
{
proxy: PROXY ?? null,
userId: req.user.id,
reverseProxyUrl: ANTHROPIC_REVERSE_PROXY ?? null,
modelOptions: endpointOption?.model_parameters ?? {},
},
@@ -49,6 +48,7 @@ const initializeClient = async ({ req, res, endpointOption, overrideModel, optio
if (overrideModel) {
clientOptions.modelOptions.model = overrideModel;
}
clientOptions.modelOptions.user = req.user.id;
return getLLMConfig(anthropicApiKey, clientOptions);
}

View File

@@ -1,103 +0,0 @@
const { ProxyAgent } = require('undici');
const { anthropicSettings, removeNullishValues } = require('librechat-data-provider');
const { checkPromptCacheSupport, getClaudeHeaders, configureReasoning } = require('./helpers');
/**
* Generates configuration options for creating an Anthropic language model (LLM) instance.
*
* @param {string} apiKey - The API key for authentication with Anthropic.
* @param {Object} [options={}] - Additional options for configuring the LLM.
* @param {Object} [options.modelOptions] - Model-specific options.
* @param {string} [options.modelOptions.model] - The name of the model to use.
* @param {number} [options.modelOptions.maxOutputTokens] - The maximum number of tokens to generate.
* @param {number} [options.modelOptions.temperature] - Controls randomness in output generation.
* @param {number} [options.modelOptions.topP] - Controls diversity of output generation.
* @param {number} [options.modelOptions.topK] - Controls the number of top tokens to consider.
* @param {string[]} [options.modelOptions.stop] - Sequences where the API will stop generating further tokens.
* @param {boolean} [options.modelOptions.stream] - Whether to stream the response.
* @param {string} options.userId - The user ID for tracking and personalization.
* @param {string} [options.proxy] - Proxy server URL.
* @param {string} [options.reverseProxyUrl] - URL for a reverse proxy, if used.
*
* @returns {Object} Configuration options for creating an Anthropic LLM instance, with null and undefined values removed.
*/
function getLLMConfig(apiKey, options = {}) {
const systemOptions = {
thinking: options.modelOptions.thinking ?? anthropicSettings.thinking.default,
promptCache: options.modelOptions.promptCache ?? anthropicSettings.promptCache.default,
thinkingBudget: options.modelOptions.thinkingBudget ?? anthropicSettings.thinkingBudget.default,
};
for (let key in systemOptions) {
delete options.modelOptions[key];
}
const defaultOptions = {
model: anthropicSettings.model.default,
maxOutputTokens: anthropicSettings.maxOutputTokens.default,
stream: true,
};
const mergedOptions = Object.assign(defaultOptions, options.modelOptions);
/** @type {AnthropicClientOptions} */
let requestOptions = {
apiKey,
model: mergedOptions.model,
stream: mergedOptions.stream,
temperature: mergedOptions.temperature,
stopSequences: mergedOptions.stop,
maxTokens:
mergedOptions.maxOutputTokens || anthropicSettings.maxOutputTokens.reset(mergedOptions.model),
clientOptions: {},
invocationKwargs: {
metadata: {
user_id: options.userId,
},
},
};
requestOptions = configureReasoning(requestOptions, systemOptions);
if (!/claude-3[-.]7/.test(mergedOptions.model)) {
requestOptions.topP = mergedOptions.topP;
requestOptions.topK = mergedOptions.topK;
} else if (requestOptions.thinking == null) {
requestOptions.topP = mergedOptions.topP;
requestOptions.topK = mergedOptions.topK;
}
const supportsCacheControl =
systemOptions.promptCache === true && checkPromptCacheSupport(requestOptions.model);
const headers = getClaudeHeaders(requestOptions.model, supportsCacheControl);
if (headers) {
requestOptions.clientOptions.defaultHeaders = headers;
}
if (options.proxy) {
const proxyAgent = new ProxyAgent(options.proxy);
requestOptions.clientOptions.fetchOptions = {
dispatcher: proxyAgent,
};
}
if (options.reverseProxyUrl) {
requestOptions.clientOptions.baseURL = options.reverseProxyUrl;
requestOptions.anthropicApiUrl = options.reverseProxyUrl;
}
const tools = [];
if (mergedOptions.web_search) {
tools.push({
type: 'web_search_20250305',
name: 'web_search',
});
}
return {
tools,
/** @type {AnthropicClientOptions} */
llmConfig: removeNullishValues(requestOptions),
};
}
module.exports = { getLLMConfig };
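
The whole local anthropic/llm.js module is deleted here; as the earlier initialize.js hunk shows, getLLMConfig is now imported from @librechat/api instead. A hedged usage sketch based on the deleted JSDoc follows, assuming the packaged function keeps the same signature and { tools, llmConfig } return shape as the local version it replaces.

// Assumed usage; signature mirrors the deleted local implementation.
const { getLLMConfig } = require('@librechat/api');

const { tools, llmConfig } = getLLMConfig(process.env.ANTHROPIC_API_KEY, {
  proxy: null,
  reverseProxyUrl: null,
  modelOptions: {
    model: 'claude-3-5-sonnet-latest',
    maxOutputTokens: 4096,
    // The initialize.js hunk above now passes the user via modelOptions.user
    // rather than a top-level userId option.
    user: 'some-user-id',
  },
});
// llmConfig is the cleaned Anthropic client config; tools may include web_search.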

View File

@@ -1,341 +0,0 @@
const { getLLMConfig } = require('~/server/services/Endpoints/anthropic/llm');
jest.mock('https-proxy-agent', () => ({
HttpsProxyAgent: jest.fn().mockImplementation((proxy) => ({ proxy })),
}));
describe('getLLMConfig', () => {
beforeEach(() => {
jest.clearAllMocks();
});
it('should create a basic configuration with default values', () => {
const result = getLLMConfig('test-api-key', { modelOptions: {} });
expect(result.llmConfig).toHaveProperty('apiKey', 'test-api-key');
expect(result.llmConfig).toHaveProperty('model', 'claude-3-5-sonnet-latest');
expect(result.llmConfig).toHaveProperty('stream', true);
expect(result.llmConfig).toHaveProperty('maxTokens');
});
it('should include proxy settings when provided', () => {
const result = getLLMConfig('test-api-key', {
modelOptions: {},
proxy: 'http://proxy:8080',
});
expect(result.llmConfig.clientOptions).toHaveProperty('fetchOptions');
expect(result.llmConfig.clientOptions.fetchOptions).toHaveProperty('dispatcher');
expect(result.llmConfig.clientOptions.fetchOptions.dispatcher).toBeDefined();
expect(result.llmConfig.clientOptions.fetchOptions.dispatcher.constructor.name).toBe(
'ProxyAgent',
);
});
it('should include reverse proxy URL when provided', () => {
const result = getLLMConfig('test-api-key', {
modelOptions: {},
reverseProxyUrl: 'http://reverse-proxy',
});
expect(result.llmConfig.clientOptions).toHaveProperty('baseURL', 'http://reverse-proxy');
expect(result.llmConfig).toHaveProperty('anthropicApiUrl', 'http://reverse-proxy');
});
it('should include topK and topP for non-Claude-3.7 models', () => {
const result = getLLMConfig('test-api-key', {
modelOptions: {
model: 'claude-3-opus',
topK: 10,
topP: 0.9,
},
});
expect(result.llmConfig).toHaveProperty('topK', 10);
expect(result.llmConfig).toHaveProperty('topP', 0.9);
});
it('should include topK and topP for Claude-3.5 models', () => {
const result = getLLMConfig('test-api-key', {
modelOptions: {
model: 'claude-3-5-sonnet',
topK: 10,
topP: 0.9,
},
});
expect(result.llmConfig).toHaveProperty('topK', 10);
expect(result.llmConfig).toHaveProperty('topP', 0.9);
});
it('should NOT include topK and topP for Claude-3-7 models with thinking enabled (hyphen notation)', () => {
const result = getLLMConfig('test-api-key', {
modelOptions: {
model: 'claude-3-7-sonnet',
topK: 10,
topP: 0.9,
thinking: true,
},
});
expect(result.llmConfig).not.toHaveProperty('topK');
expect(result.llmConfig).not.toHaveProperty('topP');
expect(result.llmConfig).toHaveProperty('thinking');
expect(result.llmConfig.thinking).toHaveProperty('type', 'enabled');
// When thinking is enabled, it uses the default thinkingBudget of 2000
expect(result.llmConfig.thinking).toHaveProperty('budget_tokens', 2000);
});
it('should add "prompt-caching" and "context-1m" beta headers for claude-sonnet-4 model', () => {
const modelOptions = {
model: 'claude-sonnet-4-20250514',
promptCache: true,
};
const result = getLLMConfig('test-key', { modelOptions });
const clientOptions = result.llmConfig.clientOptions;
expect(clientOptions.defaultHeaders).toBeDefined();
expect(clientOptions.defaultHeaders).toHaveProperty('anthropic-beta');
expect(clientOptions.defaultHeaders['anthropic-beta']).toBe(
'prompt-caching-2024-07-31,context-1m-2025-08-07',
);
});
it('should add "prompt-caching" and "context-1m" beta headers for claude-sonnet-4 model formats', () => {
const modelVariations = [
'claude-sonnet-4-20250514',
'claude-sonnet-4-latest',
'anthropic/claude-sonnet-4-20250514',
];
modelVariations.forEach((model) => {
const modelOptions = { model, promptCache: true };
const result = getLLMConfig('test-key', { modelOptions });
const clientOptions = result.llmConfig.clientOptions;
expect(clientOptions.defaultHeaders).toBeDefined();
expect(clientOptions.defaultHeaders).toHaveProperty('anthropic-beta');
expect(clientOptions.defaultHeaders['anthropic-beta']).toBe(
'prompt-caching-2024-07-31,context-1m-2025-08-07',
);
});
});
it('should NOT include topK and topP for Claude-3.7 models with thinking enabled (decimal notation)', () => {
const result = getLLMConfig('test-api-key', {
modelOptions: {
model: 'claude-3.7-sonnet',
topK: 10,
topP: 0.9,
thinking: true,
},
});
expect(result.llmConfig).not.toHaveProperty('topK');
expect(result.llmConfig).not.toHaveProperty('topP');
expect(result.llmConfig).toHaveProperty('thinking');
expect(result.llmConfig.thinking).toHaveProperty('type', 'enabled');
// When thinking is enabled, it uses the default thinkingBudget of 2000
expect(result.llmConfig.thinking).toHaveProperty('budget_tokens', 2000);
});
it('should handle custom maxOutputTokens', () => {
const result = getLLMConfig('test-api-key', {
modelOptions: {
model: 'claude-3-opus',
maxOutputTokens: 2048,
},
});
expect(result.llmConfig).toHaveProperty('maxTokens', 2048);
});
it('should handle promptCache setting', () => {
const result = getLLMConfig('test-api-key', {
modelOptions: {
model: 'claude-3-5-sonnet',
promptCache: true,
},
});
// We're not checking specific header values since that depends on the actual helper function
// Just verifying that the promptCache setting is processed
expect(result.llmConfig).toBeDefined();
});
it('should include topK and topP for Claude-3.7 models when thinking is not enabled', () => {
// Test with thinking explicitly set to null/undefined
const result = getLLMConfig('test-api-key', {
modelOptions: {
model: 'claude-3-7-sonnet',
topK: 10,
topP: 0.9,
thinking: false,
},
});
expect(result.llmConfig).toHaveProperty('topK', 10);
expect(result.llmConfig).toHaveProperty('topP', 0.9);
// Test with thinking explicitly set to false
const result2 = getLLMConfig('test-api-key', {
modelOptions: {
model: 'claude-3-7-sonnet',
topK: 10,
topP: 0.9,
thinking: false,
},
});
expect(result2.llmConfig).toHaveProperty('topK', 10);
expect(result2.llmConfig).toHaveProperty('topP', 0.9);
// Test with decimal notation as well
const result3 = getLLMConfig('test-api-key', {
modelOptions: {
model: 'claude-3.7-sonnet',
topK: 10,
topP: 0.9,
thinking: false,
},
});
expect(result3.llmConfig).toHaveProperty('topK', 10);
expect(result3.llmConfig).toHaveProperty('topP', 0.9);
});
describe('Edge cases', () => {
it('should handle missing apiKey', () => {
const result = getLLMConfig(undefined, { modelOptions: {} });
expect(result.llmConfig).not.toHaveProperty('apiKey');
});
it('should handle empty modelOptions', () => {
expect(() => {
getLLMConfig('test-api-key', {});
}).toThrow("Cannot read properties of undefined (reading 'thinking')");
});
it('should handle no options parameter', () => {
expect(() => {
getLLMConfig('test-api-key');
}).toThrow("Cannot read properties of undefined (reading 'thinking')");
});
it('should handle temperature, stop sequences, and stream settings', () => {
const result = getLLMConfig('test-api-key', {
modelOptions: {
temperature: 0.7,
stop: ['\n\n', 'END'],
stream: false,
},
});
expect(result.llmConfig).toHaveProperty('temperature', 0.7);
expect(result.llmConfig).toHaveProperty('stopSequences', ['\n\n', 'END']);
expect(result.llmConfig).toHaveProperty('stream', false);
});
it('should handle maxOutputTokens when explicitly set to falsy value', () => {
const result = getLLMConfig('test-api-key', {
modelOptions: {
model: 'claude-3-opus',
maxOutputTokens: null,
},
});
// The actual anthropicSettings.maxOutputTokens.reset('claude-3-opus') returns 4096
expect(result.llmConfig).toHaveProperty('maxTokens', 4096);
});
it('should handle both proxy and reverseProxyUrl', () => {
const result = getLLMConfig('test-api-key', {
modelOptions: {},
proxy: 'http://proxy:8080',
reverseProxyUrl: 'https://reverse-proxy.com',
});
expect(result.llmConfig.clientOptions).toHaveProperty('fetchOptions');
expect(result.llmConfig.clientOptions.fetchOptions).toHaveProperty('dispatcher');
expect(result.llmConfig.clientOptions.fetchOptions.dispatcher).toBeDefined();
expect(result.llmConfig.clientOptions.fetchOptions.dispatcher.constructor.name).toBe(
'ProxyAgent',
);
expect(result.llmConfig.clientOptions).toHaveProperty('baseURL', 'https://reverse-proxy.com');
expect(result.llmConfig).toHaveProperty('anthropicApiUrl', 'https://reverse-proxy.com');
});
it('should handle prompt cache with supported model', () => {
const result = getLLMConfig('test-api-key', {
modelOptions: {
model: 'claude-3-5-sonnet',
promptCache: true,
},
});
// claude-3-5-sonnet supports prompt caching and should get the appropriate headers
expect(result.llmConfig.clientOptions.defaultHeaders).toEqual({
'anthropic-beta': 'max-tokens-3-5-sonnet-2024-07-15,prompt-caching-2024-07-31',
});
});
it('should handle thinking and thinkingBudget options', () => {
const result = getLLMConfig('test-api-key', {
modelOptions: {
model: 'claude-3-7-sonnet',
thinking: true,
thinkingBudget: 10000, // This exceeds the default max_tokens of 8192
},
});
// The function should add thinking configuration for claude-3-7 models
expect(result.llmConfig).toHaveProperty('thinking');
expect(result.llmConfig.thinking).toHaveProperty('type', 'enabled');
// With claude-3-7-sonnet, the max_tokens default is 8192
// Budget tokens gets adjusted to 90% of max_tokens (8192 * 0.9 = 7372) when it exceeds max_tokens
expect(result.llmConfig.thinking).toHaveProperty('budget_tokens', 7372);
// Test with budget_tokens within max_tokens limit
const result2 = getLLMConfig('test-api-key', {
modelOptions: {
model: 'claude-3-7-sonnet',
thinking: true,
thinkingBudget: 2000,
},
});
expect(result2.llmConfig.thinking).toHaveProperty('budget_tokens', 2000);
});
it('should remove system options from modelOptions', () => {
const modelOptions = {
model: 'claude-3-opus',
thinking: true,
promptCache: true,
thinkingBudget: 1000,
temperature: 0.5,
};
getLLMConfig('test-api-key', { modelOptions });
expect(modelOptions).not.toHaveProperty('thinking');
expect(modelOptions).not.toHaveProperty('promptCache');
expect(modelOptions).not.toHaveProperty('thinkingBudget');
expect(modelOptions).toHaveProperty('temperature', 0.5);
});
it('should handle all nullish values removal', () => {
const result = getLLMConfig('test-api-key', {
modelOptions: {
temperature: null,
topP: undefined,
topK: 0,
stop: [],
},
});
expect(result.llmConfig).not.toHaveProperty('temperature');
expect(result.llmConfig).not.toHaveProperty('topP');
expect(result.llmConfig).toHaveProperty('topK', 0);
expect(result.llmConfig).toHaveProperty('stopSequences', []);
});
});
});

View File

@@ -1,3 +1,4 @@
const { getModelMaxTokens } = require('@librechat/api');
const { createContentAggregator } = require('@librechat/agents');
const {
EModelEndpoint,
@@ -7,7 +8,6 @@ const {
const { getDefaultHandlers } = require('~/server/controllers/agents/callbacks');
const getOptions = require('~/server/services/Endpoints/bedrock/options');
const AgentClient = require('~/server/controllers/agents/client');
const { getModelMaxTokens } = require('~/utils');
const initializeClient = async ({ req, res, endpointOption }) => {
if (!endpointOption) {

View File

@@ -702,6 +702,8 @@ const processAgentFileUpload = async ({ req, res, metadata }) => {
returnFile: true,
});
filepath = result.filepath;
width = result.width;
height = result.height;
}
const fileInfo = removeNullishValues({
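
The two added assignments above carry the processed image's dimensions into the saved file record. Since width and height stay undefined for non-image uploads, removeNullishValues (from librechat-data-provider) drops them and keeps those records clean. A tiny sketch of that intent:

// Sketch: dimensions are only present for image uploads.
const { removeNullishValues } = require('librechat-data-provider');

function buildFileInfo({ file_id, filepath, bytes, width, height }) {
  return removeNullishValues({ file_id, filepath, bytes, width, height });
}

// buildFileInfo({ file_id: 'id', filepath: '/p', bytes: 10 })
//   -> { file_id: 'id', filepath: '/p', bytes: 10 } (no width/height keys)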

View File

@@ -0,0 +1,520 @@
const { EToolResources, FileSources, FileContext } = require('librechat-data-provider');
jest.mock('~/server/services/Files/strategies', () => {
const mockHandleFileUpload = jest.fn();
const mockHandleImageUpload = jest.fn();
return {
getStrategyFunctions: jest.fn((source) => ({
handleFileUpload: mockHandleFileUpload.mockImplementation(({ file, file_id }) =>
Promise.resolve({
filepath: `/uploads/${source}/${file_id}`,
bytes: file?.size || 20,
}),
),
handleImageUpload: mockHandleImageUpload.mockImplementation(({ file, file_id }) =>
Promise.resolve({
filepath: `/uploads/${source}/images/${file_id}`,
bytes: file.size,
width: 800,
height: 600,
}),
),
})),
};
});
jest.mock('~/models/File', () => {
const mockCreateFile = jest.fn();
return {
createFile: mockCreateFile.mockImplementation((fileInfo) =>
Promise.resolve({ _id: 'test-file-id', ...fileInfo }),
),
updateFileUsage: jest.fn().mockResolvedValue(),
};
});
jest.mock('~/models/Agent', () => ({
addAgentResourceFile: jest.fn().mockResolvedValue(),
}));
jest.mock('~/server/services/Config/getEndpointsConfig', () => ({
checkCapability: jest.fn().mockResolvedValue(true),
}));
jest.mock('~/server/utils/getFileStrategy', () => ({
getFileStrategy: jest.fn(() => {
return 'local';
}),
}));
jest.mock('~/server/services/Files/VectorDB/crud', () => ({
uploadVectors: jest.fn(({ file_id }) =>
Promise.resolve({
success: true,
vectorIds: [`vector-${file_id}-1`, `vector-${file_id}-2`],
}),
),
}));
jest.mock('~/server/controllers/assistants/helpers', () => ({
getOpenAIClient: jest.fn(),
}));
jest.mock('~/server/services/Tools/credentials', () => ({
loadAuthValues: jest.fn(),
}));
jest.mock('fs', () => ({
...jest.requireActual('fs'),
createReadStream: jest.fn(() => 'mock-stream'),
}));
jest.mock('~/server/utils/queue', () => ({
LB_QueueAsyncCall: jest.fn((fn, args, callback) => {
if (callback) {
callback(null, { success: true });
}
return Promise.resolve({ success: true });
}),
}));
jest.mock('~/server/services/Config/app', () => ({
getAppConfig: jest.fn().mockResolvedValue({
fileStrategy: 'local',
fileStrategies: {
agents: 'local',
},
imageOutputType: 'jpeg',
}),
}));
jest.mock('~/server/services/Files/images', () => ({
processImageFile: jest.fn().mockResolvedValue({
filepath: '/test/image/path',
width: 800,
height: 600,
}),
handleImageUpload: jest.fn().mockResolvedValue({
filepath: '/test/image/uploaded/path',
bytes: 1024,
width: 800,
height: 600,
}),
}));
describe('File Processing - processAgentFileUpload', () => {
let processAgentFileUpload;
let mockHandleFileUpload;
let mockHandleImageUpload;
let mockCreateFile;
let mockAddAgentResourceFile;
let mockUploadVectors;
let mockCheckCapability;
let mockGetFileStrategy;
beforeAll(() => {
const processModule = require('./process');
processAgentFileUpload = processModule.processAgentFileUpload;
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const mockStrategies = getStrategyFunctions();
mockHandleFileUpload = mockStrategies.handleFileUpload;
mockHandleImageUpload = mockStrategies.handleImageUpload;
mockCreateFile = require('~/models/File').createFile;
mockAddAgentResourceFile = require('~/models/Agent').addAgentResourceFile;
mockUploadVectors = require('~/server/services/Files/VectorDB/crud').uploadVectors;
mockCheckCapability = require('~/server/services/Config/getEndpointsConfig').checkCapability;
mockGetFileStrategy = require('~/server/utils/getFileStrategy').getFileStrategy;
});
beforeEach(() => {
jest.clearAllMocks();
});
describe('processAgentFileUpload', () => {
it('should process image file upload for agent with proper file handling', async () => {
const mockReq = {
user: { id: 'test-user-id' },
file: {
buffer: Buffer.from('test image data'),
mimetype: 'image/jpeg',
size: 1024,
originalname: 'test-image.jpg',
},
body: {
file_id: 'test-file-id',
},
config: {
fileStrategy: 'local',
fileStrategies: {
agents: 'local',
},
imageOutputType: 'jpeg',
},
};
const mockRes = {
status: jest.fn().mockReturnThis(),
json: jest.fn(),
};
const metadata = {
agent_id: 'test-agent-id',
tool_resource: EToolResources.image_edit,
file_id: 'test-file-id',
};
await processAgentFileUpload({ req: mockReq, res: mockRes, metadata });
expect(mockGetFileStrategy).toHaveBeenCalledWith(mockReq.config, { isImage: true });
expect(mockHandleImageUpload).toHaveBeenCalledWith(
expect.objectContaining({
req: mockReq,
file: mockReq.file,
file_id: expect.any(String),
}),
);
expect(mockCreateFile).toHaveBeenCalledWith(
expect.objectContaining({
user: 'test-user-id',
file_id: 'test-file-id',
bytes: 1024,
filename: 'test-image.jpg',
context: FileContext.agents,
type: 'image/jpeg',
source: FileSources.local,
width: 800,
height: 600,
}),
true,
);
expect(mockAddAgentResourceFile).toHaveBeenCalledWith(
expect.objectContaining({
agent_id: 'test-agent-id',
file_id: 'test-file-id',
tool_resource: EToolResources.image_edit,
req: mockReq,
}),
);
expect(mockRes.status).toHaveBeenCalledWith(200);
expect(mockRes.json).toHaveBeenCalledWith(
expect.objectContaining({
message: 'Agent file uploaded and processed successfully',
}),
);
});
it('should process file_search tool resource with dual storage (file + vector)', async () => {
const mockReq = {
user: { id: 'test-user-id' },
file: {
buffer: Buffer.from('test file data'),
mimetype: 'application/pdf',
size: 2048,
originalname: 'test-document.pdf',
},
body: {
file_id: 'test-file-id',
},
config: {
fileStrategy: 'local',
fileStrategies: {
agents: 'local',
},
imageOutputType: 'jpeg',
},
};
const mockRes = {
status: jest.fn().mockReturnThis(),
json: jest.fn(),
};
const metadata = {
agent_id: 'test-agent-id',
tool_resource: EToolResources.file_search,
file_id: 'test-file-id',
};
await processAgentFileUpload({ req: mockReq, res: mockRes, metadata });
expect(mockGetFileStrategy).toHaveBeenCalledWith(mockReq.config, { isImage: false });
expect(mockHandleFileUpload).toHaveBeenCalledWith({
req: mockReq,
file: mockReq.file,
file_id: 'test-file-id',
basePath: 'uploads',
entity_id: 'test-agent-id',
});
expect(mockUploadVectors).toHaveBeenCalledWith({
req: mockReq,
file: mockReq.file,
file_id: 'test-file-id',
entity_id: 'test-agent-id',
});
expect(mockCreateFile).toHaveBeenCalledWith(
expect.objectContaining({
user: 'test-user-id',
file_id: 'test-file-id',
filename: 'test-document.pdf',
context: FileContext.agents,
type: 'application/pdf',
source: FileSources.local,
bytes: 2048,
filepath: '/uploads/local/test-file-id',
metadata: {},
}),
true,
);
expect(mockAddAgentResourceFile).toHaveBeenCalledWith(
expect.objectContaining({
agent_id: 'test-agent-id',
file_id: 'test-file-id',
tool_resource: EToolResources.file_search,
req: mockReq,
}),
);
});
it('should handle missing tool_resource parameter', async () => {
const mockReq = {
user: { id: 'test-user-id' },
file: {
buffer: Buffer.from('test file data'),
mimetype: 'application/pdf',
size: 2048,
originalname: 'test-document.pdf',
},
body: {
file_id: 'test-file-id',
},
config: {
fileStrategy: 'local',
fileStrategies: {
agents: 'local',
},
imageOutputType: 'jpeg',
},
};
const mockRes = {
status: jest.fn().mockReturnThis(),
json: jest.fn(),
};
const metadata = {
agent_id: 'test-agent-id',
file_id: 'test-file-id',
};
await expect(
processAgentFileUpload({ req: mockReq, res: mockRes, metadata }),
).rejects.toThrow('No tool resource provided for agent file upload');
});
it('should handle missing agent_id parameter', async () => {
const mockReq = {
user: { id: 'test-user-id' },
file: {
buffer: Buffer.from('test file data'),
mimetype: 'application/pdf',
size: 2048,
originalname: 'test-document.pdf',
},
body: {
file_id: 'test-file-id',
},
config: {
fileStrategy: 'local',
fileStrategies: {
agents: 'local',
},
imageOutputType: 'jpeg',
},
};
const mockRes = {
status: jest.fn().mockReturnThis(),
json: jest.fn(),
};
const metadata = {
tool_resource: EToolResources.file_search,
file_id: 'test-file-id',
};
await expect(
processAgentFileUpload({ req: mockReq, res: mockRes, metadata }),
).rejects.toThrow('No agent ID provided for agent file upload');
});
it('should handle image uploads for non-image tool resources', async () => {
const mockReq = {
user: { id: 'test-user-id' },
file: {
buffer: Buffer.from('test image data'),
mimetype: 'image/jpeg',
size: 1024,
originalname: 'test-image.jpg',
},
body: {
file_id: 'test-file-id',
},
config: {
fileStrategy: 'local',
fileStrategies: {
agents: 'local',
},
imageOutputType: 'jpeg',
},
};
const mockRes = {
status: jest.fn().mockReturnThis(),
json: jest.fn(),
};
const metadata = {
agent_id: 'test-agent-id',
tool_resource: EToolResources.file_search,
file_id: 'test-file-id',
};
await expect(
processAgentFileUpload({ req: mockReq, res: mockRes, metadata }),
).rejects.toThrow('Image uploads are not supported for file search tool resources');
});
it('should check execute_code capability and load auth values when processing code files', async () => {
const mockReq = {
user: { id: 'test-user-id' },
file: {
buffer: Buffer.from('print("hello world")'),
mimetype: 'text/x-python',
size: 20,
originalname: 'test.py',
path: '/tmp/test-file.py',
},
body: {
file_id: 'test-file-id',
},
config: {
fileStrategy: 'local',
fileStrategies: {
agents: 'local',
},
imageOutputType: 'jpeg',
},
};
const mockRes = {
status: jest.fn().mockReturnThis(),
json: jest.fn(),
};
const metadata = {
agent_id: 'test-agent-id',
tool_resource: EToolResources.execute_code,
file_id: 'test-file-id',
};
const mockLoadAuthValues = require('~/server/services/Tools/credentials').loadAuthValues;
mockLoadAuthValues.mockResolvedValue({ CODE_API_KEY: 'test-key' });
await processAgentFileUpload({ req: mockReq, res: mockRes, metadata });
expect(mockCheckCapability).toHaveBeenCalledWith(mockReq, 'execute_code');
expect(mockLoadAuthValues).toHaveBeenCalledWith({
userId: 'test-user-id',
authFields: ['LIBRECHAT_CODE_API_KEY'],
});
expect(mockHandleFileUpload).toHaveBeenNthCalledWith(
1,
expect.objectContaining({
req: mockReq,
stream: 'mock-stream',
filename: 'test.py',
entity_id: 'test-agent-id',
apiKey: undefined,
}),
);
expect(mockHandleFileUpload).toHaveBeenNthCalledWith(
2,
expect.objectContaining({
req: mockReq,
file: mockReq.file,
file_id: 'test-file-id',
basePath: 'uploads',
entity_id: 'test-agent-id',
}),
);
expect(mockAddAgentResourceFile).toHaveBeenCalledWith(
expect.objectContaining({
agent_id: 'test-agent-id',
file_id: 'test-file-id',
tool_resource: EToolResources.execute_code,
req: mockReq,
}),
);
});
it('should throw error when example capability (execute_code) is not enabled', async () => {
const mockReq = {
user: { id: 'test-user-id' },
file: {
buffer: Buffer.from('print("hello world")'),
mimetype: 'text/x-python',
size: 20,
originalname: 'test.py',
},
body: {
file_id: 'test-file-id',
},
config: {
fileStrategy: 'local',
fileStrategies: {
agents: 'local',
},
imageOutputType: 'jpeg',
},
};
const mockRes = {
status: jest.fn().mockReturnThis(),
json: jest.fn(),
};
const metadata = {
agent_id: 'test-agent-id',
tool_resource: EToolResources.execute_code,
file_id: 'test-file-id',
};
mockCheckCapability.mockResolvedValueOnce(false);
await expect(
processAgentFileUpload({ req: mockReq, res: mockRes, metadata }),
).rejects.toThrow('Code execution is not enabled for Agents');
expect(mockCheckCapability).toHaveBeenCalledWith(mockReq, 'execute_code');
expect(mockHandleFileUpload).not.toHaveBeenCalled();
expect(mockCreateFile).not.toHaveBeenCalled();
expect(mockAddAgentResourceFile).not.toHaveBeenCalled();
});
});
});
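
This new spec drives processAgentFileUpload through mocked storage strategies. The file_search case in particular asserts a dual write: the raw file goes to the configured storage strategy while the same payload is indexed in the vector store. A simplified sketch of that flow, with the strategy functions passed in the way the mocks above expose them; the real service additionally checks capabilities and handles errors.

// Simplified sketch of the dual-storage path asserted by the file_search test.
async function uploadForFileSearch({ req, file, file_id, entity_id, handleFileUpload, uploadVectors }) {
  // 1) Persist the raw file with the configured storage strategy.
  const stored = await handleFileUpload({ req, file, file_id, basePath: 'uploads', entity_id });
  // 2) Index the same file in the vector store for retrieval.
  const vectors = await uploadVectors({ req, file, file_id, entity_id });
  return { stored, vectors };
}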

View File

@@ -1,13 +1,13 @@
const axios = require('axios');
const { Providers } = require('@librechat/agents');
const { logAxiosError } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { logAxiosError, inputSchema, processModelData } = require('@librechat/api');
const { EModelEndpoint, defaultModels, CacheKeys } = require('librechat-data-provider');
const { inputSchema, extractBaseURL, processModelData } = require('~/utils');
const { OllamaClient } = require('~/app/clients/OllamaClient');
const { isUserProvided } = require('~/server/utils');
const getLogStores = require('~/cache/getLogStores');
const { extractBaseURL } = require('~/utils');
/**
* Splits a string by commas and trims each resulting value.

View File

@@ -11,8 +11,8 @@ const {
getAnthropicModels,
} = require('./ModelService');
jest.mock('~/utils', () => {
const originalUtils = jest.requireActual('~/utils');
jest.mock('@librechat/api', () => {
const originalUtils = jest.requireActual('@librechat/api');
return {
...originalUtils,
processModelData: jest.fn((...args) => {
@@ -108,7 +108,7 @@ describe('fetchModels with createTokenConfig true', () => {
beforeEach(() => {
// Clears the mock's history before each test
const _utils = require('~/utils');
const _utils = require('@librechat/api');
axios.get.mockResolvedValue({ data });
});
@@ -120,7 +120,7 @@ describe('fetchModels with createTokenConfig true', () => {
createTokenConfig: true,
});
const { processModelData } = require('~/utils');
const { processModelData } = require('@librechat/api');
expect(processModelData).toHaveBeenCalled();
expect(processModelData).toHaveBeenCalledWith(data);
});

View File

@@ -229,7 +229,7 @@
>
<!--[if mso]><style>.v-button {background: transparent !important;}</style><![endif]-->
<div align='left'>
<!--[if mso]><v:roundrect xmlns:v="urn:schemas-microsoft-com:vml" xmlns:w="urn:schemas-microsoft-com:office:word" href="href=&quot;{{verificationLink}}&quot;" style="height:37px; v-text-anchor:middle; width:114px;" arcsize="11%" stroke="f" fillcolor="#10a37f"><w:anchorlock/><center style="color:#FFFFFF;"><![endif]-->
<!--[if mso]><v:roundrect xmlns:v="urn:schemas-microsoft-com:vml" xmlns:w="urn:schemas-microsoft-com:office:word" href="{{verificationLink}}" style="height:37px; v-text-anchor:middle; width:114px;" arcsize="11%" stroke="f" fillcolor="#10a37f"><w:anchorlock/><center style="color:#FFFFFF;"><![endif]-->
<a
href='{{verificationLink}}'
target='_blank'

View File

@@ -1,9 +1,9 @@
const { v4: uuidv4 } = require('uuid');
const { logger } = require('@librechat/data-schemas');
const { EModelEndpoint, Constants, openAISettings, CacheKeys } = require('librechat-data-provider');
const { createImportBatchBuilder } = require('./importBatchBuilder');
const { cloneMessagesWithTimestamps } = require('./fork');
const getLogStores = require('~/cache/getLogStores');
const logger = require('~/config/winston');
/**
* Returns the appropriate importer function based on the provided JSON data.
@@ -212,6 +212,29 @@ function processConversation(conv, importBatchBuilder, requestUserId) {
}
}
/**
* Helper function to find the nearest non-system parent
* @param {string} parentId - The ID of the parent message.
* @returns {string} The ID of the nearest non-system parent message.
*/
const findNonSystemParent = (parentId) => {
if (!parentId || !messageMap.has(parentId)) {
return Constants.NO_PARENT;
}
const parentMapping = conv.mapping[parentId];
if (!parentMapping?.message) {
return Constants.NO_PARENT;
}
/* If parent is a system message, traverse up to find the nearest non-system parent */
if (parentMapping.message.author?.role === 'system') {
return findNonSystemParent(parentMapping.parent);
}
return messageMap.get(parentId);
};
// Create and save messages using the mapped IDs
const messages = [];
for (const [id, mapping] of Object.entries(conv.mapping)) {
@@ -220,23 +243,27 @@ function processConversation(conv, importBatchBuilder, requestUserId) {
messageMap.delete(id);
continue;
} else if (role === 'system') {
messageMap.delete(id);
// Skip system messages but keep their ID in messageMap for parent references
continue;
}
const newMessageId = messageMap.get(id);
const parentMessageId =
mapping.parent && messageMap.has(mapping.parent)
? messageMap.get(mapping.parent)
: Constants.NO_PARENT;
const parentMessageId = findNonSystemParent(mapping.parent);
const messageText = formatMessageText(mapping.message);
const isCreatedByUser = role === 'user';
let sender = isCreatedByUser ? 'user' : 'GPT-3.5';
let sender = isCreatedByUser ? 'user' : 'assistant';
const model = mapping.message.metadata.model_slug || openAISettings.model.default;
if (model.includes('gpt-4')) {
sender = 'GPT-4';
if (!isCreatedByUser) {
/** Extracted model name from model slug */
const gptMatch = model.match(/gpt-(.+)/i);
if (gptMatch) {
sender = `GPT-${gptMatch[1]}`;
} else {
sender = model || 'assistant';
}
}
messages.push({
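
Two behaviors change in the ChatGPT importer here: system messages no longer break parent links, because findNonSystemParent walks up past them, and the assistant sender is derived from the model slug instead of being hard-coded to 'GPT-3.5'. A standalone sketch of the sender derivation, matching the regex in the hunk:

// Sketch of the sender derivation added above; 'assistant' is the fallback.
function deriveSender(model, isCreatedByUser) {
  if (isCreatedByUser) {
    return 'user';
  }
  const gptMatch = (model || '').match(/gpt-(.+)/i);
  if (gptMatch) {
    return `GPT-${gptMatch[1]}`;
  }
  return model || 'assistant';
}

// deriveSender('gpt-4o-mini', false) -> 'GPT-4o-mini'
// deriveSender('claude-3-opus', false) -> 'claude-3-opus'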

View File

@@ -99,6 +99,404 @@ describe('importChatGptConvo', () => {
expect(importBatchBuilder.saveBatch).toHaveBeenCalled();
});
it('should handle system messages without breaking parent-child relationships', async () => {
/**
* Test data that reproduces the message graph breaking when the importer encounters a system message
*/
const testData = [
{
title: 'System Message Parent Test',
create_time: 1714585031.148505,
update_time: 1714585060.879308,
mapping: {
'root-node': {
id: 'root-node',
message: null,
parent: null,
children: ['user-msg-1'],
},
'user-msg-1': {
id: 'user-msg-1',
message: {
id: 'user-msg-1',
author: { role: 'user' },
create_time: 1714585031.150442,
content: { content_type: 'text', parts: ['First user message'] },
metadata: { model_slug: 'gpt-4' },
},
parent: 'root-node',
children: ['assistant-msg-1'],
},
'assistant-msg-1': {
id: 'assistant-msg-1',
message: {
id: 'assistant-msg-1',
author: { role: 'assistant' },
create_time: 1714585032.150442,
content: { content_type: 'text', parts: ['First assistant response'] },
metadata: { model_slug: 'gpt-4' },
},
parent: 'user-msg-1',
children: ['system-msg'],
},
'system-msg': {
id: 'system-msg',
message: {
id: 'system-msg',
author: { role: 'system' },
create_time: 1714585033.150442,
content: { content_type: 'text', parts: ['System message in middle'] },
metadata: { model_slug: 'gpt-4' },
},
parent: 'assistant-msg-1',
children: ['user-msg-2'],
},
'user-msg-2': {
id: 'user-msg-2',
message: {
id: 'user-msg-2',
author: { role: 'user' },
create_time: 1714585034.150442,
content: { content_type: 'text', parts: ['Second user message'] },
metadata: { model_slug: 'gpt-4' },
},
parent: 'system-msg',
children: ['assistant-msg-2'],
},
'assistant-msg-2': {
id: 'assistant-msg-2',
message: {
id: 'assistant-msg-2',
author: { role: 'assistant' },
create_time: 1714585035.150442,
content: { content_type: 'text', parts: ['Second assistant response'] },
metadata: { model_slug: 'gpt-4' },
},
parent: 'user-msg-2',
children: [],
},
},
},
];
const requestUserId = 'user-123';
const importBatchBuilder = new ImportBatchBuilder(requestUserId);
jest.spyOn(importBatchBuilder, 'saveMessage');
const importer = getImporter(testData);
await importer(testData, requestUserId, () => importBatchBuilder);
/** 2 user messages + 2 assistant messages (system message should be skipped) */
const expectedMessages = 4;
expect(importBatchBuilder.saveMessage).toHaveBeenCalledTimes(expectedMessages);
const savedMessages = importBatchBuilder.saveMessage.mock.calls.map((call) => call[0]);
const messageMap = new Map();
savedMessages.forEach((msg) => {
messageMap.set(msg.text, msg);
});
const firstUser = messageMap.get('First user message');
const firstAssistant = messageMap.get('First assistant response');
const secondUser = messageMap.get('Second user message');
const secondAssistant = messageMap.get('Second assistant response');
expect(firstUser).toBeDefined();
expect(firstAssistant).toBeDefined();
expect(secondUser).toBeDefined();
expect(secondAssistant).toBeDefined();
expect(firstUser.parentMessageId).toBe(Constants.NO_PARENT);
expect(firstAssistant.parentMessageId).toBe(firstUser.messageId);
// This is the key test: second user message should have first assistant as parent
// (not NO_PARENT which would indicate the system message broke the chain)
expect(secondUser.parentMessageId).toBe(firstAssistant.messageId);
expect(secondAssistant.parentMessageId).toBe(secondUser.messageId);
});
it('should maintain correct sender for user messages regardless of GPT-4 model', async () => {
/**
* Test data with GPT-4 model to ensure user messages keep 'user' sender
*/
const testData = [
{
title: 'GPT-4 Sender Test',
create_time: 1714585031.148505,
update_time: 1714585060.879308,
mapping: {
'root-node': {
id: 'root-node',
message: null,
parent: null,
children: ['user-msg-1'],
},
'user-msg-1': {
id: 'user-msg-1',
message: {
id: 'user-msg-1',
author: { role: 'user' },
create_time: 1714585031.150442,
content: { content_type: 'text', parts: ['User message with GPT-4'] },
metadata: { model_slug: 'gpt-4' },
},
parent: 'root-node',
children: ['assistant-msg-1'],
},
'assistant-msg-1': {
id: 'assistant-msg-1',
message: {
id: 'assistant-msg-1',
author: { role: 'assistant' },
create_time: 1714585032.150442,
content: { content_type: 'text', parts: ['Assistant response with GPT-4'] },
metadata: { model_slug: 'gpt-4' },
},
parent: 'user-msg-1',
children: ['user-msg-2'],
},
'user-msg-2': {
id: 'user-msg-2',
message: {
id: 'user-msg-2',
author: { role: 'user' },
create_time: 1714585033.150442,
content: { content_type: 'text', parts: ['Another user message with GPT-4o-mini'] },
metadata: { model_slug: 'gpt-4o-mini' },
},
parent: 'assistant-msg-1',
children: ['assistant-msg-2'],
},
'assistant-msg-2': {
id: 'assistant-msg-2',
message: {
id: 'assistant-msg-2',
author: { role: 'assistant' },
create_time: 1714585034.150442,
content: { content_type: 'text', parts: ['Assistant response with GPT-3.5'] },
metadata: { model_slug: 'gpt-3.5-turbo' },
},
parent: 'user-msg-2',
children: [],
},
},
},
];
const requestUserId = 'user-123';
const importBatchBuilder = new ImportBatchBuilder(requestUserId);
jest.spyOn(importBatchBuilder, 'saveMessage');
const importer = getImporter(testData);
await importer(testData, requestUserId, () => importBatchBuilder);
const savedMessages = importBatchBuilder.saveMessage.mock.calls.map((call) => call[0]);
const userMsg1 = savedMessages.find((msg) => msg.text === 'User message with GPT-4');
const assistantMsg1 = savedMessages.find((msg) => msg.text === 'Assistant response with GPT-4');
const userMsg2 = savedMessages.find(
(msg) => msg.text === 'Another user message with GPT-4o-mini',
);
const assistantMsg2 = savedMessages.find(
(msg) => msg.text === 'Assistant response with GPT-3.5',
);
expect(userMsg1.sender).toBe('user');
expect(userMsg1.isCreatedByUser).toBe(true);
expect(userMsg1.model).toBe('gpt-4');
expect(userMsg2.sender).toBe('user');
expect(userMsg2.isCreatedByUser).toBe(true);
expect(userMsg2.model).toBe('gpt-4o-mini');
expect(assistantMsg1.sender).toBe('GPT-4');
expect(assistantMsg1.isCreatedByUser).toBe(false);
expect(assistantMsg1.model).toBe('gpt-4');
expect(assistantMsg2.sender).toBe('GPT-3.5-turbo');
expect(assistantMsg2.isCreatedByUser).toBe(false);
expect(assistantMsg2.model).toBe('gpt-3.5-turbo');
});
it('should correctly extract and format model names from various model slugs', async () => {
/**
* Test data with various model slugs to test dynamic model identifier extraction
*/
const testData = [
{
title: 'Dynamic Model Identifier Test',
create_time: 1714585031.148505,
update_time: 1714585060.879308,
mapping: {
'root-node': {
id: 'root-node',
message: null,
parent: null,
children: ['msg-1'],
},
'msg-1': {
id: 'msg-1',
message: {
id: 'msg-1',
author: { role: 'user' },
create_time: 1714585031.150442,
content: { content_type: 'text', parts: ['Test message'] },
metadata: {},
},
parent: 'root-node',
children: ['msg-2', 'msg-3', 'msg-4', 'msg-5', 'msg-6', 'msg-7', 'msg-8', 'msg-9'],
},
'msg-2': {
id: 'msg-2',
message: {
id: 'msg-2',
author: { role: 'assistant' },
create_time: 1714585032.150442,
content: { content_type: 'text', parts: ['GPT-4 response'] },
metadata: { model_slug: 'gpt-4' },
},
parent: 'msg-1',
children: [],
},
'msg-3': {
id: 'msg-3',
message: {
id: 'msg-3',
author: { role: 'assistant' },
create_time: 1714585033.150442,
content: { content_type: 'text', parts: ['GPT-4o response'] },
metadata: { model_slug: 'gpt-4o' },
},
parent: 'msg-1',
children: [],
},
'msg-4': {
id: 'msg-4',
message: {
id: 'msg-4',
author: { role: 'assistant' },
create_time: 1714585034.150442,
content: { content_type: 'text', parts: ['GPT-4o-mini response'] },
metadata: { model_slug: 'gpt-4o-mini' },
},
parent: 'msg-1',
children: [],
},
'msg-5': {
id: 'msg-5',
message: {
id: 'msg-5',
author: { role: 'assistant' },
create_time: 1714585035.150442,
content: { content_type: 'text', parts: ['GPT-3.5-turbo response'] },
metadata: { model_slug: 'gpt-3.5-turbo' },
},
parent: 'msg-1',
children: [],
},
'msg-6': {
id: 'msg-6',
message: {
id: 'msg-6',
author: { role: 'assistant' },
create_time: 1714585036.150442,
content: { content_type: 'text', parts: ['GPT-4-turbo response'] },
metadata: { model_slug: 'gpt-4-turbo' },
},
parent: 'msg-1',
children: [],
},
'msg-7': {
id: 'msg-7',
message: {
id: 'msg-7',
author: { role: 'assistant' },
create_time: 1714585037.150442,
content: { content_type: 'text', parts: ['GPT-4-1106-preview response'] },
metadata: { model_slug: 'gpt-4-1106-preview' },
},
parent: 'msg-1',
children: [],
},
'msg-8': {
id: 'msg-8',
message: {
id: 'msg-8',
author: { role: 'assistant' },
create_time: 1714585038.150442,
content: { content_type: 'text', parts: ['Claude response'] },
metadata: { model_slug: 'claude-3-opus' },
},
parent: 'msg-1',
children: [],
},
'msg-9': {
id: 'msg-9',
message: {
id: 'msg-9',
author: { role: 'assistant' },
create_time: 1714585039.150442,
content: { content_type: 'text', parts: ['No model slug response'] },
metadata: {},
},
parent: 'msg-1',
children: [],
},
},
},
];
const requestUserId = 'user-123';
const importBatchBuilder = new ImportBatchBuilder(requestUserId);
jest.spyOn(importBatchBuilder, 'saveMessage');
const importer = getImporter(testData);
await importer(testData, requestUserId, () => importBatchBuilder);
const savedMessages = importBatchBuilder.saveMessage.mock.calls.map((call) => call[0]);
// Test various GPT model slug formats
const gpt4 = savedMessages.find((msg) => msg.text === 'GPT-4 response');
expect(gpt4.sender).toBe('GPT-4');
expect(gpt4.model).toBe('gpt-4');
const gpt4o = savedMessages.find((msg) => msg.text === 'GPT-4o response');
expect(gpt4o.sender).toBe('GPT-4o');
expect(gpt4o.model).toBe('gpt-4o');
const gpt4oMini = savedMessages.find((msg) => msg.text === 'GPT-4o-mini response');
expect(gpt4oMini.sender).toBe('GPT-4o-mini');
expect(gpt4oMini.model).toBe('gpt-4o-mini');
const gpt35Turbo = savedMessages.find((msg) => msg.text === 'GPT-3.5-turbo response');
expect(gpt35Turbo.sender).toBe('GPT-3.5-turbo');
expect(gpt35Turbo.model).toBe('gpt-3.5-turbo');
const gpt4Turbo = savedMessages.find((msg) => msg.text === 'GPT-4-turbo response');
expect(gpt4Turbo.sender).toBe('GPT-4-turbo');
expect(gpt4Turbo.model).toBe('gpt-4-turbo');
const gpt4Preview = savedMessages.find((msg) => msg.text === 'GPT-4-1106-preview response');
expect(gpt4Preview.sender).toBe('GPT-4-1106-preview');
expect(gpt4Preview.model).toBe('gpt-4-1106-preview');
// Test non-GPT model (should use the model slug as sender)
const claude = savedMessages.find((msg) => msg.text === 'Claude response');
expect(claude.sender).toBe('claude-3-opus');
expect(claude.model).toBe('claude-3-opus');
// Test missing model slug (should default to openAISettings.model.default)
const noModel = savedMessages.find((msg) => msg.text === 'No model slug response');
// When no model slug is provided, it defaults to gpt-4o-mini which gets formatted to GPT-4o-mini
expect(noModel.sender).toBe('GPT-4o-mini');
expect(noModel.model).toBe(openAISettings.model.default);
// Verify user message is unaffected
const userMsg = savedMessages.find((msg) => msg.text === 'Test message');
expect(userMsg.sender).toBe('user');
expect(userMsg.isCreatedByUser).toBe(true);
});
});
describe('importLibreChatConvo', () => {

View File

@@ -4,6 +4,7 @@ const { logger } = require('@librechat/data-schemas');
const { isEnabled, getBalanceConfig } = require('@librechat/api');
const { SystemRoles, ErrorTypes } = require('librechat-data-provider');
const { createUser, findUser, updateUser, countUsers } = require('~/models');
const { isEmailDomainAllowed } = require('~/server/services/domains');
const { getAppConfig } = require('~/server/services/Config');
const {
@@ -121,9 +122,18 @@ const ldapLogin = new LdapStrategy(ldapOptions, async (userinfo, done) => {
);
}
const appConfig = await getAppConfig();
if (!isEmailDomainAllowed(mail, appConfig?.registration?.allowedDomains)) {
logger.error(
`[LDAP Strategy] Authentication blocked - email domain not allowed [Email: ${mail}]`,
);
return done(null, false, { message: 'Email domain not allowed' });
}
if (!user) {
const isFirstRegisteredUser = (await countUsers()) === 0;
const role = isFirstRegisteredUser ? SystemRoles.ADMIN : SystemRoles.USER;
user = {
provider: 'ldap',
ldapId,
@@ -133,7 +143,6 @@ const ldapLogin = new LdapStrategy(ldapOptions, async (userinfo, done) => {
name: fullName,
role,
};
const appConfig = await getAppConfig({ role: user?.role });
const balanceConfig = getBalanceConfig(appConfig);
const userId = await createUser(user, balanceConfig);
user._id = userId;

View File

@@ -15,6 +15,7 @@ const {
getBalanceConfig,
} = require('@librechat/api');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { isEmailDomainAllowed } = require('~/server/services/domains');
const { findUser, createUser, updateUser } = require('~/models');
const { getAppConfig } = require('~/server/services/Config');
const getLogStores = require('~/cache/getLogStores');
@@ -339,6 +340,19 @@ async function setupOpenId() {
async (tokenset, done) => {
try {
const claims = tokenset.claims();
const userinfo = {
...claims,
...(await getUserInfo(openidConfig, tokenset.access_token, claims.sub)),
};
const appConfig = await getAppConfig();
if (!isEmailDomainAllowed(userinfo.email, appConfig?.registration?.allowedDomains)) {
logger.error(
`[OpenID Strategy] Authentication blocked - email domain not allowed [Email: ${userinfo.email}]`,
);
return done(null, false, { message: 'Email domain not allowed' });
}
const result = await findOpenIDUser({
openidId: claims.sub,
email: claims.email,
@@ -353,10 +367,7 @@ async function setupOpenId() {
message: ErrorTypes.AUTH_FAILED,
});
}
const userinfo = {
...claims,
...(await getUserInfo(openidConfig, tokenset.access_token, claims.sub)),
};
const fullName = getFullName(userinfo);
if (requiredRole) {
@@ -398,7 +409,6 @@ async function setupOpenId() {
);
}
const appConfig = await getAppConfig();
if (!user) {
user = {
provider: 'openid',

View File

@@ -7,6 +7,7 @@ const { ErrorTypes } = require('librechat-data-provider');
const { hashToken, logger } = require('@librechat/data-schemas');
const { Strategy: SamlStrategy } = require('@node-saml/passport-saml');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { isEmailDomainAllowed } = require('~/server/services/domains');
const { findUser, createUser, updateUser } = require('~/models');
const { getAppConfig } = require('~/server/services/Config');
const paths = require('~/config/paths');
@@ -192,16 +193,25 @@ async function setupSaml() {
logger.info(`[samlStrategy] SAML authentication received for NameID: ${profile.nameID}`);
logger.debug('[samlStrategy] SAML profile:', profile);
const userEmail = getEmail(profile) || '';
const appConfig = await getAppConfig();
if (!isEmailDomainAllowed(userEmail, appConfig?.registration?.allowedDomains)) {
logger.error(
`[SAML Strategy] Authentication blocked - email domain not allowed [Email: ${userEmail}]`,
);
return done(null, false, { message: 'Email domain not allowed' });
}
let user = await findUser({ samlId: profile.nameID });
logger.info(
`[samlStrategy] User ${user ? 'found' : 'not found'} with SAML ID: ${profile.nameID}`,
);
if (!user) {
const email = getEmail(profile) || '';
user = await findUser({ email });
user = await findUser({ email: userEmail });
logger.info(
`[samlStrategy] User ${user ? 'found' : 'not found'} with email: ${profile.email}`,
`[samlStrategy] User ${user ? 'found' : 'not found'} with email: ${userEmail}`,
);
}
@@ -220,13 +230,12 @@ async function setupSaml() {
getUserName(profile) || getGivenName(profile) || getEmail(profile),
);
const appConfig = await getAppConfig();
if (!user) {
user = {
provider: 'saml',
samlId: profile.nameID,
username,
email: getEmail(profile) || '',
email: userEmail,
emailVerified: true,
name: fullName,
};

View File

@@ -2,6 +2,7 @@ const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { ErrorTypes } = require('librechat-data-provider');
const { createSocialUser, handleExistingUser } = require('./process');
const { isEmailDomainAllowed } = require('~/server/services/domains');
const { getAppConfig } = require('~/server/services/Config');
const { findUser } = require('~/models');
@@ -14,8 +15,18 @@ const socialLogin =
});
const appConfig = await getAppConfig();
if (!isEmailDomainAllowed(email, appConfig?.registration?.allowedDomains)) {
logger.error(
`[${provider}Login] Authentication blocked - email domain not allowed [Email: ${email}]`,
);
const error = new Error(ErrorTypes.AUTH_FAILED);
error.code = ErrorTypes.AUTH_FAILED;
error.message = 'Email domain not allowed';
return cb(error);
}
const existingUser = await findUser({ email: email.trim() });
const ALLOW_SOCIAL_REGISTRATION = isEnabled(process.env.ALLOW_SOCIAL_REGISTRATION);
if (existingUser?.provider === provider) {
await handleExistingUser(existingUser, avatarUrl, appConfig);
@@ -30,20 +41,29 @@ const socialLogin =
return cb(error);
}
if (ALLOW_SOCIAL_REGISTRATION) {
const newUser = await createSocialUser({
email,
avatarUrl,
provider,
providerKey: `${provider}Id`,
providerId: id,
username,
name,
emailVerified,
appConfig,
});
return cb(null, newUser);
const ALLOW_SOCIAL_REGISTRATION = isEnabled(process.env.ALLOW_SOCIAL_REGISTRATION);
if (!ALLOW_SOCIAL_REGISTRATION) {
logger.error(
`[${provider}Login] Registration blocked - social registration is disabled [Email: ${email}]`,
);
const error = new Error(ErrorTypes.AUTH_FAILED);
error.code = ErrorTypes.AUTH_FAILED;
error.message = 'Social registration is disabled';
return cb(error);
}
const newUser = await createSocialUser({
email,
avatarUrl,
provider,
providerKey: `${provider}Id`,
providerId: id,
username,
name,
emailVerified,
appConfig,
});
return cb(null, newUser);
} catch (err) {
logger.error(`[${provider}Login]`, err);
return cb(err);
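
The social login handler now rejects disallowed email domains up front and returns an explicit AUTH_FAILED error when social registration is disabled, instead of silently falling through. A compact sketch of the resulting decision order; the real handler also creates or updates the user, handles avatars, and distinguishes provider mismatches.

// Sketch of the guard order after this change (side effects elided).
function socialLoginDecision({ domainAllowed, existingUser, allowSocialRegistration }) {
  if (!domainAllowed) {
    return 'reject: email domain not allowed';
  }
  if (existingUser) {
    return 'proceed: handle existing user';
  }
  if (!allowSocialRegistration) {
    return 'reject: social registration is disabled';
  }
  return 'proceed: create social user';
}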

View File

@@ -1,7 +1,7 @@
const axios = require('axios');
const deriveBaseURL = require('./deriveBaseURL');
jest.mock('~/utils', () => {
const originalUtils = jest.requireActual('~/utils');
jest.mock('@librechat/api', () => {
const originalUtils = jest.requireActual('@librechat/api');
return {
...originalUtils,
processModelData: jest.fn((...args) => {

View File

@@ -1,4 +1,3 @@
const tokenHelpers = require('./tokens');
const deriveBaseURL = require('./deriveBaseURL');
const extractBaseURL = require('./extractBaseURL');
const findMessageContent = require('./findMessageContent');
@@ -6,6 +5,5 @@ const findMessageContent = require('./findMessageContent');
module.exports = {
deriveBaseURL,
extractBaseURL,
...tokenHelpers,
findMessageContent,
};

View File

@@ -1,12 +1,12 @@
const { EModelEndpoint } = require('librechat-data-provider');
const {
maxTokensMap,
matchModelName,
processModelData,
getModelMaxTokens,
maxOutputTokensMap,
findMatchingPattern,
getModelMaxTokens,
processModelData,
matchModelName,
maxTokensMap,
} = require('./tokens');
} = require('@librechat/api');
describe('getModelMaxTokens', () => {
test('should return correct tokens for exact match', () => {
@@ -394,7 +394,7 @@ describe('getModelMaxTokens', () => {
});
test('should return correct max output tokens for GPT-5 models', () => {
const { getModelMaxOutputTokens } = require('./tokens');
const { getModelMaxOutputTokens } = require('@librechat/api');
['gpt-5', 'gpt-5-mini', 'gpt-5-nano'].forEach((model) => {
expect(getModelMaxOutputTokens(model)).toBe(maxOutputTokensMap[EModelEndpoint.openAI][model]);
expect(getModelMaxOutputTokens(model, EModelEndpoint.openAI)).toBe(
@@ -407,7 +407,7 @@ describe('getModelMaxTokens', () => {
});
test('should return correct max output tokens for GPT-OSS models', () => {
const { getModelMaxOutputTokens } = require('./tokens');
const { getModelMaxOutputTokens } = require('@librechat/api');
['gpt-oss-20b', 'gpt-oss-120b'].forEach((model) => {
expect(getModelMaxOutputTokens(model)).toBe(maxOutputTokensMap[EModelEndpoint.openAI][model]);
expect(getModelMaxOutputTokens(model, EModelEndpoint.openAI)).toBe(

BIN bun.lockb (binary file not shown)

View File

@@ -148,8 +148,8 @@
"tailwindcss": "^3.4.1",
"ts-jest": "^29.2.5",
"typescript": "^5.3.3",
"vite": "^6.3.4",
"vite-plugin-compression2": "^1.3.3",
"vite": "^6.3.6",
"vite-plugin-compression2": "^2.2.1",
"vite-plugin-node-polyfills": "^0.23.0",
"vite-plugin-pwa": "^0.21.2"
}

View File

@@ -1,19 +1,19 @@
import React, { createContext, useContext, useEffect, useMemo, useRef } from 'react';
import React, { createContext, useContext, useEffect, useRef } from 'react';
import { useSetRecoilState } from 'recoil';
import { Tools, Constants, LocalStorageKeys, AgentCapabilities } from 'librechat-data-provider';
import type { TAgentsEndpoint } from 'librechat-data-provider';
import {
useMCPServerManager,
useSearchApiKeyForm,
useGetAgentsConfig,
useCodeApiKeyForm,
useGetMCPTools,
useToolToggle,
} from '~/hooks';
import { getTimestampedValue, setTimestamp } from '~/utils/timestamps';
import { ephemeralAgentByConvoId } from '~/store';
interface BadgeRowContextType {
conversationId?: string | null;
mcpServerNames?: string[] | null;
agentsConfig?: TAgentsEndpoint | null;
webSearch: ReturnType<typeof useToolToggle>;
artifacts: ReturnType<typeof useToolToggle>;
@@ -21,6 +21,7 @@ interface BadgeRowContextType {
codeInterpreter: ReturnType<typeof useToolToggle>;
codeApiKeyForm: ReturnType<typeof useCodeApiKeyForm>;
searchApiKeyForm: ReturnType<typeof useSearchApiKeyForm>;
mcpServerManager: ReturnType<typeof useMCPServerManager>;
}
const BadgeRowContext = createContext<BadgeRowContextType | undefined>(undefined);
@@ -46,7 +47,6 @@ export default function BadgeRowProvider({
}: BadgeRowProviderProps) {
const lastKeyRef = useRef<string>('');
const hasInitializedRef = useRef(false);
const { mcpToolDetails } = useGetMCPTools();
const { agentsConfig } = useGetAgentsConfig();
const key = conversationId ?? Constants.NEW_CONVO;
@@ -62,16 +62,15 @@ export default function BadgeRowProvider({
hasInitializedRef.current = true;
lastKeyRef.current = key;
// Load all localStorage values
const codeToggleKey = `${LocalStorageKeys.LAST_CODE_TOGGLE_}${key}`;
const webSearchToggleKey = `${LocalStorageKeys.LAST_WEB_SEARCH_TOGGLE_}${key}`;
const fileSearchToggleKey = `${LocalStorageKeys.LAST_FILE_SEARCH_TOGGLE_}${key}`;
const artifactsToggleKey = `${LocalStorageKeys.LAST_ARTIFACTS_TOGGLE_}${key}`;
const codeToggleValue = localStorage.getItem(codeToggleKey);
const webSearchToggleValue = localStorage.getItem(webSearchToggleKey);
const fileSearchToggleValue = localStorage.getItem(fileSearchToggleKey);
const artifactsToggleValue = localStorage.getItem(artifactsToggleKey);
const codeToggleValue = getTimestampedValue(codeToggleKey);
const webSearchToggleValue = getTimestampedValue(webSearchToggleKey);
const fileSearchToggleValue = getTimestampedValue(fileSearchToggleKey);
const artifactsToggleValue = getTimestampedValue(artifactsToggleKey);
const initialValues: Record<string, any> = {};
@@ -107,15 +106,37 @@ export default function BadgeRowProvider({
}
}
// Always set values for all tools (use defaults if not in localStorage)
// If ephemeralAgent is null, create a new object with just our tool values
setEphemeralAgent((prev) => ({
...(prev || {}),
/**
* Always set values for all tools (use defaults if not in `localStorage`)
* If `ephemeralAgent` is `null`, create a new object with just our tool values
*/
const finalValues = {
[Tools.execute_code]: initialValues[Tools.execute_code] ?? false,
[Tools.web_search]: initialValues[Tools.web_search] ?? false,
[Tools.file_search]: initialValues[Tools.file_search] ?? false,
[AgentCapabilities.artifacts]: initialValues[AgentCapabilities.artifacts] ?? false,
};
setEphemeralAgent((prev) => ({
...(prev || {}),
...finalValues,
}));
Object.entries(finalValues).forEach(([toolKey, value]) => {
if (value !== false) {
let storageKey = artifactsToggleKey;
if (toolKey === Tools.execute_code) {
storageKey = codeToggleKey;
} else if (toolKey === Tools.web_search) {
storageKey = webSearchToggleKey;
} else if (toolKey === Tools.file_search) {
storageKey = fileSearchToggleKey;
}
// Store the value and set timestamp for existing values
localStorage.setItem(storageKey, JSON.stringify(value));
setTimestamp(storageKey);
}
});
}
}, [key, isSubmitting, setEphemeralAgent]);
@@ -165,20 +186,18 @@ export default function BadgeRowProvider({
isAuthenticated: true,
});
const mcpServerNames = useMemo(() => {
return (mcpToolDetails ?? []).map((tool) => tool.name);
}, [mcpToolDetails]);
const mcpServerManager = useMCPServerManager({ conversationId });
const value: BadgeRowContextType = {
webSearch,
artifacts,
fileSearch,
agentsConfig,
mcpServerNames,
conversationId,
codeApiKeyForm,
codeInterpreter,
searchApiKeyForm,
mcpServerManager,
};
return <BadgeRowContext.Provider value={value}>{children}</BadgeRowContext.Provider>;
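The hunk above swaps raw localStorage.getItem calls for getTimestampedValue and setTimestamp from ~/utils/timestamps, and writes each enabled toggle back with a fresh timestamp. The utility itself is not part of this compare view; the following is only a minimal sketch of what timestamp-aware helpers could look like, assuming a companion "__timestamp" key and a staleness window (both assumptions, the real implementation may differ).

// Hypothetical sketch of timestamp-aware localStorage helpers (assumed API, not the actual ~/utils/timestamps).
const TS_SUFFIX = '__timestamp'; // assumed naming convention

export function setTimestamp(key: string): void {
  // Record when the value under `key` was last written.
  localStorage.setItem(`${key}${TS_SUFFIX}`, Date.now().toString());
}

export function getTimestampedValue(key: string, maxAgeMs = 24 * 60 * 60 * 1000): string | null {
  const ts = Number(localStorage.getItem(`${key}${TS_SUFFIX}`) ?? 0);
  if (!ts || Date.now() - ts > maxAgeMs) {
    // Treat stale or untracked entries as absent so toggles fall back to their defaults.
    return null;
  }
  return localStorage.getItem(key);
}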

View File

@@ -350,6 +350,7 @@ export type TAskProps = {
conversationId?: string | null;
messageId?: string | null;
clientTimestamp?: string;
toolResources?: t.AgentToolResources;
};
export type TOptions = {

View File

@@ -11,7 +11,6 @@ import { useDocumentTitle, useHasAccess, useLocalize, TranslationKeys } from '~/
import { useGetEndpointsQuery, useGetAgentCategoriesQuery } from '~/data-provider';
import MarketplaceAdminSettings from './MarketplaceAdminSettings';
import { SidePanelProvider, useChatContext } from '~/Providers';
import { MarketplaceProvider } from './MarketplaceContext';
import { SidePanelGroup } from '~/components/SidePanel';
import { OpenSidebar } from '~/components/Chat/Menus';
import CategoryTabs from './CategoryTabs';
@@ -272,100 +271,176 @@ const AgentMarketplace: React.FC<AgentMarketplaceProps> = ({ className = '' }) =
}
return (
<div className={`relative flex w-full grow overflow-hidden bg-presentation ${className}`}>
<MarketplaceProvider>
<SidePanelProvider>
<SidePanelGroup
defaultLayout={defaultLayout}
fullPanelCollapse={fullCollapse}
defaultCollapsed={defaultCollapsed}
>
<main className="flex h-full flex-col overflow-hidden" role="main">
{/* Scrollable container */}
<div
ref={scrollContainerRef}
className="scrollbar-gutter-stable relative flex h-full flex-col overflow-y-auto overflow-x-hidden"
>
{/* Simplified header for agents marketplace - only show nav controls when needed */}
{!isSmallScreen && (
<div className="sticky top-0 z-20 flex items-center justify-between bg-surface-secondary p-2 font-semibold text-text-primary md:h-14">
<div className="mx-1 flex items-center gap-2">
{!navVisible ? (
<>
<OpenSidebar setNavVisible={setNavVisible} />
<TooltipAnchor
description={localize('com_ui_new_chat')}
render={
<Button
size="icon"
variant="outline"
data-testid="agents-new-chat-button"
aria-label={localize('com_ui_new_chat')}
className="rounded-xl border border-border-light bg-surface-secondary p-2 hover:bg-surface-hover max-md:hidden"
onClick={handleNewChat}
>
<NewChatIcon />
</Button>
}
/>
</>
) : (
// Invisible placeholder to maintain height
<div className="h-10 w-10" />
)}
</div>
</div>
)}
{/* Hero Section - scrolls away */}
{!isSmallScreen && (
<div className="container mx-auto max-w-4xl">
<div className={cn('mb-8 text-center', 'mt-12')}>
<h1 className="mb-3 text-3xl font-bold tracking-tight text-text-primary md:text-5xl">
{localize('com_agents_marketplace')}
</h1>
<p className="mx-auto mb-6 max-w-2xl text-lg text-text-secondary">
{localize('com_agents_marketplace_subtitle')}
</p>
</div>
</div>
)}
{/* Sticky wrapper for search bar and categories */}
<div
className={cn(
'sticky z-10 bg-presentation pb-4',
isSmallScreen ? 'top-0' : 'top-14',
)}
>
<div className="container mx-auto max-w-4xl px-4">
{/* Search bar */}
<div className="mx-auto flex max-w-2xl gap-2 pb-6">
<SearchBar value={searchQuery} onSearch={handleSearch} />
{/* TODO: Remove this once we have a better way to handle admin settings */}
{/* Admin Settings */}
<MarketplaceAdminSettings />
</div>
{/* Category tabs */}
<CategoryTabs
categories={categoriesQuery.data || []}
activeTab={displayCategory}
isLoading={categoriesQuery.isLoading}
onChange={handleTabChange}
/>
<SidePanelProvider>
<SidePanelGroup
defaultLayout={defaultLayout}
fullPanelCollapse={fullCollapse}
defaultCollapsed={defaultCollapsed}
>
<main className="flex h-full flex-col overflow-hidden" role="main">
{/* Scrollable container */}
<div
ref={scrollContainerRef}
className="scrollbar-gutter-stable relative flex h-full flex-col overflow-y-auto overflow-x-hidden"
>
{/* Simplified header for agents marketplace - only show nav controls when needed */}
{!isSmallScreen && (
<div className="sticky top-0 z-20 flex items-center justify-between bg-surface-secondary p-2 font-semibold text-text-primary md:h-14">
<div className="mx-1 flex items-center gap-2">
{!navVisible ? (
<>
<OpenSidebar setNavVisible={setNavVisible} />
<TooltipAnchor
description={localize('com_ui_new_chat')}
render={
<Button
size="icon"
variant="outline"
data-testid="agents-new-chat-button"
aria-label={localize('com_ui_new_chat')}
className="rounded-xl border border-border-light bg-surface-secondary p-2 hover:bg-surface-hover max-md:hidden"
onClick={handleNewChat}
>
<NewChatIcon />
</Button>
}
/>
</>
) : (
// Invisible placeholder to maintain height
<div className="h-10 w-10" />
)}
</div>
</div>
{/* Scrollable content area */}
<div className="container mx-auto max-w-4xl px-4 pb-8">
{/* Two-pane animated container wrapping category header + grid */}
<div className="relative overflow-hidden">
{/* Current content pane */}
)}
{/* Hero Section - scrolls away */}
{!isSmallScreen && (
<div className="container mx-auto max-w-4xl">
<div className={cn('mb-8 text-center', 'mt-12')}>
<h1 className="mb-3 text-3xl font-bold tracking-tight text-text-primary md:text-5xl">
{localize('com_agents_marketplace')}
</h1>
<p className="mx-auto mb-6 max-w-2xl text-lg text-text-secondary">
{localize('com_agents_marketplace_subtitle')}
</p>
</div>
</div>
)}
{/* Sticky wrapper for search bar and categories */}
<div
className={cn(
'sticky z-10 bg-presentation pb-4',
isSmallScreen ? 'top-0' : 'top-14',
)}
>
<div className="container mx-auto max-w-4xl px-4">
{/* Search bar */}
<div className="mx-auto flex max-w-2xl gap-2 pb-6">
<SearchBar value={searchQuery} onSearch={handleSearch} />
{/* TODO: Remove this once we have a better way to handle admin settings */}
{/* Admin Settings */}
<MarketplaceAdminSettings />
</div>
{/* Category tabs */}
<CategoryTabs
categories={categoriesQuery.data || []}
activeTab={displayCategory}
isLoading={categoriesQuery.isLoading}
onChange={handleTabChange}
/>
</div>
</div>
{/* Scrollable content area */}
<div className="container mx-auto max-w-4xl px-4 pb-8">
{/* Two-pane animated container wrapping category header + grid */}
<div className="relative overflow-hidden">
{/* Current content pane */}
<div
className={cn(
isTransitioning &&
(animationDirection === 'right'
? 'motion-safe:animate-slide-out-left'
: 'motion-safe:animate-slide-out-right'),
)}
key={`pane-current-${displayCategory}`}
>
{/* Category header - only show when not searching */}
{!searchQuery && (
<div className="mb-6 mt-6">
{(() => {
// Get category data for display
const getCategoryData = () => {
if (displayCategory === 'promoted') {
return {
name: localize('com_agents_top_picks'),
description: localize('com_agents_recommended'),
};
}
if (displayCategory === 'all') {
return {
name: localize('com_agents_all'),
description: localize('com_agents_all_description'),
};
}
// Find the category in the API data
const categoryData = categoriesQuery.data?.find(
(cat) => cat.value === displayCategory,
);
if (categoryData) {
return {
name: categoryData.label?.startsWith('com_')
? localize(categoryData.label as TranslationKeys)
: categoryData.label,
description: categoryData.description?.startsWith('com_')
? localize(categoryData.description as TranslationKeys)
: categoryData.description || '',
};
}
// Fallback for unknown categories
return {
name:
displayCategory.charAt(0).toUpperCase() + displayCategory.slice(1),
description: '',
};
};
const { name, description } = getCategoryData();
return (
<div className="text-left">
<h2 className="text-2xl font-bold text-text-primary">{name}</h2>
{description && (
<p className="mt-2 text-text-secondary">{description}</p>
)}
</div>
);
})()}
</div>
)}
{/* Agent grid */}
<AgentGrid
key={`grid-${displayCategory}`}
category={displayCategory}
searchQuery={searchQuery}
onSelectAgent={handleAgentSelect}
scrollElement={scrollContainerRef.current}
/>
</div>
{/* Next content pane, only during transition */}
{isTransitioning && nextCategory && (
<div
className={cn(
isTransitioning &&
(animationDirection === 'right'
? 'motion-safe:animate-slide-out-left'
: 'motion-safe:animate-slide-out-right'),
'absolute inset-0',
animationDirection === 'right'
? 'motion-safe:animate-slide-in-right'
: 'motion-safe:animate-slide-in-left',
)}
key={`pane-current-${displayCategory}`}
key={`pane-next-${nextCategory}-${animationDirection}`}
>
{/* Category header - only show when not searching */}
{!searchQuery && (
@@ -373,13 +448,13 @@ const AgentMarketplace: React.FC<AgentMarketplaceProps> = ({ className = '' }) =
{(() => {
// Get category data for display
const getCategoryData = () => {
if (displayCategory === 'promoted') {
if (nextCategory === 'promoted') {
return {
name: localize('com_agents_top_picks'),
description: localize('com_agents_recommended'),
};
}
if (displayCategory === 'all') {
if (nextCategory === 'all') {
return {
name: localize('com_agents_all'),
description: localize('com_agents_all_description'),
@@ -388,7 +463,7 @@ const AgentMarketplace: React.FC<AgentMarketplaceProps> = ({ className = '' }) =
// Find the category in the API data
const categoryData = categoriesQuery.data?.find(
(cat) => cat.value === displayCategory,
(cat) => cat.value === nextCategory,
);
if (categoryData) {
return {
@@ -396,7 +471,9 @@ const AgentMarketplace: React.FC<AgentMarketplaceProps> = ({ className = '' }) =
? localize(categoryData.label as TranslationKeys)
: categoryData.label,
description: categoryData.description?.startsWith('com_')
? localize(categoryData.description as TranslationKeys)
? localize(
categoryData.description as Parameters<typeof localize>[0],
)
: categoryData.description || '',
};
}
@@ -404,8 +481,8 @@ const AgentMarketplace: React.FC<AgentMarketplaceProps> = ({ className = '' }) =
// Fallback for unknown categories
return {
name:
displayCategory.charAt(0).toUpperCase() +
displayCategory.slice(1),
(nextCategory || '').charAt(0).toUpperCase() +
(nextCategory || '').slice(1),
description: '',
};
};
@@ -426,113 +503,30 @@ const AgentMarketplace: React.FC<AgentMarketplaceProps> = ({ className = '' }) =
{/* Agent grid */}
<AgentGrid
key={`grid-${displayCategory}`}
category={displayCategory}
key={`grid-${nextCategory}`}
category={nextCategory}
searchQuery={searchQuery}
onSelectAgent={handleAgentSelect}
scrollElement={scrollContainerRef.current}
/>
</div>
)}
{/* Next content pane, only during transition */}
{isTransitioning && nextCategory && (
<div
className={cn(
'absolute inset-0',
animationDirection === 'right'
? 'motion-safe:animate-slide-in-right'
: 'motion-safe:animate-slide-in-left',
)}
key={`pane-next-${nextCategory}-${animationDirection}`}
>
{/* Category header - only show when not searching */}
{!searchQuery && (
<div className="mb-6 mt-6">
{(() => {
// Get category data for display
const getCategoryData = () => {
if (nextCategory === 'promoted') {
return {
name: localize('com_agents_top_picks'),
description: localize('com_agents_recommended'),
};
}
if (nextCategory === 'all') {
return {
name: localize('com_agents_all'),
description: localize('com_agents_all_description'),
};
}
// Find the category in the API data
const categoryData = categoriesQuery.data?.find(
(cat) => cat.value === nextCategory,
);
if (categoryData) {
return {
name: categoryData.label?.startsWith('com_')
? localize(categoryData.label as TranslationKeys)
: categoryData.label,
description: categoryData.description?.startsWith('com_')
? localize(
categoryData.description as Parameters<
typeof localize
>[0],
)
: categoryData.description || '',
};
}
// Fallback for unknown categories
return {
name:
(nextCategory || '').charAt(0).toUpperCase() +
(nextCategory || '').slice(1),
description: '',
};
};
const { name, description } = getCategoryData();
return (
<div className="text-left">
<h2 className="text-2xl font-bold text-text-primary">{name}</h2>
{description && (
<p className="mt-2 text-text-secondary">{description}</p>
)}
</div>
);
})()}
</div>
)}
{/* Agent grid */}
<AgentGrid
key={`grid-${nextCategory}`}
category={nextCategory}
searchQuery={searchQuery}
onSelectAgent={handleAgentSelect}
scrollElement={scrollContainerRef.current}
/>
</div>
)}
{/* Note: Using Tailwind keyframes for slide in/out animations */}
</div>
{/* Note: Using Tailwind keyframes for slide in/out animations */}
</div>
{/* Agent detail dialog */}
{isDetailOpen && selectedAgent && (
<AgentDetail
agent={selectedAgent}
isOpen={isDetailOpen}
onClose={handleDetailClose}
/>
)}
</div>
</main>
</SidePanelGroup>
</SidePanelProvider>
</MarketplaceProvider>
{/* Agent detail dialog */}
{isDetailOpen && selectedAgent && (
<AgentDetail
agent={selectedAgent}
isOpen={isDetailOpen}
onClose={handleDetailClose}
/>
)}
</div>
</main>
</SidePanelGroup>
</SidePanelProvider>
</div>
);
};

View File

@@ -194,7 +194,7 @@ describe('Virtual Scrolling Performance', () => {
// Performance check: rendering should be fast
const renderTime = endTime - startTime;
expect(renderTime).toBeLessThan(650);
expect(renderTime).toBeLessThan(720);
console.log(`Rendered 1000 agents in ${renderTime.toFixed(2)}ms`);
console.log(`Only ${renderedCards.length} DOM nodes created for 1000 agents`);

View File

@@ -368,7 +368,7 @@ function BadgeRow({
<CodeInterpreter />
<FileSearch />
<Artifacts />
<MCPSelect conversationId={conversationId} />
<MCPSelect />
</>
)}
{ghostBadge && (

View File

@@ -39,6 +39,7 @@ function AttachFileChat({
<AttachFileMenu
disabled={disableInputs}
conversationId={conversationId}
agentId={conversation?.agent_id}
endpointFileConfig={endpointFileConfig}
/>
);

View File

@@ -11,7 +11,13 @@ import {
SharePointIcon,
} from '@librechat/client';
import type { EndpointFileConfig } from 'librechat-data-provider';
import { useLocalize, useGetAgentsConfig, useFileHandling, useAgentCapabilities } from '~/hooks';
import {
useAgentToolPermissions,
useAgentCapabilities,
useGetAgentsConfig,
useFileHandling,
useLocalize,
} from '~/hooks';
import useSharePointFileHandling from '~/hooks/Files/useSharePointFileHandling';
import { SharePointPickerDialog } from '~/components/SharePoint';
import { useGetStartupConfig } from '~/data-provider';
@@ -21,11 +27,17 @@ import { cn } from '~/utils';
interface AttachFileMenuProps {
conversationId: string;
agentId?: string | null;
disabled?: boolean | null;
endpointFileConfig?: EndpointFileConfig;
}
const AttachFileMenu = ({ disabled, conversationId, endpointFileConfig }: AttachFileMenuProps) => {
const AttachFileMenu = ({
agentId,
disabled,
conversationId,
endpointFileConfig,
}: AttachFileMenuProps) => {
const localize = useLocalize();
const isUploadDisabled = disabled ?? false;
const inputRef = useRef<HTMLInputElement>(null);
@@ -52,6 +64,8 @@ const AttachFileMenu = ({ disabled, conversationId, endpointFileConfig }: Attach
* */
const capabilities = useAgentCapabilities(agentsConfig?.capabilities ?? defaultAgentCapabilities);
const { fileSearchAllowedByAgent, codeAllowedByAgent } = useAgentToolPermissions(agentId);
const handleUploadClick = (isImage?: boolean) => {
if (!inputRef.current) {
return;
@@ -86,7 +100,7 @@ const AttachFileMenu = ({ disabled, conversationId, endpointFileConfig }: Attach
});
}
if (capabilities.fileSearchEnabled) {
if (capabilities.fileSearchEnabled && fileSearchAllowedByAgent) {
items.push({
label: localize('com_ui_upload_file_search'),
onClick: () => {
@@ -101,7 +115,7 @@ const AttachFileMenu = ({ disabled, conversationId, endpointFileConfig }: Attach
});
}
if (capabilities.codeEnabled) {
if (capabilities.codeEnabled && codeAllowedByAgent) {
items.push({
label: localize('com_ui_upload_code_files'),
onClick: () => {
@@ -142,6 +156,8 @@ const AttachFileMenu = ({ disabled, conversationId, endpointFileConfig }: Attach
setToolResource,
setEphemeralAgent,
sharePointEnabled,
codeAllowedByAgent,
fileSearchAllowedByAgent,
setIsSharePointDialogOpen,
]);
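The gating added here (capabilities.fileSearchEnabled && fileSearchAllowedByAgent, and likewise for code execution) comes from the new useAgentToolPermissions hook, whose source is not included in this view. A rough sketch of the assumed shape, purely illustrative: the query hook name and the agent.tools field are assumptions, only the Tools enum values are taken from this diff.

// Hypothetical sketch of useAgentToolPermissions (assumed data hook and agent shape; the real hook may differ).
import { useMemo } from 'react';
import { Tools } from 'librechat-data-provider';
import { useGetAgentByIdQuery } from '~/data-provider'; // assumed query hook

export default function useAgentToolPermissions(agentId?: string | null) {
  const { data: agent } = useGetAgentByIdQuery(agentId ?? '', { enabled: !!agentId });
  return useMemo(() => {
    // With no persisted agent selected (ephemeral agent), allow both and let capabilities decide.
    if (!agentId) {
      return { fileSearchAllowedByAgent: true, codeAllowedByAgent: true };
    }
    const tools: string[] = agent?.tools ?? [];
    return {
      fileSearchAllowedByAgent: tools.includes(Tools.file_search),
      codeAllowedByAgent: tools.includes(Tools.execute_code),
    };
  }, [agentId, agent?.tools]);
}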

View File

@@ -2,7 +2,13 @@ import React, { useMemo } from 'react';
import { OGDialog, OGDialogTemplate } from '@librechat/client';
import { ImageUpIcon, FileSearch, TerminalSquareIcon, FileType2Icon } from 'lucide-react';
import { EToolResources, defaultAgentCapabilities } from 'librechat-data-provider';
import { useLocalize, useGetAgentsConfig, useAgentCapabilities } from '~/hooks';
import {
useAgentToolPermissions,
useAgentCapabilities,
useGetAgentsConfig,
useLocalize,
} from '~/hooks';
import { useChatContext } from '~/Providers';
interface DragDropModalProps {
onOptionSelect: (option: EToolResources | undefined) => void;
@@ -26,6 +32,11 @@ const DragDropModal = ({ onOptionSelect, setShowModal, files, isVisible }: DragD
* Use definition for agents endpoint for ephemeral agents
* */
const capabilities = useAgentCapabilities(agentsConfig?.capabilities ?? defaultAgentCapabilities);
const { conversation } = useChatContext();
const { fileSearchAllowedByAgent, codeAllowedByAgent } = useAgentToolPermissions(
conversation?.agent_id,
);
const options = useMemo(() => {
const _options: FileOption[] = [
{
@@ -35,14 +46,14 @@ const DragDropModal = ({ onOptionSelect, setShowModal, files, isVisible }: DragD
condition: files.every((file) => file.type?.startsWith('image/')),
},
];
if (capabilities.fileSearchEnabled) {
if (capabilities.fileSearchEnabled && fileSearchAllowedByAgent) {
_options.push({
label: localize('com_ui_upload_file_search'),
value: EToolResources.file_search,
icon: <FileSearch className="icon-md" />,
});
}
if (capabilities.codeEnabled) {
if (capabilities.codeEnabled && codeAllowedByAgent) {
_options.push({
label: localize('com_ui_upload_code_files'),
value: EToolResources.execute_code,
@@ -58,7 +69,7 @@ const DragDropModal = ({ onOptionSelect, setShowModal, files, isVisible }: DragD
}
return _options;
}, [capabilities, files, localize]);
}, [capabilities, files, localize, fileSearchAllowedByAgent, codeAllowedByAgent]);
if (!isVisible) {
return null;

View File

@@ -1,62 +1,102 @@
export default function DragDropOverlay() {
return (
<div
className="bg-surface-primary/85 fixed inset-0 z-[9999] flex flex-col items-center justify-center
gap-2 text-text-primary
backdrop-blur-[4px] transition-all duration-200
ease-in-out animate-in fade-in
zoom-in-95 hover:backdrop-blur-sm"
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 132 108"
fill="none"
width="132"
height="108"
>
<g clipPath="url(#clip0_3605_64419)">
<path
fillRule="evenodd"
clipRule="evenodd"
d="M25.2025 29.3514C10.778 33.2165 8.51524 37.1357 11.8281 49.4995L13.4846 55.6814C16.7975 68.0453 20.7166 70.308 35.1411 66.443L43.3837 64.2344C57.8082 60.3694 60.0709 56.4502 56.758 44.0864L55.1016 37.9044C51.7887 25.5406 47.8695 23.2778 33.445 27.1428L29.3237 28.2471L25.2025 29.3514ZM18.1944 42.7244C18.8572 41.5764 20.325 41.1831 21.4729 41.8459L27.3517 45.24C28.4996 45.9027 28.8929 47.3706 28.2301 48.5185L24.836 54.3972C24.1733 55.5451 22.7054 55.9384 21.5575 55.2757C20.4096 54.613 20.0163 53.1451 20.6791 51.9972L22.8732 48.1969L19.0729 46.0028C17.925 45.3401 17.5317 43.8723 18.1944 42.7244ZM29.4091 56.3843C29.066 55.104 29.8258 53.7879 31.1062 53.4449L40.3791 50.9602C41.6594 50.6172 42.9754 51.377 43.3184 52.6573C43.6615 53.9376 42.9017 55.2536 41.6214 55.5967L32.3485 58.0813C31.0682 58.4244 29.7522 57.6646 29.4091 56.3843Z"
fill="#AFC1FF"
/>
</g>
<g clipPath="url(#clip1_3605_64419)">
<path
fillRule="evenodd"
clipRule="evenodd"
d="M86.8124 13.4036C81.0973 11.8722 78.5673 13.2649 77.0144 19.0603L68.7322 49.97C67.1793 55.7656 68.5935 58.2151 74.4696 59.7895L97.4908 65.958C103.367 67.5326 105.816 66.1184 107.406 60.1848L115.393 30.379C115.536 29.8456 115.217 29.2959 114.681 29.16C113.478 28.8544 112.435 28.6195 111.542 28.4183C106.243 27.2253 106.22 27.2201 109.449 20.7159C109.73 20.1507 109.426 19.4638 108.816 19.3004L86.8124 13.4036ZM87.2582 28.4311C86.234 28.1567 85.1812 28.7645 84.9067 29.7888C84.6323 30.813 85.2401 31.8658 86.2644 32.1403L101.101 36.1158C102.125 36.3902 103.178 35.7824 103.453 34.7581C103.727 33.7339 103.119 32.681 102.095 32.4066L87.2582 28.4311ZM82.9189 37.2074C83.1934 36.1831 84.2462 35.5753 85.2704 35.8497L100.107 39.8252C101.131 40.0996 101.739 41.1524 101.465 42.1767C101.19 43.201 100.137 43.8088 99.1132 43.5343L84.2766 39.5589C83.2523 39.2844 82.6445 38.2316 82.9189 37.2074ZM83.2826 43.2683C82.2584 42.9939 81.2056 43.6017 80.9311 44.626C80.6567 45.6502 81.2645 46.703 82.2888 46.9775L89.7071 48.9652C90.7313 49.2396 91.7841 48.6318 92.0586 47.6076C92.333 46.5833 91.7252 45.5305 90.7009 45.256L83.2826 43.2683Z"
fill="#7989FF"
/>
</g>
<path
fillRule="evenodd"
clipRule="evenodd"
d="M40.4004 71.8426C40.4004 57.2141 44.0575 53.5569 61.1242 53.5569H66.0004H70.8766C87.9432 53.5569 91.6004 57.2141 91.6004 71.8426V79.1569C91.6004 93.7855 87.9432 97.4426 70.8766 97.4426H61.1242C44.0575 97.4426 40.4004 93.7855 40.4004 79.1569V71.8426ZM78.8002 67.4995C78.8002 70.1504 76.6512 72.2995 74.0002 72.2995C71.3492 72.2995 69.2002 70.1504 69.2002 67.4995C69.2002 64.8485 71.3492 62.6995 74.0002 62.6995C76.6512 62.6995 78.8002 64.8485 78.8002 67.4995ZM60.7204 70.8597C60.2672 70.2553 59.5559 69.8997 58.8004 69.8997C58.045 69.8997 57.3337 70.2553 56.8804 70.8597L47.2804 83.6597C46.4851 84.72 46.7 86.2244 47.7604 87.0197C48.8208 87.8149 50.3251 87.6 51.1204 86.5397L58.8004 76.2997L66.4804 86.5397C66.8979 87.0962 67.5363 87.4443 68.2303 87.4936C68.9243 87.5429 69.6055 87.2887 70.0975 86.7967L74.8004 82.0938L79.5034 86.7967C80.4406 87.734 81.9602 87.734 82.8975 86.7967C83.8347 85.8595 83.8347 84.3399 82.8975 83.4026L76.4975 77.0026C75.5602 76.0653 74.0406 76.0653 73.1034 77.0026L68.6601 81.4459L60.7204 70.8597Z"
fill="#3C46FF"
/>
<defs>
<clipPath id="clip0_3605_64419">
<rect
width="56"
height="56"
fill="white"
transform="translate(0 26.9939) rotate(-15)"
/>
</clipPath>
<clipPath id="clip1_3605_64419">
<rect
width="64"
height="64"
fill="white"
transform="translate(69.5645 0.5) rotate(15)"
/>
</clipPath>
</defs>
</svg>
<h3>Add anything</h3>
<h4>Drop any file here to add it to the conversation</h4>
</div>
);
import { memo } from 'react';
import { useLocalize } from '~/hooks';
interface DragDropOverlayProps {
isActive: boolean;
}
const DragDropOverlay = memo(({ isActive }: DragDropOverlayProps) => {
const localize = useLocalize();
return (
<>
{/** Modal backdrop overlay */}
<div
className={`fixed inset-0 z-[9998] transition-opacity duration-200 ease-in-out ${
isActive
? 'pointer-events-auto visible opacity-100'
: 'pointer-events-none invisible opacity-0'
} `}
style={{
/** Semi-transparent black overlay that works in both themes */
backgroundColor: 'rgba(0, 0, 0, 0.4)',
willChange: 'opacity',
}}
/>
{/** Main content overlay */}
<div
className={`fixed inset-0 z-[9999] flex flex-col items-center justify-center gap-2 text-text-primary transition-all duration-200 ease-in-out ${
isActive
? 'pointer-events-auto visible opacity-100'
: 'pointer-events-none invisible opacity-0'
} `}
style={{
transform: isActive ? 'scale(1)' : 'scale(0.95)',
/** Use will-change to hint browser about upcoming changes */
willChange: 'opacity, transform',
}}
>
{/** Content area with subtle background */}
<div className="bg-surface-primary/95 flex flex-col items-center rounded-lg p-8 shadow-xl">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 132 108"
fill="none"
width="132"
height="108"
style={{
transform: isActive ? 'translateY(0)' : 'translateY(-10px)',
transition: 'transform 0.2s ease-in-out',
}}
>
<g clipPath="url(#clip0_3605_64419)">
<path
fillRule="evenodd"
clipRule="evenodd"
d="M25.2025 29.3514C10.778 33.2165 8.51524 37.1357 11.8281 49.4995L13.4846 55.6814C16.7975 68.0453 20.7166 70.308 35.1411 66.443L43.3837 64.2344C57.8082 60.3694 60.0709 56.4502 56.758 44.0864L55.1016 37.9044C51.7887 25.5406 47.8695 23.2778 33.445 27.1428L29.3237 28.2471L25.2025 29.3514ZM18.1944 42.7244C18.8572 41.5764 20.325 41.1831 21.4729 41.8459L27.3517 45.24C28.4996 45.9027 28.8929 47.3706 28.2301 48.5185L24.836 54.3972C24.1733 55.5451 22.7054 55.9384 21.5575 55.2757C20.4096 54.613 20.0163 53.1451 20.6791 51.9972L22.8732 48.1969L19.0729 46.0028C17.925 45.3401 17.5317 43.8723 18.1944 42.7244ZM29.4091 56.3843C29.066 55.104 29.8258 53.7879 31.1062 53.4449L40.3791 50.9602C41.6594 50.6172 42.9754 51.377 43.3184 52.6573C43.6615 53.9376 42.9017 55.2536 41.6214 55.5967L32.3485 58.0813C31.0682 58.4244 29.7522 57.6646 29.4091 56.3843Z"
fill="#AFC1FF"
/>
</g>
<g clipPath="url(#clip1_3605_64419)">
<path
fillRule="evenodd"
clipRule="evenodd"
d="M86.8124 13.4036C81.0973 11.8722 78.5673 13.2649 77.0144 19.0603L68.7322 49.97C67.1793 55.7656 68.5935 58.2151 74.4696 59.7895L97.4908 65.958C103.367 67.5326 105.816 66.1184 107.406 60.1848L115.393 30.379C115.536 29.8456 115.217 29.2959 114.681 29.16C113.478 28.8544 112.435 28.6195 111.542 28.4183C106.243 27.2253 106.22 27.2201 109.449 20.7159C109.73 20.1507 109.426 19.4638 108.816 19.3004L86.8124 13.4036ZM87.2582 28.4311C86.234 28.1567 85.1812 28.7645 84.9067 29.7888C84.6323 30.813 85.2401 31.8658 86.2644 32.1403L101.101 36.1158C102.125 36.3902 103.178 35.7824 103.453 34.7581C103.727 33.7339 103.119 32.681 102.095 32.4066L87.2582 28.4311ZM82.9189 37.2074C83.1934 36.1831 84.2462 35.5753 85.2704 35.8497L100.107 39.8252C101.131 40.0996 101.739 41.1524 101.465 42.1767C101.19 43.201 100.137 43.8088 99.1132 43.5343L84.2766 39.5589C83.2523 39.2844 82.6445 38.2316 82.9189 37.2074ZM83.2826 43.2683C82.2584 42.9939 81.2056 43.6017 80.9311 44.626C80.6567 45.6502 81.2645 46.703 82.2888 46.9775L89.7071 48.9652C90.7313 49.2396 91.7841 48.6318 92.0586 47.6076C92.333 46.5833 91.7252 45.5305 90.7009 45.256L83.2826 43.2683Z"
fill="#7989FF"
/>
</g>
<path
fillRule="evenodd"
clipRule="evenodd"
d="M40.4004 71.8426C40.4004 57.2141 44.0575 53.5569 61.1242 53.5569H66.0004H70.8766C87.9432 53.5569 91.6004 57.2141 91.6004 71.8426V79.1569C91.6004 93.7855 87.9432 97.4426 70.8766 97.4426H61.1242C44.0575 97.4426 40.4004 93.7855 40.4004 79.1569V71.8426ZM78.8002 67.4995C78.8002 70.1504 76.6512 72.2995 74.0002 72.2995C71.3492 72.2995 69.2002 70.1504 69.2002 67.4995C69.2002 64.8485 71.3492 62.6995 74.0002 62.6995C76.6512 62.6995 78.8002 64.8485 78.8002 67.4995ZM60.7204 70.8597C60.2672 70.2553 59.5559 69.8997 58.8004 69.8997C58.045 69.8997 57.3337 70.2553 56.8804 70.8597L47.2804 83.6597C46.4851 84.72 46.7 86.2244 47.7604 87.0197C48.8208 87.8149 50.3251 87.6 51.1204 86.5397L58.8004 76.2997L66.4804 86.5397C66.8979 87.0962 67.5363 87.4443 68.2303 87.4936C68.9243 87.5429 69.6055 87.2887 70.0975 86.7967L74.8004 82.0938L79.5034 86.7967C80.4406 87.734 81.9602 87.734 82.8975 86.7967C83.8347 85.8595 83.8347 84.3399 82.8975 83.4026L76.4975 77.0026C75.5602 76.0653 74.0406 76.0653 73.1034 77.0026L68.6601 81.4459L60.7204 70.8597Z"
fill="#3C46FF"
/>
<defs>
<clipPath id="clip0_3605_64419">
<rect
width="56"
height="56"
fill="white"
transform="translate(0 26.9939) rotate(-15)"
/>
</clipPath>
<clipPath id="clip1_3605_64419">
<rect
width="64"
height="64"
fill="white"
transform="translate(69.5645 0.5) rotate(15)"
/>
</clipPath>
</defs>
</svg>
<h3 className="mt-4 text-lg font-semibold">{localize('com_ui_upload_files')}</h3>
<h4 className="text-sm text-text-secondary">{localize('com_ui_drag_drop')}</h4>
</div>
</div>
</>
);
});
DragDropOverlay.displayName = 'DragDropOverlay';
export default DragDropOverlay;

View File

@@ -17,7 +17,8 @@ export default function DragDropWrapper({ children, className }: DragDropWrapper
return (
<div ref={drop} className={cn('relative flex h-full w-full', className)}>
{children}
{isActive && <DragDropOverlay />}
{/** Always render overlay to avoid mount/unmount overhead */}
<DragDropOverlay isActive={isActive} />
<DragDropModal
files={draggedFiles}
isVisible={showModal}

View File

@@ -3,22 +3,19 @@ import { MultiSelect, MCPIcon } from '@librechat/client';
import MCPServerStatusIcon from '~/components/MCP/MCPServerStatusIcon';
import MCPConfigDialog from '~/components/MCP/MCPConfigDialog';
import { useBadgeRowContext } from '~/Providers';
import { useMCPServerManager } from '~/hooks';
type MCPSelectProps = { conversationId?: string | null };
function MCPSelectContent({ conversationId }: MCPSelectProps) {
function MCPSelectContent() {
const { conversationId, mcpServerManager } = useBadgeRowContext();
const {
configuredServers,
mcpValues,
isPinned,
placeholderText,
batchToggleServers,
getServerStatusIconProps,
getConfigDialogProps,
isInitializing,
localize,
} = useMCPServerManager({ conversationId });
mcpValues,
isInitializing,
placeholderText,
configuredServers,
batchToggleServers,
getConfigDialogProps,
getServerStatusIconProps,
} = mcpServerManager;
const renderSelectedValues = useCallback(
(values: string[], placeholder?: string) => {
@@ -71,14 +68,6 @@ function MCPSelectContent({ conversationId }: MCPSelectProps) {
[getServerStatusIconProps, isInitializing],
);
if ((!mcpValues || mcpValues.length === 0) && !isPinned) {
return null;
}
if (!configuredServers || configuredServers.length === 0) {
return null;
}
const configDialogProps = getConfigDialogProps();
return (
@@ -103,10 +92,15 @@ function MCPSelectContent({ conversationId }: MCPSelectProps) {
);
}
function MCPSelect(props: MCPSelectProps) {
const { mcpServerNames } = useBadgeRowContext();
if ((mcpServerNames?.length ?? 0) === 0) return null;
return <MCPSelectContent {...props} />;
function MCPSelect() {
const { mcpServerManager } = useBadgeRowContext();
const { configuredServers } = mcpServerManager;
if (!configuredServers || configuredServers.length === 0) {
return null;
}
return <MCPSelectContent />;
}
export default memo(MCPSelect);
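With this refactor, useMCPServerManager is instantiated once in BadgeRowProvider and exposed through context, so MCPSelect, MCPSubMenu, and ToolsDropdown all share the same manager instead of each calling the hook with a conversationId. A minimal consumption sketch under that assumption (the helper below is hypothetical, not part of the PR):

// Badge components read the shared manager from context rather than creating their own.
import { useBadgeRowContext } from '~/Providers';

export function useConfiguredMCPServerCount(): number {
  const { mcpServerManager } = useBadgeRowContext();
  return mcpServerManager.configuredServers?.length ?? 0;
}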

View File

@@ -4,27 +4,27 @@ import { ChevronRight } from 'lucide-react';
import { PinIcon, MCPIcon } from '@librechat/client';
import MCPServerStatusIcon from '~/components/MCP/MCPServerStatusIcon';
import MCPConfigDialog from '~/components/MCP/MCPConfigDialog';
import { useMCPServerManager } from '~/hooks';
import { useBadgeRowContext } from '~/Providers';
import { cn } from '~/utils';
interface MCPSubMenuProps {
placeholder?: string;
conversationId?: string | null;
}
const MCPSubMenu = React.forwardRef<HTMLDivElement, MCPSubMenuProps>(
({ placeholder, conversationId, ...props }, ref) => {
({ placeholder, ...props }, ref) => {
const { mcpServerManager } = useBadgeRowContext();
const {
configuredServers,
mcpValues,
isPinned,
mcpValues,
setIsPinned,
isInitializing,
placeholderText,
configuredServers,
getConfigDialogProps,
toggleServerSelection,
getServerStatusIconProps,
getConfigDialogProps,
isInitializing,
} = useMCPServerManager({ conversationId });
} = mcpServerManager;
const menuStore = Ariakit.useMenuStore({
focusLoop: true,

View File

@@ -3,7 +3,7 @@ import { AutoSizer, List } from 'react-virtualized';
import { Spinner, useCombobox } from '@librechat/client';
import { useSetRecoilState, useRecoilValue } from 'recoil';
import { PermissionTypes, Permissions } from 'librechat-data-provider';
import type { TPromptGroup } from 'librechat-data-provider';
import type { TPromptGroup, AgentToolResources } from 'librechat-data-provider';
import type { PromptOption } from '~/common';
import { removeCharIfLast, detectVariables } from '~/utils';
import VariableDialog from '~/components/Prompts/Groups/VariableDialog';
@@ -51,7 +51,7 @@ function PromptsCommand({
}: {
index: number;
textAreaRef: React.MutableRefObject<HTMLTextAreaElement | null>;
submitPrompt: (textPrompt: string) => void;
submitPrompt: (textPrompt: string, toolResources?: AgentToolResources) => void;
}) {
const localize = useLocalize();
const hasAccess = useHasAccess({
@@ -95,7 +95,6 @@ function PromptsCommand({
if (!group) {
return;
}
const hasVariables = detectVariables(group.productionPrompt?.prompt ?? '');
if (hasVariables) {
if (e && e.key === 'Tab') {
@@ -105,7 +104,7 @@ function PromptsCommand({
setVariableDialogOpen(true);
return;
} else {
submitPrompt(group.productionPrompt?.prompt ?? '');
submitPrompt(group.productionPrompt?.prompt ?? '', group.productionPrompt?.tool_resources);
}
},
[setSearchValue, setOpen, setShowPromptsPopover, textAreaRef, promptsMap, submitPrompt],
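The signature change above threads a prompt's attached files into message submission: submitPrompt now accepts the production prompt's tool_resources as an optional second argument. Restated as a type sketch (the internals of useSubmitMessage are not shown in this diff):

// Assumed shape of the extended submitPrompt, matching the prop type above.
import type { AgentToolResources } from 'librechat-data-provider';

type SubmitPrompt = (textPrompt: string, toolResources?: AgentToolResources) => void;

// Call sites pass the production prompt's attachments straight through, e.g.
// submitPrompt(group.productionPrompt?.prompt ?? '', group.productionPrompt?.tool_resources);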

View File

@@ -30,8 +30,7 @@ const ToolsDropdown = ({ disabled }: ToolsDropdownProps) => {
artifacts,
fileSearch,
agentsConfig,
mcpServerNames,
conversationId,
mcpServerManager,
codeApiKeyForm,
codeInterpreter,
searchApiKeyForm,
@@ -287,15 +286,18 @@ const ToolsDropdown = ({ disabled }: ToolsDropdownProps) => {
});
}
if (mcpServerNames && mcpServerNames.length > 0) {
const { configuredServers } = mcpServerManager;
if (configuredServers && configuredServers.length > 0) {
dropdownItems.push({
hideOnClick: false,
render: (props) => (
<MCPSubMenu {...props} placeholder={mcpPlaceholder} conversationId={conversationId} />
),
render: (props) => <MCPSubMenu {...props} placeholder={mcpPlaceholder} />,
});
}
if (dropdownItems.length === 0) {
return null;
}
const menuTrigger = (
<TooltipAnchor
render={

View File

@@ -1,6 +1,5 @@
import { useMemo } from 'react';
import type { FC } from 'react';
import { useRecoilValue } from 'recoil';
import { TooltipAnchor } from '@librechat/client';
import { Menu, MenuButton, MenuItems } from '@headlessui/react';
import { BookmarkFilledIcon, BookmarkIcon } from '@radix-ui/react-icons';
@@ -9,7 +8,6 @@ import { useGetConversationTags } from '~/data-provider';
import BookmarkNavItems from './BookmarkNavItems';
import { useLocalize } from '~/hooks';
import { cn } from '~/utils';
import store from '~/store';
type BookmarkNavProps = {
tags: string[];
@@ -20,7 +18,6 @@ type BookmarkNavProps = {
const BookmarkNav: FC<BookmarkNavProps> = ({ tags, setTags, isSmallScreen }: BookmarkNavProps) => {
const localize = useLocalize();
const { data } = useGetConversationTags();
const conversation = useRecoilValue(store.conversationByIndex(0));
const label = useMemo(
() => (tags.length > 0 ? tags.join(', ') : localize('com_ui_bookmarks')),
[tags, localize],
@@ -56,11 +53,9 @@ const BookmarkNav: FC<BookmarkNavProps> = ({ tags, setTags, isSmallScreen }: Boo
anchor="bottom"
className="absolute left-0 top-full z-[100] mt-1 w-60 translate-y-0 overflow-hidden rounded-lg bg-surface-secondary p-1.5 shadow-lg outline-none"
>
{data && conversation && (
{data && (
<BookmarkContext.Provider value={{ bookmarks: data.filter((tag) => tag.count > 0) }}>
<BookmarkNavItems
// Currently selected conversation
conversation={conversation}
// List of selected tags(string)
tags={tags}
// When a user selects a tag, this `setTags` function is called to refetch the list of conversations for the selected tag

View File

@@ -1,25 +1,16 @@
import { useEffect, useState, type FC } from 'react';
import { type FC } from 'react';
import { CrossCircledIcon } from '@radix-ui/react-icons';
import type { TConversation } from 'librechat-data-provider';
import { useBookmarkContext } from '~/Providers/BookmarkContext';
import { BookmarkItems, BookmarkItem } from '~/components/Bookmarks';
import { useLocalize } from '~/hooks';
const BookmarkNavItems: FC<{
conversation: TConversation;
tags: string[];
setTags: (tags: string[]) => void;
}> = ({ conversation, tags = [], setTags }) => {
const [currentConversation, setCurrentConversation] = useState<TConversation>();
}> = ({ tags = [], setTags }) => {
const { bookmarks } = useBookmarkContext();
const localize = useLocalize();
useEffect(() => {
if (!currentConversation) {
setCurrentConversation(conversation);
}
}, [conversation, currentConversation]);
const getUpdatedSelected = (tag: string) => {
if (tags.some((selectedTag) => selectedTag === tag)) {
return tags.filter((selectedTag) => selectedTag !== tag);

View File

@@ -0,0 +1,166 @@
import * as Ariakit from '@ariakit/react';
import React, { useRef, useState, useMemo, useCallback } from 'react';
import { EToolResources, defaultAgentCapabilities } from 'librechat-data-provider';
import { FileSearch, ImageUpIcon, TerminalSquareIcon, FileType2Icon } from 'lucide-react';
import { FileUpload, DropdownPopup, AttachmentIcon, SharePointIcon } from '@librechat/client';
import {
useLocalize,
useAgentCapabilities,
useGetAgentsConfig,
useSharePointFileHandling,
} from '~/hooks';
import { SharePointPickerDialog } from '~/components/SharePoint';
import { useGetStartupConfig } from '~/data-provider';
import { MenuItemProps } from '~/common';
interface AttachFileButtonProps {
handleFileChange?: (event: React.ChangeEvent<HTMLInputElement>, toolResource?: string) => void;
disabled?: boolean | null;
}
const AttachFileButton = ({ handleFileChange, disabled }: AttachFileButtonProps) => {
const localize = useLocalize();
const isUploadDisabled = disabled ?? false;
const inputRef = useRef<HTMLInputElement>(null);
const [isPopoverActive, setIsPopoverActive] = useState(false);
const [toolResource, setToolResource] = useState<EToolResources | undefined>();
const [isSharePointDialogOpen, setIsSharePointDialogOpen] = useState(false);
const { handleSharePointFiles, isProcessing, downloadProgress } = useSharePointFileHandling({
toolResource,
});
const { data: startupConfig } = useGetStartupConfig();
const sharePointEnabled = startupConfig?.sharePointFilePickerEnabled;
const { agentsConfig } = useGetAgentsConfig();
const capabilities = useAgentCapabilities(agentsConfig?.capabilities ?? defaultAgentCapabilities);
const handleUploadClick = useCallback((isImage?: boolean) => {
if (!inputRef.current) {
return;
}
inputRef.current.value = '';
inputRef.current.accept = isImage === true ? 'image/*' : '';
inputRef.current.click();
inputRef.current.accept = '';
}, []);
const dropdownItems = useMemo(() => {
const createMenuItems = (onAction: (isImage?: boolean) => void) => {
const items: MenuItemProps[] = [
{
label: localize('com_ui_upload_image_input'),
onClick: () => {
setToolResource(EToolResources.image_edit);
onAction(true);
},
icon: <ImageUpIcon className="icon-md" />,
},
];
if (capabilities.ocrEnabled) {
items.push({
label: localize('com_ui_upload_ocr_text'),
onClick: () => {
setToolResource(EToolResources.ocr);
onAction();
},
icon: <FileType2Icon className="icon-md" />,
});
}
if (capabilities.fileSearchEnabled) {
items.push({
label: localize('com_ui_upload_file_search'),
onClick: () => {
setToolResource(EToolResources.file_search);
onAction();
},
icon: <FileSearch className="icon-md" />,
});
}
if (capabilities.codeEnabled) {
items.push({
label: localize('com_ui_upload_code_files'),
onClick: () => {
setToolResource(EToolResources.execute_code);
onAction();
},
icon: <TerminalSquareIcon className="icon-md" />,
});
}
return items;
};
const localItems = createMenuItems(handleUploadClick);
if (sharePointEnabled) {
const sharePointItems = createMenuItems(() => {
setIsSharePointDialogOpen(true);
});
localItems.push({
label: localize('com_files_upload_sharepoint'),
onClick: () => {},
icon: <SharePointIcon className="icon-md" />,
subItems: sharePointItems,
});
return localItems;
}
return localItems;
}, [capabilities, localize, handleUploadClick, sharePointEnabled, setIsSharePointDialogOpen]);
const menuTrigger = (
<Ariakit.MenuButton
disabled={isUploadDisabled}
id="attach-file-button-menu"
aria-label="Attach File Options"
className="flex items-center gap-2 rounded-md border border-border-medium bg-surface-primary px-3 py-2 text-sm font-medium text-text-primary transition-colors hover:bg-surface-hover focus:outline-none focus:ring-2 focus:ring-primary focus:ring-opacity-50"
>
<AttachmentIcon className="h-4 w-4" />
{localize('com_ui_attach_files')}
</Ariakit.MenuButton>
);
const handleSharePointFilesSelected = async (sharePointFiles: any[]) => {
try {
await handleSharePointFiles(sharePointFiles);
setIsSharePointDialogOpen(false);
} catch (error) {
console.error('SharePoint file processing error:', error);
}
};
return (
<>
<FileUpload
ref={inputRef}
handleFileChange={(e) => {
handleFileChange?.(e, toolResource);
}}
>
<DropdownPopup
menuId="attach-file-button"
className="overflow-visible"
isOpen={isPopoverActive}
setIsOpen={setIsPopoverActive}
modal={true}
unmountOnHide={true}
trigger={menuTrigger}
items={dropdownItems}
iconClassName="mr-0"
/>
</FileUpload>
<SharePointPickerDialog
isOpen={isSharePointDialogOpen}
onOpenChange={setIsSharePointDialogOpen}
onFilesSelected={handleSharePointFilesSelected}
isDownloading={isProcessing}
downloadProgress={downloadProgress}
/>
</>
);
};
export default React.memo(AttachFileButton);

View File

@@ -7,7 +7,7 @@ import {
DropdownMenuContent,
DropdownMenuTrigger,
} from '@librechat/client';
import { PermissionBits } from 'librechat-data-provider';
import { PermissionBits, ResourceType } from 'librechat-data-provider';
import type { TPromptGroup } from 'librechat-data-provider';
import { useLocalize, useSubmitMessage, useCustomLink, useResourcePermissions } from '~/hooks';
import VariableDialog from '~/components/Prompts/Groups/VariableDialog';
@@ -34,9 +34,18 @@ function ChatGroupItem({
);
// Check permissions for the promptGroup
const { hasPermission } = useResourcePermissions('promptGroup', group._id || '');
const { hasPermission } = useResourcePermissions(ResourceType.PROMPTGROUP, group._id || '');
const canEdit = hasPermission(PermissionBits.EDIT);
const hasFiles = useMemo(() => {
const toolResources = group.productionPrompt?.tool_resources;
if (!toolResources) return false;
return Object.values(toolResources).some(
(resource) => resource?.file_ids && resource.file_ids.length > 0,
);
}, [group.productionPrompt?.tool_resources]);
const onCardClick: React.MouseEventHandler<HTMLButtonElement> = () => {
const text = group.productionPrompt?.prompt;
if (!text?.trim()) {
@@ -48,7 +57,7 @@ function ChatGroupItem({
return;
}
submitPrompt(text);
submitPrompt(text, group.productionPrompt?.tool_resources);
};
return (
@@ -57,6 +66,7 @@ function ChatGroupItem({
name={group.name}
category={group.category ?? ''}
onClick={onCardClick}
hasFiles={hasFiles}
snippet={
typeof group.oneliner === 'string' && group.oneliner.length > 0
? group.oneliner
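The hasFiles memo above checks every tool-resource bucket for a non-empty file_ids array; the same predicate reappears in PromptDetails later in this diff. As a standalone sketch of what it evaluates (a hypothetical helper for illustration, not something this PR adds):

// Hypothetical shared predicate; both call sites compute this inline.
import type { AgentToolResources } from 'librechat-data-provider';

export function hasAttachedFiles(toolResources?: AgentToolResources): boolean {
  if (!toolResources) {
    return false;
  }
  return Object.values(toolResources).some(
    (resource) => (resource?.file_ids?.length ?? 0) > 0,
  );
}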

View File

@@ -3,11 +3,12 @@ import { useNavigate } from 'react-router-dom';
import { Button, TextareaAutosize, Input } from '@librechat/client';
import { useForm, Controller, FormProvider } from 'react-hook-form';
import { LocalStorageKeys, PermissionTypes, Permissions } from 'librechat-data-provider';
import type { AgentToolResources } from 'librechat-data-provider';
import PromptVariablesAndFiles from '~/components/Prompts/PromptVariablesAndFiles';
import CategorySelector from '~/components/Prompts/Groups/CategorySelector';
import { useLocalize, useHasAccess, usePromptFileHandling } from '~/hooks';
import VariablesDropdown from '~/components/Prompts/VariablesDropdown';
import PromptVariables from '~/components/Prompts/PromptVariables';
import Description from '~/components/Prompts/Description';
import { useLocalize, useHasAccess } from '~/hooks';
import Command from '~/components/Prompts/Command';
import { useCreatePrompt } from '~/data-provider';
import { cn } from '~/utils';
@@ -19,6 +20,7 @@ type CreateFormValues = {
category: string;
oneliner?: string;
command?: string;
tool_resources?: AgentToolResources;
};
const defaultPrompt: CreateFormValues = {
@@ -37,6 +39,14 @@ const CreatePromptForm = ({
}) => {
const localize = useLocalize();
const navigate = useNavigate();
const {
promptFiles: files,
setFiles,
handleFileChange,
getToolResources,
} = usePromptFileHandling();
const hasAccess = useHasAccess({
permissionType: PermissionTypes.PROMPTS,
permission: Permissions.CREATE,
@@ -88,8 +98,15 @@ const CreatePromptForm = ({
if ((command?.length ?? 0) > 0) {
groupData.command = command;
}
const promptData = { ...rest };
const toolResources = getToolResources();
if (toolResources) {
promptData.tool_resources = toolResources;
}
createPromptMutation.mutate({
prompt: rest,
prompt: promptData,
group: groupData,
});
};
@@ -161,7 +178,13 @@ const CreatePromptForm = ({
/>
</div>
</div>
<PromptVariables promptText={promptText} />
<PromptVariablesAndFiles
promptText={promptText}
files={files}
onFilesChange={setFiles}
handleFileChange={handleFileChange}
disabled={isSubmitting}
/>
<Description
onValueChange={(value) => methods.setValue('oneliner', value)}
tabIndex={0}
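CreatePromptForm (and PromptForm further down) relies on the new usePromptFileHandling hook for file state, the input change handler, and getToolResources, which folds uploaded file ids into the prompt payload. The hook's source is not part of this compare view; the interface below is a sketch inferred from the destructuring at its call sites, and any details beyond those names are assumptions.

// Assumed interface of usePromptFileHandling (sketch only; the real hook may differ).
import type { ChangeEvent } from 'react';
import type { AgentToolResources } from 'librechat-data-provider';
import type { ExtendedFile } from '~/common';

interface UsePromptFileHandlingResult {
  promptFiles: ExtendedFile[];
  setFiles: (files: ExtendedFile[]) => void;
  handleFileChange: (event: ChangeEvent<HTMLInputElement>, toolResource?: string) => void;
  /** Groups uploaded file_ids by tool resource, e.g. { file_search: { file_ids: [...] } }, or undefined when empty. */
  getToolResources: () => AgentToolResources | undefined;
  /** Seeds local file state from an existing prompt's tool_resources. */
  loadFromToolResources: (toolResources?: AgentToolResources) => void;
}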

View File

@@ -93,7 +93,11 @@ function DashGroupItemComponent({ group, instanceProjectId }: DashGroupItemProps
>
<div className="flex w-full items-center justify-between">
<div className="flex items-center gap-2 truncate pr-2">
<CategoryIcon category={group.category ?? ''} className="icon-lg" aria-hidden="true" />
<CategoryIcon
category={group.category ?? ''}
className="icon-lg flex-shrink-0"
aria-hidden="true"
/>
<Label className="text-md cursor-pointer truncate font-semibold text-text-primary">
{group.name}

View File

@@ -1,5 +1,6 @@
import React from 'react';
import { Label } from '@librechat/client';
import { Paperclip } from 'lucide-react';
import CategoryIcon from '~/components/Prompts/Groups/CategoryIcon';
export default function ListCard({
@@ -8,12 +9,14 @@ export default function ListCard({
snippet,
onClick,
children,
hasFiles,
}: {
category: string;
name: string;
snippet: string;
onClick?: React.MouseEventHandler<HTMLDivElement | HTMLButtonElement>;
children?: React.ReactNode;
hasFiles?: boolean;
}) {
const handleKeyDown = (event: React.KeyboardEvent<HTMLDivElement | HTMLButtonElement>) => {
if (event.key === 'Enter' || event.key === ' ') {
@@ -35,7 +38,7 @@ export default function ListCard({
>
<div className="flex w-full justify-between gap-2">
<div className="flex flex-row gap-2">
<CategoryIcon category={category} className="icon-md" aria-hidden="true" />
<CategoryIcon category={category} className="icon-md flex-shrink-0" aria-hidden="true" />
<Label
id={`card-title-${name}`}
className="break-word select-none text-balance text-sm font-semibold text-text-primary"
@@ -43,6 +46,7 @@ export default function ListCard({
>
{name}
</Label>
{hasFiles && <Paperclip className="icon-xs mt-1 flex-shrink-0 text-text-secondary" />}
</div>
<div>{children}</div>
</div>

View File

@@ -133,7 +133,7 @@ export default function VariableForm({
text = text.replace(regex, value);
});
submitPrompt(text);
submitPrompt(text, group.productionPrompt?.tool_resources);
onClose();
};

View File

@@ -7,9 +7,10 @@ import supersub from 'remark-supersub';
import { Label } from '@librechat/client';
import rehypeHighlight from 'rehype-highlight';
import { replaceSpecialVars } from 'librechat-data-provider';
import type { TPromptGroup } from 'librechat-data-provider';
import type { TPromptGroup, AgentToolResources } from 'librechat-data-provider';
import { codeNoExecution } from '~/components/Chat/Messages/Content/MarkdownComponents';
import { useLocalize, useAuthContext } from '~/hooks';
import PromptFilesPreview from './PromptFilesPreview';
import CategoryIcon from './Groups/CategoryIcon';
import PromptVariables from './PromptVariables';
import { PromptVariableGfm } from './Markdown';
@@ -25,6 +26,17 @@ const PromptDetails = ({ group }: { group?: TPromptGroup }) => {
return replaceSpecialVars({ text: initialText, user });
}, [group?.productionPrompt?.prompt, user]);
const toolResources = useMemo((): AgentToolResources | undefined => {
return group?.productionPrompt?.tool_resources;
}, [group?.productionPrompt?.tool_resources]);
const hasFiles = useMemo(() => {
if (!toolResources) return false;
return Object.values(toolResources).some(
(resource) => resource?.file_ids && resource.file_ids.length > 0,
);
}, [toolResources]);
if (!group) {
return null;
}
@@ -72,6 +84,7 @@ const PromptDetails = ({ group }: { group?: TPromptGroup }) => {
</div>
</div>
<PromptVariables promptText={mainText} showInfo={false} />
{hasFiles && toolResources && <PromptFilesPreview toolResources={toolResources} />}
<Description initialValue={group.oneliner} disabled={true} />
<Command initialValue={group.command} disabled={true} />
</div>

View File

@@ -0,0 +1,141 @@
import { useEffect } from 'react';
import { useToastContext } from '@librechat/client';
import type { ExtendedFile } from '~/common';
import FileContainer from '~/components/Chat/Input/Files/FileContainer';
import Image from '~/components/Chat/Input/Files/Image';
import { useLocalize } from '~/hooks';
export default function PromptFile({
files: _files,
setFiles,
abortUpload,
setFilesLoading,
fileFilter,
isRTL = false,
Wrapper,
}: {
files: Map<string, ExtendedFile> | undefined;
abortUpload?: () => void;
setFiles: React.Dispatch<React.SetStateAction<Map<string, ExtendedFile>>>;
setFilesLoading: React.Dispatch<React.SetStateAction<boolean>>;
fileFilter?: (file: ExtendedFile) => boolean;
isRTL?: boolean;
Wrapper?: React.FC<{ children: React.ReactNode }>;
}) {
const localize = useLocalize();
const { showToast } = useToastContext();
const files = Array.from(_files?.values() ?? []).filter((file) =>
fileFilter ? fileFilter(file) : true,
);
useEffect(() => {
if (files.length === 0) {
setFilesLoading(false);
return;
}
if (files.some((file) => file.progress < 1)) {
setFilesLoading(true);
return;
}
if (files.every((file) => file.progress === 1)) {
setFilesLoading(false);
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [files]);
if (files.length === 0) {
return null;
}
const renderFiles = () => {
const rowStyle = isRTL
? {
display: 'flex',
flexDirection: 'row-reverse',
flexWrap: 'wrap',
gap: '4px',
width: '100%',
maxWidth: '100%',
}
: {
display: 'flex',
flexWrap: 'wrap',
gap: '4px',
width: '100%',
maxWidth: '100%',
};
return (
<div style={rowStyle as React.CSSProperties}>
{files
.reduce(
(acc, current) => {
if (!acc.map.has(current.file_id)) {
acc.map.set(current.file_id, true);
acc.uniqueFiles.push(current);
}
return acc;
},
{ map: new Map(), uniqueFiles: [] as ExtendedFile[] },
)
.uniqueFiles.map((file: ExtendedFile, index: number) => {
const handleDelete = () => {
showToast({
message: localize('com_ui_deleting_file'),
status: 'info',
});
if (abortUpload && file.progress < 1) {
abortUpload();
}
if (file.preview && file.preview.startsWith('blob:')) {
URL.revokeObjectURL(file.preview);
}
setFiles((currentFiles) => {
const updatedFiles = new Map(currentFiles);
updatedFiles.delete(file.file_id);
if (file.temp_file_id) {
updatedFiles.delete(file.temp_file_id);
}
return updatedFiles;
});
};
const isImage = file.type?.startsWith('image') ?? false;
return (
<div
key={index}
style={{
flexBasis: '70px',
flexGrow: 0,
flexShrink: 0,
}}
>
{isImage ? (
<Image
url={file.preview ?? file.filepath}
onDelete={handleDelete}
progress={file.progress}
source={file.source}
/>
) : (
<FileContainer file={file} onDelete={handleDelete} />
)}
</div>
);
})}
</div>
);
};
if (Wrapper) {
return <Wrapper>{renderFiles()}</Wrapper>;
}
return renderFiles();
}

View File

@@ -0,0 +1,80 @@
import { useMemo } from 'react';
import { FileText } from 'lucide-react';
import ReactMarkdown from 'react-markdown';
import { Separator } from '@librechat/client';
import type { ExtendedFile } from '~/common';
import AttachFileButton from '~/components/Prompts/Files/AttachFileButton';
import PromptFile from '~/components/Prompts/PromptFile';
import { useLocalize } from '~/hooks';
const PromptFiles = ({
files,
onFilesChange,
handleFileChange,
disabled,
}: {
files: ExtendedFile[];
onFilesChange?: (files: ExtendedFile[]) => void;
handleFileChange?: (event: React.ChangeEvent<HTMLInputElement>, toolResource?: string) => void;
disabled?: boolean;
}) => {
const localize = useLocalize();
const filesMap = useMemo(() => {
const map = new Map<string, ExtendedFile>();
files.forEach((file) => {
const key = file.file_id || file.temp_file_id || '';
if (key) {
map.set(key, file);
}
});
return map;
}, [files]);
return (
<div className="flex h-full flex-col rounded-xl border border-border-light bg-transparent p-4 shadow-md">
<h3 className="flex items-center gap-2 py-2 text-lg font-semibold text-text-primary">
<FileText className="icon-sm" aria-hidden="true" />
{localize('com_ui_files')}
</h3>
<div className="flex flex-1 flex-col space-y-4">
<div className="flex-1">
{!files.length && (
<>
<div className="text-sm text-text-secondary">
<ReactMarkdown className="markdown prose dark:prose-invert">
{localize('com_ui_files_info')}
</ReactMarkdown>
</div>
<Separator className="my-3 text-text-primary" />
</>
)}
{files.length > 0 && (
<div className="mb-3">
<PromptFile
files={filesMap}
setFiles={(newMapOrUpdater) => {
const newMap =
typeof newMapOrUpdater === 'function'
? newMapOrUpdater(filesMap)
: newMapOrUpdater;
const newFiles = Array.from(newMap.values()) as ExtendedFile[];
onFilesChange?.(newFiles);
}}
setFilesLoading={() => {}}
Wrapper={({ children }) => <div className="flex flex-wrap gap-2">{children}</div>}
/>
</div>
)}
</div>
<div className="flex justify-start text-text-secondary">
<AttachFileButton handleFileChange={handleFileChange} disabled={disabled} />
</div>
</div>
</div>
);
};
export default PromptFiles;

View File

@@ -0,0 +1,121 @@
import React, { useMemo } from 'react';
import { Paperclip, FileText, Image, FileType } from 'lucide-react';
import type { AgentToolResources } from 'librechat-data-provider';
import { useGetFiles } from '~/data-provider';
import { useLocalize } from '~/hooks';
interface PromptFilesPreviewProps {
toolResources: AgentToolResources;
}
const PromptFilesPreview: React.FC<PromptFilesPreviewProps> = ({ toolResources }) => {
const localize = useLocalize();
const { data: allFiles } = useGetFiles();
const fileMap = useMemo(() => {
const map: Record<string, any> = {};
if (Array.isArray(allFiles)) {
allFiles.forEach((file) => {
if (file.file_id) {
map[file.file_id] = file;
}
});
}
return map;
}, [allFiles]);
const attachedFiles = useMemo(() => {
const files: Array<{ file: any; toolResource: string }> = [];
Object.entries(toolResources).forEach(([toolResource, resource]) => {
if (resource?.file_ids) {
resource.file_ids.forEach((fileId) => {
const dbFile = fileMap[fileId];
if (dbFile) {
files.push({ file: dbFile, toolResource });
}
});
}
});
return files;
}, [toolResources, fileMap]);
const getFileIcon = (type: string) => {
if (type?.startsWith('image/')) {
return <Image className="h-4 w-4" />;
}
if (type?.includes('text') || type?.includes('document')) {
return <FileText className="h-4 w-4" />;
}
return <FileType className="h-4 w-4" />;
};
const getToolResourceLabel = (toolResource: string) => {
if (toolResource === 'file_search') {
return localize('com_ui_upload_file_search');
}
if (toolResource === 'execute_code') {
return localize('com_ui_upload_code_files');
}
if (toolResource === 'ocr') {
return localize('com_ui_upload_ocr_text');
}
if (toolResource === 'image_edit') {
return localize('com_ui_upload_image_input');
}
return toolResource;
};
if (attachedFiles.length === 0) {
return null;
}
return (
<div>
<h2 className="flex items-center justify-between rounded-t-lg border border-border-light py-2 pl-4 text-base font-semibold text-text-primary">
<div className="flex items-center gap-2">
<Paperclip className="h-4 w-4" />
{localize('com_ui_files')} ({attachedFiles.length})
</div>
</h2>
<div className="rounded-b-lg border border-border-light p-4">
<div className="space-y-3">
{attachedFiles.map(({ file, toolResource }, index) => (
<div
key={`${file.file_id}-${index}`}
className="flex items-center justify-between rounded-lg border border-border-light p-3 transition-colors hover:bg-surface-tertiary"
>
<div className="flex items-center gap-3">
<div className="flex h-8 w-8 items-center justify-center rounded-lg bg-surface-secondary text-text-secondary">
{getFileIcon(file.type)}
</div>
<div className="flex flex-col">
<span className="text-sm font-medium text-text-primary" title={file.filename}>
{file.filename}
</span>
<div className="flex items-center gap-2 text-xs text-text-secondary">
<span>{getToolResourceLabel(toolResource)}</span>
{file.bytes && (
<>
<span>•</span>
<span>{(file.bytes / 1024).toFixed(1)} KB</span>
</>
)}
</div>
</div>
</div>
{file.type?.startsWith('image/') && file.width && file.height && (
<div className="text-xs text-text-secondary">
{file.width} × {file.height}
</div>
)}
</div>
))}
</div>
</div>
</div>
);
};
export default PromptFilesPreview;
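For reference, a minimal sketch of the toolResources prop this preview iterates over; the keys and the file_ids field come from the component above, while the concrete ids and the relative import path are hypothetical:

import React from 'react';
import type { AgentToolResources } from 'librechat-data-provider';
import PromptFilesPreview from './PromptFilesPreview'; // assumed co-located path

// Hypothetical file ids; each key is a tool resource the preview knows how to label.
const exampleToolResources: AgentToolResources = {
  file_search: { file_ids: ['file-abc123'] },
  execute_code: { file_ids: ['file-def456'] },
};

export function ExamplePreview() {
  // Renders one row per referenced file, labeled by its tool resource.
  return <PromptFilesPreview toolResources={exampleToolResources} />;
}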

View File

@@ -12,7 +12,13 @@ import {
PermissionBits,
PermissionTypes,
} from 'librechat-data-provider';
import type { TCreatePrompt, TPrompt, TPromptGroup } from 'librechat-data-provider';
import type {
TCreatePrompt,
TPrompt,
TPromptGroup,
AgentToolResources,
} from 'librechat-data-provider';
import type { ExtendedFile } from '~/common';
import {
useGetPrompts,
useGetPromptGroup,
@@ -20,11 +26,11 @@ import {
useUpdatePromptGroup,
useMakePromptProduction,
} from '~/data-provider';
import { useResourcePermissions, useHasAccess, useLocalize } from '~/hooks';
import { useResourcePermissions, useHasAccess, useLocalize, usePromptFileHandling } from '~/hooks';
import PromptVariablesAndFiles from './PromptVariablesAndFiles';
import CategorySelector from './Groups/CategorySelector';
import { usePromptGroupsContext } from '~/Providers';
import NoPromptGroup from './Groups/NoPromptGroup';
import PromptVariables from './PromptVariables';
import { cn, findPromptGroup } from '~/utils';
import PromptVersions from './PromptVersions';
import { PromptsEditorMode } from '~/common';
@@ -119,7 +125,12 @@ const RightPanel = React.memo(
makeProductionMutation.mutate({
id: promptVersionId,
groupId,
productionPrompt: { prompt },
productionPrompt: {
prompt,
...(selectedPrompt.tool_resources && {
tool_resources: selectedPrompt.tool_resources,
}),
},
});
}}
disabled={
@@ -179,6 +190,21 @@ const PromptForm = () => {
const [showSidePanel, setShowSidePanel] = useState(false);
const sidePanelWidth = '320px';
const {
loadFromToolResources,
getToolResources,
promptFiles: hookPromptFiles,
handleFileChange,
setFiles,
} = usePromptFileHandling({
onFileChange: (updatedFiles) => {
if (canEdit && selectedPrompt) {
const currentPromptText = getValues('prompt');
onSave(currentPromptText, updatedFiles);
}
},
});
const { data: group, isLoading: isLoadingGroup } = useGetPromptGroup(promptId);
const { data: prompts = [], isLoading: isLoadingPrompts } = useGetPrompts(
{ groupId: promptId },
@@ -200,7 +226,7 @@ const PromptForm = () => {
category: group ? group.category : '',
},
});
const { handleSubmit, setValue, reset, watch } = methods;
const { handleSubmit, setValue, reset, watch, getValues } = methods;
const promptText = watch('prompt');
const selectedPrompt = useMemo(
@@ -237,7 +263,10 @@ const PromptForm = () => {
makeProductionMutation.mutate({
id: data.prompt._id,
groupId: data.prompt.groupId,
productionPrompt: { prompt: data.prompt.prompt },
productionPrompt: {
prompt: data.prompt.prompt,
...(data.prompt.tool_resources && { tool_resources: data.prompt.tool_resources }),
},
});
}
@@ -249,8 +278,30 @@ const PromptForm = () => {
},
});
const getToolResourcesFromFiles = useCallback((files: ExtendedFile[]) => {
if (files.length === 0) {
return undefined;
}
const toolResources: AgentToolResources = {};
files.forEach((file) => {
if (!file.file_id || !file.tool_resource) return; // Skip files that haven't been uploaded yet
if (!toolResources[file.tool_resource]) {
toolResources[file.tool_resource] = { file_ids: [] };
}
if (!toolResources[file.tool_resource]!.file_ids!.includes(file.file_id)) {
toolResources[file.tool_resource]!.file_ids!.push(file.file_id);
}
});
return Object.keys(toolResources).length > 0 ? toolResources : undefined;
}, []);
const onSave = useCallback(
(value: string) => {
(value: string, updatedFiles?: ExtendedFile[]) => {
if (!canEdit) {
return;
}
@@ -268,22 +319,36 @@ const PromptForm = () => {
return;
}
const toolResources = updatedFiles
? getToolResourcesFromFiles(updatedFiles)
: getToolResources();
const tempPrompt: TCreatePrompt = {
prompt: {
type: selectedPrompt.type ?? 'text',
groupId: groupId,
prompt: value,
...(toolResources && { tool_resources: toolResources }),
},
};
if (value === selectedPrompt.prompt) {
const promptTextChanged = value !== selectedPrompt.prompt;
const toolResourcesChanged =
JSON.stringify(toolResources) !== JSON.stringify(selectedPrompt.tool_resources);
if (!promptTextChanged && !toolResourcesChanged) {
return;
}
// We're adding to an existing group, so use the addPromptToGroup mutation
addPromptToGroupMutation.mutate({ ...tempPrompt, groupId });
},
[selectedPrompt, group, addPromptToGroupMutation, canEdit],
[
selectedPrompt,
group,
addPromptToGroupMutation,
canEdit,
getToolResources,
getToolResourcesFromFiles,
],
);
const handleLoadingComplete = useCallback(() => {
@@ -307,7 +372,13 @@ const PromptForm = () => {
useEffect(() => {
setValue('prompt', selectedPrompt ? selectedPrompt.prompt : '', { shouldDirty: false });
setValue('category', group ? group.category : '', { shouldDirty: false });
}, [selectedPrompt, group, setValue]);
if (selectedPrompt?.tool_resources) {
loadFromToolResources(selectedPrompt.tool_resources);
} else {
loadFromToolResources(undefined);
}
}, [selectedPrompt, group, setValue, loadFromToolResources]);
useEffect(() => {
const handleResize = () => {
@@ -447,7 +518,19 @@ const PromptForm = () => {
isEditing={isEditing}
setIsEditing={(value) => canEdit && setIsEditing(value)}
/>
<PromptVariables promptText={promptText} />
<PromptVariablesAndFiles
promptText={promptText}
files={hookPromptFiles}
onFilesChange={(files) => {
setFiles(files);
if (canEdit && selectedPrompt) {
const currentPromptText = getValues('prompt');
onSave(currentPromptText, files);
}
}}
handleFileChange={handleFileChange}
disabled={!canEdit}
/>
<Description
initialValue={group.oneliner ?? ''}
onValueChange={canEdit ? handleUpdateOneliner : undefined}

View File

@@ -0,0 +1,43 @@
import React from 'react';
import type { ExtendedFile } from '~/common';
import PromptVariables from './PromptVariables';
import PromptFiles from './PromptFiles';
interface PromptVariablesAndFilesProps {
promptText: string;
files?: ExtendedFile[];
onFilesChange?: (files: ExtendedFile[]) => void;
handleFileChange?: (event: React.ChangeEvent<HTMLInputElement>, toolResource?: string) => void;
disabled?: boolean;
showVariablesInfo?: boolean;
}
const PromptVariablesAndFiles: React.FC<PromptVariablesAndFilesProps> = ({
promptText,
files = [],
onFilesChange,
handleFileChange,
disabled,
showVariablesInfo = true,
}) => {
return (
<div className="grid grid-cols-1 gap-4 lg:grid-cols-2 lg:items-stretch">
{/* Variables Section */}
<div className="w-full">
<PromptVariables promptText={promptText} showInfo={showVariablesInfo} />
</div>
{/* Files Section */}
<div className="w-full">
<PromptFiles
files={files}
onFilesChange={onFilesChange}
handleFileChange={handleFileChange}
disabled={disabled}
/>
</div>
</div>
);
};
export default PromptVariablesAndFiles;

View File

@@ -8,3 +8,5 @@ export { default as DashGroupItem } from './Groups/DashGroupItem';
export { default as EmptyPromptPreview } from './EmptyPromptPreview';
export { default as PromptSidePanel } from './Groups/GroupSidePanel';
export { default as CreatePromptForm } from './Groups/CreatePromptForm';
export { default as PromptVariablesAndFiles } from './PromptVariablesAndFiles';
export { default as PromptFiles } from './PromptFiles';

View File

@@ -47,11 +47,7 @@ export default function AgentPanel() {
const { onSelect: onSelectAgent } = useSelectAgent();
const modelsQuery = useGetModelsQuery();
// Basic agent query for initial permission check
const basicAgentQuery = useGetAgentByIdQuery(current_agent_id ?? '', {
enabled: !!(current_agent_id ?? '') && current_agent_id !== Constants.EPHEMERAL_AGENT_ID,
});
const basicAgentQuery = useGetAgentByIdQuery(current_agent_id);
const { hasPermission, isLoading: permissionsLoading } = useResourcePermissions(
ResourceType.AGENT,

View File

@@ -26,10 +26,6 @@ function AgentPanelSwitchWithContext() {
}
}, [setCurrentAgentId, conversation?.agent_id]);
if (!conversation?.endpoint) {
return null;
}
if (activePanel === Panel.actions) {
return <ActionsPanel />;
}

View File

@@ -91,6 +91,14 @@ export default function ApiKeyDialog({
text: localize('com_ui_web_search_reranker_jina_key'),
},
},
jinaApiUrl: {
placeholder: localize('com_ui_web_search_jina_url'),
type: 'text' as const,
link: {
url: 'https://api.jina.ai/v1/rerank',
text: localize('com_ui_web_search_reranker_jina_url_help'),
},
},
},
},
{

View File

@@ -16,14 +16,7 @@ export default function VersionPanel() {
const selectedAgentId = agent_id ?? '';
const {
data: agent,
isLoading,
error,
refetch,
} = useGetAgentByIdQuery(selectedAgentId, {
enabled: !!selectedAgentId && selectedAgentId !== '',
});
const { data: agent, isLoading, error, refetch } = useGetAgentByIdQuery(selectedAgentId);
const revertAgentVersion = useRevertAgentVersionMutation({
onSuccess: () => {

View File

@@ -1,5 +1,5 @@
import React, { useState, useMemo, useCallback } from 'react';
import { ChevronLeft } from 'lucide-react';
import { ChevronLeft, Trash2 } from 'lucide-react';
import { useQueryClient } from '@tanstack/react-query';
import { Button, useToastContext } from '@librechat/client';
import { Constants, QueryKeys } from 'librechat-data-provider';
@@ -123,6 +123,7 @@ function MCPPanelContent() {
}
const serverStatus = connectionStatus?.[selectedServerNameForEditing];
const isConnected = serverStatus?.connectionState === 'connected';
return (
<div className="h-auto max-w-full space-y-4 overflow-x-hidden py-2">
@@ -159,6 +160,17 @@ function MCPPanelContent() {
Object.keys(serverBeingEdited.config.customUserVars).length > 0
}
/>
{serverStatus?.requiresOAuth && isConnected && (
<Button
className="w-full"
size="sm"
variant="destructive"
onClick={() => handleConfigRevoke(selectedServerNameForEditing)}
>
<Trash2 className="h-4 w-4" />
{localize('com_ui_oauth_revoke')}
</Button>
)}
</div>
);
} else {

View File

@@ -1,5 +1,11 @@
import { useQuery, useInfiniteQuery, useQueryClient } from '@tanstack/react-query';
import { QueryKeys, dataService, EModelEndpoint, PermissionBits } from 'librechat-data-provider';
import {
Constants,
QueryKeys,
dataService,
EModelEndpoint,
PermissionBits,
} from 'librechat-data-provider';
import type {
QueryObserverResult,
UseQueryOptions,
@@ -64,20 +70,27 @@ export const useListAgentsQuery = <TData = t.AgentListResponse>(
* Hook for retrieving basic details about a single agent (VIEW permission)
*/
export const useGetAgentByIdQuery = (
agent_id: string,
agent_id: string | null | undefined,
config?: UseQueryOptions<t.Agent>,
): QueryObserverResult<t.Agent> => {
const isValidAgentId = !!(
agent_id &&
agent_id !== '' &&
agent_id !== Constants.EPHEMERAL_AGENT_ID
);
return useQuery<t.Agent>(
[QueryKeys.agent, agent_id],
() =>
dataService.getAgentById({
agent_id,
agent_id: agent_id as string,
}),
{
refetchOnWindowFocus: false,
refetchOnReconnect: false,
refetchOnMount: false,
retry: false,
enabled: isValidAgentId && (config?.enabled ?? true),
...config,
},
);
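A minimal usage sketch of the relaxed signature (the wrapper component here is hypothetical; the hook behavior matches the hunk above): callers can now pass a possibly null or undefined agent id directly, and the query stays disabled for empty strings and the ephemeral agent id.

import React from 'react';
import { useGetAgentByIdQuery } from '~/data-provider';

function AgentName({ agentId }: { agentId?: string | null }) {
  // No manual `enabled` guard needed; the hook handles '', null, undefined,
  // and Constants.EPHEMERAL_AGENT_ID internally.
  const { data: agent } = useGetAgentByIdQuery(agentId);
  return <span>{agent?.name ?? 'Ephemeral agent'}</span>;
}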

View File

@@ -118,6 +118,8 @@ export const useCreatePrompt = (
},
);
queryClient.invalidateQueries([QueryKeys.files]);
if (group) {
queryClient.setQueryData<t.PromptGroupListData>(
[QueryKeys.promptGroups, name, category, pageSize],
@@ -163,6 +165,8 @@ export const useAddPromptToGroup = (
},
);
queryClient.invalidateQueries([QueryKeys.files]);
if (onSuccess) {
onSuccess(response, variables, context);
}

View File

@@ -0,0 +1,180 @@
import { renderHook } from '@testing-library/react';
import { Tools } from 'librechat-data-provider';
import useAgentToolPermissions from '../useAgentToolPermissions';
// Mock the dependencies
jest.mock('~/data-provider', () => ({
useGetAgentByIdQuery: jest.fn(),
}));
jest.mock('~/Providers', () => ({
useAgentsMapContext: jest.fn(),
}));
const mockUseGetAgentByIdQuery = jest.requireMock('~/data-provider').useGetAgentByIdQuery;
const mockUseAgentsMapContext = jest.requireMock('~/Providers').useAgentsMapContext;
describe('useAgentToolPermissions', () => {
beforeEach(() => {
jest.clearAllMocks();
});
describe('when no agentId is provided', () => {
it('should allow all tools for ephemeral agents', () => {
mockUseAgentsMapContext.mockReturnValue({});
mockUseGetAgentByIdQuery.mockReturnValue({ data: undefined });
const { result } = renderHook(() => useAgentToolPermissions(null));
expect(result.current.fileSearchAllowedByAgent).toBe(true);
expect(result.current.codeAllowedByAgent).toBe(true);
expect(result.current.tools).toBeUndefined();
});
it('should allow all tools when agentId is undefined', () => {
mockUseAgentsMapContext.mockReturnValue({});
mockUseGetAgentByIdQuery.mockReturnValue({ data: undefined });
const { result } = renderHook(() => useAgentToolPermissions(undefined));
expect(result.current.fileSearchAllowedByAgent).toBe(true);
expect(result.current.codeAllowedByAgent).toBe(true);
expect(result.current.tools).toBeUndefined();
});
it('should allow all tools when agentId is empty string', () => {
mockUseAgentsMapContext.mockReturnValue({});
mockUseGetAgentByIdQuery.mockReturnValue({ data: undefined });
const { result } = renderHook(() => useAgentToolPermissions(''));
expect(result.current.fileSearchAllowedByAgent).toBe(true);
expect(result.current.codeAllowedByAgent).toBe(true);
expect(result.current.tools).toBeUndefined();
});
});
describe('when agentId is provided but agent not found', () => {
it('should disallow all tools', () => {
mockUseAgentsMapContext.mockReturnValue({});
mockUseGetAgentByIdQuery.mockReturnValue({ data: undefined });
const { result } = renderHook(() => useAgentToolPermissions('non-existent-agent'));
expect(result.current.fileSearchAllowedByAgent).toBe(false);
expect(result.current.codeAllowedByAgent).toBe(false);
expect(result.current.tools).toBeUndefined();
});
});
describe('when agent is found with tools', () => {
it('should allow tools that are included in the agent tools array', () => {
const agentId = 'test-agent';
const agent = {
id: agentId,
tools: [Tools.file_search],
};
mockUseAgentsMapContext.mockReturnValue({ [agentId]: agent });
mockUseGetAgentByIdQuery.mockReturnValue({ data: undefined });
const { result } = renderHook(() => useAgentToolPermissions(agentId));
expect(result.current.fileSearchAllowedByAgent).toBe(true);
expect(result.current.codeAllowedByAgent).toBe(false);
expect(result.current.tools).toEqual([Tools.file_search]);
});
it('should allow both tools when both are included', () => {
const agentId = 'test-agent';
const agent = {
id: agentId,
tools: [Tools.file_search, Tools.execute_code],
};
mockUseAgentsMapContext.mockReturnValue({ [agentId]: agent });
mockUseGetAgentByIdQuery.mockReturnValue({ data: undefined });
const { result } = renderHook(() => useAgentToolPermissions(agentId));
expect(result.current.fileSearchAllowedByAgent).toBe(true);
expect(result.current.codeAllowedByAgent).toBe(true);
expect(result.current.tools).toEqual([Tools.file_search, Tools.execute_code]);
});
it('should use data from API query when available', () => {
const agentId = 'test-agent';
const agentMapData = {
id: agentId,
tools: [Tools.file_search],
};
const agentApiData = {
id: agentId,
tools: [Tools.execute_code, Tools.file_search],
};
mockUseAgentsMapContext.mockReturnValue({ [agentId]: agentMapData });
mockUseGetAgentByIdQuery.mockReturnValue({ data: agentApiData });
const { result } = renderHook(() => useAgentToolPermissions(agentId));
// API data should take precedence
expect(result.current.fileSearchAllowedByAgent).toBe(true);
expect(result.current.codeAllowedByAgent).toBe(true);
expect(result.current.tools).toEqual([Tools.execute_code, Tools.file_search]);
});
it('should fallback to agent map data when API data is not available', () => {
const agentId = 'test-agent';
const agentMapData = {
id: agentId,
tools: [Tools.execute_code],
};
mockUseAgentsMapContext.mockReturnValue({ [agentId]: agentMapData });
mockUseGetAgentByIdQuery.mockReturnValue({ data: undefined });
const { result } = renderHook(() => useAgentToolPermissions(agentId));
expect(result.current.fileSearchAllowedByAgent).toBe(false);
expect(result.current.codeAllowedByAgent).toBe(true);
expect(result.current.tools).toEqual([Tools.execute_code]);
});
});
describe('when agent has no tools', () => {
it('should disallow all tools with empty array', () => {
const agentId = 'test-agent';
const agent = {
id: agentId,
tools: [],
};
mockUseAgentsMapContext.mockReturnValue({ [agentId]: agent });
mockUseGetAgentByIdQuery.mockReturnValue({ data: undefined });
const { result } = renderHook(() => useAgentToolPermissions(agentId));
expect(result.current.fileSearchAllowedByAgent).toBe(false);
expect(result.current.codeAllowedByAgent).toBe(false);
expect(result.current.tools).toEqual([]);
});
it('should disallow all tools with undefined tools', () => {
const agentId = 'test-agent';
const agent = {
id: agentId,
tools: undefined,
};
mockUseAgentsMapContext.mockReturnValue({ [agentId]: agent });
mockUseGetAgentByIdQuery.mockReturnValue({ data: undefined });
const { result } = renderHook(() => useAgentToolPermissions(agentId));
expect(result.current.fileSearchAllowedByAgent).toBe(false);
expect(result.current.codeAllowedByAgent).toBe(false);
expect(result.current.tools).toBeUndefined();
});
});
});

View File

@@ -5,3 +5,4 @@ export type { ProcessedAgentCategory } from './useAgentCategories';
export { default as useAgentCapabilities } from './useAgentCapabilities';
export { default as useGetAgentsConfig } from './useGetAgentsConfig';
export { default as useAgentDefaultPermissionLevel } from './useAgentDefaultPermissionLevel';
export { default as useAgentToolPermissions } from './useAgentToolPermissions';

View File

@@ -0,0 +1,58 @@
import { useMemo } from 'react';
import { Tools } from 'librechat-data-provider';
import { useGetAgentByIdQuery } from '~/data-provider';
import { useAgentsMapContext } from '~/Providers';
interface AgentToolPermissionsResult {
fileSearchAllowedByAgent: boolean;
codeAllowedByAgent: boolean;
tools: string[] | undefined;
}
/**
* Hook to determine whether specific tools are allowed for a given agent.
*
* @param agentId - The ID of the agent. If null/undefined/empty, returns true for all tools (ephemeral agent behavior)
* @returns Object with boolean flags for file_search and execute_code permissions, plus the tools array
*/
export default function useAgentToolPermissions(
agentId: string | null | undefined,
): AgentToolPermissionsResult {
const agentsMap = useAgentsMapContext();
const selectedAgent = useMemo(() => {
return agentId != null && agentId !== '' ? agentsMap?.[agentId] : undefined;
}, [agentId, agentsMap]);
const { data: agentData } = useGetAgentByIdQuery(agentId);
const tools = useMemo(
() =>
(agentData?.tools as string[] | undefined) || (selectedAgent?.tools as string[] | undefined),
[agentData?.tools, selectedAgent?.tools],
);
const fileSearchAllowedByAgent = useMemo(() => {
// If no agentId, allow for ephemeral agents
if (!agentId) return true;
// If agentId exists but agent not found, disallow
if (!selectedAgent) return false;
// Check if the agent has the file_search tool
return tools?.includes(Tools.file_search) ?? false;
}, [agentId, selectedAgent, tools]);
const codeAllowedByAgent = useMemo(() => {
// If no agentId, allow for ephemeral agents
if (!agentId) return true;
// If agentId exists but agent not found, disallow
if (!selectedAgent) return false;
// Check if the agent has the execute_code tool
return tools?.includes(Tools.execute_code) ?? false;
}, [agentId, selectedAgent, tools]);
return {
fileSearchAllowedByAgent,
codeAllowedByAgent,
tools,
};
}
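A minimal sketch of a capabilities-aware consumer (the button markup and the import path are assumptions inferred from the index export above; the hook API matches this file):

import React from 'react';
import useAgentToolPermissions from '~/hooks/Agents/useAgentToolPermissions'; // path assumed

function AttachOptions({ agentId }: { agentId?: string | null }) {
  const { fileSearchAllowedByAgent, codeAllowedByAgent } = useAgentToolPermissions(agentId);
  return (
    <>
      {fileSearchAllowedByAgent && <button type="button">Upload for File Search</button>}
      {codeAllowedByAgent && <button type="button">Upload for Code Interpreter</button>}
    </>
  );
}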

View File

@@ -22,9 +22,7 @@ export default function useSelectAgent() {
conversation?.agent_id ?? null,
);
const agentQuery = useGetAgentByIdQuery(selectedAgentId ?? '', {
enabled: !!(selectedAgentId ?? '') && selectedAgentId !== Constants.EPHEMERAL_AGENT_ID,
});
const agentQuery = useGetAgentByIdQuery(selectedAgentId);
const updateConversation = useCallback(
(agent: Partial<Agent>, template: Partial<TPreset | TConversation>) => {

View File

@@ -79,6 +79,7 @@ export default function useChatFunctions({
parentMessageId = null,
conversationId = null,
messageId = null,
toolResources,
},
{
editedContent = null,
@@ -204,6 +205,7 @@ export default function useChatFunctions({
messageId: isContinued && messageId != null && messageId ? messageId : intermediateId,
thread_id,
error: false,
...(toolResources && { tool_resources: toolResources }),
};
const submissionFiles = overrideFiles ?? targetParentMessage?.files;

View File

@@ -5,6 +5,7 @@ import { LocalStorageKeys } from 'librechat-data-provider';
import { useAvailablePluginsQuery } from 'librechat-data-provider/react-query';
import type { TStartupConfig, TPlugin, TUser } from 'librechat-data-provider';
import { mapPlugins, selectPlugins, processPlugins } from '~/utils';
import { cleanupTimestampedStorage } from '~/utils/timestamps';
import useSpeechSettingsInit from './useSpeechSettingsInit';
import store from '~/store';
@@ -34,6 +35,11 @@ export default function useAppStartup({
useSpeechSettingsInit(!!user);
/** Clean up old localStorage entries on startup */
useEffect(() => {
cleanupTimestampedStorage();
}, []);
/** Set the app title */
useEffect(() => {
const appTitle = startupConfig?.appTitle ?? '';

View File

@@ -1,9 +1,10 @@
import { useState, useMemo } from 'react';
import { useState, useMemo, useCallback, useRef } from 'react';
import { useDrop } from 'react-dnd';
import { NativeTypes } from 'react-dnd-html5-backend';
import { useQueryClient } from '@tanstack/react-query';
import { useRecoilValue, useSetRecoilState } from 'recoil';
import {
Tools,
QueryKeys,
Constants,
EModelEndpoint,
@@ -26,19 +27,6 @@ export default function useDragHelpers() {
ephemeralAgentByConvoId(conversation?.conversationId ?? Constants.NEW_CONVO),
);
const handleOptionSelect = (toolResource: EToolResources | undefined) => {
/** File search is not automatically enabled to simulate legacy behavior */
if (toolResource && toolResource !== EToolResources.file_search) {
setEphemeralAgent((prev) => ({
...prev,
[toolResource]: true,
}));
}
handleFiles(draggedFiles, toolResource);
setShowModal(false);
setDraggedFiles([]);
};
const isAssistants = useMemo(
() => isAssistantsEndpoint(conversation?.endpoint),
[conversation?.endpoint],
@@ -48,36 +36,95 @@ export default function useDragHelpers() {
overrideEndpoint: isAssistants ? undefined : EModelEndpoint.agents,
});
const handleOptionSelect = useCallback(
(toolResource: EToolResources | undefined) => {
/** File search is not automatically enabled to simulate legacy behavior */
if (toolResource && toolResource !== EToolResources.file_search) {
setEphemeralAgent((prev) => ({
...prev,
[toolResource]: true,
}));
}
handleFiles(draggedFiles, toolResource);
setShowModal(false);
setDraggedFiles([]);
},
[draggedFiles, handleFiles, setEphemeralAgent],
);
/** Use refs to avoid re-creating the drop handler */
const handleFilesRef = useRef(handleFiles);
const conversationRef = useRef(conversation);
handleFilesRef.current = handleFiles;
conversationRef.current = conversation;
const handleDrop = useCallback(
(item: { files: File[] }) => {
if (isAssistants) {
handleFilesRef.current(item.files);
return;
}
const endpointsConfig = queryClient.getQueryData<t.TEndpointsConfig>([QueryKeys.endpoints]);
const agentsConfig = endpointsConfig?.[EModelEndpoint.agents];
const capabilities = agentsConfig?.capabilities ?? defaultAgentCapabilities;
const fileSearchEnabled = capabilities.includes(AgentCapabilities.file_search) === true;
const codeEnabled = capabilities.includes(AgentCapabilities.execute_code) === true;
const ocrEnabled = capabilities.includes(AgentCapabilities.ocr) === true;
/** Get agent permissions at drop time */
const agentId = conversationRef.current?.agent_id;
let fileSearchAllowedByAgent = true;
let codeAllowedByAgent = true;
if (agentId && agentId !== Constants.EPHEMERAL_AGENT_ID) {
/** Agent data from cache */
const agent = queryClient.getQueryData<t.Agent>([QueryKeys.agent, agentId]);
if (agent) {
const agentTools = agent.tools as string[] | undefined;
fileSearchAllowedByAgent = agentTools?.includes(Tools.file_search) ?? false;
codeAllowedByAgent = agentTools?.includes(Tools.execute_code) ?? false;
} else {
/** If agent exists but not found, disallow */
fileSearchAllowedByAgent = false;
codeAllowedByAgent = false;
}
}
/** Determine if dragged files are all images (enables the base image option) */
const allImages = item.files.every((f) => f.type?.startsWith('image/'));
const shouldShowModal =
allImages ||
(fileSearchEnabled && fileSearchAllowedByAgent) ||
(codeEnabled && codeAllowedByAgent) ||
ocrEnabled;
if (!shouldShowModal) {
// Fallback: directly handle files without showing modal
handleFilesRef.current(item.files);
return;
}
setDraggedFiles(item.files);
setShowModal(true);
},
[isAssistants, queryClient],
);
const [{ canDrop, isOver }, drop] = useDrop(
() => ({
accept: [NativeTypes.FILE],
drop(item: { files: File[] }) {
console.log('drop', item.files);
if (isAssistants) {
handleFiles(item.files);
return;
}
const endpointsConfig = queryClient.getQueryData<t.TEndpointsConfig>([QueryKeys.endpoints]);
const agentsConfig = endpointsConfig?.[EModelEndpoint.agents];
const capabilities = agentsConfig?.capabilities ?? defaultAgentCapabilities;
const fileSearchEnabled = capabilities.includes(AgentCapabilities.file_search) === true;
const codeEnabled = capabilities.includes(AgentCapabilities.execute_code) === true;
const ocrEnabled = capabilities.includes(AgentCapabilities.ocr) === true;
if (!codeEnabled && !fileSearchEnabled && !ocrEnabled) {
handleFiles(item.files);
return;
}
setDraggedFiles(item.files);
setShowModal(true);
},
drop: handleDrop,
canDrop: () => true,
collect: (monitor: DropTargetMonitor) => ({
isOver: monitor.isOver(),
canDrop: monitor.canDrop(),
}),
collect: (monitor: DropTargetMonitor) => {
/** Optimize collect to reduce re-renders */
const isOver = monitor.isOver();
const canDrop = monitor.canDrop();
return { isOver, canDrop };
},
}),
[handleFiles],
[handleDrop],
);
return {

View File

@@ -125,13 +125,8 @@ export default function useQueryParams({
const queryClient = useQueryClient();
const { conversation, newConversation } = useChatContext();
// Extract agent_id from URL for proactive fetching
const urlAgentId = searchParams.get('agent_id') || '';
// Use the existing query hook to fetch agent if present in URL
const { data: urlAgent } = useGetAgentByIdQuery(urlAgentId, {
enabled: !!urlAgentId, // Only fetch if agent_id exists in URL
});
const { data: urlAgent } = useGetAgentByIdQuery(urlAgentId);
/**
* Applies settings from URL query parameters to create a new conversation.

View File

@@ -1,66 +1,50 @@
import { useRef, useCallback, useMemo } from 'react';
import { useCallback, useEffect } from 'react';
import { useAtom } from 'jotai';
import { useRecoilState } from 'recoil';
import { Constants, LocalStorageKeys } from 'librechat-data-provider';
import useLocalStorage from '~/hooks/useLocalStorageAlt';
import { ephemeralAgentByConvoId } from '~/store';
const storageCondition = (value: unknown, rawCurrentValue?: string | null) => {
if (rawCurrentValue) {
try {
const currentValue = rawCurrentValue?.trim() ?? '';
if (currentValue.length > 2) {
return true;
}
} catch (e) {
console.error(e);
}
}
return Array.isArray(value) && value.length > 0;
};
import { ephemeralAgentByConvoId, mcpValuesAtomFamily, mcpPinnedAtom } from '~/store';
import { setTimestamp } from '~/utils/timestamps';
export function useMCPSelect({ conversationId }: { conversationId?: string | null }) {
const key = conversationId ?? Constants.NEW_CONVO;
const [isPinned, setIsPinned] = useAtom(mcpPinnedAtom);
const [mcpValues, setMCPValuesRaw] = useAtom(mcpValuesAtomFamily(key));
const [ephemeralAgent, setEphemeralAgent] = useRecoilState(ephemeralAgentByConvoId(key));
const storageKey = `${LocalStorageKeys.LAST_MCP_}${key}`;
const mcpState = useMemo(() => {
return ephemeralAgent?.mcp ?? [];
}, [ephemeralAgent?.mcp]);
// Sync Jotai state with ephemeral agent state
useEffect(() => {
if (ephemeralAgent?.mcp && ephemeralAgent.mcp.length > 0) {
setMCPValuesRaw(ephemeralAgent.mcp);
}
}, [ephemeralAgent?.mcp, setMCPValuesRaw]);
const setSelectedValues = useCallback(
(values: string[] | null | undefined) => {
if (!values) {
return;
}
if (!Array.isArray(values)) {
return;
}
// Update ephemeral agent when Jotai state changes
useEffect(() => {
if (mcpValues.length > 0 && JSON.stringify(mcpValues) !== JSON.stringify(ephemeralAgent?.mcp)) {
setEphemeralAgent((prev) => ({
...prev,
mcp: values,
mcp: mcpValues,
}));
}
}, [mcpValues, ephemeralAgent?.mcp, setEphemeralAgent]);
useEffect(() => {
const mcpStorageKey = `${LocalStorageKeys.LAST_MCP_}${key}`;
if (mcpValues.length > 0) {
setTimestamp(mcpStorageKey);
}
}, [mcpValues, key]);
/** Stable memoized setter */
const setMCPValues = useCallback(
(value: string[]) => {
if (!Array.isArray(value)) {
return;
}
setMCPValuesRaw(value);
},
[setEphemeralAgent],
);
const [mcpValues, setMCPValuesRaw] = useLocalStorage<string[]>(
storageKey,
mcpState,
setSelectedValues,
storageCondition,
);
const setMCPValuesRawRef = useRef(setMCPValuesRaw);
setMCPValuesRawRef.current = setMCPValuesRaw;
/** Create a stable memoized setter to avoid re-creating it on every render and causing an infinite render loop */
const setMCPValues = useCallback((value: string[]) => {
setMCPValuesRawRef.current(value);
}, []);
const [isPinned, setIsPinned] = useLocalStorage<boolean>(
`${LocalStorageKeys.PIN_MCP_}${key}`,
true,
[setMCPValuesRaw],
);
return {

View File

@@ -25,9 +25,8 @@ export function useMCPServerManager({ conversationId }: { conversationId?: strin
const queryClient = useQueryClient();
const { showToast } = useToastContext();
const { mcpToolDetails } = useGetMCPTools();
const mcpSelect = useMCPSelect({ conversationId });
const { data: startupConfig } = useGetStartupConfig();
const { mcpValues, setMCPValues, isPinned, setIsPinned } = mcpSelect;
const { mcpValues, setMCPValues, isPinned, setIsPinned } = useMCPSelect({ conversationId });
const [isConfigModalOpen, setIsConfigModalOpen] = useState(false);
const [selectedToolForConfig, setSelectedToolForConfig] = useState<TPlugin | null>(null);

View File

@@ -1,9 +1,13 @@
import { v4 } from 'uuid';
import { useCallback } from 'react';
import { useCallback, useMemo } from 'react';
import { useRecoilValue, useSetRecoilState } from 'recoil';
import { Constants, replaceSpecialVars } from 'librechat-data-provider';
import type { AgentToolResources, TFile } from 'librechat-data-provider';
import { useChatContext, useChatFormContext, useAddedChatContext } from '~/Providers';
import useUpdateFiles from '~/hooks/Files/useUpdateFiles';
import { useAuthContext } from '~/hooks/AuthContext';
import { useGetFiles } from '~/data-provider';
import type { ExtendedFile } from '~/common';
import store from '~/store';
const appendIndex = (index: number, value?: string) => {
@@ -16,15 +20,67 @@ const appendIndex = (index: number, value?: string) => {
export default function useSubmitMessage() {
const { user } = useAuthContext();
const methods = useChatFormContext();
const { ask, index, getMessages, setMessages, latestMessage } = useChatContext();
const { ask, index, getMessages, setMessages, latestMessage, setFiles } = useChatContext();
const { addedIndex, ask: askAdditional, conversation: addedConvo } = useAddedChatContext();
const { data: allFiles = [] } = useGetFiles();
const { addFile } = useUpdateFiles(setFiles);
const autoSendPrompts = useRecoilValue(store.autoSendPrompts);
const activeConvos = useRecoilValue(store.allConversationsSelector);
const setActivePrompt = useSetRecoilState(store.activePromptByIndex(index));
const fileMap = useMemo(() => {
const map: Record<string, TFile> = {};
if (Array.isArray(allFiles)) {
allFiles.forEach((file) => {
if (file.file_id) {
map[file.file_id] = file;
}
});
}
return map;
}, [allFiles]);
const convertToolResourcesToFiles = useCallback(
(toolResources: AgentToolResources): ExtendedFile[] => {
const promptFiles: ExtendedFile[] = [];
Object.entries(toolResources).forEach(([toolResource, resource]) => {
if (resource?.file_ids) {
resource.file_ids.forEach((fileId) => {
const dbFile = fileMap[fileId];
if (dbFile) {
const extendedFile = {
file_id: dbFile.file_id,
temp_file_id: dbFile.file_id,
filename: dbFile.filename,
filepath: dbFile.filepath,
type: dbFile.type,
size: dbFile.bytes,
width: dbFile.width,
height: dbFile.height,
progress: 1, // Already uploaded
attached: true,
tool_resource: toolResource,
preview: dbFile.type?.startsWith('image/') ? dbFile.filepath : undefined,
};
promptFiles.push(extendedFile);
} else {
console.warn(`File not found in fileMap: ${fileId}`);
}
});
} else {
console.warn(`No file_ids in resource "${toolResource}"`);
}
});
return promptFiles;
},
[fileMap],
);
const submitMessage = useCallback(
(data?: { text: string }) => {
(data?: { text: string; toolResources?: AgentToolResources; files?: ExtendedFile[] }) => {
if (!data) {
return console.warn('No data provided to submitMessage');
}
@@ -46,12 +102,18 @@ export default function useSubmitMessage() {
const rootIndex = addedIndex - 1;
const clientTimestamp = new Date().toISOString();
ask({
text: data.text,
overrideConvoId: appendIndex(rootIndex, overrideConvoId),
overrideUserMessageId: appendIndex(rootIndex, overrideUserMessageId),
clientTimestamp,
});
ask(
{
text: data.text,
overrideConvoId: appendIndex(rootIndex, overrideConvoId),
overrideUserMessageId: appendIndex(rootIndex, overrideUserMessageId),
clientTimestamp,
toolResources: data.toolResources,
},
{
overrideFiles: data.files,
},
);
if (hasAdded) {
askAdditional(
@@ -60,8 +122,12 @@ export default function useSubmitMessage() {
overrideConvoId: appendIndex(addedIndex, overrideConvoId),
overrideUserMessageId: appendIndex(addedIndex, overrideUserMessageId),
clientTimestamp,
toolResources: data.toolResources,
},
{
overrideMessages: rootMessages,
overrideFiles: data.files,
},
{ overrideMessages: rootMessages },
);
}
methods.reset();
@@ -80,18 +146,36 @@ export default function useSubmitMessage() {
);
const submitPrompt = useCallback(
(text: string) => {
(text: string, toolResources?: AgentToolResources) => {
const parsedText = replaceSpecialVars({ text, user });
if (autoSendPrompts) {
submitMessage({ text: parsedText });
const promptFiles = toolResources ? convertToolResourcesToFiles(toolResources) : [];
submitMessage({ text: parsedText, toolResources, files: promptFiles });
return;
}
if (toolResources) {
const promptFiles = convertToolResourcesToFiles(toolResources);
promptFiles.forEach((file, _index) => {
addFile(file);
});
}
const currentText = methods.getValues('text');
const newText = currentText.trim().length > 1 ? `\n${parsedText}` : parsedText;
setActivePrompt(newText);
},
[autoSendPrompts, submitMessage, setActivePrompt, methods, user],
[
autoSendPrompts,
submitMessage,
setActivePrompt,
methods,
user,
addFile,
convertToolResourcesToFiles,
],
);
return { submitMessage, submitPrompt };
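A minimal sketch of calling the extended submitPrompt from a prompt consumer (the button component, prompt shape, and re-export path are assumptions; the signature matches the hook above): passing the prompt's tool_resources lets auto-send attach the referenced files, or queues them in the chat form when auto-send is off.

import React from 'react';
import type { AgentToolResources } from 'librechat-data-provider';
import { useSubmitMessage } from '~/hooks'; // re-export path assumed

function PromptSendButton({ prompt }: { prompt: { prompt: string; tool_resources?: AgentToolResources } }) {
  const { submitPrompt } = useSubmitMessage();
  return (
    <button type="button" onClick={() => submitPrompt(prompt.prompt, prompt.tool_resources)}>
      Send prompt
    </button>
  );
}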

View File

@@ -15,6 +15,7 @@ export type SearchApiKeyFormData = {
firecrawlApiKey: string;
firecrawlApiUrl: string;
jinaApiKey: string;
jinaApiUrl: string;
cohereApiKey: string;
};
@@ -54,6 +55,7 @@ const useAuthSearchTool = (options?: { isEntityTool: boolean }) => {
firecrawlApiKey: data.firecrawlApiKey,
firecrawlApiUrl: data.firecrawlApiUrl,
jinaApiKey: data.jinaApiKey,
jinaApiUrl: data.jinaApiUrl,
cohereApiKey: data.cohereApiKey,
}).reduce(
(acc, [key, value]) => {

View File

@@ -5,23 +5,10 @@ import { Constants, LocalStorageKeys } from 'librechat-data-provider';
import type { VerifyToolAuthResponse } from 'librechat-data-provider';
import type { UseQueryOptions } from '@tanstack/react-query';
import { useVerifyAgentToolAuth } from '~/data-provider';
import { setTimestamp } from '~/utils/timestamps';
import useLocalStorage from '~/hooks/useLocalStorageAlt';
import { ephemeralAgentByConvoId } from '~/store';
const storageCondition = (value: unknown, rawCurrentValue?: string | null) => {
if (rawCurrentValue) {
try {
const currentValue = rawCurrentValue?.trim() ?? '';
if (currentValue === 'true' && value === false) {
return true;
}
} catch (e) {
console.error(e);
}
}
return value !== undefined && value !== null;
};
type ToolValue = boolean | string;
interface UseToolToggleOptions {
@@ -39,7 +26,7 @@ interface UseToolToggleOptions {
export function useToolToggle({
conversationId,
toolKey,
toolKey: _toolKey,
localStorageKey,
isAuthenticated: externalIsAuthenticated,
setIsDialogOpen,
@@ -62,13 +49,8 @@ export function useToolToggle({
[externalIsAuthenticated, authConfig, authQuery.data?.authenticated],
);
// Keep localStorage in sync
const [, setLocalStorageValue] = useLocalStorage<ToolValue>(
`${localStorageKey}${key}`,
false,
undefined,
storageCondition,
);
const toolKey = useMemo(() => _toolKey, [_toolKey]);
const storageKey = useMemo(() => `${localStorageKey}${key}`, [localStorageKey, key]);
// The actual current value comes from ephemeralAgent
const toolValue = useMemo(() => {
@@ -83,13 +65,14 @@ export function useToolToggle({
return toolValue === true;
}, [toolValue]);
// Sync to localStorage when ephemeralAgent changes
// Sync to localStorage with timestamps when ephemeralAgent changes
useEffect(() => {
const value = ephemeralAgent?.[toolKey];
if (value !== undefined) {
setLocalStorageValue(value);
localStorage.setItem(storageKey, JSON.stringify(value));
setTimestamp(storageKey);
}
}, [ephemeralAgent, toolKey, setLocalStorageValue]);
}, [ephemeralAgent, toolKey, storageKey]);
const [isPinned, setIsPinned] = useLocalStorage<boolean>(`${localStorageKey}pinned`, false);

View File

@@ -1,2 +1,3 @@
export { default as useCategories } from './useCategories';
export { default as usePromptGroupsNav } from './usePromptGroupsNav';
export { default as usePromptFileHandling } from './usePromptFileHandling';

View File

@@ -0,0 +1,398 @@
import { v4 } from 'uuid';
import { useToastContext } from '@librechat/client';
import { useState, useCallback, useMemo, useRef, useEffect } from 'react';
import { EModelEndpoint, EToolResources, FileSources } from 'librechat-data-provider';
import type { AgentToolResources, TFile } from 'librechat-data-provider';
import type { ExtendedFile } from '~/common';
import { useUploadFileMutation, useGetFiles } from '~/data-provider';
import { logger } from '~/utils';
interface UsePromptFileHandling {
fileSetter?: (files: ExtendedFile[]) => void;
initialFiles?: ExtendedFile[];
onFileChange?: (updatedFiles: ExtendedFile[]) => void;
}
export const usePromptFileHandling = (params?: UsePromptFileHandling) => {
const { showToast } = useToastContext();
const { data: allFiles = [] } = useGetFiles();
const fileMap = useMemo(() => {
const map: Record<string, TFile> = {};
if (Array.isArray(allFiles)) {
allFiles.forEach((file) => {
if (file.file_id) {
map[file.file_id] = file;
}
});
}
return map;
}, [allFiles]);
const [files, setFiles] = useState<ExtendedFile[]>(() => {
return params?.initialFiles || [];
});
const [, setFilesLoading] = useState(false);
const abortControllerRef = useRef<AbortController | null>(null);
const uploadFile = useUploadFileMutation({
onSuccess: (data) => {
logger.log('File uploaded successfully', data);
setFiles((prev) => {
return prev.map((file) => {
if (file.temp_file_id === data.temp_file_id) {
return {
...file,
file_id: data.file_id,
filepath: data.filepath,
progress: 1,
attached: true,
preview: data.filepath || file.preview,
filename: data.filename || file.filename,
type: data.type || file.type,
size: data.bytes || file.size,
width: data.width || file.width,
height: data.height || file.height,
source: data.source || file.source,
};
}
return file;
});
});
setFilesLoading(false);
showToast({
message: 'File uploaded successfully',
status: 'success',
});
const updatedFiles = files.map((file) => {
if (file.temp_file_id === data.temp_file_id) {
return {
...file,
file_id: data.file_id,
filepath: data.filepath,
progress: 1,
attached: true,
preview: data.filepath || file.preview,
filename: data.filename || file.filename,
type: data.type || file.type,
size: data.bytes || file.size,
width: data.width || file.width,
height: data.height || file.height,
source: data.source || file.source,
};
}
return file;
});
params?.onFileChange?.(updatedFiles);
},
onError: (error, body) => {
logger.error('File upload error:', error);
setFilesLoading(false);
const file_id = body.get('file_id');
if (file_id) {
setFiles((prev) => {
return prev.filter((file) => {
if (file.file_id === file_id || file.temp_file_id === file_id) {
if (file.preview && file.preview.startsWith('blob:')) {
URL.revokeObjectURL(file.preview);
}
return false;
}
return true;
});
});
}
let errorMessage = 'Failed to upload file';
if ((error as any)?.response?.data?.message) {
errorMessage = (error as any).response.data.message;
} else if ((error as any)?.message) {
errorMessage = (error as any).message;
}
showToast({
message: errorMessage,
status: 'error',
});
},
});
const promptFiles = files;
useEffect(() => {
if (params?.fileSetter) {
params.fileSetter(files);
}
}, [files, params]);
const loadImage = useCallback(
(extendedFile: ExtendedFile, preview: string) => {
const img = new Image();
img.onload = async () => {
extendedFile.width = img.width;
extendedFile.height = img.height;
extendedFile.progress = 0.6;
const updatedFile = {
...extendedFile,
};
setFiles((prev) =>
prev.map((file) => (file.file_id === extendedFile.file_id ? updatedFile : file)),
);
const formData = new FormData();
formData.append('endpoint', EModelEndpoint.agents);
formData.append(
'file',
extendedFile.file!,
encodeURIComponent(extendedFile.filename || ''),
);
formData.append('file_id', extendedFile.file_id);
formData.append('message_file', 'true');
formData.append('width', img.width.toString());
formData.append('height', img.height.toString());
if (extendedFile.tool_resource) {
formData.append('tool_resource', extendedFile.tool_resource.toString());
}
uploadFile.mutate(formData);
};
img.src = preview;
},
[uploadFile],
);
const handleFileChange = useCallback(
(event: React.ChangeEvent<HTMLInputElement>, toolResource?: EToolResources | string) => {
event.stopPropagation();
if (!event.target.files) return;
const fileList = Array.from(event.target.files);
setFilesLoading(true);
fileList.forEach(async (file) => {
const file_id = v4();
const temp_file_id = file_id;
const extendedFile: ExtendedFile = {
file_id,
temp_file_id,
type: file.type,
filename: file.name,
filepath: '',
progress: 0,
preview: file.type.startsWith('image/') ? URL.createObjectURL(file) : '',
size: file.size,
width: undefined,
height: undefined,
attached: false,
file,
tool_resource: typeof toolResource === 'string' ? toolResource : undefined,
};
setFiles((prev) => [...prev, extendedFile]);
if (file.type.startsWith('image/') && extendedFile.preview) {
loadImage(extendedFile, extendedFile.preview);
} else {
const formData = new FormData();
formData.append('endpoint', EModelEndpoint.agents);
formData.append('file', file, encodeURIComponent(file.name));
formData.append('file_id', file_id);
formData.append('message_file', 'true');
if (toolResource) {
formData.append('tool_resource', toolResource.toString());
}
uploadFile.mutate(formData);
}
});
event.target.value = '';
},
[uploadFile, loadImage],
);
const handleFileRemove = useCallback(
(fileId: string) => {
setFiles((prev) => {
return prev.filter((file) => {
if (file.file_id === fileId || file.temp_file_id === fileId) {
if (file.preview && file.preview.startsWith('blob:')) {
URL.revokeObjectURL(file.preview);
}
return false;
}
return true;
});
});
const updatedFiles = files.filter((file) => {
if (file.file_id === fileId || file.temp_file_id === fileId) {
return false;
}
return true;
});
params?.onFileChange?.(updatedFiles);
},
[files, params],
);
useEffect(() => {
if (params?.fileSetter) {
params.fileSetter(promptFiles);
}
}, [promptFiles, params]);
useEffect(() => {
return () => {
files.forEach((file) => {
if (file.preview && file.preview.startsWith('blob:')) {
URL.revokeObjectURL(file.preview);
}
});
};
}, [files]);
const getToolResources = useCallback((): AgentToolResources | undefined => {
if (promptFiles.length === 0) {
return undefined;
}
const toolResources: AgentToolResources = {};
promptFiles.forEach((file) => {
if (!file.file_id || !file.tool_resource) return;
if (!toolResources[file.tool_resource]) {
toolResources[file.tool_resource] = { file_ids: [] };
}
if (!toolResources[file.tool_resource]!.file_ids!.includes(file.file_id)) {
toolResources[file.tool_resource]!.file_ids!.push(file.file_id);
}
});
return Object.keys(toolResources).length > 0 ? toolResources : undefined;
}, [promptFiles]);
const loadFromToolResources = useCallback(
async (toolResources?: AgentToolResources) => {
if (!toolResources) {
setFiles([]);
return;
}
const filesArray: ExtendedFile[] = [];
for (const [toolResource, resource] of Object.entries(toolResources)) {
if (resource?.file_ids) {
for (const fileId of resource.file_ids) {
const dbFile = fileMap[fileId];
const source =
toolResource === EToolResources.file_search
? FileSources.vectordb
: (dbFile?.source ?? FileSources.local);
let file: ExtendedFile;
if (dbFile) {
file = {
file_id: dbFile.file_id,
temp_file_id: dbFile.file_id,
type: dbFile.type,
filename: dbFile.filename,
filepath: dbFile.filepath,
progress: 1,
preview: dbFile.filepath,
size: dbFile.bytes || 0,
width: dbFile.width,
height: dbFile.height,
attached: true,
tool_resource: toolResource,
metadata: dbFile.metadata,
source,
};
} else {
file = {
file_id: fileId,
temp_file_id: fileId,
type: 'application/octet-stream',
filename: `File ${fileId}`,
filepath: '',
progress: 1,
preview: '',
size: 0,
width: undefined,
height: undefined,
attached: true,
tool_resource: toolResource,
source,
};
}
filesArray.push(file);
}
}
}
setFiles(filesArray);
},
[fileMap],
);
const areFilesReady = useMemo(() => {
return promptFiles.every((file) => file.file_id && file.progress === 1);
}, [promptFiles]);
const fileStats = useMemo(() => {
const stats = {
total: promptFiles.length,
images: 0,
documents: 0,
uploading: 0,
};
promptFiles.forEach((file) => {
if (file.progress < 1) {
stats.uploading++;
} else if (file.type?.startsWith('image/')) {
stats.images++;
} else {
stats.documents++;
}
});
return stats;
}, [promptFiles]);
const abortUpload = useCallback(() => {
if (abortControllerRef.current) {
logger.log('files', 'Aborting upload');
abortControllerRef.current.abort('User aborted upload');
abortControllerRef.current = null;
}
}, []);
return {
handleFileChange,
abortUpload,
files,
setFiles,
promptFiles,
getToolResources,
loadFromToolResources,
areFilesReady,
fileStats,
handleFileRemove,
};
};
export default usePromptFileHandling;
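A minimal sketch of wiring the hook into a prompt editor (the surrounding markup is hypothetical; the '~/hooks' import and return values match this file and the PromptForm diff earlier in this compare):

import React from 'react';
import { usePromptFileHandling } from '~/hooks';

function PromptAttachments() {
  const { handleFileChange, promptFiles, getToolResources, areFilesReady } = usePromptFileHandling({
    onFileChange: (files) => console.log('files changed:', files.length),
  });
  return (
    <div>
      {/* Route the upload to a specific tool resource, e.g. file_search */}
      <input type="file" multiple onChange={(e) => handleFileChange(e, 'file_search')} />
      <span>{promptFiles.length} attached</span>
      {/* Once uploads finish, persist getToolResources() alongside the prompt text */}
      <button type="button" disabled={!areFilesReady} onClick={() => console.log(getToolResources())}>
        Save tool_resources
      </button>
    </div>
  );
}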

View File

@@ -703,6 +703,7 @@
"com_ui_attach_error_openai": "Cannot attach Assistant files to other endpoints",
"com_ui_attach_error_size": "File size limit exceeded for endpoint:",
"com_ui_attach_error_type": "Unsupported file type for endpoint:",
"com_ui_attach_files": "Attach Files",
"com_ui_attach_remove": "Remove file",
"com_ui_attach_warn_endpoint": "Non-Assistant files may be ignored without a compatible tool",
"com_ui_attachment": "Attachment",
@@ -848,7 +849,7 @@
"com_ui_download_backup": "Download Backup Codes",
"com_ui_download_backup_tooltip": "Before you continue, download your backup codes. You will need them to regain access if you lose your authenticator device",
"com_ui_download_error": "Error downloading file. The file may have been deleted.",
"com_ui_drag_drop": "something needs to go here. was empty",
"com_ui_drag_drop": "Drop any file here to add it to the conversation",
"com_ui_dropdown_variables": "Dropdown variables:",
"com_ui_dropdown_variables_info": "Create custom dropdown menus for your prompts: `{{variable_name:option1|option2|option3}}`",
"com_ui_duplicate": "Duplicate",
@@ -897,6 +898,7 @@
"com_ui_file_token_limit": "File Token Limit",
"com_ui_file_token_limit_desc": "Set maximum token limit for file processing to control costs and resource usage",
"com_ui_files": "Files",
"com_ui_files_info": "Attach files to enhance your prompt with additional context",
"com_ui_filter_prompts": "Filter Prompts",
"com_ui_filter_prompts_name": "Filter prompts by name",
"com_ui_final_touch": "Final touch",
@@ -1046,6 +1048,7 @@
"com_ui_oauth_error_title": "Authentication Failed",
"com_ui_oauth_success_description": "Your authentication was successful. This window will close in",
"com_ui_oauth_success_title": "Authentication Successful",
"com_ui_oauth_revoke": "Revoke",
"com_ui_of": "of",
"com_ui_off": "Off",
"com_ui_offline": "Offline",
@@ -1251,6 +1254,7 @@
"com_ui_web_search_cohere_key": "Enter Cohere API Key",
"com_ui_web_search_firecrawl_url": "Firecrawl API URL (optional)",
"com_ui_web_search_jina_key": "Enter Jina API Key",
"com_ui_web_search_jina_url": "Jina API URL (optional)",
"com_ui_web_search_processing": "Processing results",
"com_ui_web_search_provider": "Search Provider",
"com_ui_web_search_provider_searxng": "SearXNG",
@@ -1262,6 +1266,7 @@
"com_ui_web_search_reranker_cohere_key": "Get your Cohere API key",
"com_ui_web_search_reranker_jina": "Jina AI",
"com_ui_web_search_reranker_jina_key": "Get your Jina API key",
"com_ui_web_search_reranker_jina_url_help": "Learn about Jina Rerank API",
"com_ui_web_search_scraper": "Scraper",
"com_ui_web_search_scraper_firecrawl": "Firecrawl API",
"com_ui_web_search_scraper_firecrawl_key": "Get your Firecrawl API key",

View File

@@ -298,16 +298,16 @@
"com_endpoint_openai_max_tokens": "Pēc izvēles “max_tokens” lauks, kas norāda maksimālo tokenu skaitu, ko var ģenerēt sarunas pabeigšanas laikā. Ievades tokenu un ģenerēto tokenu kopējo garumu ierobežo modeļa konteksta garums. Ja šis skaitlis pārsniedz maksimālo konteksta tokenu skaitu, var rasties kļūdas.",
"com_endpoint_openai_pres": "Skaitlis no -2,0 līdz 2,0. Pozitīvas vērtības soda jaunus tokenus, pamatojoties uz to, vai tie līdz šim parādās tekstā, palielinot modeļa iespējamību runāt par jaunām tēmām.",
"com_endpoint_openai_prompt_prefix_placeholder": "Iestatiet pielāgotas instrukcijas, kas jāiekļauj sistēmas ziņā. Noklusējuma vērtība: nav",
"com_endpoint_openai_reasoning_effort": "Tikai o1 un o3 modeļi: ierobežo spriešanas modeļu spriešanas piepūli. Spriešanas piepūles samazināšana var nodrošināt ātrākas atbildes un mazāk spriešanas tokenus izmantošanas atbildē.",
"com_endpoint_openai_reasoning_effort": "Spriešanas modeļi tikai: ierobežo spriešanas modeļu spriešanas piepūli. Spriešanas piepūles samazināšana var nodrošināt ātrākas atbildes un mazāk spriešanas tokenus izmantošanas atbildē.",
"com_endpoint_openai_reasoning_summary": "Tikai atbilžu API: modeļa veiktās spriešanas kopsavilkums. Tas var būt noderīgi atkļūdošanai un modeļa spriešanas procesa izpratnei. Iestatiet vērtību “nav”, “automātiski”, “kodolīgs” vai “detalizēts”.",
"com_endpoint_openai_resend": "Nosūtiet vēlreiz visus iepriekš pievienotos attēlus. Piezīme. Tas var ievērojami palielināt tokena izmaksas, un ar daudziem attēlu pielikumiem var rasties kļūdas.",
"com_endpoint_openai_resend_files": "Nosūtiet vēlreiz visus iepriekš pievienotos failus. Piezīme. Tas palielinās tokena izmaksas, un ar daudziem pielikumiem var rasties kļūdas.",
"com_endpoint_openai_stop": "Līdz 4 secībām, kurās API pārtrauks turpmāku tokenu ģenerēšanu.",
"com_endpoint_openai_stop": "Līdz 4 sekvencēm, kurās API pārtrauks turpmāku tokenu ģenerēšanu.",
"com_endpoint_openai_temp": "Augstākas vērtības = nejaušāks, savukārt zemākas vērtības = fokusētāks un deterministiskāks. Iesakām mainīt šo vai Top P, bet ne abus.",
"com_endpoint_openai_topp": "Alternatīva izlasei ar temperatūru, ko sauc par kodola izlasi, kur modelis ņem vērā tokenu rezultātus ar varbūtības masu top_p. Tātad 0,1 nozīmē, ka tiek ņemti vērā tikai tie tokeni, kas veido augšējo 10% varbūtības masu. Mēs iesakām mainīt šo vai temperatūru, bet ne abus.",
"com_endpoint_openai_use_responses_api": "Izmantot Response API sarunas pabeigšanas vietā, kas ietver paplašinātas OpenAI funkcijas. Nepieciešams o1-pro, o3-pro un spriešanas kopsavilkumu iespējošanai.",
"com_endpoint_openai_use_web_search": "Iespējojiet tīmekļa meklēšanas funkcionalitāti, izmantojot OpenAI iebūvētās meklēšanas iespējas. Tas ļauj modelim meklēt tīmeklī aktuālu informāciju un sniegt precīzākas, aktuālākas atbildes.",
"com_endpoint_openai_verbosity": "Ierobežo modeļa atbildes plašumu. Zemākas vērtības nodrošinās kodolīgākas atbildes, savukārt augstākas vērtības nodrošinās plašākas atbildes. Pašlaik atbalstītās vērtības ir zema, vidēja un augsta.",
"com_endpoint_openai_verbosity": "Ierobežo modeļa atbildes plašumu. Zemākas vērtības nodrošinās kodolīgākas atbildes, savukārt augstākas vērtības nodrošinās plašākas atbildes. Pašlaik atbalstītās vērtības ir zems, vidējs un augsts.",
"com_endpoint_output": "Izvade",
"com_endpoint_plug_image_detail": "Attēla detaļas",
"com_endpoint_plug_resend_files": "Atkārtoti nosūtīt failus",
@@ -337,8 +337,8 @@
"com_endpoint_prompt_prefix_assistants": "Papildu instrukcijas",
"com_endpoint_prompt_prefix_assistants_placeholder": "Iestatiet papildu norādījumus vai kontekstu virs Asistenta galvenajiem norādījumiem. Ja lauks ir tukšs, tas tiek ignorēts.",
"com_endpoint_prompt_prefix_placeholder": "Iestatiet pielāgotas instrukcijas vai kontekstu. Ja lauks ir tukšs, tas tiek ignorēts.",
"com_endpoint_reasoning_effort": "Domāšanas grūtums",
"com_endpoint_reasoning_summary": "Argumentācijas kopsavilkums",
"com_endpoint_reasoning_effort": "Spriešanas piepūle",
"com_endpoint_reasoning_summary": "Spriešanas kopsavilkums",
"com_endpoint_save_as_preset": "Saglabāt kā iestatījumu",
"com_endpoint_search": "Meklēt galapunktu pēc nosaukuma",
"com_endpoint_search_endpoint_models": "Meklēt {{0}} modeļos...",
@@ -346,7 +346,7 @@
"com_endpoint_search_var": "Meklēt {{0}}...",
"com_endpoint_set_custom_name": "Iestatiet pielāgotu nosaukumu, ja varat atrast šo iestatījumu",
"com_endpoint_skip_hover": "Iespējot pabeigšanas soļa izlaišanu, kurā tiek pārskatīta galīgā atbilde un ģenerētie soļi",
"com_endpoint_stop": "Apturēt secības",
"com_endpoint_stop": "Stop sekvences",
"com_endpoint_stop_placeholder": "Atdaliet vērtības, nospiežot taustiņu `Enter`",
"com_endpoint_temperature": "Temperatūra",
"com_endpoint_thinking": "Domāšana",
@@ -443,7 +443,7 @@
"com_nav_clear_cache_confirm_message": "Vai tiešām vēlaties notīrīt kešatmiņu?",
"com_nav_clear_conversation": "Skaidras sarunas",
"com_nav_clear_conversation_confirm_message": "Vai tiešām vēlaties dzēst visas saglabātās sarunas? Šī darbība ir neatgriezeniska.",
"com_nav_close_sidebar": "Aizvērt sānu joslu",
"com_nav_close_sidebar": "Aizvērt sāna joslu",
"com_nav_commands": "Komandas",
"com_nav_confirm_clear": "Apstiprināt dzēšanu",
"com_nav_conversation_mode": "Sarunas režīms",
@@ -478,7 +478,7 @@
"com_nav_font_size_xl": "Īpaši liels",
"com_nav_font_size_xs": "Īpaši mazs",
"com_nav_help_faq": "Palīdzība un bieži uzdotie jautājumi",
"com_nav_hide_panel": "Slēpt labo sāna paneli",
"com_nav_hide_panel": "Slēpt labā sāna paneli",
"com_nav_info_balance": "Bilance parāda, cik daudz tokenu kredītu jums ir atlicis izmantot. Tokenu kredīti tiek pārvērsti naudas vērtībā (piemēram, 1000 kredīti = 0,001 USD).",
"com_nav_info_code_artifacts": "Iespējo eksperimentāla koda artefaktu rādīšanu blakus sarunai",
"com_nav_info_code_artifacts_agent": "Iespējo koda artefaktu izmantošanu šim aģentam. Pēc noklusējuma tiek pievienotas papildu instrukcijas, kas attiecas uz artefaktu izmantošanu, ja vien nav iespējots \"Pielāgots uzvednes režīms\".",
@@ -534,7 +534,7 @@
"com_nav_latex_parsing": "LaTeX parsēšana ziņās (var ietekmēt veiktspēju)",
"com_nav_log_out": "Izrakstīties",
"com_nav_long_audio_warning": "Garāku tekstu apstrāde prasīs ilgāku laiku.",
"com_nav_maximize_chat_space": "Maksimāli izmantot sarunas telpas izmērus",
"com_nav_maximize_chat_space": "Maksimāli izmantot sarunu telpas izmērus",
"com_nav_mcp_configure_server": "Konfigurēt {{0}}",
"com_nav_mcp_status_connecting": "{{0}} - Savienojas",
"com_nav_mcp_vars_update_error": "Kļūda atjauninot MCP pielāgotos lietotāja parametrus: {{0}}",
@@ -554,7 +554,7 @@
"com_nav_profile_picture": "Profila attēls",
"com_nav_save_badges_state": "Saglabāt nozīmīšu stāvokli",
"com_nav_save_drafts": "Saglabāt melnrakstus lokāli",
"com_nav_scroll_button": "Pāriet uz pēdējo ierakstu poga",
"com_nav_scroll_button": "Rādīt pogu: pāriet uz pēdējo ierakstu",
"com_nav_search_placeholder": "Meklēt ziņas",
"com_nav_send_message": "Sūtīt ziņu",
"com_nav_setting_account": "Konts",
@@ -568,7 +568,7 @@
"com_nav_settings": "Iestatījumi",
"com_nav_shared_links": "Kopīgotās saites",
"com_nav_show_code": "Vienmēr rādīt kodu, izmantojot koda interpretētāju",
"com_nav_show_thinking": "Pēc noklusējuma atvērt domāšanas nolaižamos sarakstus",
"com_nav_show_thinking": "Pēc noklusējuma atvērt spriešanas sarakstu",
"com_nav_slash_command": "/-Komanda",
"com_nav_slash_command_description": "Ieslēgt komandu \"/\", lai atlasītu uzvedni izmantojot tastatūru",
"com_nav_speech_to_text": "Balss pārvēršana tekstā",
@@ -577,7 +577,7 @@
"com_nav_theme": "Tēma",
"com_nav_theme_dark": "Tumšs",
"com_nav_theme_light": "Gaišs",
"com_nav_theme_system": "Sistēma",
"com_nav_theme_system": "Sistēmas uzstādījums",
"com_nav_tool_dialog": "Asistenta rīki",
"com_nav_tool_dialog_agents": "Aģenta rīki",
"com_nav_tool_dialog_description": "Lai saglabātu rīku atlasi, ir jāsaglabā asistents.",
@@ -586,7 +586,7 @@
"com_nav_tool_search": "Meklēšanas rīki",
"com_nav_user": "LIETOTĀJS",
"com_nav_user_msg_markdown": "Atveidot lietotāja ziņas kā Markdown",
"com_nav_user_name_display": "Rādīt lietotājvārdu ziņās",
"com_nav_user_name_display": "Rādīt lietotājvārdu sarakstēs",
"com_nav_voice_select": "Balss",
"com_show_agent_settings": "Rādīt aģenta iestatījumus",
"com_show_completion_settings": "Rādīt pabeigšanas iestatījumus",
@@ -598,7 +598,7 @@
"com_sidepanel_hide_panel": "Slēpt paneli",
"com_sidepanel_manage_files": "Pārvaldīt failus",
"com_sidepanel_mcp_no_servers_with_vars": "Nav MCP serveru ar konfigurējamiem mainīgajiem.",
"com_sidepanel_parameters": "Parametri",
"com_sidepanel_parameters": "Modeļa parametri",
"com_sources_agent_file": "Avota dokuments",
"com_sources_agent_files": "Aģentu faili",
"com_sources_download_aria_label": "Lejupielādēt {{filename}}{{status}}",
@@ -766,7 +766,7 @@
"com_ui_command_placeholder": "Pēc izvēles: Ja tiks izmantota komanda uzvednei vai nosaukums, lūdzu ievadiet",
"com_ui_command_usage_placeholder": "Atlasiet uzvedni pēc komandas vai nosaukuma",
"com_ui_complete_setup": "Pabeigt iestatīšanu",
"com_ui_concise": "Īss",
"com_ui_concise": "Kodolīgs",
"com_ui_configure_mcp_variables_for": "Uzstādīt parametrus {{0}}",
"com_ui_confirm": "Apstiprināt",
"com_ui_confirm_action": "Apstiprināt darbību",
@@ -1083,8 +1083,8 @@
"com_ui_quality": "Kvalitāte",
"com_ui_read_aloud": "Lasīt skaļi",
"com_ui_redirecting_to_provider": "Pārvirzu uz {{0}}, lūdzu, uzgaidiet...",
"com_ui_reference_saved_memories": "References uz saglabātajām atmiņām",
"com_ui_reference_saved_memories_description": "Ļaut asistentam atsaukties uz jūsu saglabātajām atmiņām un izmantot tās atbildot",
"com_ui_reference_saved_memories": "References uz saglabātajām atmiņām par lietotāju",
"com_ui_reference_saved_memories_description": "Ļaut asistentam atsaukties uz saglabātajām atmiņām par lietotāju un izmantot tās atbildot",
"com_ui_refresh": "Atsvaidzināt",
"com_ui_refresh_link": "Atsvaidzināt saiti",
"com_ui_regenerate": "Atjaunot",
@@ -1196,7 +1196,7 @@
"com_ui_terms_and_conditions": "Noteikumi un nosacījumi",
"com_ui_terms_of_service": "Pakalpojumu sniegšanas noteikumi",
"com_ui_thinking": "Domā...",
"com_ui_thoughts": "Domas",
"com_ui_thoughts": "Spriešana",
"com_ui_token": "tokens",
"com_ui_token_exchange_method": "Tokenu apmaiņas metode",
"com_ui_token_url": "Tokena URL",
@@ -1253,6 +1253,7 @@
"com_ui_web_search_cohere_key": "Ievadiet Cohere API atslēgu",
"com_ui_web_search_firecrawl_url": "Firecrawl API URL (pēc izvēles)",
"com_ui_web_search_jina_key": "Ievadiet Jina API atslēgu",
"com_ui_web_search_jina_url": "Jina API URL (pēc izvēles)",
"com_ui_web_search_processing": "Rezultātu apstrāde",
"com_ui_web_search_provider": "Meklēšanas nodrošinātājs",
"com_ui_web_search_provider_searxng": "SearXNG",
@@ -1264,6 +1265,7 @@
"com_ui_web_search_reranker_cohere_key": "Iegūstiet savu Cohere API atslēgu",
"com_ui_web_search_reranker_jina": "Jina AI",
"com_ui_web_search_reranker_jina_key": "Iegūstiet savu Jina API atslēgu",
"com_ui_web_search_reranker_jina_url_help": "Uzzināt par Jina Rerank API",
"com_ui_web_search_scraper": "Scraper",
"com_ui_web_search_scraper_firecrawl": "Firecrawl API",
"com_ui_web_search_scraper_firecrawl_key": "Iegūstiet savu Firecrawl API atslēgu",


@@ -6,8 +6,21 @@
"com_a11y_start": "AI 已開始回覆。",
"com_agents_agent_card_label": "{{name}} agent。{{description}}",
"com_agents_all": "全部 Agent",
"com_agents_all_category": "全部",
"com_agents_all_description": "瀏覽所有類別中的共用 agent",
"com_agents_by_librechat": "由 LibreChat 提供",
"com_agents_category_aftersales": "售後",
"com_agents_category_aftersales_description": "售後支援、維護與客戶服務 agent",
"com_agents_category_empty": "在 {{category}} 類別中找不到 agent",
"com_agents_category_finance": "財務",
"com_agents_category_finance_description": "財務分析、預算與會計 agent",
"com_agents_category_general": "通用",
"com_agents_category_general_description": "通用型 agent處理常見任務與詢問",
"com_agents_category_hr": "人資",
"com_agents_category_hr_description": "人力資源流程、政策與員工支援 agent",
"com_agents_category_it": "IT",
"com_agents_category_it_description": "IT 支援、技術排障與系統管理 agent",
"com_agents_category_rd": "研發",
"com_agents_category_tab_label": "{{category}} 類別,{{position}} / {{total}}",
"com_agents_category_tabs_label": "Agent 類別",
"com_agents_clear_search": "清除搜尋",


@@ -8,6 +8,7 @@ import {
TwoFactorScreen,
RequestPasswordReset,
} from '~/components/Auth';
import { MarketplaceProvider } from '~/components/Agents/MarketplaceContext';
import AgentMarketplace from '~/components/Agents/Marketplace';
import { OAuthSuccess, OAuthError } from '~/components/OAuth';
import { AuthContextProvider } from '~/hooks/AuthContext';
@@ -112,11 +113,19 @@ export const router = createBrowserRouter(
},
{
path: 'agents',
element: <AgentMarketplace />,
element: (
<MarketplaceProvider>
<AgentMarketplace />
</MarketplaceProvider>
),
},
{
path: 'agents/:category',
element: <AgentMarketplace />,
element: (
<MarketplaceProvider>
<AgentMarketplace />
</MarketplaceProvider>
),
},
],
},
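
Both agents routes now render inside MarketplaceProvider from ~/components/Agents/MarketplaceContext, whose implementation is not part of this diff. As an illustrative sketch only of the React context pattern those routes rely on (the state fields and hook name below are assumptions, not the repository's actual API):

// Illustrative only: the real MarketplaceContext may expose different state.
import React, { createContext, useContext, useState } from 'react';

interface MarketplaceState {
  searchQuery: string; // assumed field
  setSearchQuery: (query: string) => void; // assumed setter
}

const MarketplaceContext = createContext<MarketplaceState | undefined>(undefined);

export function MarketplaceProvider({ children }: { children: React.ReactNode }) {
  const [searchQuery, setSearchQuery] = useState('');
  return (
    <MarketplaceContext.Provider value={{ searchQuery, setSearchQuery }}>
      {children}
    </MarketplaceContext.Provider>
  );
}

export function useMarketplace(): MarketplaceState {
  const ctx = useContext(MarketplaceContext);
  if (!ctx) {
    // Wrapping both route elements in the provider, as the diff above does, avoids this.
    throw new Error('useMarketplace must be used within a MarketplaceProvider');
  }
  return ctx;
}

Wrapping each route element individually keeps the marketplace state scoped to the /agents pages rather than the whole router tree.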


@@ -13,6 +13,7 @@ import settings from './settings';
import misc from './misc';
import isTemporary from './temporary';
export * from './agents';
export * from './mcp';
export default {
...artifacts,

Some files were not shown because too many files have changed in this diff.