Compare commits

...

23 Commits

Author SHA1 Message Date
Dustin Healy
351f30254c feat: Implement MCP Server Management API and UI Components
- Added API endpoints for managing MCP servers: create, read, update, and delete functionalities.
- Introduced new UI components for MCP server configuration, including MCPFormPanel and MCPConfig.
- Updated existing types and data provider to support MCP operations.
- Enhanced the side panel to include MCP server management options.
- Refactored related components and hooks for better integration with the new MCP features.
- Added tests for the new MCP server API functionalities.
2025-06-29 17:55:09 -07:00
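As a rough illustration of the CRUD surface described above, a minimal Express router for MCP server management might look like the sketch below; the route paths, handler names, and the MCPServer data-access object are assumptions for illustration, not taken from the commit.

// Hypothetical Express router for MCP server CRUD; all names are illustrative.
const express = require('express');

function createMCPRouter(MCPServer) {
  const router = express.Router();

  // List all configured MCP servers
  router.get('/', async (req, res) => {
    res.json(await MCPServer.findAll());
  });

  // Create a new MCP server configuration
  router.post('/', async (req, res) => {
    res.status(201).json(await MCPServer.create(req.body));
  });

  // Update an existing configuration by id
  router.put('/:id', async (req, res) => {
    res.json(await MCPServer.update(req.params.id, req.body));
  });

  // Delete a configuration by id
  router.delete('/:id', async (req, res) => {
    await MCPServer.delete(req.params.id);
    res.status(204).end();
  });

  return router;
}

module.exports = createMCPRouter;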
Danny Avila
20100e120b 🔑 feat: Set Google Service Key File Path (#8130) 2025-06-29 17:09:37 -04:00
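Loading a service key from a configurable file path, as the commit title suggests, could be as small as the sketch below; the GOOGLE_SERVICE_KEY_FILE variable name and the JSON key format are assumptions, not confirmed by the PR.

// Hypothetical loader: read the Google service account JSON from a file
// path set in the environment (GOOGLE_SERVICE_KEY_FILE is assumed here).
const fs = require('fs');

function loadGoogleServiceKey() {
  const keyFilePath = process.env.GOOGLE_SERVICE_KEY_FILE;
  if (!keyFilePath) {
    return null; // fall back to other credential sources
  }
  return JSON.parse(fs.readFileSync(keyFilePath, 'utf8'));
}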
Danny Avila
3f3cfefc52 🗒️ feat: Add Google Vertex AI Mistral OCR Strategy (#8125)
* Implemented new uploadGoogleVertexMistralOCR function for processing OCR using Google Vertex AI.
* Added vertexMistralOCRStrategy to handle file uploads.
* Updated FileSources and OCRStrategy enums to include vertexai_mistral_ocr.
* Introduced helper functions for JWT creation and Google service account configuration loading.
2025-06-28 13:26:03 -04:00
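The JWT-creation helper mentioned in the last bullet might look roughly like this sketch, which signs an RS256 service-account assertion with the jsonwebtoken package; the claim set, audience, and one-hour lifetime are assumptions.

// Hypothetical sketch: sign a Google service-account JWT for Vertex AI.
const jwt = require('jsonwebtoken');

function createServiceAccountJWT(serviceAccount, audience) {
  const now = Math.floor(Date.now() / 1000);
  return jwt.sign(
    {
      iss: serviceAccount.client_email, // issuer: the service account
      sub: serviceAccount.client_email,
      aud: audience, // e.g. a token or API endpoint (assumed)
      iat: now,
      exp: now + 3600, // one-hour lifetime (assumed)
    },
    serviceAccount.private_key,
    { algorithm: 'RS256' },
  );
}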
matt burnett
3e1591d404 🤖 fix: Remove versions and __v when Duplicating an Agent (#8115)
Revert "Add tests for agent duplication controller"

This reverts commit 3e7beb1cc336bcfe1c57411e9c151f5e6aa927e4.
2025-06-28 12:35:41 -04:00
Danny Avila
1060ae8040 🐛 fix: Assistants Endpoint Handling in createPayload Function (#8123)
* 📦 chore: bump librechat-data-provider version to 0.7.89

* 🐛 fix: Assistants endpoint handling in createPayload function
2025-06-28 12:33:43 -04:00
Danny Avila
dd67e463e4 📦 chore: bump pbkdf2 to v3.1.3 (#8091) 2025-06-26 19:19:04 -04:00
github-actions[bot]
d60ad61325 🌍 i18n: Update translation.json with latest translations (#8058)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-26 19:12:46 -04:00
Danny Avila
452151e408 🐛 fix: RAG API failing with OPENID_REUSE_TOKENS Enabled (#8090)
* feat: Implement Short-Lived JWT Token Generation for RAG API

* fix: Update import paths

* fix: Correct environment variable names for OpenID on behalf flow

* fix: Remove unnecessary spaces in OpenID on behalf flow userinfo scope

---------

Co-authored-by: Atef Bellaaj <slalom.bellaaj@external.daimlertruck.com>
2025-06-26 19:10:21 -04:00
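The short-lived token from the first bullet could be minted along the lines of this sketch (the secret source, claim shape, and five-minute lifetime are assumptions); the RAG context-handler diff further down shows the call site switching to generateShortLivedToken(req.user.id).

// Hypothetical generateShortLivedToken: a briefly-valid JWT minted per
// request, so the RAG API no longer depends on the reused/exchanged
// OpenID access token from the Authorization header.
const jwt = require('jsonwebtoken');

function generateShortLivedToken(userId) {
  return jwt.sign({ id: userId }, process.env.JWT_SECRET, {
    expiresIn: '5m', // short lifetime (assumed) limits exposure if leaked
  });
}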
Danny Avila
33b4a97b42 🔒 fix: Agents Config/Permission Checks after Streamline Change (#8089)
* refactor: port access control logic to TypeScript

* chore: Change EndpointURLs to a constant object for improved type safety

* 🐛 fix: Enhance agent access control by adding skipAgentCheck functionality

* 🐛 fix: Add endpointFileConfig prop to AttachFileMenu and update file handling logic

* 🐛 fix: Update tool handling logic to support optional groupedTools and improve null checks, add dedicated tool dialog for Assistants

* chore: Export Accordion component from UI index for improved modularity

* feat: Add ActivePanelContext for managing active panel state across components

* chore: Replace string IDs with EModelEndpoint constants for assistants and agents in useSideNavLinks

* fix: Integrate access checks for agent creation and deletion routes in actions.js
2025-06-26 18:53:05 -04:00
Sebastien Bruel
9cdc62b655 📂 fix: Prevent Null Reference Errors in File Process (#8084) 2025-06-26 18:51:35 -04:00
Danny Avila
799f0e5810 🐛 fix: Move MemoryEntry and PluginAuth model retrieval inside methods for Runtime Usage 2025-06-25 20:58:34 -04:00
Danny Avila
cbda3cb529 🕐 feat: Configurable Retention Period for Temporary Chats (#8056)
* feat: Add configurable retention period for temporary chats

* Addressing eslint errors

* Fix: failing test due to missing registration

* Update: variable name and use hours instead of days for chat retention

* Addressing comments

* chore: fix import order in Conversation.js

* chore: import order in Message.js

* chore: fix import order in config.ts

* chore: move common methods to packages/api to reduce potential for circular dependencies

* refactor: update temp chat retention config type to Partial<TCustomConfig>

* refactor: remove unused config variable from AppService and update loadCustomConfig tests with logger mock

* refactor: handle model undefined edge case by moving Session model initialization inside methods

---------

Co-authored-by: Rakshit Tiwari <rak1729e@gmail.com>
2025-06-25 17:16:26 -04:00
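An hours-based retention setting plausibly reduces to computing an expiration timestamp when a temporary chat is saved, as in the sketch below; the config path, default, and clamping bounds are assumptions.

// Hypothetical helper: derive an expiredAt date for a temporary chat
// from an hours-based retention setting (names and limits assumed).
const DEFAULT_RETENTION_HOURS = 24 * 30;

function getTempChatExpiration(config) {
  let hours = config?.interface?.temporaryChatRetention ?? DEFAULT_RETENTION_HOURS;
  hours = Math.min(Math.max(hours, 1), 8760); // clamp between 1 hour and 1 year
  return new Date(Date.now() + hours * 60 * 60 * 1000);
}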
Karol Potocki
3ab1bd65e5 🐛 fix: Support Bedrock Provider for MCP Image Content Rendering (#8047) 2025-06-25 15:38:24 -04:00
Marlon
c551ba21f5 📜 chore: Update .env.example (#8043)
Update recent Gemini model names and remove deprecated Gemini models from env.example
2025-06-25 15:31:24 -04:00
Danny Avila
c87422a1e0 🧠 feat: Thinking Budget, Include Thoughts, and Dynamic Thinking for Gemini 2.5 (#8055)
* feat: support thinking budget parameter for Gemini 2.5 series (#6949, #7542)

https://ai.google.dev/gemini-api/docs/thinking#set-budget

* refactor: update thinking budget minimum value to -1 for dynamic thinking

- see: https://ai.google.dev/gemini-api/docs/thinking#set-budget

* chore: bump @librechat/agents to v2.4.43

* refactor: rename LLMConfigOptions to OpenAIConfigOptions for clarity and consistency

- Updated type definitions and references in initialize.ts, llm.ts, and openai.ts to reflect the new naming convention.
- Ensured that the OpenAI configuration options are consistently used across the relevant files.

* refactor: port Google LLM methods to TypeScript Package

* chore: update @librechat/agents version to 2.4.43 in package-lock.json and package.json

* refactor: update thinking budget description for clarity and adjust placeholder in parameter settings

* refactor: enhance googleSettings default value for thinking budget to support dynamic adjustment

* chore: update @librechat/agents to v2.4.44 for Vertex Dynamic Thinking workaround

* refactor: rename google config function, update `createRun` types, use `reasoning` as `reasoningKey` for Google

* refactor: simplify placeholder handling in DynamicInput component

* refactor: enhance thinking budget description for clarity and allow automatic decision by setting to "-1"

* refactor: update text styling in OptionHover component for improved readability

* chore: update @librechat/agents dependency to v2.4.46 in package.json and package-lock.json

* chore: update @librechat/api version to 1.2.5 in package.json and package-lock.json

* refactor: enhance `clientOptions` handling by filtering `omitTitleOptions`, add `json` field for Google models

---------

Co-authored-by: ciffelia <15273128+ciffelia@users.noreply.github.com>
2025-06-25 15:14:33 -04:00
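The GoogleClient diff later in this view shows the thinkingConfig mapping these commits add; extracted into a standalone sketch, it amounts to the following, where a budget of -1 requests dynamic thinking (the model decides how much to think) and 0 disables thinking entirely.

// Mirrors the mapping added to GoogleClient (see the diff below): translate
// LibreChat's thinking/thinkingBudget settings into the Gemini API's
// thinkingConfig, then drop the client-side fields.
const { googleSettings } = require('librechat-data-provider');

function applyThinkingConfig(modelOptions) {
  const enabled = modelOptions.thinking ?? googleSettings.thinking.default;
  modelOptions.thinkingConfig = {
    thinkingBudget: enabled ? modelOptions.thinkingBudget : 0, // -1 = dynamic
  };
  delete modelOptions.thinking;
  delete modelOptions.thinkingBudget;
  return modelOptions;
}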
Dustin Healy
b169306096 🧪 ci: Add Tests for Custom Endpoint Header Resolution (#8045)
* Enhanced existing tests for the `resolveHeaders` function to cover all user field placeholders and messy scenarios.
* Added basic integration tests for custom endpoints initialization file
2025-06-24 21:11:06 -04:00
Rakshit Tiwari
42977ac0d0 🖼️ feat: Add Optional Client-Side Image Resizing to Prevent Upload Errors (#7909)
* feat: Add optional client-side image resizing to prevent upload errors

* Addressing comments from author

* Addressing eslint errors

* Fix naming: rename clientsideresize to clientresize
2025-06-24 10:43:29 -04:00
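Client-side resizing of this kind is typically done by redrawing the image on a canvas before upload; the browser-side sketch below uses illustrative dimension and quality values, not the PR's actual defaults.

// Hypothetical browser-side resize: scale an image file down to fit within
// maxDim x maxDim before upload, re-encoding it as JPEG.
async function resizeImage(file, maxDim = 2048, quality = 0.9) {
  const bitmap = await createImageBitmap(file);
  const scale = Math.min(1, maxDim / Math.max(bitmap.width, bitmap.height));
  const canvas = document.createElement('canvas');
  canvas.width = Math.round(bitmap.width * scale);
  canvas.height = Math.round(bitmap.height * scale);
  canvas.getContext('2d').drawImage(bitmap, 0, 0, canvas.width, canvas.height);
  return new Promise((resolve) => canvas.toBlob(resolve, 'image/jpeg', quality));
}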
Dustin Healy
d9a0fe03ed 🔧 fix: User Placeholders in Headers for Custom Endpoints (#8030)
* hotfix(custom-endpoints): fix user placeholder resolution in headers

* fix: import

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-06-24 08:21:14 -04:00
Danny Avila
d39b99971f 🧠 fix: Agent Title Config & Resource Handling (#8028)
* 🔧 fix: enhance client options handling in AgentClient and set default recursion limit

- Updated the recursion limit to default to 25 if not specified in agentsEConfig.
- Enhanced client options in AgentClient to include model parameters such as apiKey and anthropicApiUrl from agentModelParams.
- Updated requestOptions in the anthropic endpoint to use reverseProxyUrl as anthropicApiUrl.

* Enhance LLM configuration tests with edge case handling

* chore: add return type annotation for getCustomEndpointConfig function

* fix: update modelOptions handling to use optional chaining and default to empty object in multiple endpoint initializations

* chore: update @librechat/agents to version 2.4.42

* refactor: streamline agent endpoint configuration and enhance client options handling for title generation

- Introduced a new `getProviderConfig` function to centralize provider configuration logic.
- Updated `AgentClient` to utilize the new provider configuration, improving clarity and maintainability.
- Removed redundant code related to endpoint initialization and model parameter handling.
- Enhanced error logging for missing endpoint configurations.

* fix: add abort handling for image generation and editing in OpenAIImageTools

* ci: enhance getLLMConfig tests to verify fetchOptions and dispatcher properties

* fix: use optional chaining for endpointOption properties in getOptions

* fix: increase title generation timeout from 25s to 45s, pass `endpointOption` to `getOptions`

* fix: update file filtering logic in getToolFilesByIds to ensure text field is properly checked

* fix: add error handling for empty OCR results in uploadMistralOCR and uploadAzureMistralOCR

* fix: enhance error handling in file upload to include 'No OCR result' message

* chore: update error messages in uploadMistralOCR and uploadAzureMistralOCR

* fix: enhance filtering logic in getToolFilesByIds to include context checks for OCR resources to only include files directly attached to agent

---------

Co-authored-by: Matt Burnett <matt.burnett@shopify.com>
2025-06-23 19:44:24 -04:00
Marco Beretta
1b7e044bf5 🤩 style: DialogImage, Update Stylesheet, and Improve Accessibility (#8014)
* 🔧 fix: Adjust typography and border styles for improved readability in markdown components

* 🔧 fix: Enhance code block styling in markdown for better visibility and consistency

* 🔧 fix: Adjust margins and line heights for improved readability in markdown elements

* 🔧 fix: Adjust spacing for horizontal rules in markdown for improved consistency

* 🔧 fix: Refactor DialogImage component for improved quality styling and layout consistency

* 🔧 fix: Enhance zoom and pan functionality in DialogImage component with improved controls and user experience

* 🔧 fix: Improve zoom and pan functionality in DialogImage component with enhanced controls and reset zoom feature
2025-06-23 14:30:15 -04:00
Danny Avila
5c947be455 fix: Minor Menu Issues (#8026)
* fix: Enable portal support in ExportAndShareMenu component

* fix: MCPSubMenu with focus loop and improved button click handling

* chore: remove "tools" header in toolsdropdown
2025-06-23 14:29:21 -04:00
Dustin Healy
2b2f7fe289 feat: Configurable MCP Dropdown Placeholder (#7988)
* new env variable for MCP label

* 🔄 refactor: Update MCPSelect placeholderText to draw from interface section of librechat.yaml rather than .env

* 🧹 chore: extract mcpServers schema for better maintainability

* 🔄 refactor: Update MCPSelect and useMCPSelect to utilize TPlugin type for better type consistency

* 🔄 refactor: Pass placeholder from startupConfig to MCPSubMenu for improved localization

* 🔄 refactor: Integrate startupConfig into BadgeRowContext and related components for enhanced configuration management

---------

Co-authored-by: mwbrandao <mariana.brandao@nos.pt>
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-06-23 13:21:01 -04:00
Danny Avila
a058963a9f 👤 feat: User Placeholder Variables for Custom Endpoint Headers (#7993)
* 🔧 refactor: move `processMCPEnv` from `librechat-data-provider` to `@librechat/api`

* 🔧 refactor: Update resolveHeaders import paths

* 🔧 refactor: Enhance resolveHeaders to support user and custom variables

- Updated resolveHeaders function to accept user and custom user variables for placeholder replacement.
- Modified header resolution in multiple client and controller files to utilize the enhanced resolveHeaders functionality.
- Added comprehensive tests for resolveHeaders to ensure correct processing of user and custom variables.

* 🔧 fix: Update user ID placeholder processing in env.ts

* 🔧 fix: Remove arguments passing this.user rather than req.user

- Updated multiple client and controller files to call resolveHeaders without the user parameter

* 🔧 refactor: Enhance processUserPlaceholders to be more readable / less nested

* 🔧 refactor: Update processUserPlaceholders to pass all tests in mpc.spec.ts and env.spec.ts

* chore: remove legacy ChatGPTClient

* chore: remove LLM initialization code

* chore: initial deprecation removal of `gptPlugins`

* chore: remove cohere-ai dependency from package.json and package-lock.json

* chore: update brace-expansion to version 2.0.2 and add license information

* chore: remove PluginsClient test file

* chore: remove legacy

* ci: remove deprecated sendMessage/getCompletion/chatCompletion tests

---------

Co-authored-by: Dustin Healy <54083382+dustinhealy@users.noreply.github.com>
2025-06-23 12:39:27 -04:00
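The placeholder resolution this PR describes reduces to string substitution over header values; the resolveHeaders-style sketch below assumes placeholders of the form {{LIBRECHAT_USER_*}} and abbreviates the supported field list.

// Sketch of user-placeholder substitution: replace tokens such as
// {{LIBRECHAT_USER_ID}} in every header value (field list abbreviated).
function resolveUserPlaceholders(headers = {}, user = {}) {
  const fields = { ID: user.id, EMAIL: user.email, USERNAME: user.username };
  const resolved = {};
  for (const [name, value] of Object.entries(headers)) {
    resolved[name] = Object.entries(fields).reduce(
      (acc, [key, field]) => acc.split(`{{LIBRECHAT_USER_${key}}}`).join(field ?? ''),
      value,
    );
  }
  return resolved;
}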
175 changed files with 5248 additions and 3860 deletions

View File

@@ -58,7 +58,7 @@ DEBUG_CONSOLE=false
# Endpoints #
#===================================================#
# ENDPOINTS=openAI,assistants,azureOpenAI,google,gptPlugins,anthropic
# ENDPOINTS=openAI,assistants,azureOpenAI,google,anthropic
PROXY=
@@ -142,10 +142,10 @@ GOOGLE_KEY=user_provided
# GOOGLE_AUTH_HEADER=true
# Gemini API (AI Studio)
# GOOGLE_MODELS=gemini-2.5-pro-preview-05-06,gemini-2.5-flash-preview-04-17,gemini-2.0-flash-001,gemini-2.0-flash-exp,gemini-2.0-flash-lite-001,gemini-1.5-pro-002,gemini-1.5-flash-002
# GOOGLE_MODELS=gemini-2.5-pro,gemini-2.5-flash,gemini-2.5-flash-lite-preview-06-17,gemini-2.0-flash,gemini-2.0-flash-lite
# Vertex AI
# GOOGLE_MODELS=gemini-2.5-pro-preview-05-06,gemini-2.5-flash-preview-04-17,gemini-2.0-flash-001,gemini-2.0-flash-exp,gemini-2.0-flash-lite-001,gemini-1.5-pro-002,gemini-1.5-flash-002
# GOOGLE_MODELS=gemini-2.5-pro,gemini-2.5-flash,gemini-2.5-flash-lite-preview-06-17,gemini-2.0-flash-001,gemini-2.0-flash-lite-001
# GOOGLE_TITLE_MODEL=gemini-2.0-flash-lite-001
@@ -453,8 +453,8 @@ OPENID_REUSE_TOKENS=
OPENID_JWKS_URL_CACHE_ENABLED=
OPENID_JWKS_URL_CACHE_TIME= # 600000 ms eq to 10 minutes leave empty to disable caching
#Set to true to trigger token exchange flow to acquire access token for the userinfo endpoint.
OPENID_ON_BEHALF_FLOW_FOR_USERINFRO_REQUIRED=
OPENID_ON_BEHALF_FLOW_USERINFRO_SCOPE = "user.read" # example for Scope Needed for Microsoft Graph API
OPENID_ON_BEHALF_FLOW_FOR_USERINFO_REQUIRED=
OPENID_ON_BEHALF_FLOW_USERINFO_SCOPE="user.read" # example for Scope Needed for Microsoft Graph API
# Set to true to use the OpenID Connect end session endpoint for logout
OPENID_USE_END_SESSION_ENDPOINT=
@@ -657,4 +657,4 @@ OPENWEATHER_API_KEY=
# Reranker (Required)
# JINA_API_KEY=your_jina_api_key
# or
# COHERE_API_KEY=your_cohere_api_key
# COHERE_API_KEY=your_cohere_api_key

View File

@@ -1,804 +0,0 @@
const { Keyv } = require('keyv');
const crypto = require('crypto');
const { CohereClient } = require('cohere-ai');
const { fetchEventSource } = require('@waylaidwanderer/fetch-event-source');
const { constructAzureURL, genAzureChatCompletion } = require('@librechat/api');
const { encoding_for_model: encodingForModel, get_encoding: getEncoding } = require('tiktoken');
const {
ImageDetail,
EModelEndpoint,
resolveHeaders,
CohereConstants,
mapModelToAzureConfig,
} = require('librechat-data-provider');
const { createContextHandlers } = require('./prompts');
const { createCoherePayload } = require('./llm');
const { extractBaseURL } = require('~/utils');
const BaseClient = require('./BaseClient');
const { logger } = require('~/config');
const CHATGPT_MODEL = 'gpt-3.5-turbo';
const tokenizersCache = {};
class ChatGPTClient extends BaseClient {
constructor(apiKey, options = {}, cacheOptions = {}) {
super(apiKey, options, cacheOptions);
cacheOptions.namespace = cacheOptions.namespace || 'chatgpt';
this.conversationsCache = new Keyv(cacheOptions);
this.setOptions(options);
}
setOptions(options) {
if (this.options && !this.options.replaceOptions) {
// nested options aren't spread properly, so we need to do this manually
this.options.modelOptions = {
...this.options.modelOptions,
...options.modelOptions,
};
delete options.modelOptions;
// now we can merge options
this.options = {
...this.options,
...options,
};
} else {
this.options = options;
}
if (this.options.openaiApiKey) {
this.apiKey = this.options.openaiApiKey;
}
const modelOptions = this.options.modelOptions || {};
this.modelOptions = {
...modelOptions,
// set some good defaults (check for undefined in some cases because they may be 0)
model: modelOptions.model || CHATGPT_MODEL,
temperature: typeof modelOptions.temperature === 'undefined' ? 0.8 : modelOptions.temperature,
top_p: typeof modelOptions.top_p === 'undefined' ? 1 : modelOptions.top_p,
presence_penalty:
typeof modelOptions.presence_penalty === 'undefined' ? 1 : modelOptions.presence_penalty,
stop: modelOptions.stop,
};
this.isChatGptModel = this.modelOptions.model.includes('gpt-');
const { isChatGptModel } = this;
this.isUnofficialChatGptModel =
this.modelOptions.model.startsWith('text-chat') ||
this.modelOptions.model.startsWith('text-davinci-002-render');
const { isUnofficialChatGptModel } = this;
// Davinci models have a max context length of 4097 tokens.
this.maxContextTokens = this.options.maxContextTokens || (isChatGptModel ? 4095 : 4097);
// I decided to reserve 1024 tokens for the response.
// The max prompt tokens is determined by the max context tokens minus the max response tokens.
// Earlier messages will be dropped until the prompt is within the limit.
this.maxResponseTokens = this.modelOptions.max_tokens || 1024;
this.maxPromptTokens =
this.options.maxPromptTokens || this.maxContextTokens - this.maxResponseTokens;
if (this.maxPromptTokens + this.maxResponseTokens > this.maxContextTokens) {
throw new Error(
`maxPromptTokens + max_tokens (${this.maxPromptTokens} + ${this.maxResponseTokens} = ${
this.maxPromptTokens + this.maxResponseTokens
}) must be less than or equal to maxContextTokens (${this.maxContextTokens})`,
);
}
this.userLabel = this.options.userLabel || 'User';
this.chatGptLabel = this.options.chatGptLabel || 'ChatGPT';
if (isChatGptModel) {
// Use these faux tokens to help the AI understand the context since we are building the chat log ourselves.
// Trying to use "<|im_start|>" causes the AI to still generate "<" or "<|" at the end sometimes for some reason,
// without tripping the stop sequences, so I'm using "||>" instead.
this.startToken = '||>';
this.endToken = '';
this.gptEncoder = this.constructor.getTokenizer('cl100k_base');
} else if (isUnofficialChatGptModel) {
this.startToken = '<|im_start|>';
this.endToken = '<|im_end|>';
this.gptEncoder = this.constructor.getTokenizer('text-davinci-003', true, {
'<|im_start|>': 100264,
'<|im_end|>': 100265,
});
} else {
// Previously I was trying to use "<|endoftext|>" but there seems to be some bug with OpenAI's token counting
// system that causes only the first "<|endoftext|>" to be counted as 1 token, and the rest are not treated
// as a single token. So we're using this instead.
this.startToken = '||>';
this.endToken = '';
try {
this.gptEncoder = this.constructor.getTokenizer(this.modelOptions.model, true);
} catch {
this.gptEncoder = this.constructor.getTokenizer('text-davinci-003', true);
}
}
if (!this.modelOptions.stop) {
const stopTokens = [this.startToken];
if (this.endToken && this.endToken !== this.startToken) {
stopTokens.push(this.endToken);
}
stopTokens.push(`\n${this.userLabel}:`);
stopTokens.push('<|diff_marker|>');
// I chose not to do one for `chatGptLabel` because I've never seen it happen
this.modelOptions.stop = stopTokens;
}
if (this.options.reverseProxyUrl) {
this.completionsUrl = this.options.reverseProxyUrl;
} else if (isChatGptModel) {
this.completionsUrl = 'https://api.openai.com/v1/chat/completions';
} else {
this.completionsUrl = 'https://api.openai.com/v1/completions';
}
return this;
}
static getTokenizer(encoding, isModelName = false, extendSpecialTokens = {}) {
if (tokenizersCache[encoding]) {
return tokenizersCache[encoding];
}
let tokenizer;
if (isModelName) {
tokenizer = encodingForModel(encoding, extendSpecialTokens);
} else {
tokenizer = getEncoding(encoding, extendSpecialTokens);
}
tokenizersCache[encoding] = tokenizer;
return tokenizer;
}
/** @type {getCompletion} */
async getCompletion(input, onProgress, onTokenProgress, abortController = null) {
if (!abortController) {
abortController = new AbortController();
}
let modelOptions = { ...this.modelOptions };
if (typeof onProgress === 'function') {
modelOptions.stream = true;
}
if (this.isChatGptModel) {
modelOptions.messages = input;
} else {
modelOptions.prompt = input;
}
if (this.useOpenRouter && modelOptions.prompt) {
delete modelOptions.stop;
}
const { debug } = this.options;
let baseURL = this.completionsUrl;
if (debug) {
console.debug();
console.debug(baseURL);
console.debug(modelOptions);
console.debug();
}
const opts = {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
};
if (this.isVisionModel) {
modelOptions.max_tokens = 4000;
}
/** @type {TAzureConfig | undefined} */
const azureConfig = this.options?.req?.app?.locals?.[EModelEndpoint.azureOpenAI];
const isAzure = this.azure || this.options.azure;
if (
(isAzure && this.isVisionModel && azureConfig) ||
(azureConfig && this.isVisionModel && this.options.endpoint === EModelEndpoint.azureOpenAI)
) {
const { modelGroupMap, groupMap } = azureConfig;
const {
azureOptions,
baseURL,
headers = {},
serverless,
} = mapModelToAzureConfig({
modelName: modelOptions.model,
modelGroupMap,
groupMap,
});
opts.headers = resolveHeaders(headers);
this.langchainProxy = extractBaseURL(baseURL);
this.apiKey = azureOptions.azureOpenAIApiKey;
const groupName = modelGroupMap[modelOptions.model].group;
this.options.addParams = azureConfig.groupMap[groupName].addParams;
this.options.dropParams = azureConfig.groupMap[groupName].dropParams;
// Note: `forcePrompt` not re-assigned as only chat models are vision models
this.azure = !serverless && azureOptions;
this.azureEndpoint =
!serverless && genAzureChatCompletion(this.azure, modelOptions.model, this);
if (serverless === true) {
this.options.defaultQuery = azureOptions.azureOpenAIApiVersion
? { 'api-version': azureOptions.azureOpenAIApiVersion }
: undefined;
this.options.headers['api-key'] = this.apiKey;
}
}
if (this.options.defaultQuery) {
opts.defaultQuery = this.options.defaultQuery;
}
if (this.options.headers) {
opts.headers = { ...opts.headers, ...this.options.headers };
}
if (isAzure) {
// Azure does not accept `model` in the body, so we need to remove it.
delete modelOptions.model;
baseURL = this.langchainProxy
? constructAzureURL({
baseURL: this.langchainProxy,
azureOptions: this.azure,
})
: this.azureEndpoint.split(/(?<!\/)\/(chat|completion)\//)[0];
if (this.options.forcePrompt) {
baseURL += '/completions';
} else {
baseURL += '/chat/completions';
}
opts.defaultQuery = { 'api-version': this.azure.azureOpenAIApiVersion };
opts.headers = { ...opts.headers, 'api-key': this.apiKey };
} else if (this.apiKey) {
opts.headers.Authorization = `Bearer ${this.apiKey}`;
}
if (process.env.OPENAI_ORGANIZATION) {
opts.headers['OpenAI-Organization'] = process.env.OPENAI_ORGANIZATION;
}
if (this.useOpenRouter) {
opts.headers['HTTP-Referer'] = 'https://librechat.ai';
opts.headers['X-Title'] = 'LibreChat';
}
/* hacky fixes for Mistral AI API:
- Re-orders system message to the top of the messages payload, as not allowed anywhere else
- If there is only one message and it's a system message, change the role to user
*/
if (baseURL.includes('https://api.mistral.ai/v1') && modelOptions.messages) {
const { messages } = modelOptions;
const systemMessageIndex = messages.findIndex((msg) => msg.role === 'system');
if (systemMessageIndex > 0) {
const [systemMessage] = messages.splice(systemMessageIndex, 1);
messages.unshift(systemMessage);
}
modelOptions.messages = messages;
if (messages.length === 1 && messages[0].role === 'system') {
modelOptions.messages[0].role = 'user';
}
}
if (this.options.addParams && typeof this.options.addParams === 'object') {
modelOptions = {
...modelOptions,
...this.options.addParams,
};
logger.debug('[ChatGPTClient] chatCompletion: added params', {
addParams: this.options.addParams,
modelOptions,
});
}
if (this.options.dropParams && Array.isArray(this.options.dropParams)) {
this.options.dropParams.forEach((param) => {
delete modelOptions[param];
});
logger.debug('[ChatGPTClient] chatCompletion: dropped params', {
dropParams: this.options.dropParams,
modelOptions,
});
}
if (baseURL.startsWith(CohereConstants.API_URL)) {
const payload = createCoherePayload({ modelOptions });
return await this.cohereChatCompletion({ payload, onTokenProgress });
}
if (baseURL.includes('v1') && !baseURL.includes('/completions') && !this.isChatCompletion) {
baseURL = baseURL.split('v1')[0] + 'v1/completions';
} else if (
baseURL.includes('v1') &&
!baseURL.includes('/chat/completions') &&
this.isChatCompletion
) {
baseURL = baseURL.split('v1')[0] + 'v1/chat/completions';
}
const BASE_URL = new URL(baseURL);
if (opts.defaultQuery) {
Object.entries(opts.defaultQuery).forEach(([key, value]) => {
BASE_URL.searchParams.append(key, value);
});
delete opts.defaultQuery;
}
const completionsURL = BASE_URL.toString();
opts.body = JSON.stringify(modelOptions);
if (modelOptions.stream) {
return new Promise(async (resolve, reject) => {
try {
let done = false;
await fetchEventSource(completionsURL, {
...opts,
signal: abortController.signal,
async onopen(response) {
if (response.status === 200) {
return;
}
if (debug) {
console.debug(response);
}
let error;
try {
const body = await response.text();
error = new Error(`Failed to send message. HTTP ${response.status} - ${body}`);
error.status = response.status;
error.json = JSON.parse(body);
} catch {
error = error || new Error(`Failed to send message. HTTP ${response.status}`);
}
throw error;
},
onclose() {
if (debug) {
console.debug('Server closed the connection unexpectedly, returning...');
}
// workaround for private API not sending [DONE] event
if (!done) {
onProgress('[DONE]');
resolve();
}
},
onerror(err) {
if (debug) {
console.debug(err);
}
// rethrow to stop the operation
throw err;
},
onmessage(message) {
if (debug) {
console.debug(message);
}
if (!message.data || message.event === 'ping') {
return;
}
if (message.data === '[DONE]') {
onProgress('[DONE]');
resolve();
done = true;
return;
}
onProgress(JSON.parse(message.data));
},
});
} catch (err) {
reject(err);
}
});
}
const response = await fetch(completionsURL, {
...opts,
signal: abortController.signal,
});
if (response.status !== 200) {
const body = await response.text();
const error = new Error(`Failed to send message. HTTP ${response.status} - ${body}`);
error.status = response.status;
try {
error.json = JSON.parse(body);
} catch {
error.body = body;
}
throw error;
}
return response.json();
}
/** @type {cohereChatCompletion} */
async cohereChatCompletion({ payload, onTokenProgress }) {
const cohere = new CohereClient({
token: this.apiKey,
environment: this.completionsUrl,
});
if (!payload.stream) {
const chatResponse = await cohere.chat(payload);
return chatResponse.text;
}
const chatStream = await cohere.chatStream(payload);
let reply = '';
for await (const message of chatStream) {
if (!message) {
continue;
}
if (message.eventType === 'text-generation' && message.text) {
onTokenProgress(message.text);
reply += message.text;
}
/*
Cohere API Chinese Unicode character replacement hotfix.
Should be un-commented when the following issue is resolved:
https://github.com/cohere-ai/cohere-typescript/issues/151
else if (message.eventType === 'stream-end' && message.response) {
reply = message.response.text;
}
*/
}
return reply;
}
async generateTitle(userMessage, botMessage) {
const instructionsPayload = {
role: 'system',
content: `Write an extremely concise subtitle for this conversation with no more than a few words. All words should be capitalized. Exclude punctuation.
||>Message:
${userMessage.message}
||>Response:
${botMessage.message}
||>Title:`,
};
const titleGenClientOptions = JSON.parse(JSON.stringify(this.options));
titleGenClientOptions.modelOptions = {
model: 'gpt-3.5-turbo',
temperature: 0,
presence_penalty: 0,
frequency_penalty: 0,
};
const titleGenClient = new ChatGPTClient(this.apiKey, titleGenClientOptions);
const result = await titleGenClient.getCompletion([instructionsPayload], null);
// remove any non-alphanumeric characters, replace multiple spaces with 1, and then trim
return result.choices[0].message.content
.replace(/[^a-zA-Z0-9' ]/g, '')
.replace(/\s+/g, ' ')
.trim();
}
async sendMessage(message, opts = {}) {
if (opts.clientOptions && typeof opts.clientOptions === 'object') {
this.setOptions(opts.clientOptions);
}
const conversationId = opts.conversationId || crypto.randomUUID();
const parentMessageId = opts.parentMessageId || crypto.randomUUID();
let conversation =
typeof opts.conversation === 'object'
? opts.conversation
: await this.conversationsCache.get(conversationId);
let isNewConversation = false;
if (!conversation) {
conversation = {
messages: [],
createdAt: Date.now(),
};
isNewConversation = true;
}
const shouldGenerateTitle = opts.shouldGenerateTitle && isNewConversation;
const userMessage = {
id: crypto.randomUUID(),
parentMessageId,
role: 'User',
message,
};
conversation.messages.push(userMessage);
// Doing it this way instead of having each message be a separate element in the array seems to be more reliable,
// especially when it comes to keeping the AI in character. It also seems to improve coherency and context retention.
const { prompt: payload, context } = await this.buildPrompt(
conversation.messages,
userMessage.id,
{
isChatGptModel: this.isChatGptModel,
promptPrefix: opts.promptPrefix,
},
);
if (this.options.keepNecessaryMessagesOnly) {
conversation.messages = context;
}
let reply = '';
let result = null;
if (typeof opts.onProgress === 'function') {
await this.getCompletion(
payload,
(progressMessage) => {
if (progressMessage === '[DONE]') {
return;
}
const token = this.isChatGptModel
? progressMessage.choices[0].delta.content
: progressMessage.choices[0].text;
// first event's delta content is always undefined
if (!token) {
return;
}
if (this.options.debug) {
console.debug(token);
}
if (token === this.endToken) {
return;
}
opts.onProgress(token);
reply += token;
},
opts.abortController || new AbortController(),
);
} else {
result = await this.getCompletion(
payload,
null,
opts.abortController || new AbortController(),
);
if (this.options.debug) {
console.debug(JSON.stringify(result));
}
if (this.isChatGptModel) {
reply = result.choices[0].message.content;
} else {
reply = result.choices[0].text.replace(this.endToken, '');
}
}
// avoids some rendering issues when using the CLI app
if (this.options.debug) {
console.debug();
}
reply = reply.trim();
const replyMessage = {
id: crypto.randomUUID(),
parentMessageId: userMessage.id,
role: 'ChatGPT',
message: reply,
};
conversation.messages.push(replyMessage);
const returnData = {
response: replyMessage.message,
conversationId,
parentMessageId: replyMessage.parentMessageId,
messageId: replyMessage.id,
details: result || {},
};
if (shouldGenerateTitle) {
conversation.title = await this.generateTitle(userMessage, replyMessage);
returnData.title = conversation.title;
}
await this.conversationsCache.set(conversationId, conversation);
if (this.options.returnConversation) {
returnData.conversation = conversation;
}
return returnData;
}
async buildPrompt(messages, { isChatGptModel = false, promptPrefix = null }) {
promptPrefix = (promptPrefix || this.options.promptPrefix || '').trim();
// Handle attachments and create augmentedPrompt
if (this.options.attachments) {
const attachments = await this.options.attachments;
const lastMessage = messages[messages.length - 1];
if (this.message_file_map) {
this.message_file_map[lastMessage.messageId] = attachments;
} else {
this.message_file_map = {
[lastMessage.messageId]: attachments,
};
}
const files = await this.addImageURLs(lastMessage, attachments);
this.options.attachments = files;
this.contextHandlers = createContextHandlers(this.options.req, lastMessage.text);
}
if (this.message_file_map) {
this.contextHandlers = createContextHandlers(
this.options.req,
messages[messages.length - 1].text,
);
}
// Calculate image token cost and process embedded files
messages.forEach((message, i) => {
if (this.message_file_map && this.message_file_map[message.messageId]) {
const attachments = this.message_file_map[message.messageId];
for (const file of attachments) {
if (file.embedded) {
this.contextHandlers?.processFile(file);
continue;
}
messages[i].tokenCount =
(messages[i].tokenCount || 0) +
this.calculateImageTokenCost({
width: file.width,
height: file.height,
detail: this.options.imageDetail ?? ImageDetail.auto,
});
}
}
});
if (this.contextHandlers) {
this.augmentedPrompt = await this.contextHandlers.createContext();
promptPrefix = this.augmentedPrompt + promptPrefix;
}
if (promptPrefix) {
// If the prompt prefix doesn't end with the end token, add it.
if (!promptPrefix.endsWith(`${this.endToken}`)) {
promptPrefix = `${promptPrefix.trim()}${this.endToken}\n\n`;
}
promptPrefix = `${this.startToken}Instructions:\n${promptPrefix}`;
}
const promptSuffix = `${this.startToken}${this.chatGptLabel}:\n`; // Prompt ChatGPT to respond.
const instructionsPayload = {
role: 'system',
content: promptPrefix,
};
const messagePayload = {
role: 'system',
content: promptSuffix,
};
let currentTokenCount;
if (isChatGptModel) {
currentTokenCount =
this.getTokenCountForMessage(instructionsPayload) +
this.getTokenCountForMessage(messagePayload);
} else {
currentTokenCount = this.getTokenCount(`${promptPrefix}${promptSuffix}`);
}
let promptBody = '';
const maxTokenCount = this.maxPromptTokens;
const context = [];
// Iterate backwards through the messages, adding them to the prompt until we reach the max token count.
// Do this within a recursive async function so that it doesn't block the event loop for too long.
const buildPromptBody = async () => {
if (currentTokenCount < maxTokenCount && messages.length > 0) {
const message = messages.pop();
const roleLabel =
message?.isCreatedByUser || message?.role?.toLowerCase() === 'user'
? this.userLabel
: this.chatGptLabel;
const messageString = `${this.startToken}${roleLabel}:\n${
message?.text ?? message?.message
}${this.endToken}\n`;
let newPromptBody;
if (promptBody || isChatGptModel) {
newPromptBody = `${messageString}${promptBody}`;
} else {
// Always insert prompt prefix before the last user message, if not gpt-3.5-turbo.
// This makes the AI obey the prompt instructions better, which is important for custom instructions.
// After a bunch of testing, it doesn't seem to cause the AI any confusion, even if you ask it things
// like "what's the last thing I wrote?".
newPromptBody = `${promptPrefix}${messageString}${promptBody}`;
}
context.unshift(message);
const tokenCountForMessage = this.getTokenCount(messageString);
const newTokenCount = currentTokenCount + tokenCountForMessage;
if (newTokenCount > maxTokenCount) {
if (promptBody) {
// This message would put us over the token limit, so don't add it.
return false;
}
// This is the first message, so we can't add it. Just throw an error.
throw new Error(
`Prompt is too long. Max token count is ${maxTokenCount}, but prompt is ${newTokenCount} tokens long.`,
);
}
promptBody = newPromptBody;
currentTokenCount = newTokenCount;
// wait for next tick to avoid blocking the event loop
await new Promise((resolve) => setImmediate(resolve));
return buildPromptBody();
}
return true;
};
await buildPromptBody();
const prompt = `${promptBody}${promptSuffix}`;
if (isChatGptModel) {
messagePayload.content = prompt;
// Add 3 tokens for Assistant Label priming after all messages have been counted.
currentTokenCount += 3;
}
// Use up to `this.maxContextTokens` tokens (prompt + response), but try to leave `this.maxTokens` tokens for the response.
this.modelOptions.max_tokens = Math.min(
this.maxContextTokens - currentTokenCount,
this.maxResponseTokens,
);
if (isChatGptModel) {
return { prompt: [instructionsPayload, messagePayload], context };
}
return { prompt, context, promptTokens: currentTokenCount };
}
getTokenCount(text) {
return this.gptEncoder.encode(text, 'all').length;
}
/**
* Algorithm adapted from "6. Counting tokens for chat API calls" of
* https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
*
* An additional 3 tokens need to be added for assistant label priming after all messages have been counted.
*
* @param {Object} message
*/
getTokenCountForMessage(message) {
// Note: gpt-3.5-turbo and gpt-4 may update over time. Use default for these as well as for unknown models
let tokensPerMessage = 3;
let tokensPerName = 1;
if (this.modelOptions.model === 'gpt-3.5-turbo-0301') {
tokensPerMessage = 4;
tokensPerName = -1;
}
let numTokens = tokensPerMessage;
for (let [key, value] of Object.entries(message)) {
numTokens += this.getTokenCount(value);
if (key === 'name') {
numTokens += tokensPerName;
}
}
return numTokens;
}
}
module.exports = ChatGPTClient;

View File

@@ -1,7 +1,7 @@
const { google } = require('googleapis');
const { Tokenizer } = require('@librechat/api');
const { concat } = require('@langchain/core/utils/stream');
const { ChatVertexAI } = require('@langchain/google-vertexai');
const { Tokenizer, getSafetySettings } = require('@librechat/api');
const { ChatGoogleGenerativeAI } = require('@langchain/google-genai');
const { GoogleGenerativeAI: GenAI } = require('@google/generative-ai');
const { HumanMessage, SystemMessage } = require('@langchain/core/messages');
@@ -12,13 +12,13 @@ const {
endpointSettings,
parseTextParts,
EModelEndpoint,
googleSettings,
ContentTypes,
VisionModes,
ErrorTypes,
Constants,
AuthKeys,
} = require('librechat-data-provider');
const { getSafetySettings } = require('~/server/services/Endpoints/google/llm');
const { encodeAndFormat } = require('~/server/services/Files/images');
const { spendTokens } = require('~/models/spendTokens');
const { getModelMaxTokens } = require('~/utils');
@@ -166,6 +166,16 @@ class GoogleClient extends BaseClient {
);
}
// Add thinking configuration
this.modelOptions.thinkingConfig = {
thinkingBudget:
(this.modelOptions.thinking ?? googleSettings.thinking.default)
? this.modelOptions.thinkingBudget
: 0,
};
delete this.modelOptions.thinking;
delete this.modelOptions.thinkingBudget;
this.sender =
this.options.sender ??
getResponseSender({

View File

@@ -5,6 +5,7 @@ const {
isEnabled,
Tokenizer,
createFetch,
resolveHeaders,
constructAzureURL,
genAzureChatCompletion,
createStreamEventHandlers,
@@ -15,7 +16,6 @@ const {
ContentTypes,
parseTextParts,
EModelEndpoint,
resolveHeaders,
KnownEndpoints,
openAISettings,
ImageDetailCost,
@@ -37,7 +37,6 @@ const { addSpaceIfNeeded, sleep } = require('~/server/utils');
const { spendTokens } = require('~/models/spendTokens');
const { handleOpenAIErrors } = require('./tools/util');
const { createLLM, RunManager } = require('./llm');
const ChatGPTClient = require('./ChatGPTClient');
const { summaryBuffer } = require('./memory');
const { runTitleChain } = require('./chains');
const { tokenSplit } = require('./document');
@@ -47,12 +46,6 @@ const { logger } = require('~/config');
class OpenAIClient extends BaseClient {
constructor(apiKey, options = {}) {
super(apiKey, options);
this.ChatGPTClient = new ChatGPTClient();
this.buildPrompt = this.ChatGPTClient.buildPrompt.bind(this);
/** @type {getCompletion} */
this.getCompletion = this.ChatGPTClient.getCompletion.bind(this);
/** @type {cohereChatCompletion} */
this.cohereChatCompletion = this.ChatGPTClient.cohereChatCompletion.bind(this);
this.contextStrategy = options.contextStrategy
? options.contextStrategy.toLowerCase()
: 'discard';
@@ -379,23 +372,12 @@ class OpenAIClient extends BaseClient {
return files;
}
async buildMessages(
messages,
parentMessageId,
{ isChatCompletion = false, promptPrefix = null },
opts,
) {
async buildMessages(messages, parentMessageId, { promptPrefix = null }, opts) {
let orderedMessages = this.constructor.getMessagesForConversation({
messages,
parentMessageId,
summary: this.shouldSummarize,
});
if (!isChatCompletion) {
return await this.buildPrompt(orderedMessages, {
isChatGptModel: isChatCompletion,
promptPrefix,
});
}
let payload;
let instructions;

View File

@@ -1,542 +0,0 @@
const OpenAIClient = require('./OpenAIClient');
const { CallbackManager } = require('@langchain/core/callbacks/manager');
const { BufferMemory, ChatMessageHistory } = require('langchain/memory');
const { addImages, buildErrorInput, buildPromptPrefix } = require('./output_parsers');
const { initializeCustomAgent, initializeFunctionsAgent } = require('./agents');
const { processFileURL } = require('~/server/services/Files/process');
const { EModelEndpoint } = require('librechat-data-provider');
const { checkBalance } = require('~/models/balanceMethods');
const { formatLangChainMessages } = require('./prompts');
const { extractBaseURL } = require('~/utils');
const { loadTools } = require('./tools/util');
const { logger } = require('~/config');
class PluginsClient extends OpenAIClient {
constructor(apiKey, options = {}) {
super(apiKey, options);
this.sender = options.sender ?? 'Assistant';
this.tools = [];
this.actions = [];
this.setOptions(options);
this.openAIApiKey = this.apiKey;
this.executor = null;
}
setOptions(options) {
this.agentOptions = { ...options.agentOptions };
this.functionsAgent = this.agentOptions?.agent === 'functions';
this.agentIsGpt3 = this.agentOptions?.model?.includes('gpt-3');
super.setOptions(options);
this.isGpt3 = this.modelOptions?.model?.includes('gpt-3');
if (this.options.reverseProxyUrl) {
this.langchainProxy = extractBaseURL(this.options.reverseProxyUrl);
}
}
getSaveOptions() {
return {
artifacts: this.options.artifacts,
chatGptLabel: this.options.chatGptLabel,
modelLabel: this.options.modelLabel,
promptPrefix: this.options.promptPrefix,
tools: this.options.tools,
...this.modelOptions,
agentOptions: this.agentOptions,
iconURL: this.options.iconURL,
greeting: this.options.greeting,
spec: this.options.spec,
};
}
saveLatestAction(action) {
this.actions.push(action);
}
getFunctionModelName(input) {
if (/-(?!0314)\d{4}/.test(input)) {
return input;
} else if (input.includes('gpt-3.5-turbo')) {
return 'gpt-3.5-turbo';
} else if (input.includes('gpt-4')) {
return 'gpt-4';
} else {
return 'gpt-3.5-turbo';
}
}
getBuildMessagesOptions(opts) {
return {
isChatCompletion: true,
promptPrefix: opts.promptPrefix,
abortController: opts.abortController,
};
}
async initialize({ user, message, onAgentAction, onChainEnd, signal }) {
const modelOptions = {
modelName: this.agentOptions.model,
temperature: this.agentOptions.temperature,
};
const model = this.initializeLLM({
...modelOptions,
context: 'plugins',
initialMessageCount: this.currentMessages.length + 1,
});
logger.debug(
`[PluginsClient] Agent Model: ${model.modelName} | Temp: ${model.temperature} | Functions: ${this.functionsAgent}`,
);
// Map Messages to Langchain format
const pastMessages = formatLangChainMessages(this.currentMessages.slice(0, -1), {
userName: this.options?.name,
});
logger.debug('[PluginsClient] pastMessages: ' + pastMessages.length);
// TODO: use readOnly memory, TokenBufferMemory? (both unavailable in LangChainJS)
const memory = new BufferMemory({
llm: model,
chatHistory: new ChatMessageHistory(pastMessages),
});
const { loadedTools } = await loadTools({
user,
model,
tools: this.options.tools,
functions: this.functionsAgent,
options: {
memory,
signal: this.abortController.signal,
openAIApiKey: this.openAIApiKey,
conversationId: this.conversationId,
fileStrategy: this.options.req.app.locals.fileStrategy,
processFileURL,
message,
},
useSpecs: true,
});
if (loadedTools.length === 0) {
return;
}
this.tools = loadedTools;
logger.debug('[PluginsClient] Requested Tools', this.options.tools);
logger.debug(
'[PluginsClient] Loaded Tools',
this.tools.map((tool) => tool.name),
);
const handleAction = (action, runId, callback = null) => {
this.saveLatestAction(action);
logger.debug('[PluginsClient] Latest Agent Action ', this.actions[this.actions.length - 1]);
if (typeof callback === 'function') {
callback(action, runId);
}
};
// initialize agent
const initializer = this.functionsAgent ? initializeFunctionsAgent : initializeCustomAgent;
let customInstructions = (this.options.promptPrefix ?? '').trim();
if (typeof this.options.artifactsPrompt === 'string' && this.options.artifactsPrompt) {
customInstructions = `${customInstructions ?? ''}\n${this.options.artifactsPrompt}`.trim();
}
this.executor = await initializer({
model,
signal,
pastMessages,
tools: this.tools,
customInstructions,
verbose: this.options.debug,
returnIntermediateSteps: true,
customName: this.options.chatGptLabel,
currentDateString: this.currentDateString,
callbackManager: CallbackManager.fromHandlers({
async handleAgentAction(action, runId) {
handleAction(action, runId, onAgentAction);
},
async handleChainEnd(action) {
if (typeof onChainEnd === 'function') {
onChainEnd(action);
}
},
}),
});
logger.debug('[PluginsClient] Loaded agent.');
}
async executorCall(message, { signal, stream, onToolStart, onToolEnd }) {
let errorMessage = '';
const maxAttempts = 1;
for (let attempts = 1; attempts <= maxAttempts; attempts++) {
const errorInput = buildErrorInput({
message,
errorMessage,
actions: this.actions,
functionsAgent: this.functionsAgent,
});
const input = attempts > 1 ? errorInput : message;
logger.debug(`[PluginsClient] Attempt ${attempts} of ${maxAttempts}`);
if (errorMessage.length > 0) {
logger.debug('[PluginsClient] Caught error, input: ' + JSON.stringify(input));
}
try {
this.result = await this.executor.call({ input, signal }, [
{
async handleToolStart(...args) {
await onToolStart(...args);
},
async handleToolEnd(...args) {
await onToolEnd(...args);
},
async handleLLMEnd(output) {
const { generations } = output;
const { text } = generations[0][0];
if (text && typeof stream === 'function') {
await stream(text);
}
},
},
]);
break; // Exit the loop if the function call is successful
} catch (err) {
logger.error('[PluginsClient] executorCall error:', err);
if (attempts === maxAttempts) {
const { run } = this.runManager.getRunByConversationId(this.conversationId);
const defaultOutput = `Encountered an error while attempting to respond: ${err.message}`;
this.result.output = run && run.error ? run.error : defaultOutput;
this.result.errorMessage = run && run.error ? run.error : err.message;
this.result.intermediateSteps = this.actions;
break;
}
}
}
}
/**
*
* @param {TMessage} responseMessage
* @param {Partial<TMessage>} saveOptions
* @param {string} user
* @returns
*/
async handleResponseMessage(responseMessage, saveOptions, user) {
const { output, errorMessage, ...result } = this.result;
logger.debug('[PluginsClient][handleResponseMessage] Output:', {
output,
errorMessage,
...result,
});
const { error } = responseMessage;
if (!error) {
responseMessage.tokenCount = this.getTokenCountForResponse(responseMessage);
responseMessage.completionTokens = this.getTokenCount(responseMessage.text);
}
// Record usage only when completion is skipped as it is already recorded in the agent phase.
if (!this.agentOptions.skipCompletion && !error) {
await this.recordTokenUsage(responseMessage);
}
const databasePromise = this.saveMessageToDatabase(responseMessage, saveOptions, user);
delete responseMessage.tokenCount;
return { ...responseMessage, ...result, databasePromise };
}
async sendMessage(message, opts = {}) {
/** @type {Promise<TMessage>} */
let userMessagePromise;
/** @type {{ filteredTools: string[], includedTools: string[] }} */
const { filteredTools = [], includedTools = [] } = this.options.req.app.locals;
if (includedTools.length > 0) {
const tools = this.options.tools.filter((plugin) => includedTools.includes(plugin));
this.options.tools = tools;
} else {
const tools = this.options.tools.filter((plugin) => !filteredTools.includes(plugin));
this.options.tools = tools;
}
// If a message is edited, no tools can be used.
const completionMode = this.options.tools.length === 0 || opts.isEdited;
if (completionMode) {
this.setOptions(opts);
return super.sendMessage(message, opts);
}
logger.debug('[PluginsClient] sendMessage', { userMessageText: message, opts });
const {
user,
conversationId,
responseMessageId,
saveOptions,
userMessage,
onAgentAction,
onChainEnd,
onToolStart,
onToolEnd,
} = await this.handleStartMethods(message, opts);
if (opts.progressCallback) {
opts.onProgress = opts.progressCallback.call(null, {
...(opts.progressOptions ?? {}),
parentMessageId: userMessage.messageId,
messageId: responseMessageId,
});
}
this.currentMessages.push(userMessage);
let {
prompt: payload,
tokenCountMap,
promptTokens,
} = await this.buildMessages(
this.currentMessages,
userMessage.messageId,
this.getBuildMessagesOptions({
promptPrefix: null,
abortController: this.abortController,
}),
);
if (tokenCountMap) {
logger.debug('[PluginsClient] tokenCountMap', { tokenCountMap });
if (tokenCountMap[userMessage.messageId]) {
userMessage.tokenCount = tokenCountMap[userMessage.messageId];
logger.debug('[PluginsClient] userMessage.tokenCount', userMessage.tokenCount);
}
this.handleTokenCountMap(tokenCountMap);
}
this.result = {};
if (payload) {
this.currentMessages = payload;
}
if (!this.skipSaveUserMessage) {
userMessagePromise = this.saveMessageToDatabase(userMessage, saveOptions, user);
if (typeof opts?.getReqData === 'function') {
opts.getReqData({
userMessagePromise,
});
}
}
const balance = this.options.req?.app?.locals?.balance;
if (balance?.enabled) {
await checkBalance({
req: this.options.req,
res: this.options.res,
txData: {
user: this.user,
tokenType: 'prompt',
amount: promptTokens,
debug: this.options.debug,
model: this.modelOptions.model,
endpoint: EModelEndpoint.openAI,
},
});
}
const responseMessage = {
endpoint: EModelEndpoint.gptPlugins,
iconURL: this.options.iconURL,
messageId: responseMessageId,
conversationId,
parentMessageId: userMessage.messageId,
isCreatedByUser: false,
model: this.modelOptions.model,
sender: this.sender,
promptTokens,
};
await this.initialize({
user,
message,
onAgentAction,
onChainEnd,
signal: this.abortController.signal,
onProgress: opts.onProgress,
});
// const stream = async (text) => {
// await this.generateTextStream.call(this, text, opts.onProgress, { delay: 1 });
// };
await this.executorCall(message, {
signal: this.abortController.signal,
// stream,
onToolStart,
onToolEnd,
});
// If message was aborted mid-generation
if (this.result?.errorMessage?.length > 0 && this.result?.errorMessage?.includes('cancel')) {
responseMessage.text = 'Cancelled.';
return await this.handleResponseMessage(responseMessage, saveOptions, user);
}
// If error occurred during generation (likely token_balance)
if (this.result?.errorMessage?.length > 0) {
responseMessage.error = true;
responseMessage.text = this.result.output;
return await this.handleResponseMessage(responseMessage, saveOptions, user);
}
if (this.agentOptions.skipCompletion && this.result.output && this.functionsAgent) {
const partialText = opts.getPartialText();
const trimmedPartial = opts.getPartialText().replaceAll(':::plugin:::\n', '');
responseMessage.text =
trimmedPartial.length === 0 ? `${partialText}${this.result.output}` : partialText;
addImages(this.result.intermediateSteps, responseMessage);
await this.generateTextStream(this.result.output, opts.onProgress, { delay: 5 });
return await this.handleResponseMessage(responseMessage, saveOptions, user);
}
if (this.agentOptions.skipCompletion && this.result.output) {
responseMessage.text = this.result.output;
addImages(this.result.intermediateSteps, responseMessage);
await this.generateTextStream(this.result.output, opts.onProgress, { delay: 5 });
return await this.handleResponseMessage(responseMessage, saveOptions, user);
}
logger.debug('[PluginsClient] Completion phase: this.result', this.result);
const promptPrefix = buildPromptPrefix({
result: this.result,
message,
functionsAgent: this.functionsAgent,
});
logger.debug('[PluginsClient]', { promptPrefix });
payload = await this.buildCompletionPrompt({
messages: this.currentMessages,
promptPrefix,
});
logger.debug('[PluginsClient] buildCompletionPrompt Payload', payload);
responseMessage.text = await this.sendCompletion(payload, opts);
return await this.handleResponseMessage(responseMessage, saveOptions, user);
}
async buildCompletionPrompt({ messages, promptPrefix: _promptPrefix }) {
logger.debug('[PluginsClient] buildCompletionPrompt messages', messages);
const orderedMessages = messages;
let promptPrefix = _promptPrefix.trim();
// If the prompt prefix doesn't end with the end token, add it.
if (!promptPrefix.endsWith(`${this.endToken}`)) {
promptPrefix = `${promptPrefix.trim()}${this.endToken}\n\n`;
}
promptPrefix = `${this.startToken}Instructions:\n${promptPrefix}`;
const promptSuffix = `${this.startToken}${this.chatGptLabel ?? 'Assistant'}:\n`;
const instructionsPayload = {
role: 'system',
content: promptPrefix,
};
const messagePayload = {
role: 'system',
content: promptSuffix,
};
if (this.isGpt3) {
instructionsPayload.role = 'user';
messagePayload.role = 'user';
instructionsPayload.content += `\n${promptSuffix}`;
}
// testing if this works with browser endpoint
if (!this.isGpt3 && this.options.reverseProxyUrl) {
instructionsPayload.role = 'user';
}
let currentTokenCount =
this.getTokenCountForMessage(instructionsPayload) +
this.getTokenCountForMessage(messagePayload);
let promptBody = '';
const maxTokenCount = this.maxPromptTokens;
// Iterate backwards through the messages, adding them to the prompt until we reach the max token count.
// Do this within a recursive async function so that it doesn't block the event loop for too long.
const buildPromptBody = async () => {
if (currentTokenCount < maxTokenCount && orderedMessages.length > 0) {
const message = orderedMessages.pop();
const isCreatedByUser = message.isCreatedByUser || message.role?.toLowerCase() === 'user';
const roleLabel = isCreatedByUser ? this.userLabel : this.chatGptLabel;
let messageString = `${this.startToken}${roleLabel}:\n${
message.text ?? message.content ?? ''
}${this.endToken}\n`;
let newPromptBody = `${messageString}${promptBody}`;
const tokenCountForMessage = this.getTokenCount(messageString);
const newTokenCount = currentTokenCount + tokenCountForMessage;
if (newTokenCount > maxTokenCount) {
if (promptBody) {
// This message would put us over the token limit, so don't add it.
return false;
}
// This is the first message, so we can't add it. Just throw an error.
throw new Error(
`Prompt is too long. Max token count is ${maxTokenCount}, but prompt is ${newTokenCount} tokens long.`,
);
}
promptBody = newPromptBody;
currentTokenCount = newTokenCount;
// wait for next tick to avoid blocking the event loop
await new Promise((resolve) => setTimeout(resolve, 0));
return buildPromptBody();
}
return true;
};
await buildPromptBody();
const prompt = promptBody;
messagePayload.content = prompt;
// Add 2 tokens for metadata after all messages have been counted.
currentTokenCount += 2;
if (this.isGpt3 && messagePayload.content.length > 0) {
const context = 'Chat History:\n';
messagePayload.content = `${context}${prompt}`;
currentTokenCount += this.getTokenCount(context);
}
// Use up to `this.maxContextTokens` tokens (prompt + response), but try to leave `this.maxTokens` tokens for the response.
this.modelOptions.max_tokens = Math.min(
this.maxContextTokens - currentTokenCount,
this.maxResponseTokens,
);
if (this.isGpt3) {
messagePayload.content += promptSuffix;
return [instructionsPayload, messagePayload];
}
const result = [messagePayload, instructionsPayload];
if (this.functionsAgent && !this.isGpt3) {
result[1].content = `${result[1].content}\n${this.startToken}${this.chatGptLabel}:\nSure thing! Here is the output you requested:\n`;
}
return result.filter((message) => message.content.length > 0);
}
}
module.exports = PluginsClient;

View File

@@ -1,15 +1,11 @@
const ChatGPTClient = require('./ChatGPTClient');
const OpenAIClient = require('./OpenAIClient');
const PluginsClient = require('./PluginsClient');
const GoogleClient = require('./GoogleClient');
const TextStream = require('./TextStream');
const AnthropicClient = require('./AnthropicClient');
const toolUtils = require('./tools/util');
module.exports = {
ChatGPTClient,
OpenAIClient,
PluginsClient,
GoogleClient,
TextStream,
AnthropicClient,

View File

@@ -1,6 +1,7 @@
const axios = require('axios');
const { isEnabled } = require('~/server/utils');
const { logger } = require('~/config');
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { generateShortLivedToken } = require('~/server/services/AuthService');
const footer = `Use the context as your learned knowledge to better answer the user.
@@ -18,7 +19,7 @@ function createContextHandlers(req, userMessageContent) {
const queryPromises = [];
const processedFiles = [];
const processedIds = new Set();
const jwtToken = req.headers.authorization.split(' ')[1];
const jwtToken = generateShortLivedToken(req.user.id);
const useFullContext = isEnabled(process.env.RAG_USE_FULL_CONTEXT);
const query = async (file) => {

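This change (and the matching one in the file-search tool further down) stops reading the bearer token off the incoming Authorization header, which may hold an external identity provider's token rather than an app JWT, and mints a short-lived app-signed token instead. A rough sketch of what such a helper can look like; the actual AuthService implementation, payload, secret, and TTL may differ:

// Sketch only; hypothetical reimplementation of generateShortLivedToken.
const jwt = require('jsonwebtoken');

function generateShortLivedToken(userId, expiresIn = '5m') {
  // Signed with the app secret so internal services (e.g. the RAG API)
  // can verify the caller without ever seeing a provider token.
  return jwt.sign({ id: userId }, process.env.JWT_SECRET, { expiresIn });
}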
View File

@@ -531,44 +531,6 @@ describe('OpenAIClient', () => {
});
});
describe('sendMessage/getCompletion/chatCompletion', () => {
afterEach(() => {
delete process.env.AZURE_OPENAI_DEFAULT_MODEL;
delete process.env.AZURE_USE_MODEL_AS_DEPLOYMENT_NAME;
});
it('should call getCompletion and fetchEventSource when using a text/instruct model', async () => {
const model = 'text-davinci-003';
const onProgress = jest.fn().mockImplementation(() => ({}));
const testClient = new OpenAIClient('test-api-key', {
...defaultOptions,
modelOptions: { model },
});
const getCompletion = jest.spyOn(testClient, 'getCompletion');
await testClient.sendMessage('Hi mom!', { onProgress });
expect(getCompletion).toHaveBeenCalled();
expect(getCompletion.mock.calls.length).toBe(1);
expect(getCompletion.mock.calls[0][0]).toBe('||>User:\nHi mom!\n||>Assistant:\n');
expect(fetchEventSource).toHaveBeenCalled();
expect(fetchEventSource.mock.calls.length).toBe(1);
// Check if the first argument (url) is correct
const firstCallArgs = fetchEventSource.mock.calls[0];
const expectedURL = 'https://api.openai.com/v1/completions';
expect(firstCallArgs[0]).toBe(expectedURL);
const requestBody = JSON.parse(firstCallArgs[1].body);
expect(requestBody).toHaveProperty('model');
expect(requestBody.model).toBe(model);
});
});
describe('checkVisionRequest functionality', () => {
let client;
const attachments = [{ type: 'image/png' }];

View File

@@ -1,314 +0,0 @@
const crypto = require('crypto');
const { Constants } = require('librechat-data-provider');
const { HumanMessage, AIMessage } = require('@langchain/core/messages');
const PluginsClient = require('../PluginsClient');
jest.mock('~/db/connect');
jest.mock('~/models/Conversation', () => {
return function () {
return {
save: jest.fn(),
deleteConvos: jest.fn(),
};
};
});
const defaultAzureOptions = {
azureOpenAIApiInstanceName: 'your-instance-name',
azureOpenAIApiDeploymentName: 'your-deployment-name',
azureOpenAIApiVersion: '2020-07-01-preview',
};
describe('PluginsClient', () => {
let TestAgent;
let options = {
tools: [],
modelOptions: {
model: 'gpt-3.5-turbo',
temperature: 0,
max_tokens: 2,
},
agentOptions: {
model: 'gpt-3.5-turbo',
},
};
let parentMessageId;
let conversationId;
const fakeMessages = [];
const userMessage = 'Hello, ChatGPT!';
const apiKey = 'fake-api-key';
beforeEach(() => {
TestAgent = new PluginsClient(apiKey, options);
TestAgent.loadHistory = jest
.fn()
.mockImplementation((conversationId, parentMessageId = null) => {
if (!conversationId) {
TestAgent.currentMessages = [];
return Promise.resolve([]);
}
const orderedMessages = TestAgent.constructor.getMessagesForConversation({
messages: fakeMessages,
parentMessageId,
});
const chatMessages = orderedMessages.map((msg) =>
msg?.isCreatedByUser || msg?.role?.toLowerCase() === 'user'
? new HumanMessage(msg.text)
: new AIMessage(msg.text),
);
TestAgent.currentMessages = orderedMessages;
return Promise.resolve(chatMessages);
});
TestAgent.sendMessage = jest.fn().mockImplementation(async (message, opts = {}) => {
if (opts && typeof opts === 'object') {
TestAgent.setOptions(opts);
}
const conversationId = opts.conversationId || crypto.randomUUID();
const parentMessageId = opts.parentMessageId || Constants.NO_PARENT;
const userMessageId = opts.overrideParentMessageId || crypto.randomUUID();
this.pastMessages = await TestAgent.loadHistory(
conversationId,
TestAgent.options?.parentMessageId,
);
const userMessage = {
text: message,
sender: 'ChatGPT',
isCreatedByUser: true,
messageId: userMessageId,
parentMessageId,
conversationId,
};
const response = {
sender: 'ChatGPT',
text: 'Hello, User!',
isCreatedByUser: false,
messageId: crypto.randomUUID(),
parentMessageId: userMessage.messageId,
conversationId,
};
fakeMessages.push(userMessage);
fakeMessages.push(response);
return response;
});
});
test('initializes PluginsClient without crashing', () => {
expect(TestAgent).toBeInstanceOf(PluginsClient);
});
test('check setOptions function', () => {
expect(TestAgent.agentIsGpt3).toBe(true);
});
describe('sendMessage', () => {
test('sendMessage should return a response message', async () => {
const expectedResult = expect.objectContaining({
sender: 'ChatGPT',
text: expect.any(String),
isCreatedByUser: false,
messageId: expect.any(String),
parentMessageId: expect.any(String),
conversationId: expect.any(String),
});
const response = await TestAgent.sendMessage(userMessage);
parentMessageId = response.messageId;
conversationId = response.conversationId;
expect(response).toEqual(expectedResult);
});
test('sendMessage should work with provided conversationId and parentMessageId', async () => {
const userMessage = 'Second message in the conversation';
const opts = {
conversationId,
parentMessageId,
};
const expectedResult = expect.objectContaining({
sender: 'ChatGPT',
text: expect.any(String),
isCreatedByUser: false,
messageId: expect.any(String),
parentMessageId: expect.any(String),
conversationId: opts.conversationId,
});
const response = await TestAgent.sendMessage(userMessage, opts);
parentMessageId = response.messageId;
expect(response.conversationId).toEqual(conversationId);
expect(response).toEqual(expectedResult);
});
test('should return chat history', async () => {
const chatMessages = await TestAgent.loadHistory(conversationId, parentMessageId);
expect(TestAgent.currentMessages).toHaveLength(4);
expect(chatMessages[0].text).toEqual(userMessage);
});
});
describe('getFunctionModelName', () => {
let client;
beforeEach(() => {
client = new PluginsClient('dummy_api_key');
});
test('should return the input when it includes a dash followed by four digits', () => {
expect(client.getFunctionModelName('-1234')).toBe('-1234');
expect(client.getFunctionModelName('gpt-4-5678-preview')).toBe('gpt-4-5678-preview');
});
test('should return the input for all function-capable models (`0613` models and above)', () => {
expect(client.getFunctionModelName('gpt-4-0613')).toBe('gpt-4-0613');
expect(client.getFunctionModelName('gpt-4-32k-0613')).toBe('gpt-4-32k-0613');
expect(client.getFunctionModelName('gpt-3.5-turbo-0613')).toBe('gpt-3.5-turbo-0613');
expect(client.getFunctionModelName('gpt-3.5-turbo-16k-0613')).toBe('gpt-3.5-turbo-16k-0613');
expect(client.getFunctionModelName('gpt-3.5-turbo-1106')).toBe('gpt-3.5-turbo-1106');
expect(client.getFunctionModelName('gpt-4-1106-preview')).toBe('gpt-4-1106-preview');
expect(client.getFunctionModelName('gpt-4-1106')).toBe('gpt-4-1106');
});
test('should return the corresponding model if input is non-function capable (`0314` models)', () => {
expect(client.getFunctionModelName('gpt-4-0314')).toBe('gpt-4');
expect(client.getFunctionModelName('gpt-4-32k-0314')).toBe('gpt-4');
expect(client.getFunctionModelName('gpt-3.5-turbo-0314')).toBe('gpt-3.5-turbo');
expect(client.getFunctionModelName('gpt-3.5-turbo-16k-0314')).toBe('gpt-3.5-turbo');
});
test('should return "gpt-3.5-turbo" when the input includes "gpt-3.5-turbo"', () => {
expect(client.getFunctionModelName('test gpt-3.5-turbo model')).toBe('gpt-3.5-turbo');
});
test('should return "gpt-4" when the input includes "gpt-4"', () => {
expect(client.getFunctionModelName('testing gpt-4')).toBe('gpt-4');
});
test('should return "gpt-3.5-turbo" for input that does not meet any specific condition', () => {
expect(client.getFunctionModelName('random string')).toBe('gpt-3.5-turbo');
expect(client.getFunctionModelName('')).toBe('gpt-3.5-turbo');
});
});
describe('Azure OpenAI tests specific to Plugins', () => {
// TODO: add more tests for Azure OpenAI integration with Plugins
// let client;
// beforeEach(() => {
// client = new PluginsClient('dummy_api_key');
// });
test('should not call getFunctionModelName when azure options are set', () => {
const spy = jest.spyOn(PluginsClient.prototype, 'getFunctionModelName');
const model = 'gpt-4-turbo';
// note, without the azure change in PR #1766, `getFunctionModelName` is called twice
const testClient = new PluginsClient('dummy_api_key', {
agentOptions: {
model,
agent: 'functions',
},
azure: defaultAzureOptions,
});
expect(spy).not.toHaveBeenCalled();
expect(testClient.agentOptions.model).toBe(model);
spy.mockRestore();
});
});
describe('sendMessage with filtered tools', () => {
let TestAgent;
const apiKey = 'fake-api-key';
const mockTools = [{ name: 'tool1' }, { name: 'tool2' }, { name: 'tool3' }, { name: 'tool4' }];
beforeEach(() => {
TestAgent = new PluginsClient(apiKey, {
tools: mockTools,
modelOptions: {
model: 'gpt-3.5-turbo',
temperature: 0,
max_tokens: 2,
},
agentOptions: {
model: 'gpt-3.5-turbo',
},
});
TestAgent.options.req = {
app: {
locals: {},
},
};
TestAgent.sendMessage = jest.fn().mockImplementation(async () => {
const { filteredTools = [], includedTools = [] } = TestAgent.options.req.app.locals;
if (includedTools.length > 0) {
const tools = TestAgent.options.tools.filter((plugin) =>
includedTools.includes(plugin.name),
);
TestAgent.options.tools = tools;
} else {
const tools = TestAgent.options.tools.filter(
(plugin) => !filteredTools.includes(plugin.name),
);
TestAgent.options.tools = tools;
}
return {
text: 'Mocked response',
tools: TestAgent.options.tools,
};
});
});
test('should filter out tools when filteredTools is provided', async () => {
TestAgent.options.req.app.locals.filteredTools = ['tool1', 'tool3'];
const response = await TestAgent.sendMessage('Test message');
expect(response.tools).toHaveLength(2);
expect(response.tools).toEqual(
expect.arrayContaining([
expect.objectContaining({ name: 'tool2' }),
expect.objectContaining({ name: 'tool4' }),
]),
);
});
test('should only include specified tools when includedTools is provided', async () => {
TestAgent.options.req.app.locals.includedTools = ['tool2', 'tool4'];
const response = await TestAgent.sendMessage('Test message');
expect(response.tools).toHaveLength(2);
expect(response.tools).toEqual(
expect.arrayContaining([
expect.objectContaining({ name: 'tool2' }),
expect.objectContaining({ name: 'tool4' }),
]),
);
});
test('should prioritize includedTools over filteredTools', async () => {
TestAgent.options.req.app.locals.filteredTools = ['tool1', 'tool3'];
TestAgent.options.req.app.locals.includedTools = ['tool1', 'tool2'];
const response = await TestAgent.sendMessage('Test message');
expect(response.tools).toHaveLength(2);
expect(response.tools).toEqual(
expect.arrayContaining([
expect.objectContaining({ name: 'tool1' }),
expect.objectContaining({ name: 'tool2' }),
]),
);
});
test('should not modify tools when no filters are provided', async () => {
const response = await TestAgent.sendMessage('Test message');
expect(response.tools).toHaveLength(4);
expect(response.tools).toEqual(expect.arrayContaining(mockTools));
});
});
});

View File

@@ -107,6 +107,12 @@ const getImageEditPromptDescription = () => {
return process.env.IMAGE_EDIT_OAI_PROMPT_DESCRIPTION || DEFAULT_IMAGE_EDIT_PROMPT_DESCRIPTION;
};
function createAbortHandler() {
return function () {
logger.debug('[ImageGenOAI] Image generation aborted');
};
}
/**
* Creates OpenAI Image tools (generation and editing)
* @param {Object} fields - Configuration fields
@@ -201,10 +207,18 @@ function createOpenAIImageTools(fields = {}) {
}
let resp;
/** @type {AbortSignal} */
let derivedSignal = null;
/** @type {() => void} */
let abortHandler = null;
try {
const derivedSignal = runnableConfig?.signal
? AbortSignal.any([runnableConfig.signal])
: undefined;
if (runnableConfig?.signal) {
derivedSignal = AbortSignal.any([runnableConfig.signal]);
abortHandler = createAbortHandler();
derivedSignal.addEventListener('abort', abortHandler, { once: true });
}
resp = await openai.images.generate(
{
model: 'gpt-image-1',
@@ -228,6 +242,10 @@ function createOpenAIImageTools(fields = {}) {
logAxiosError({ error, message });
return returnValue(`Something went wrong when trying to generate the image. The OpenAI API may be unavailable:
Error Message: ${error.message}`);
} finally {
if (abortHandler && derivedSignal) {
derivedSignal.removeEventListener('abort', abortHandler);
}
}
if (!resp) {
@@ -409,10 +427,17 @@ Error Message: ${error.message}`);
headers['Authorization'] = `Bearer ${apiKey}`;
}
/** @type {AbortSignal} */
let derivedSignal = null;
/** @type {() => void} */
let abortHandler = null;
try {
const derivedSignal = runnableConfig?.signal
? AbortSignal.any([runnableConfig.signal])
: undefined;
if (runnableConfig?.signal) {
derivedSignal = AbortSignal.any([runnableConfig.signal]);
abortHandler = createAbortHandler();
derivedSignal.addEventListener('abort', abortHandler, { once: true });
}
/** @type {import('axios').AxiosRequestConfig} */
const axiosConfig = {
@@ -467,6 +492,10 @@ Error Message: ${error.message}`);
logAxiosError({ error, message });
return returnValue(`Something went wrong when trying to edit the image. The OpenAI API may be unavailable:
Error Message: ${error.message || 'Unknown error'}`);
} finally {
if (abortHandler && derivedSignal) {
derivedSignal.removeEventListener('abort', abortHandler);
}
}
},
{

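Both image tools now register their abort listener with { once: true } and remove it in finally, so repeated tool calls against a long-lived parent signal cannot accumulate stale handlers. The pattern in isolation, with a hypothetical doWork task standing in for the OpenAI calls:

// Sketch of the listener-cleanup pattern used above; `doWork` is a stand-in.
async function runWithAbortLogging(parentSignal, doWork) {
  let derivedSignal = null;
  let abortHandler = null;
  try {
    if (parentSignal) {
      derivedSignal = AbortSignal.any([parentSignal]);
      abortHandler = () => console.debug('task aborted');
      derivedSignal.addEventListener('abort', abortHandler, { once: true });
    }
    return await doWork(derivedSignal);
  } finally {
    // Without this, each call would leave a handler attached for the
    // lifetime of the parent signal.
    if (abortHandler && derivedSignal) {
      derivedSignal.removeEventListener('abort', abortHandler);
    }
  }
}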
View File

@@ -1,9 +1,10 @@
const { z } = require('zod');
const axios = require('axios');
const { tool } = require('@langchain/core/tools');
const { logger } = require('@librechat/data-schemas');
const { Tools, EToolResources } = require('librechat-data-provider');
const { generateShortLivedToken } = require('~/server/services/AuthService');
const { getFiles } = require('~/models/File');
const { logger } = require('~/config');
/**
*
@@ -59,7 +60,7 @@ const createFileSearchTool = async ({ req, files, entity_id }) => {
if (files.length === 0) {
return 'No files to search. Instruct the user to add files for the search.';
}
const jwtToken = req.headers.authorization.split(' ')[1];
const jwtToken = generateShortLivedToken(req.user.id);
if (!jwtToken) {
return 'There was an error authenticating the file search request.';
}

View File

@@ -1,7 +1,8 @@
const { logger } = require('@librechat/data-schemas');
const { isEnabled, math } = require('@librechat/api');
const { ViolationTypes } = require('librechat-data-provider');
const { isEnabled, math, removePorts } = require('~/server/utils');
const { deleteAllUserSessions } = require('~/models');
const { removePorts } = require('~/server/utils');
const getLogStores = require('./getLogStores');
const { BAN_VIOLATIONS, BAN_INTERVAL } = process.env ?? {};

View File

@@ -1,7 +1,7 @@
const { Keyv } = require('keyv');
const { isEnabled, math } = require('@librechat/api');
const { CacheKeys, ViolationTypes, Time } = require('librechat-data-provider');
const { logFile, violationFile } = require('./keyvFiles');
const { isEnabled, math } = require('~/server/utils');
const keyvRedis = require('./keyvRedis');
const keyvMongo = require('./keyvMongo');

View File

@@ -1,4 +1,6 @@
const { logger } = require('@librechat/data-schemas');
const { createTempChatExpirationDate } = require('@librechat/api');
const getCustomConfig = require('~/server/services/Config/loadCustomConfig');
const { getMessages, deleteMessages } = require('./Message');
const { Conversation } = require('~/db/models');
@@ -98,10 +100,15 @@ module.exports = {
update.conversationId = newConversationId;
}
if (req.body.isTemporary) {
const expiredAt = new Date();
expiredAt.setDate(expiredAt.getDate() + 30);
update.expiredAt = expiredAt;
if (req?.body?.isTemporary) {
try {
const customConfig = await getCustomConfig();
update.expiredAt = createTempChatExpirationDate(customConfig);
} catch (err) {
logger.error('Error creating temporary chat expiration date:', err);
logger.info(`---\`saveConvo\` context: ${metadata?.context}`);
update.expiredAt = null;
}
} else {
update.expiredAt = null;
}
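
The fixed 30-day window is replaced by createTempChatExpirationDate, which derives the expiry from the custom config (falling back to no expiry if the config can't be loaded). A sketch of how such a helper can work, assuming a retention period configured in hours; the real key name, bounds, and default live in @librechat/api and may differ:

// Sketch; the config key below is an assumption for illustration.
const DEFAULT_RETENTION_HOURS = 24 * 30; // matches the old 30-day behavior

function createTempChatExpirationDate(customConfig = {}) {
  const hours =
    customConfig?.interface?.temporaryChatRetention ?? DEFAULT_RETENTION_HOURS;
  const expiredAt = new Date();
  expiredAt.setHours(expiredAt.getHours() + hours);
  return expiredAt;
}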

View File

@@ -1,5 +1,5 @@
const { logger } = require('@librechat/data-schemas');
const { EToolResources } = require('librechat-data-provider');
const { EToolResources, FileContext } = require('librechat-data-provider');
const { File } = require('~/db/models');
/**
@@ -32,19 +32,19 @@ const getFiles = async (filter, _sortOptions, selectFields = { text: 0 }) => {
* @returns {Promise<Array<MongoFile>>} Files that match the criteria
*/
const getToolFilesByIds = async (fileIds, toolResourceSet) => {
if (!fileIds || !fileIds.length) {
if (!fileIds || !fileIds.length || !toolResourceSet?.size) {
return [];
}
try {
const filter = {
file_id: { $in: fileIds },
$or: [],
};
if (toolResourceSet.size) {
filter.$or = [];
if (toolResourceSet.has(EToolResources.ocr)) {
filter.$or.push({ text: { $exists: true, $ne: null }, context: FileContext.agents });
}
if (toolResourceSet.has(EToolResources.file_search)) {
filter.$or.push({ embedded: true });
}
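
With the guard hoisted into the early return, the query only runs when both file IDs and at least one tool resource are requested, and $or holds only clauses for resources actually in the set. For example, requesting OCR plus file_search produces a filter shaped like this (hypothetical IDs):

// Illustration of the filter built above for new Set([ocr, file_search]):
// {
//   file_id: { $in: ['file_abc', 'file_def'] },
//   $or: [
//     { text: { $exists: true, $ne: null }, context: FileContext.agents },
//     { embedded: true },
//   ],
// }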

View File

@@ -1,5 +1,7 @@
const { z } = require('zod');
const { logger } = require('@librechat/data-schemas');
const { createTempChatExpirationDate } = require('@librechat/api');
const getCustomConfig = require('~/server/services/Config/loadCustomConfig');
const { Message } = require('~/db/models');
const idSchema = z.string().uuid();
@@ -54,9 +56,14 @@ async function saveMessage(req, params, metadata) {
};
if (req?.body?.isTemporary) {
const expiredAt = new Date();
expiredAt.setDate(expiredAt.getDate() + 30);
update.expiredAt = expiredAt;
try {
const customConfig = await getCustomConfig();
update.expiredAt = createTempChatExpirationDate(customConfig);
} catch (err) {
logger.error('Error creating temporary chat expiration date:', err);
logger.info(`---\`saveMessage\` context: ${metadata?.context}`);
update.expiredAt = null;
}
} else {
update.expiredAt = null;
}

View File

@@ -48,14 +48,13 @@
"@langchain/google-genai": "^0.2.13",
"@langchain/google-vertexai": "^0.2.13",
"@langchain/textsplitters": "^0.1.0",
"@librechat/agents": "^2.4.41",
"@librechat/agents": "^2.4.46",
"@librechat/api": "*",
"@librechat/data-schemas": "*",
"@node-saml/passport-saml": "^5.0.0",
"@waylaidwanderer/fetch-event-source": "^3.0.1",
"axios": "^1.8.2",
"bcryptjs": "^2.4.3",
"cohere-ai": "^7.9.1",
"compression": "^1.7.4",
"connect-redis": "^7.1.0",
"cookie": "^0.7.2",

View File

@@ -169,9 +169,6 @@ function disposeClient(client) {
client.isGenerativeModel = null;
}
// Properties specific to OpenAIClient
if (client.ChatGPTClient) {
client.ChatGPTClient = null;
}
if (client.completionsUrl) {
client.completionsUrl = null;
}

View File

@@ -1,17 +1,17 @@
const cookies = require('cookie');
const jwt = require('jsonwebtoken');
const openIdClient = require('openid-client');
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const {
registerUser,
resetPassword,
setAuthTokens,
requestPasswordReset,
setOpenIDAuthTokens,
resetPassword,
setAuthTokens,
registerUser,
} = require('~/server/services/AuthService');
const { findUser, getUserById, deleteAllUserSessions, findSession } = require('~/models');
const { getOpenIdConfig } = require('~/strategies');
const { isEnabled } = require('~/server/utils');
const registrationController = async (req, res) => {
try {

View File

@@ -1,3 +1,5 @@
const { sendEvent } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { getResponseSender } = require('librechat-data-provider');
const {
handleAbortError,
@@ -10,9 +12,8 @@ const {
clientRegistry,
requestDataMap,
} = require('~/server/cleanup');
const { sendMessage, createOnProgress } = require('~/server/utils');
const { createOnProgress } = require('~/server/utils');
const { saveMessage } = require('~/models');
const { logger } = require('~/config');
const EditController = async (req, res, next, initializeClient) => {
let {
@@ -198,7 +199,7 @@ const EditController = async (req, res, next, initializeClient) => {
const finalUserMessage = reqDataContext.userMessage;
const finalResponseMessage = { ...response };
sendMessage(res, {
sendEvent(res, {
final: true,
conversation,
title: conversation.title,

View File

@@ -0,0 +1,195 @@
const { duplicateAgent } = require('../v1');
const { getAgent, createAgent } = require('~/models/Agent');
const { getActions } = require('~/models/Action');
const { nanoid } = require('nanoid');
jest.mock('~/models/Agent');
jest.mock('~/models/Action');
jest.mock('nanoid');
describe('duplicateAgent', () => {
let req, res;
beforeEach(() => {
req = {
params: { id: 'agent_123' },
user: { id: 'user_456' },
};
res = {
status: jest.fn().mockReturnThis(),
json: jest.fn(),
};
jest.clearAllMocks();
});
it('should duplicate an agent successfully', async () => {
const mockAgent = {
id: 'agent_123',
name: 'Test Agent',
description: 'Test Description',
instructions: 'Test Instructions',
provider: 'openai',
model: 'gpt-4',
tools: ['file_search'],
actions: [],
author: 'user_789',
versions: [{ name: 'Test Agent', version: 1 }],
__v: 0,
};
const mockNewAgent = {
id: 'agent_new_123',
name: 'Test Agent (1/2/23, 12:34)',
description: 'Test Description',
instructions: 'Test Instructions',
provider: 'openai',
model: 'gpt-4',
tools: ['file_search'],
actions: [],
author: 'user_456',
versions: [
{
name: 'Test Agent (1/2/23, 12:34)',
description: 'Test Description',
instructions: 'Test Instructions',
provider: 'openai',
model: 'gpt-4',
tools: ['file_search'],
actions: [],
createdAt: new Date(),
updatedAt: new Date(),
},
],
};
getAgent.mockResolvedValue(mockAgent);
getActions.mockResolvedValue([]);
nanoid.mockReturnValue('new_123');
createAgent.mockResolvedValue(mockNewAgent);
await duplicateAgent(req, res);
expect(getAgent).toHaveBeenCalledWith({ id: 'agent_123' });
expect(getActions).toHaveBeenCalledWith({ agent_id: 'agent_123' }, true);
expect(createAgent).toHaveBeenCalledWith(
expect.objectContaining({
id: 'agent_new_123',
author: 'user_456',
name: expect.stringContaining('Test Agent ('),
description: 'Test Description',
instructions: 'Test Instructions',
provider: 'openai',
model: 'gpt-4',
tools: ['file_search'],
actions: [],
}),
);
expect(createAgent).toHaveBeenCalledWith(
expect.not.objectContaining({
versions: expect.anything(),
__v: expect.anything(),
}),
);
expect(res.status).toHaveBeenCalledWith(201);
expect(res.json).toHaveBeenCalledWith({
agent: mockNewAgent,
actions: [],
});
});
it('should ensure duplicated agent has clean versions array without nested fields', async () => {
const mockAgent = {
id: 'agent_123',
name: 'Test Agent',
description: 'Test Description',
versions: [
{
name: 'Test Agent',
versions: [{ name: 'Nested' }],
__v: 1,
},
],
__v: 2,
};
const mockNewAgent = {
id: 'agent_new_123',
name: 'Test Agent (1/2/23, 12:34)',
description: 'Test Description',
versions: [
{
name: 'Test Agent (1/2/23, 12:34)',
description: 'Test Description',
createdAt: new Date(),
updatedAt: new Date(),
},
],
};
getAgent.mockResolvedValue(mockAgent);
getActions.mockResolvedValue([]);
nanoid.mockReturnValue('new_123');
createAgent.mockResolvedValue(mockNewAgent);
await duplicateAgent(req, res);
expect(mockNewAgent.versions).toHaveLength(1);
const firstVersion = mockNewAgent.versions[0];
expect(firstVersion).not.toHaveProperty('versions');
expect(firstVersion).not.toHaveProperty('__v');
expect(mockNewAgent).not.toHaveProperty('__v');
expect(res.status).toHaveBeenCalledWith(201);
});
it('should return 404 if agent not found', async () => {
getAgent.mockResolvedValue(null);
await duplicateAgent(req, res);
expect(res.status).toHaveBeenCalledWith(404);
expect(res.json).toHaveBeenCalledWith({
error: 'Agent not found',
status: 'error',
});
});
it('should handle tool_resources.ocr correctly', async () => {
const mockAgent = {
id: 'agent_123',
name: 'Test Agent',
tool_resources: {
ocr: { enabled: true, config: 'test' },
other: { should: 'not be copied' },
},
};
getAgent.mockResolvedValue(mockAgent);
getActions.mockResolvedValue([]);
nanoid.mockReturnValue('new_123');
createAgent.mockResolvedValue({ id: 'agent_new_123' });
await duplicateAgent(req, res);
expect(createAgent).toHaveBeenCalledWith(
expect.objectContaining({
tool_resources: {
ocr: { enabled: true, config: 'test' },
},
}),
);
});
it('should handle errors gracefully', async () => {
getAgent.mockRejectedValue(new Error('Database error'));
await duplicateAgent(req, res);
expect(res.status).toHaveBeenCalledWith(500);
expect(res.json).toHaveBeenCalledWith({ error: 'Database error' });
});
});

View File

@@ -4,11 +4,13 @@ const {
sendEvent,
createRun,
Tokenizer,
checkAccess,
memoryInstructions,
createMemoryProcessor,
} = require('@librechat/api');
const {
Callback,
Providers,
GraphEvents,
formatMessage,
formatAgentMessages,
@@ -31,22 +33,29 @@ const {
} = require('librechat-data-provider');
const { DynamicStructuredTool } = require('@langchain/core/tools');
const { getBufferString, HumanMessage } = require('@langchain/core/messages');
const {
getCustomEndpointConfig,
createGetMCPAuthMap,
checkCapability,
} = require('~/server/services/Config');
const { createGetMCPAuthMap, checkCapability } = require('~/server/services/Config');
const { addCacheControl, createContextHandlers } = require('~/app/clients/prompts');
const { initializeAgent } = require('~/server/services/Endpoints/agents/agent');
const { spendTokens, spendStructuredTokens } = require('~/models/spendTokens');
const { getFormattedMemories, deleteMemory, setMemory } = require('~/models');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const initOpenAI = require('~/server/services/Endpoints/openAI/initialize');
const { checkAccess } = require('~/server/middleware/roles/access');
const { getProviderConfig } = require('~/server/services/Endpoints');
const BaseClient = require('~/app/clients/BaseClient');
const { getRoleByName } = require('~/models/Role');
const { loadAgent } = require('~/models/Agent');
const { getMCPManager } = require('~/config');
const omitTitleOptions = new Set([
'stream',
'thinking',
'streaming',
'clientOptions',
'thinkingConfig',
'thinkingBudget',
'includeThoughts',
'maxOutputTokens',
]);
/**
* @param {ServerRequest} req
* @param {Agent} agent
@@ -393,7 +402,12 @@ class AgentClient extends BaseClient {
if (user.personalization?.memories === false) {
return;
}
const hasAccess = await checkAccess(user, PermissionTypes.MEMORIES, [Permissions.USE]);
const hasAccess = await checkAccess({
user,
permissionType: PermissionTypes.MEMORIES,
permissions: [Permissions.USE],
getRoleByName,
});
if (!hasAccess) {
logger.debug(
@@ -677,7 +691,7 @@ class AgentClient extends BaseClient {
hide_sequential_outputs: this.options.agent.hide_sequential_outputs,
user: this.options.req.user,
},
recursionLimit: agentsEConfig?.recursionLimit,
recursionLimit: agentsEConfig?.recursionLimit ?? 25,
signal: abortController.signal,
streamMode: 'values',
version: 'v2',
@@ -983,23 +997,26 @@ class AgentClient extends BaseClient {
throw new Error('Run not initialized');
}
const { handleLLMEnd, collected: collectedMetadata } = createMetadataAggregator();
const endpoint = this.options.agent.endpoint;
const { req, res } = this.options;
const { req, res, agent } = this.options;
const endpoint = agent.endpoint;
/** @type {import('@librechat/agents').ClientOptions} */
let clientOptions = {
maxTokens: 75,
model: agent.model_parameters.model,
};
let endpointConfig = req.app.locals[endpoint];
const { getOptions, overrideProvider, customEndpointConfig } =
await getProviderConfig(endpoint);
/** @type {TEndpoint | undefined} */
const endpointConfig = req.app.locals[endpoint] ?? customEndpointConfig;
if (!endpointConfig) {
try {
endpointConfig = await getCustomEndpointConfig(endpoint);
} catch (err) {
logger.error(
'[api/server/controllers/agents/client.js #titleConvo] Error getting custom endpoint config',
err,
);
}
logger.warn(
'[api/server/controllers/agents/client.js #titleConvo] Error getting endpoint config',
);
}
if (
endpointConfig &&
endpointConfig.titleModel &&
@@ -1007,30 +1024,50 @@ class AgentClient extends BaseClient {
) {
clientOptions.model = endpointConfig.titleModel;
}
const options = await getOptions({
req,
res,
optionsOnly: true,
overrideEndpoint: endpoint,
overrideModel: clientOptions.model,
endpointOption: { model_parameters: clientOptions },
});
let provider = options.provider ?? overrideProvider ?? agent.provider;
if (
endpoint === EModelEndpoint.azureOpenAI &&
clientOptions.model &&
this.options.agent.model_parameters.model !== clientOptions.model
options.llmConfig?.azureOpenAIApiInstanceName == null
) {
clientOptions =
(
await initOpenAI({
req,
res,
optionsOnly: true,
overrideModel: clientOptions.model,
overrideEndpoint: endpoint,
endpointOption: {
model_parameters: clientOptions,
},
})
)?.llmConfig ?? clientOptions;
provider = Providers.OPENAI;
}
if (/\b(o\d)\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
/** @type {import('@librechat/agents').ClientOptions} */
clientOptions = { ...options.llmConfig };
if (options.configOptions) {
clientOptions.configuration = options.configOptions;
}
// Ensure maxTokens is set for non o-series (non-reasoning) models
if (!/\b(o\d)\b/i.test(clientOptions.model) && !clientOptions.maxTokens) {
clientOptions.maxTokens = 75;
} else if (/\b(o\d)\b/i.test(clientOptions.model) && clientOptions.maxTokens != null) {
delete clientOptions.maxTokens;
}
clientOptions = Object.assign(
Object.fromEntries(
Object.entries(clientOptions).filter(([key]) => !omitTitleOptions.has(key)),
),
);
if (provider === Providers.GOOGLE) {
clientOptions.json = true;
}
try {
const titleResult = await this.run.generateTitle({
provider,
inputText: text,
contentParts: this.contentParts,
clientOptions,
@@ -1048,8 +1085,10 @@ class AgentClient extends BaseClient {
let input_tokens, output_tokens;
if (item.usage) {
input_tokens = item.usage.input_tokens || item.usage.inputTokens;
output_tokens = item.usage.output_tokens || item.usage.outputTokens;
input_tokens =
item.usage.prompt_tokens || item.usage.input_tokens || item.usage.inputTokens;
output_tokens =
item.usage.completion_tokens || item.usage.output_tokens || item.usage.outputTokens;
} else if (item.tokenUsage) {
input_tokens = item.tokenUsage.promptTokens;
output_tokens = item.tokenUsage.completionTokens;

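Title-run token accounting now checks OpenAI-style prompt_tokens/completion_tokens before the snake_case and camelCase variants other providers emit. The same precedence as a small normalizer:

// Sketch: collapse provider-specific usage shapes into one, using the
// same `||` precedence as the change above.
function normalizeUsage(usage = {}) {
  return {
    input_tokens:
      usage.prompt_tokens || usage.input_tokens || usage.inputTokens,
    output_tokens:
      usage.completion_tokens || usage.output_tokens || usage.outputTokens,
  };
}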
View File

@@ -1,10 +1,10 @@
// errorHandler.js
const { logger } = require('~/config');
const getLogStores = require('~/cache/getLogStores');
const { logger } = require('@librechat/data-schemas');
const { CacheKeys, ViolationTypes } = require('librechat-data-provider');
const { sendResponse } = require('~/server/middleware/error');
const { recordUsage } = require('~/server/services/Threads');
const { getConvo } = require('~/models/Conversation');
const { sendResponse } = require('~/server/utils');
const getLogStores = require('~/cache/getLogStores');
/**
* @typedef {Object} ErrorHandlerContext
@@ -75,7 +75,7 @@ const createErrorHandler = ({ req, res, getContext, originPath = '/assistants/ch
} else if (/Files.*are invalid/.test(error.message)) {
const errorMessage = `Files are invalid, or may not have uploaded yet.${
endpoint === 'azureAssistants'
? ' If using Azure OpenAI, files are only available in the region of the assistant\'s model at the time of upload.'
? " If using Azure OpenAI, files are only available in the region of the assistant's model at the time of upload."
: ''
}`;
return sendResponse(req, res, messageData, errorMessage);

View File

@@ -1,106 +0,0 @@
const { HttpsProxyAgent } = require('https-proxy-agent');
const { resolveHeaders } = require('librechat-data-provider');
const { createLLM } = require('~/app/clients/llm');
/**
* Initializes and returns a Language Learning Model (LLM) instance.
*
* @param {Object} options - Configuration options for the LLM.
* @param {string} options.model - The model identifier.
* @param {string} options.modelName - The specific name of the model.
* @param {number} options.temperature - The temperature setting for the model.
* @param {number} options.presence_penalty - The presence penalty for the model.
* @param {number} options.frequency_penalty - The frequency penalty for the model.
* @param {number} options.max_tokens - The maximum number of tokens for the model output.
* @param {boolean} options.streaming - Whether to use streaming for the model output.
* @param {Object} options.context - The context for the conversation.
* @param {number} options.tokenBuffer - The token buffer size.
* @param {number} options.initialMessageCount - The initial message count.
* @param {string} options.conversationId - The ID of the conversation.
* @param {string} options.user - The user identifier.
* @param {string} options.langchainProxy - The langchain proxy URL.
* @param {boolean} options.useOpenRouter - Whether to use OpenRouter.
* @param {Object} options.options - Additional options.
* @param {Object} options.options.headers - Custom headers for the request.
* @param {string} options.options.proxy - Proxy URL.
* @param {Object} options.options.req - The request object.
* @param {Object} options.options.res - The response object.
* @param {boolean} options.options.debug - Whether to enable debug mode.
* @param {string} options.apiKey - The API key for authentication.
* @param {Object} options.azure - Azure-specific configuration.
* @param {Object} options.abortController - The AbortController instance.
* @returns {Object} The initialized LLM instance.
*/
function initializeLLM(options) {
const {
model,
modelName,
temperature,
presence_penalty,
frequency_penalty,
max_tokens,
streaming,
user,
langchainProxy,
useOpenRouter,
options: { headers, proxy },
apiKey,
azure,
} = options;
const modelOptions = {
modelName: modelName || model,
temperature,
presence_penalty,
frequency_penalty,
user,
};
if (max_tokens) {
modelOptions.max_tokens = max_tokens;
}
const configOptions = {};
if (langchainProxy) {
configOptions.basePath = langchainProxy;
}
if (useOpenRouter) {
configOptions.basePath = 'https://openrouter.ai/api/v1';
configOptions.baseOptions = {
headers: {
'HTTP-Referer': 'https://librechat.ai',
'X-Title': 'LibreChat',
},
};
}
if (headers && typeof headers === 'object' && !Array.isArray(headers)) {
configOptions.baseOptions = {
headers: resolveHeaders({
...headers,
...configOptions?.baseOptions?.headers,
}),
};
}
if (proxy) {
configOptions.httpAgent = new HttpsProxyAgent(proxy);
configOptions.httpsAgent = new HttpsProxyAgent(proxy);
}
const llm = createLLM({
modelOptions,
configOptions,
openAIApiKey: apiKey,
azure,
streaming,
});
return llm;
}
module.exports = {
initializeLLM,
};

View File

@@ -1,3 +1,5 @@
const { sendEvent } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { Constants } = require('librechat-data-provider');
const {
handleAbortError,
@@ -5,9 +7,7 @@ const {
cleanupAbortController,
} = require('~/server/middleware');
const { disposeClient, clientRegistry, requestDataMap } = require('~/server/cleanup');
const { sendMessage } = require('~/server/utils');
const { saveMessage } = require('~/models');
const { logger } = require('~/config');
const AgentController = async (req, res, next, initializeClient, addTitle) => {
let {
@@ -206,7 +206,7 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
// Create a new response object with minimal copies
const finalResponse = { ...response };
sendMessage(res, {
sendEvent(res, {
final: true,
conversation,
title: conversation.title,

View File

@@ -242,6 +242,8 @@ const duplicateAgentHandler = async (req, res) => {
createdAt: _createdAt,
updatedAt: _updatedAt,
tool_resources: _tool_resources = {},
versions: _versions,
__v: _v,
...cloneData
} = agent;
cloneData.name = `${agent.name} (${new Date().toLocaleString('en-US', {

View File

@@ -1,4 +1,7 @@
const { v4 } = require('uuid');
const { sleep } = require('@librechat/agents');
const { sendEvent } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const {
Time,
Constants,
@@ -19,20 +22,20 @@ const {
addThreadMetadata,
saveAssistantMessage,
} = require('~/server/services/Threads');
const { sendResponse, sendMessage, sleep, countTokens } = require('~/server/utils');
const { runAssistant, createOnTextProgress } = require('~/server/services/AssistantService');
const validateAuthor = require('~/server/middleware/assistants/validateAuthor');
const { formatMessage, createVisionPrompt } = require('~/app/clients/prompts');
const { createRun, StreamRunManager } = require('~/server/services/Runs');
const { addTitle } = require('~/server/services/Endpoints/assistants');
const { createRunBody } = require('~/server/services/createRunBody');
const { sendResponse } = require('~/server/middleware/error');
const { getTransactions } = require('~/models/Transaction');
const { checkBalance } = require('~/models/balanceMethods');
const { getConvo } = require('~/models/Conversation');
const getLogStores = require('~/cache/getLogStores');
const { countTokens } = require('~/server/utils');
const { getModelMaxTokens } = require('~/utils');
const { getOpenAIClient } = require('./helpers');
const { logger } = require('~/config');
/**
* @route POST /
@@ -471,7 +474,7 @@ const chatV1 = async (req, res) => {
await Promise.all(promises);
const sendInitialResponse = () => {
sendMessage(res, {
sendEvent(res, {
sync: true,
conversationId,
// messages: previousMessages,
@@ -587,7 +590,7 @@ const chatV1 = async (req, res) => {
iconURL: endpointOption.iconURL,
};
sendMessage(res, {
sendEvent(res, {
final: true,
conversation,
requestMessage: {

View File

@@ -1,4 +1,7 @@
const { v4 } = require('uuid');
const { sleep } = require('@librechat/agents');
const { sendEvent } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const {
Time,
Constants,
@@ -22,15 +25,14 @@ const { createErrorHandler } = require('~/server/controllers/assistants/errors')
const validateAuthor = require('~/server/middleware/assistants/validateAuthor');
const { createRun, StreamRunManager } = require('~/server/services/Runs');
const { addTitle } = require('~/server/services/Endpoints/assistants');
const { sendMessage, sleep, countTokens } = require('~/server/utils');
const { createRunBody } = require('~/server/services/createRunBody');
const { getTransactions } = require('~/models/Transaction');
const { checkBalance } = require('~/models/balanceMethods');
const { getConvo } = require('~/models/Conversation');
const getLogStores = require('~/cache/getLogStores');
const { countTokens } = require('~/server/utils');
const { getModelMaxTokens } = require('~/utils');
const { getOpenAIClient } = require('./helpers');
const { logger } = require('~/config');
/**
* @route POST /
@@ -309,7 +311,7 @@ const chatV2 = async (req, res) => {
await Promise.all(promises);
const sendInitialResponse = () => {
sendMessage(res, {
sendEvent(res, {
sync: true,
conversationId,
// messages: previousMessages,
@@ -432,7 +434,7 @@ const chatV2 = async (req, res) => {
iconURL: endpointOption.iconURL,
};
sendMessage(res, {
sendEvent(res, {
final: true,
conversation,
requestMessage: {

View File

@@ -1,10 +1,10 @@
// errorHandler.js
const { sendResponse } = require('~/server/utils');
const { logger } = require('~/config');
const getLogStores = require('~/cache/getLogStores');
const { logger } = require('@librechat/data-schemas');
const { CacheKeys, ViolationTypes, ContentTypes } = require('librechat-data-provider');
const { getConvo } = require('~/models/Conversation');
const { recordUsage, checkMessageGaps } = require('~/server/services/Threads');
const { sendResponse } = require('~/server/middleware/error');
const { getConvo } = require('~/models/Conversation');
const getLogStores = require('~/cache/getLogStores');
/**
* @typedef {Object} ErrorHandlerContext
@@ -78,7 +78,7 @@ const createErrorHandler = ({ req, res, getContext, originPath = '/assistants/ch
} else if (/Files.*are invalid/.test(error.message)) {
const errorMessage = `Files are invalid, or may not have uploaded yet.${
endpoint === 'azureAssistants'
? ' If using Azure OpenAI, files are only available in the region of the assistant\'s model at the time of upload.'
? " If using Azure OpenAI, files are only available in the region of the assistant's model at the time of upload."
: ''
}`;
return sendResponse(req, res, messageData, errorMessage);

View File

@@ -1,5 +1,7 @@
const { nanoid } = require('nanoid');
const { EnvVar } = require('@librechat/agents');
const { checkAccess } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const {
Tools,
AuthType,
@@ -13,9 +15,8 @@ const { processCodeOutput } = require('~/server/services/Files/Code/process');
const { createToolCall, getToolCallsByConvo } = require('~/models/ToolCall');
const { loadAuthValues } = require('~/server/services/Tools/credentials');
const { loadTools } = require('~/app/clients/tools/util');
const { checkAccess } = require('~/server/middleware');
const { getRoleByName } = require('~/models/Role');
const { getMessage } = require('~/models/Message');
const { logger } = require('~/config');
const fieldsMap = {
[Tools.execute_code]: [EnvVar.CODE_API_KEY],
@@ -79,6 +80,7 @@ const verifyToolAuth = async (req, res) => {
throwError: false,
});
} catch (error) {
logger.error('Error loading auth values', error);
res.status(200).json({ authenticated: false, message: AuthType.USER_PROVIDED });
return;
}
@@ -132,7 +134,12 @@ const callTool = async (req, res) => {
logger.debug(`[${toolId}/call] User: ${req.user.id}`);
let hasAccess = true;
if (toolAccessPermType[toolId]) {
hasAccess = await checkAccess(req.user, toolAccessPermType[toolId], [Permissions.USE]);
hasAccess = await checkAccess({
user: req.user,
permissionType: toolAccessPermType[toolId],
permissions: [Permissions.USE],
getRoleByName,
});
}
if (!hasAccess) {
logger.warn(

View File

@@ -1,13 +1,13 @@
// abortMiddleware.js
const { logger } = require('@librechat/data-schemas');
const { countTokens, isEnabled, sendEvent } = require('@librechat/api');
const { isAssistantsEndpoint, ErrorTypes } = require('librechat-data-provider');
const { sendMessage, sendError, countTokens, isEnabled } = require('~/server/utils');
const { truncateText, smartTruncateText } = require('~/app/clients/prompts');
const clearPendingReq = require('~/cache/clearPendingReq');
const { sendError } = require('~/server/middleware/error');
const { spendTokens } = require('~/models/spendTokens');
const abortControllers = require('./abortControllers');
const { saveMessage, getConvo } = require('~/models');
const { abortRun } = require('./abortRun');
const { logger } = require('~/config');
const abortDataMap = new WeakMap();
@@ -101,7 +101,7 @@ async function abortMessage(req, res) {
cleanupAbortController(abortKey);
if (res.headersSent && finalEvent) {
return sendMessage(res, finalEvent);
return sendEvent(res, finalEvent);
}
res.setHeader('Content-Type', 'application/json');
@@ -174,7 +174,7 @@ const createAbortController = (req, res, getAbortData, getReqData) => {
* @param {string} responseMessageId
*/
const onStart = (userMessage, responseMessageId) => {
sendMessage(res, { message: userMessage, created: true });
sendEvent(res, { message: userMessage, created: true });
const abortKey = userMessage?.conversationId ?? req.user.id;
getReqData({ abortKey });

View File

@@ -1,11 +1,11 @@
const { sendEvent } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { CacheKeys, RunStatus, isUUID } = require('librechat-data-provider');
const { initializeClient } = require('~/server/services/Endpoints/assistants');
const { checkMessageGaps, recordUsage } = require('~/server/services/Threads');
const { deleteMessages } = require('~/models/Message');
const { getConvo } = require('~/models/Conversation');
const getLogStores = require('~/cache/getLogStores');
const { sendMessage } = require('~/server/utils');
const { logger } = require('~/config');
const three_minutes = 1000 * 60 * 3;
@@ -34,7 +34,7 @@ async function abortRun(req, res) {
const [thread_id, run_id] = runValues.split(':');
if (!run_id) {
logger.warn('[abortRun] Couldn\'t find run for cancel request', { thread_id });
logger.warn("[abortRun] Couldn't find run for cancel request", { thread_id });
return res.status(204).send({ message: 'Run not found' });
} else if (run_id === 'cancelled') {
logger.warn('[abortRun] Run already cancelled', { thread_id });
@@ -93,7 +93,7 @@ async function abortRun(req, res) {
};
if (res.headersSent && finalEvent) {
return sendMessage(res, finalEvent);
return sendEvent(res, finalEvent);
}
res.json(finalEvent);

View File

@@ -7,7 +7,6 @@ const {
} = require('librechat-data-provider');
const azureAssistants = require('~/server/services/Endpoints/azureAssistants');
const assistants = require('~/server/services/Endpoints/assistants');
const gptPlugins = require('~/server/services/Endpoints/gptPlugins');
const { processFiles } = require('~/server/services/Files/process');
const anthropic = require('~/server/services/Endpoints/anthropic');
const bedrock = require('~/server/services/Endpoints/bedrock');
@@ -25,7 +24,6 @@ const buildFunction = {
[EModelEndpoint.bedrock]: bedrock.buildOptions,
[EModelEndpoint.azureOpenAI]: openAI.buildOptions,
[EModelEndpoint.anthropic]: anthropic.buildOptions,
[EModelEndpoint.gptPlugins]: gptPlugins.buildOptions,
[EModelEndpoint.assistants]: assistants.buildOptions,
[EModelEndpoint.azureAssistants]: azureAssistants.buildOptions,
};
@@ -60,15 +58,6 @@ async function buildEndpointOption(req, res, next) {
return handleError(res, { text: 'Model spec mismatch' });
}
if (
currentModelSpec.preset.endpoint !== EModelEndpoint.gptPlugins &&
currentModelSpec.preset.tools
) {
return handleError(res, {
text: `Only the "${EModelEndpoint.gptPlugins}" endpoint can have tools defined in the preset`,
});
}
try {
currentModelSpec.preset.spec = spec;
if (currentModelSpec.iconURL != null && currentModelSpec.iconURL !== '') {

View File

@@ -1,6 +1,7 @@
const crypto = require('crypto');
const { sendEvent } = require('@librechat/api');
const { getResponseSender, Constants } = require('librechat-data-provider');
const { sendMessage, sendError } = require('~/server/utils');
const { sendError } = require('~/server/middleware/error');
const { saveMessage } = require('~/models');
/**
@@ -36,7 +37,7 @@ const denyRequest = async (req, res, errorMessage) => {
isCreatedByUser: true,
text,
};
sendMessage(res, { message: userMessage, created: true });
sendEvent(res, { message: userMessage, created: true });
const shouldSaveMessage = _convoId && parentMessageId && parentMessageId !== Constants.NO_PARENT;

View File

@@ -1,31 +1,9 @@
const crypto = require('crypto');
const { logger } = require('@librechat/data-schemas');
const { parseConvo } = require('librechat-data-provider');
const { sendEvent, handleError } = require('@librechat/api');
const { saveMessage, getMessages } = require('~/models/Message');
const { getConvo } = require('~/models/Conversation');
const { logger } = require('~/config');
/**
* Sends error data in Server Sent Events format and ends the response.
* @param {object} res - The server response.
* @param {string} message - The error message.
*/
const handleError = (res, message) => {
res.write(`event: error\ndata: ${JSON.stringify(message)}\n\n`);
res.end();
};
/**
* Sends message data in Server Sent Events format.
* @param {Express.Response} res - The server response.
* @param {string | Object} message - The message to be sent.
* @param {'message' | 'error' | 'cancel'} event - [Optional] The type of event. Default is 'message'.
*/
const sendMessage = (res, message, event = 'message') => {
if (typeof message === 'string' && message.length === 0) {
return;
}
res.write(`event: ${event}\ndata: ${JSON.stringify(message)}\n\n`);
};
/**
* Processes an error with provided options, saves the error message and sends a corresponding SSE response
@@ -91,7 +69,7 @@ const sendError = async (req, res, options, callback) => {
convo = parseConvo(errorMessage);
}
return sendMessage(res, {
return sendEvent(res, {
final: true,
requestMessage: query?.[0] ? query[0] : requestMessage,
responseMessage: errorMessage,
@@ -120,12 +98,10 @@ const sendResponse = (req, res, data, errorMessage) => {
if (errorMessage) {
return sendError(req, res, { ...data, text: errorMessage });
}
return sendMessage(res, data);
return sendEvent(res, data);
};
module.exports = {
sendResponse,
handleError,
sendMessage,
sendError,
sendResponse,
};
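
The local handleError/sendMessage helpers are deleted in favor of sendEvent from @librechat/api; the Server-Sent Events wire format is unchanged. For reference, the write the removed helper performed (the packaged sendEvent may differ in details):

// Minimal SSE emitter equivalent to the removed local `sendMessage`.
function sendEvent(res, data, event = 'message') {
  if (typeof data === 'string' && data.length === 0) {
    return;
  }
  res.write(`event: ${event}\ndata: ${JSON.stringify(data)}\n\n`);
}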

View File

@@ -1,78 +0,0 @@
const { getRoleByName } = require('~/models/Role');
const { logger } = require('~/config');
/**
* Core function to check if a user has one or more required permissions
*
* @param {object} user - The user object
* @param {PermissionTypes} permissionType - The type of permission to check
* @param {Permissions[]} permissions - The list of specific permissions to check
* @param {Record<Permissions, string[]>} [bodyProps] - An optional object where keys are permissions and values are arrays of properties to check
* @param {object} [checkObject] - The object to check properties against
* @returns {Promise<boolean>} Whether the user has the required permissions
*/
const checkAccess = async (user, permissionType, permissions, bodyProps = {}, checkObject = {}) => {
if (!user) {
return false;
}
const role = await getRoleByName(user.role);
if (role && role.permissions && role.permissions[permissionType]) {
const hasAnyPermission = permissions.some((permission) => {
if (role.permissions[permissionType][permission]) {
return true;
}
if (bodyProps[permission] && checkObject) {
return bodyProps[permission].some((prop) =>
Object.prototype.hasOwnProperty.call(checkObject, prop),
);
}
return false;
});
return hasAnyPermission;
}
return false;
};
/**
* Middleware to check if a user has one or more required permissions, optionally based on `req.body` properties.
*
* @param {PermissionTypes} permissionType - The type of permission to check.
* @param {Permissions[]} permissions - The list of specific permissions to check.
* @param {Record<Permissions, string[]>} [bodyProps] - An optional object where keys are permissions and values are arrays of `req.body` properties to check.
* @returns {(req: ServerRequest, res: ServerResponse, next: NextFunction) => Promise<void>} Express middleware function.
*/
const generateCheckAccess = (permissionType, permissions, bodyProps = {}) => {
return async (req, res, next) => {
try {
const hasAccess = await checkAccess(
req.user,
permissionType,
permissions,
bodyProps,
req.body,
);
if (hasAccess) {
return next();
}
logger.warn(
`[${permissionType}] Forbidden: Insufficient permissions for User ${req.user.id}: ${permissions.join(', ')}`,
);
return res.status(403).json({ message: 'Forbidden: Insufficient permissions' });
} catch (error) {
logger.error(error);
return res.status(500).json({ message: `Server error: ${error.message}` });
}
};
};
module.exports = {
checkAccess,
generateCheckAccess,
};
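
The permission middleware moves into @librechat/api with the role lookup injected by the caller, as the surrounding route and controller diffs show, keeping the package free of direct model imports. A minimal usage sketch mirroring the route setups above:

// Sketch; import paths assume the api workspace's aliases.
const express = require('express');
const { generateCheckAccess } = require('@librechat/api');
const { PermissionTypes, Permissions } = require('librechat-data-provider');
const { getRoleByName } = require('~/models/Role');

const router = express.Router();
router.use(
  generateCheckAccess({
    permissionType: PermissionTypes.AGENTS,
    permissions: [Permissions.USE],
    getRoleByName, // dependency-injected instead of imported by the package
  }),
);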

View File

@@ -1,8 +1,5 @@
const checkAdmin = require('./admin');
const { checkAccess, generateCheckAccess } = require('./access');
module.exports = {
checkAdmin,
checkAccess,
generateCheckAccess,
};

View File

@@ -1,14 +1,28 @@
const express = require('express');
const { nanoid } = require('nanoid');
const { actionDelimiter, SystemRoles, removeNullishValues } = require('librechat-data-provider');
const { logger } = require('@librechat/data-schemas');
const { generateCheckAccess } = require('@librechat/api');
const {
SystemRoles,
Permissions,
PermissionTypes,
actionDelimiter,
removeNullishValues,
} = require('librechat-data-provider');
const { encryptMetadata, domainParser } = require('~/server/services/ActionService');
const { updateAction, getActions, deleteAction } = require('~/models/Action');
const { isActionDomainAllowed } = require('~/server/services/domains');
const { getAgent, updateAgent } = require('~/models/Agent');
const { logger } = require('~/config');
const { getRoleByName } = require('~/models/Role');
const router = express.Router();
const checkAgentCreate = generateCheckAccess({
permissionType: PermissionTypes.AGENTS,
permissions: [Permissions.USE, Permissions.CREATE],
getRoleByName,
});
// If the user has ADMIN role
// then action edition is possible even if not owner of the assistant
const isAdmin = (req) => {
@@ -41,7 +55,7 @@ router.get('/', async (req, res) => {
* @param {ActionMetadata} req.body.metadata - Metadata for the action.
* @returns {Object} 200 - success response - application/json
*/
router.post('/:agent_id', async (req, res) => {
router.post('/:agent_id', checkAgentCreate, async (req, res) => {
try {
const { agent_id } = req.params;
@@ -149,7 +163,7 @@ router.post('/:agent_id', async (req, res) => {
* @param {string} req.params.action_id - The ID of the action to delete.
* @returns {Object} 200 - success response - application/json
*/
router.delete('/:agent_id/:action_id', async (req, res) => {
router.delete('/:agent_id/:action_id', checkAgentCreate, async (req, res) => {
try {
const { agent_id, action_id } = req.params;
const admin = isAdmin(req);

View File

@@ -1,22 +1,28 @@
const express = require('express');
const { generateCheckAccess, skipAgentCheck } = require('@librechat/api');
const { PermissionTypes, Permissions } = require('librechat-data-provider');
const {
setHeaders,
moderateText,
// validateModel,
generateCheckAccess,
validateConvoAccess,
buildEndpointOption,
} = require('~/server/middleware');
const { initializeClient } = require('~/server/services/Endpoints/agents');
const AgentController = require('~/server/controllers/agents/request');
const addTitle = require('~/server/services/Endpoints/agents/title');
const { getRoleByName } = require('~/models/Role');
const router = express.Router();
router.use(moderateText);
const checkAgentAccess = generateCheckAccess(PermissionTypes.AGENTS, [Permissions.USE]);
const checkAgentAccess = generateCheckAccess({
permissionType: PermissionTypes.AGENTS,
permissions: [Permissions.USE],
skipCheck: skipAgentCheck,
getRoleByName,
});
router.use(checkAgentAccess);
router.use(validateConvoAccess);

View File

@@ -1,29 +1,36 @@
const express = require('express');
const { generateCheckAccess } = require('@librechat/api');
const { PermissionTypes, Permissions } = require('librechat-data-provider');
const { requireJwtAuth, generateCheckAccess } = require('~/server/middleware');
const { requireJwtAuth } = require('~/server/middleware');
const v1 = require('~/server/controllers/agents/v1');
const { getRoleByName } = require('~/models/Role');
const actions = require('./actions');
const tools = require('./tools');
const router = express.Router();
const avatar = express.Router();
const checkAgentAccess = generateCheckAccess(PermissionTypes.AGENTS, [Permissions.USE]);
const checkAgentCreate = generateCheckAccess(PermissionTypes.AGENTS, [
Permissions.USE,
Permissions.CREATE,
]);
const checkAgentAccess = generateCheckAccess({
permissionType: PermissionTypes.AGENTS,
permissions: [Permissions.USE],
getRoleByName,
});
const checkAgentCreate = generateCheckAccess({
permissionType: PermissionTypes.AGENTS,
permissions: [Permissions.USE, Permissions.CREATE],
getRoleByName,
});
const checkGlobalAgentShare = generateCheckAccess(
PermissionTypes.AGENTS,
[Permissions.USE, Permissions.CREATE],
{
const checkGlobalAgentShare = generateCheckAccess({
permissionType: PermissionTypes.AGENTS,
permissions: [Permissions.USE, Permissions.CREATE],
bodyProps: {
[Permissions.SHARED_GLOBAL]: ['projectIds', 'removeProjectIds'],
},
);
getRoleByName,
});
router.use(requireJwtAuth);
router.use(checkAgentAccess);
/**
* Agent actions route.

View File

@@ -1,207 +0,0 @@
const express = require('express');
const { getResponseSender } = require('librechat-data-provider');
const {
setHeaders,
moderateText,
validateModel,
handleAbortError,
validateEndpoint,
buildEndpointOption,
createAbortController,
} = require('~/server/middleware');
const { sendMessage, createOnProgress, formatSteps, formatAction } = require('~/server/utils');
const { initializeClient } = require('~/server/services/Endpoints/gptPlugins');
const { saveMessage, updateMessage } = require('~/models');
const { validateTools } = require('~/app');
const { logger } = require('~/config');
const router = express.Router();
router.use(moderateText);
router.post(
'/',
validateEndpoint,
validateModel,
buildEndpointOption,
setHeaders,
async (req, res) => {
let {
text,
generation,
endpointOption,
conversationId,
responseMessageId,
isContinued = false,
parentMessageId = null,
overrideParentMessageId = null,
} = req.body;
logger.debug('[/edit/gptPlugins]', {
text,
generation,
isContinued,
conversationId,
...endpointOption,
});
let userMessage;
let userMessagePromise;
let promptTokens;
const sender = getResponseSender({
...endpointOption,
model: endpointOption.modelOptions.model,
});
const userMessageId = parentMessageId;
const user = req.user.id;
const plugin = {
loading: true,
inputs: [],
latest: null,
outputs: null,
};
const getReqData = (data = {}) => {
for (let key in data) {
if (key === 'userMessage') {
userMessage = data[key];
} else if (key === 'userMessagePromise') {
userMessagePromise = data[key];
} else if (key === 'responseMessageId') {
responseMessageId = data[key];
} else if (key === 'promptTokens') {
promptTokens = data[key];
}
}
};
const {
onProgress: progressCallback,
sendIntermediateMessage,
getPartialText,
} = createOnProgress({
generation,
onProgress: () => {
if (plugin.loading === true) {
plugin.loading = false;
}
},
});
const onChainEnd = (data) => {
let { intermediateSteps: steps } = data;
plugin.outputs = steps && steps[0].action ? formatSteps(steps) : 'An error occurred.';
plugin.loading = false;
saveMessage(
req,
{ ...userMessage, user },
{ context: 'api/server/routes/ask/gptPlugins.js - onChainEnd' },
);
sendIntermediateMessage(res, {
plugin,
parentMessageId: userMessage.messageId,
messageId: responseMessageId,
});
// logger.debug('CHAIN END', plugin.outputs);
};
const getAbortData = () => ({
sender,
conversationId,
userMessagePromise,
messageId: responseMessageId,
parentMessageId: overrideParentMessageId ?? userMessageId,
text: getPartialText(),
plugin: { ...plugin, loading: false },
userMessage,
promptTokens,
});
const { abortController, onStart } = createAbortController(req, res, getAbortData, getReqData);
try {
endpointOption.tools = await validateTools(user, endpointOption.tools);
const { client } = await initializeClient({ req, res, endpointOption });
const onAgentAction = (action, start = false) => {
const formattedAction = formatAction(action);
plugin.inputs.push(formattedAction);
plugin.latest = formattedAction.plugin;
if (!start && !client.skipSaveUserMessage) {
saveMessage(
req,
{ ...userMessage, user },
{ context: 'api/server/routes/ask/gptPlugins.js - onAgentAction' },
);
}
sendIntermediateMessage(res, {
plugin,
parentMessageId: userMessage.messageId,
messageId: responseMessageId,
});
// logger.debug('PLUGIN ACTION', formattedAction);
};
let response = await client.sendMessage(text, {
user,
generation,
isContinued,
isEdited: true,
conversationId,
parentMessageId,
responseMessageId,
overrideParentMessageId,
getReqData,
onAgentAction,
onChainEnd,
onStart,
...endpointOption,
progressCallback,
progressOptions: {
res,
plugin,
// parentMessageId: overrideParentMessageId || userMessageId,
},
abortController,
});
if (overrideParentMessageId) {
response.parentMessageId = overrideParentMessageId;
}
logger.debug('[/edit/gptPlugins] CLIENT RESPONSE', response);
const { conversation = {} } = await response.databasePromise;
delete response.databasePromise;
conversation.title =
conversation && !conversation.title ? null : conversation?.title || 'New Chat';
sendMessage(res, {
title: conversation.title,
final: true,
conversation,
requestMessage: userMessage,
responseMessage: response,
});
res.end();
response.plugin = { ...plugin, loading: false };
await updateMessage(
req,
{ ...response, user },
{ context: 'api/server/routes/edit/gptPlugins.js' },
);
} catch (error) {
const partialText = getPartialText();
handleAbortError(res, req, error, {
partialText,
conversationId,
sender,
messageId: responseMessageId,
parentMessageId: userMessageId ?? parentMessageId,
});
}
},
);
module.exports = router;

View File

@@ -3,7 +3,6 @@ const openAI = require('./openAI');
const custom = require('./custom');
const google = require('./google');
const anthropic = require('./anthropic');
const gptPlugins = require('./gptPlugins');
const { isEnabled } = require('~/server/utils');
const { EModelEndpoint } = require('librechat-data-provider');
const {
@@ -39,7 +38,6 @@ if (isEnabled(LIMIT_MESSAGE_USER)) {
router.use(validateConvoAccess);
router.use([`/${EModelEndpoint.azureOpenAI}`, `/${EModelEndpoint.openAI}`], openAI);
router.use(`/${EModelEndpoint.gptPlugins}`, gptPlugins);
router.use(`/${EModelEndpoint.anthropic}`, anthropic);
router.use(`/${EModelEndpoint.google}`, google);
router.use(`/${EModelEndpoint.custom}`, custom);

View File

@@ -283,7 +283,10 @@ router.post('/', async (req, res) => {
message += ': ' + error.message;
}
if (error.message?.includes('Invalid file format')) {
if (
error.message?.includes('Invalid file format') ||
error.message?.includes('No OCR result')
) {
message = error.message;
}

View File

@@ -5,6 +5,13 @@ const { CacheKeys } = require('librechat-data-provider');
const { requireJwtAuth } = require('~/server/middleware');
const { getFlowStateManager } = require('~/config');
const { getLogStores } = require('~/cache');
const {
getMCPServers,
getMCPServer,
createMCPServer,
updateMCPServer,
deleteMCPServer,
} = require('@librechat/api');
const router = Router();
@@ -202,4 +209,44 @@ router.get('/oauth/status/:flowId', async (req, res) => {
}
});
/**
* Get all MCP servers for the authenticated user
* @route GET /api/mcp
* @returns {Array} Array of MCP servers
*/
router.get('/', requireJwtAuth, getMCPServers);
/**
* Get a single MCP server by ID
* @route GET /api/mcp/:mcp_id
* @param {string} mcp_id - The ID of the MCP server to fetch
* @returns {object} MCP server data
*/
router.get('/:mcp_id', requireJwtAuth, getMCPServer);
/**
* Create a new MCP server
* @route POST /api/mcp/add
* @param {object} req.body - MCP server data
* @returns {object} Created MCP server with populated tools
*/
router.post('/add', requireJwtAuth, createMCPServer);
/**
* Update an existing MCP server
* @route PUT /api/mcp/:mcp_id
* @param {string} mcp_id - The ID of the MCP server to update
* @param {object} req.body - Updated MCP server data
* @returns {object} Updated MCP server with populated tools
*/
router.put('/:mcp_id', requireJwtAuth, updateMCPServer);
/**
* Delete an MCP server
* @route DELETE /api/mcp/:mcp_id
* @param {string} mcp_id - The ID of the MCP server to delete
* @returns {object} Deletion confirmation
*/
router.delete('/:mcp_id', requireJwtAuth, deleteMCPServer);
module.exports = router;
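Taken together, the routes above form a small CRUD surface under /api/mcp, all behind requireJwtAuth. A hypothetical client sketch — the request-body fields and the _id property are illustrative assumptions, since the handler implementations live in @librechat/api and are not shown in this diff:

const headers = {
  'Content-Type': 'application/json',
  Authorization: `Bearer ${token}`, // every route above requires a JWT
};
// Create, then list, fetch, update, and delete an MCP server.
const created = await fetch('/api/mcp/add', {
  method: 'POST',
  headers,
  body: JSON.stringify({ name: 'my-server', url: 'https://example.com/mcp' }), // assumed fields
}).then((r) => r.json());
const all = await fetch('/api/mcp', { headers }).then((r) => r.json());
const one = await fetch(`/api/mcp/${created._id}`, { headers }).then((r) => r.json()); // _id assumed
await fetch(`/api/mcp/${created._id}`, {
  method: 'PUT',
  headers,
  body: JSON.stringify({ name: 'renamed-server' }),
});
await fetch(`/api/mcp/${created._id}`, { method: 'DELETE', headers });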

View File

@@ -1,37 +1,43 @@
const express = require('express');
const { Tokenizer } = require('@librechat/api');
const { Tokenizer, generateCheckAccess } = require('@librechat/api');
const { PermissionTypes, Permissions } = require('librechat-data-provider');
const {
getAllUserMemories,
toggleUserMemories,
createMemory,
setMemory,
deleteMemory,
setMemory,
} = require('~/models');
const { requireJwtAuth, generateCheckAccess } = require('~/server/middleware');
const { requireJwtAuth } = require('~/server/middleware');
const { getRoleByName } = require('~/models/Role');
const router = express.Router();
const checkMemoryRead = generateCheckAccess(PermissionTypes.MEMORIES, [
Permissions.USE,
Permissions.READ,
]);
const checkMemoryCreate = generateCheckAccess(PermissionTypes.MEMORIES, [
Permissions.USE,
Permissions.CREATE,
]);
const checkMemoryUpdate = generateCheckAccess(PermissionTypes.MEMORIES, [
Permissions.USE,
Permissions.UPDATE,
]);
const checkMemoryDelete = generateCheckAccess(PermissionTypes.MEMORIES, [
Permissions.USE,
Permissions.UPDATE,
]);
const checkMemoryOptOut = generateCheckAccess(PermissionTypes.MEMORIES, [
Permissions.USE,
Permissions.OPT_OUT,
]);
const checkMemoryRead = generateCheckAccess({
permissionType: PermissionTypes.MEMORIES,
permissions: [Permissions.USE, Permissions.READ],
getRoleByName,
});
const checkMemoryCreate = generateCheckAccess({
permissionType: PermissionTypes.MEMORIES,
permissions: [Permissions.USE, Permissions.CREATE],
getRoleByName,
});
const checkMemoryUpdate = generateCheckAccess({
permissionType: PermissionTypes.MEMORIES,
permissions: [Permissions.USE, Permissions.UPDATE],
getRoleByName,
});
const checkMemoryDelete = generateCheckAccess({
permissionType: PermissionTypes.MEMORIES,
permissions: [Permissions.USE, Permissions.UPDATE],
getRoleByName,
});
const checkMemoryOptOut = generateCheckAccess({
permissionType: PermissionTypes.MEMORIES,
permissions: [Permissions.USE, Permissions.OPT_OUT],
getRoleByName,
});
router.use(requireJwtAuth);

View File

@@ -1,5 +1,7 @@
const express = require('express');
const { PermissionTypes, Permissions, SystemRoles } = require('librechat-data-provider');
const { logger } = require('@librechat/data-schemas');
const { generateCheckAccess } = require('@librechat/api');
const { Permissions, SystemRoles, PermissionTypes } = require('librechat-data-provider');
const {
getPrompt,
getPrompts,
@@ -14,24 +16,30 @@ const {
// updatePromptLabels,
makePromptProduction,
} = require('~/models/Prompt');
const { requireJwtAuth, generateCheckAccess } = require('~/server/middleware');
const { logger } = require('~/config');
const { requireJwtAuth } = require('~/server/middleware');
const { getRoleByName } = require('~/models/Role');
const router = express.Router();
const checkPromptAccess = generateCheckAccess(PermissionTypes.PROMPTS, [Permissions.USE]);
const checkPromptCreate = generateCheckAccess(PermissionTypes.PROMPTS, [
Permissions.USE,
Permissions.CREATE,
]);
const checkPromptAccess = generateCheckAccess({
permissionType: PermissionTypes.PROMPTS,
permissions: [Permissions.USE],
getRoleByName,
});
const checkPromptCreate = generateCheckAccess({
permissionType: PermissionTypes.PROMPTS,
permissions: [Permissions.USE, Permissions.CREATE],
getRoleByName,
});
const checkGlobalPromptShare = generateCheckAccess(
PermissionTypes.PROMPTS,
[Permissions.USE, Permissions.CREATE],
{
const checkGlobalPromptShare = generateCheckAccess({
permissionType: PermissionTypes.PROMPTS,
permissions: [Permissions.USE, Permissions.CREATE],
bodyProps: {
[Permissions.SHARED_GLOBAL]: ['projectIds', 'removeProjectIds'],
},
);
getRoleByName,
});
router.use(requireJwtAuth);
router.use(checkPromptAccess);

View File

@@ -1,18 +1,24 @@
const express = require('express');
const { logger } = require('@librechat/data-schemas');
const { generateCheckAccess } = require('@librechat/api');
const { PermissionTypes, Permissions } = require('librechat-data-provider');
const {
getConversationTags,
updateTagsForConversation,
updateConversationTag,
createConversationTag,
deleteConversationTag,
updateTagsForConversation,
getConversationTags,
} = require('~/models/ConversationTag');
const { requireJwtAuth, generateCheckAccess } = require('~/server/middleware');
const { logger } = require('~/config');
const { requireJwtAuth } = require('~/server/middleware');
const { getRoleByName } = require('~/models/Role');
const router = express.Router();
const checkBookmarkAccess = generateCheckAccess(PermissionTypes.BOOKMARKS, [Permissions.USE]);
const checkBookmarkAccess = generateCheckAccess({
permissionType: PermissionTypes.BOOKMARKS,
permissions: [Permissions.USE],
getRoleByName,
});
router.use(requireJwtAuth);
router.use(checkBookmarkAccess);

View File

@@ -1,4 +1,7 @@
const { klona } = require('klona');
const { sleep } = require('@librechat/agents');
const { sendEvent } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const {
StepTypes,
RunStatus,
@@ -11,11 +14,10 @@ const {
} = require('librechat-data-provider');
const { retrieveAndProcessFile } = require('~/server/services/Files/process');
const { processRequiredActions } = require('~/server/services/ToolService');
const { createOnProgress, sendMessage, sleep } = require('~/server/utils');
const { RunManager, waitForRun } = require('~/server/services/Runs');
const { processMessages } = require('~/server/services/Threads');
const { createOnProgress } = require('~/server/utils');
const { TextStream } = require('~/app/clients');
const { logger } = require('~/config');
/**
* Sorts, processes, and flattens messages to a single string.
@@ -64,7 +66,7 @@ async function createOnTextProgress({
};
logger.debug('Content data:', contentData);
sendMessage(openai.res, contentData);
sendEvent(openai.res, contentData);
};
}

View File

@@ -1,4 +1,5 @@
const bcrypt = require('bcryptjs');
const jwt = require('jsonwebtoken');
const { webcrypto } = require('node:crypto');
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
@@ -499,6 +500,18 @@ const resendVerificationEmail = async (req) => {
};
}
};
/**
* Generate a short-lived JWT token
* @param {String} userId - The ID of the user
* @param {String} [expireIn='5m'] - The expiration time for the token (default is 5 minutes)
* @returns {String} - The generated JWT token
*/
const generateShortLivedToken = (userId, expireIn = '5m') => {
return jwt.sign({ id: userId }, process.env.JWT_SECRET, {
expiresIn: expireIn,
algorithm: 'HS256',
});
};
module.exports = {
logoutUser,
@@ -506,7 +519,8 @@ module.exports = {
registerUser,
setAuthTokens,
resetPassword,
setOpenIDAuthTokens,
requestPasswordReset,
resendVerificationEmail,
setOpenIDAuthTokens,
generateShortLivedToken,
};
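generateShortLivedToken is consumed further down in this comparison (deleteLocalFile and the vector-store helpers), replacing the forwarded session JWT so the RAG API always receives a token it can verify, even when OPENID_REUSE_TOKENS is enabled. A condensed sketch of that call pattern, assuming RAG_API_URL and JWT_SECRET are configured:

const axios = require('axios');
const { generateShortLivedToken } = require('~/server/services/AuthService');

// Mint a 5-minute HS256 token for the requesting user rather than
// forwarding whatever token arrived in the Authorization header.
const jwtToken = generateShortLivedToken(req.user.id);
await axios.delete(`${process.env.RAG_API_URL}/documents`, {
  headers: { Authorization: `Bearer ${jwtToken}` },
  // request body elided here; see the deleteVectors diff below for the full call
});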

View File

@@ -1,5 +1,6 @@
const { isUserProvided } = require('@librechat/api');
const { EModelEndpoint } = require('librechat-data-provider');
const { isUserProvided, generateConfig } = require('~/server/utils');
const { generateConfig } = require('~/server/utils/handleText');
const {
OPENAI_API_KEY: openAIApiKey,

View File

@@ -40,6 +40,7 @@ async function getBalanceConfig() {
/**
*
* @param {string | EModelEndpoint} endpoint
* @returns {Promise<TEndpoint | undefined>}
*/
const getCustomEndpointConfig = async (endpoint) => {
const customConfig = await getCustomConfig();

View File

@@ -1,3 +1,5 @@
const fs = require('fs');
const path = require('path');
const { EModelEndpoint } = require('librechat-data-provider');
const { isUserProvided } = require('~/server/utils');
const { config } = require('./EndpointService');
@@ -11,9 +13,21 @@ const { openAIApiKey, azureOpenAIApiKey, useAzurePlugins, userProvidedOpenAI, go
async function loadAsyncEndpoints(req) {
let i = 0;
let serviceKey, googleUserProvides;
const serviceKeyPath =
process.env.GOOGLE_SERVICE_KEY_FILE_PATH ||
path.join(__dirname, '../../..', 'data', 'auth.json');
try {
serviceKey = require('~/data/auth.json');
} catch (e) {
if (process.env.GOOGLE_SERVICE_KEY_FILE_PATH) {
const absolutePath = path.isAbsolute(serviceKeyPath)
? serviceKeyPath
: path.resolve(serviceKeyPath);
const fileContent = fs.readFileSync(absolutePath, 'utf8');
serviceKey = JSON.parse(fileContent);
} else {
serviceKey = require('~/data/auth.json');
}
} catch {
if (i === 0) {
i++;
}
@@ -32,14 +46,14 @@ async function loadAsyncEndpoints(req) {
const gptPlugins =
useAzure || openAIApiKey || azureOpenAIApiKey
? {
availableAgents: ['classic', 'functions'],
userProvide: useAzure ? false : userProvidedOpenAI,
userProvideURL: useAzure
  ? false
  : config[EModelEndpoint.openAI]?.userProvideURL ||
    config[EModelEndpoint.azureOpenAI]?.userProvideURL,
azure: useAzurePlugins || useAzure,
}
: false;
return { google, gptPlugins };
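The same resolution order (explicit GOOGLE_SERVICE_KEY_FILE_PATH, else the bundled ~/data/auth.json) reappears in the google endpoint initializer later in this comparison. A minimal standalone sketch of that order, using only what these diffs show:

const fs = require('fs');
const path = require('path');

/** Load the Google service account key, preferring GOOGLE_SERVICE_KEY_FILE_PATH. */
function loadServiceKey(fallbackPath) {
  const keyPath = process.env.GOOGLE_SERVICE_KEY_FILE_PATH || fallbackPath;
  const absolutePath = path.isAbsolute(keyPath) ? keyPath : path.resolve(keyPath);
  try {
    return JSON.parse(fs.readFileSync(absolutePath, 'utf8'));
  } catch {
    return undefined; // mirrors the silent catch above: a missing/invalid key is non-fatal
  }
}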

View File

@@ -1,18 +1,18 @@
const path = require('path');
const {
CacheKeys,
configSchema,
EImageOutputType,
validateSettingDefinitions,
agentParamSettings,
paramSettings,
} = require('librechat-data-provider');
const getLogStores = require('~/cache/getLogStores');
const loadYaml = require('~/utils/loadYaml');
const { logger } = require('~/config');
const axios = require('axios');
const yaml = require('js-yaml');
const keyBy = require('lodash/keyBy');
const { loadYaml } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const {
CacheKeys,
configSchema,
paramSettings,
EImageOutputType,
agentParamSettings,
validateSettingDefinitions,
} = require('librechat-data-provider');
const getLogStores = require('~/cache/getLogStores');
const projectRoot = path.resolve(__dirname, '..', '..', '..', '..');
const defaultConfigPath = path.resolve(projectRoot, 'librechat.yaml');

View File

@@ -1,6 +1,9 @@
jest.mock('axios');
jest.mock('~/cache/getLogStores');
jest.mock('~/utils/loadYaml');
jest.mock('@librechat/api', () => ({
...jest.requireActual('@librechat/api'),
loadYaml: jest.fn(),
}));
jest.mock('librechat-data-provider', () => {
const actual = jest.requireActual('librechat-data-provider');
return {
@@ -30,11 +33,22 @@ jest.mock('librechat-data-provider', () => {
};
});
jest.mock('@librechat/data-schemas', () => {
return {
logger: {
info: jest.fn(),
warn: jest.fn(),
debug: jest.fn(),
error: jest.fn(),
},
};
});
const axios = require('axios');
const { loadYaml } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const loadCustomConfig = require('./loadCustomConfig');
const getLogStores = require('~/cache/getLogStores');
const loadYaml = require('~/utils/loadYaml');
const { logger } = require('~/config');
describe('loadCustomConfig', () => {
const mockSet = jest.fn();

View File

@@ -11,30 +11,13 @@ const {
replaceSpecialVars,
providerEndpointMap,
} = require('librechat-data-provider');
const initAnthropic = require('~/server/services/Endpoints/anthropic/initialize');
const getBedrockOptions = require('~/server/services/Endpoints/bedrock/options');
const initOpenAI = require('~/server/services/Endpoints/openAI/initialize');
const initCustom = require('~/server/services/Endpoints/custom/initialize');
const initGoogle = require('~/server/services/Endpoints/google/initialize');
const { getProviderConfig } = require('~/server/services/Endpoints');
const generateArtifactsPrompt = require('~/app/clients/prompts/artifacts');
const { getCustomEndpointConfig } = require('~/server/services/Config');
const { processFiles } = require('~/server/services/Files/process');
const { getFiles, getToolFilesByIds } = require('~/models/File');
const { getConvoFiles } = require('~/models/Conversation');
const { getModelMaxTokens } = require('~/utils');
const providerConfigMap = {
[Providers.XAI]: initCustom,
[Providers.OLLAMA]: initCustom,
[Providers.DEEPSEEK]: initCustom,
[Providers.OPENROUTER]: initCustom,
[EModelEndpoint.openAI]: initOpenAI,
[EModelEndpoint.google]: initGoogle,
[EModelEndpoint.azureOpenAI]: initOpenAI,
[EModelEndpoint.anthropic]: initAnthropic,
[EModelEndpoint.bedrock]: getBedrockOptions,
};
/**
* @param {object} params
* @param {ServerRequest} params.req
@@ -114,17 +97,9 @@ const initializeAgent = async ({
})) ?? {};
agent.endpoint = provider;
let getOptions = providerConfigMap[provider];
if (!getOptions && providerConfigMap[provider.toLowerCase()] != null) {
agent.provider = provider.toLowerCase();
getOptions = providerConfigMap[agent.provider];
} else if (!getOptions) {
const customEndpointConfig = await getCustomEndpointConfig(provider);
if (!customEndpointConfig) {
throw new Error(`Provider ${provider} not supported`);
}
getOptions = initCustom;
agent.provider = Providers.OPENAI;
const { getOptions, overrideProvider } = await getProviderConfig(provider);
if (overrideProvider) {
agent.provider = overrideProvider;
}
const _endpointOption =

View File

@@ -23,7 +23,7 @@ const addTitle = async (req, { text, response, client }) => {
let timeoutId;
try {
const timeoutPromise = new Promise((_, reject) => {
timeoutId = setTimeout(() => reject(new Error('Title generation timeout')), 25000);
timeoutId = setTimeout(() => reject(new Error('Title generation timeout')), 45000);
}).catch((error) => {
logger.error('Title error:', error);
});

View File

@@ -41,7 +41,7 @@ const initializeClient = async ({ req, res, endpointOption, overrideModel, optio
{
reverseProxyUrl: ANTHROPIC_REVERSE_PROXY ?? null,
proxy: PROXY ?? null,
modelOptions: endpointOption.model_parameters,
modelOptions: endpointOption?.model_parameters ?? {},
},
clientOptions,
);

View File

@@ -75,6 +75,7 @@ function getLLMConfig(apiKey, options = {}) {
if (options.reverseProxyUrl) {
requestOptions.clientOptions.baseURL = options.reverseProxyUrl;
requestOptions.anthropicApiUrl = options.reverseProxyUrl;
}
return {

View File

@@ -1,11 +1,45 @@
const { anthropicSettings } = require('librechat-data-provider');
const { anthropicSettings, removeNullishValues } = require('librechat-data-provider');
const { getLLMConfig } = require('~/server/services/Endpoints/anthropic/llm');
const { checkPromptCacheSupport, getClaudeHeaders, configureReasoning } = require('./helpers');
jest.mock('https-proxy-agent', () => ({
HttpsProxyAgent: jest.fn().mockImplementation((proxy) => ({ proxy })),
}));
jest.mock('./helpers', () => ({
checkPromptCacheSupport: jest.fn(),
getClaudeHeaders: jest.fn(),
configureReasoning: jest.fn((requestOptions) => requestOptions),
}));
jest.mock('librechat-data-provider', () => ({
anthropicSettings: {
model: { default: 'claude-3-opus-20240229' },
maxOutputTokens: { default: 4096, reset: jest.fn(() => 4096) },
thinking: { default: false },
promptCache: { default: false },
thinkingBudget: { default: null },
},
removeNullishValues: jest.fn((obj) => {
const result = {};
for (const key in obj) {
if (obj[key] !== null && obj[key] !== undefined) {
result[key] = obj[key];
}
}
return result;
}),
}));
describe('getLLMConfig', () => {
beforeEach(() => {
jest.clearAllMocks();
checkPromptCacheSupport.mockReturnValue(false);
getClaudeHeaders.mockReturnValue(undefined);
configureReasoning.mockImplementation((requestOptions) => requestOptions);
anthropicSettings.maxOutputTokens.reset.mockReturnValue(4096);
});
it('should create a basic configuration with default values', () => {
const result = getLLMConfig('test-api-key', { modelOptions: {} });
@@ -36,6 +70,7 @@ describe('getLLMConfig', () => {
});
expect(result.llmConfig.clientOptions).toHaveProperty('baseURL', 'http://reverse-proxy');
expect(result.llmConfig).toHaveProperty('anthropicApiUrl', 'http://reverse-proxy');
});
it('should include topK and topP for non-Claude-3.7 models', () => {
@@ -65,6 +100,11 @@ describe('getLLMConfig', () => {
});
it('should NOT include topK and topP for Claude-3-7 models (hyphen notation)', () => {
configureReasoning.mockImplementation((requestOptions) => {
requestOptions.thinking = { type: 'enabled' };
return requestOptions;
});
const result = getLLMConfig('test-api-key', {
modelOptions: {
model: 'claude-3-7-sonnet',
@@ -78,6 +118,11 @@ describe('getLLMConfig', () => {
});
it('should NOT include topK and topP for Claude-3.7 models (decimal notation)', () => {
configureReasoning.mockImplementation((requestOptions) => {
requestOptions.thinking = { type: 'enabled' };
return requestOptions;
});
const result = getLLMConfig('test-api-key', {
modelOptions: {
model: 'claude-3.7-sonnet',
@@ -154,4 +199,160 @@ describe('getLLMConfig', () => {
expect(result3.llmConfig).toHaveProperty('topK', 10);
expect(result3.llmConfig).toHaveProperty('topP', 0.9);
});
describe('Edge cases', () => {
it('should handle missing apiKey', () => {
const result = getLLMConfig(undefined, { modelOptions: {} });
expect(result.llmConfig).not.toHaveProperty('apiKey');
});
it('should handle empty modelOptions', () => {
expect(() => {
getLLMConfig('test-api-key', {});
}).toThrow("Cannot read properties of undefined (reading 'thinking')");
});
it('should handle no options parameter', () => {
expect(() => {
getLLMConfig('test-api-key');
}).toThrow("Cannot read properties of undefined (reading 'thinking')");
});
it('should handle temperature, stop sequences, and stream settings', () => {
const result = getLLMConfig('test-api-key', {
modelOptions: {
temperature: 0.7,
stop: ['\n\n', 'END'],
stream: false,
},
});
expect(result.llmConfig).toHaveProperty('temperature', 0.7);
expect(result.llmConfig).toHaveProperty('stopSequences', ['\n\n', 'END']);
expect(result.llmConfig).toHaveProperty('stream', false);
});
it('should handle maxOutputTokens when explicitly set to falsy value', () => {
anthropicSettings.maxOutputTokens.reset.mockReturnValue(8192);
const result = getLLMConfig('test-api-key', {
modelOptions: {
model: 'claude-3-opus',
maxOutputTokens: null,
},
});
expect(anthropicSettings.maxOutputTokens.reset).toHaveBeenCalledWith('claude-3-opus');
expect(result.llmConfig).toHaveProperty('maxTokens', 8192);
});
it('should handle both proxy and reverseProxyUrl', () => {
const result = getLLMConfig('test-api-key', {
modelOptions: {},
proxy: 'http://proxy:8080',
reverseProxyUrl: 'https://reverse-proxy.com',
});
expect(result.llmConfig.clientOptions).toHaveProperty('fetchOptions');
expect(result.llmConfig.clientOptions.fetchOptions).toHaveProperty('dispatcher');
expect(result.llmConfig.clientOptions.fetchOptions.dispatcher).toBeDefined();
expect(result.llmConfig.clientOptions.fetchOptions.dispatcher.constructor.name).toBe(
'ProxyAgent',
);
expect(result.llmConfig.clientOptions).toHaveProperty('baseURL', 'https://reverse-proxy.com');
expect(result.llmConfig).toHaveProperty('anthropicApiUrl', 'https://reverse-proxy.com');
});
it('should handle prompt cache with supported model', () => {
checkPromptCacheSupport.mockReturnValue(true);
getClaudeHeaders.mockReturnValue({ 'anthropic-beta': 'prompt-caching-2024-07-31' });
const result = getLLMConfig('test-api-key', {
modelOptions: {
model: 'claude-3-5-sonnet',
promptCache: true,
},
});
expect(checkPromptCacheSupport).toHaveBeenCalledWith('claude-3-5-sonnet');
expect(getClaudeHeaders).toHaveBeenCalledWith('claude-3-5-sonnet', true);
expect(result.llmConfig.clientOptions.defaultHeaders).toEqual({
'anthropic-beta': 'prompt-caching-2024-07-31',
});
});
it('should handle thinking and thinkingBudget options', () => {
configureReasoning.mockImplementation((requestOptions, systemOptions) => {
if (systemOptions.thinking) {
requestOptions.thinking = { type: 'enabled' };
}
if (systemOptions.thinkingBudget) {
requestOptions.thinking = {
...requestOptions.thinking,
budget_tokens: systemOptions.thinkingBudget,
};
}
return requestOptions;
});
getLLMConfig('test-api-key', {
modelOptions: {
model: 'claude-3-7-sonnet',
thinking: true,
thinkingBudget: 5000,
},
});
expect(configureReasoning).toHaveBeenCalledWith(
expect.any(Object),
expect.objectContaining({
thinking: true,
promptCache: false,
thinkingBudget: 5000,
}),
);
});
it('should remove system options from modelOptions', () => {
const modelOptions = {
model: 'claude-3-opus',
thinking: true,
promptCache: true,
thinkingBudget: 1000,
temperature: 0.5,
};
getLLMConfig('test-api-key', { modelOptions });
expect(modelOptions).not.toHaveProperty('thinking');
expect(modelOptions).not.toHaveProperty('promptCache');
expect(modelOptions).not.toHaveProperty('thinkingBudget');
expect(modelOptions).toHaveProperty('temperature', 0.5);
});
it('should handle all nullish values removal', () => {
removeNullishValues.mockImplementation((obj) => {
const cleaned = {};
Object.entries(obj).forEach(([key, value]) => {
if (value !== null && value !== undefined) {
cleaned[key] = value;
}
});
return cleaned;
});
const result = getLLMConfig('test-api-key', {
modelOptions: {
temperature: null,
topP: undefined,
topK: 0,
stop: [],
},
});
expect(result.llmConfig).not.toHaveProperty('temperature');
expect(result.llmConfig).not.toHaveProperty('topP');
expect(result.llmConfig).toHaveProperty('topK', 0);
expect(result.llmConfig).toHaveProperty('stopSequences', []);
});
});
});

View File

@@ -1,12 +1,7 @@
const OpenAI = require('openai');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { constructAzureURL, isUserProvided } = require('@librechat/api');
const {
ErrorTypes,
EModelEndpoint,
resolveHeaders,
mapModelToAzureConfig,
} = require('librechat-data-provider');
const { constructAzureURL, isUserProvided, resolveHeaders } = require('@librechat/api');
const { ErrorTypes, EModelEndpoint, mapModelToAzureConfig } = require('librechat-data-provider');
const {
getUserKeyValues,
getUserKeyExpiry,
@@ -114,11 +109,14 @@ const initializeClient = async ({ req, res, version, endpointOption, initAppClie
apiKey = azureOptions.azureOpenAIApiKey;
opts.defaultQuery = { 'api-version': azureOptions.azureOpenAIApiVersion };
opts.defaultHeaders = resolveHeaders({
...headers,
'api-key': apiKey,
'OpenAI-Beta': `assistants=${version}`,
});
opts.defaultHeaders = resolveHeaders(
{
...headers,
'api-key': apiKey,
'OpenAI-Beta': `assistants=${version}`,
},
req.user,
);
opts.model = azureOptions.azureOpenAIApiDeploymentName;
if (initAppClient) {

View File

@@ -64,7 +64,7 @@ const getOptions = async ({ req, overrideModel, endpointOption }) => {
/** @type {BedrockClientOptions} */
const requestOptions = {
model: overrideModel ?? endpointOption.model,
model: overrideModel ?? endpointOption?.model,
region: BEDROCK_AWS_DEFAULT_REGION,
};
@@ -76,7 +76,7 @@ const getOptions = async ({ req, overrideModel, endpointOption }) => {
const llmConfig = bedrockOutputParser(
bedrockInputParser.parse(
removeNullishValues(Object.assign(requestOptions, endpointOption.model_parameters)),
removeNullishValues(Object.assign(requestOptions, endpointOption?.model_parameters ?? {})),
),
);

View File

@@ -6,7 +6,7 @@ const {
extractEnvVariable,
} = require('librechat-data-provider');
const { Providers } = require('@librechat/agents');
const { getOpenAIConfig, createHandleLLMNewToken } = require('@librechat/api');
const { getOpenAIConfig, createHandleLLMNewToken, resolveHeaders } = require('@librechat/api');
const { getUserKeyValues, checkUserKeyExpiry } = require('~/server/services/UserService');
const { getCustomEndpointConfig } = require('~/server/services/Config');
const { fetchModels } = require('~/server/services/ModelService');
@@ -28,12 +28,7 @@ const initializeClient = async ({ req, res, endpointOption, optionsOnly, overrid
const CUSTOM_API_KEY = extractEnvVariable(endpointConfig.apiKey);
const CUSTOM_BASE_URL = extractEnvVariable(endpointConfig.baseURL);
let resolvedHeaders = {};
if (endpointConfig.headers && typeof endpointConfig.headers === 'object') {
Object.keys(endpointConfig.headers).forEach((key) => {
resolvedHeaders[key] = extractEnvVariable(endpointConfig.headers[key]);
});
}
let resolvedHeaders = resolveHeaders(endpointConfig.headers, req.user);
if (CUSTOM_API_KEY.match(envVarRegex)) {
throw new Error(`Missing API Key for ${endpoint}.`);
@@ -134,7 +129,7 @@ const initializeClient = async ({ req, res, endpointOption, optionsOnly, overrid
};
if (optionsOnly) {
const modelOptions = endpointOption.model_parameters;
const modelOptions = endpointOption?.model_parameters ?? {};
if (endpoint !== Providers.OLLAMA) {
clientOptions = Object.assign(
{

View File

@@ -0,0 +1,93 @@
const initializeClient = require('./initialize');
jest.mock('@librechat/api', () => ({
resolveHeaders: jest.fn(),
getOpenAIConfig: jest.fn(),
createHandleLLMNewToken: jest.fn(),
}));
jest.mock('librechat-data-provider', () => ({
CacheKeys: { TOKEN_CONFIG: 'token_config' },
ErrorTypes: { NO_USER_KEY: 'NO_USER_KEY', NO_BASE_URL: 'NO_BASE_URL' },
envVarRegex: /\$\{([^}]+)\}/,
FetchTokenConfig: {},
extractEnvVariable: jest.fn((value) => value),
}));
jest.mock('@librechat/agents', () => ({
Providers: { OLLAMA: 'ollama' },
}));
jest.mock('~/server/services/UserService', () => ({
getUserKeyValues: jest.fn(),
checkUserKeyExpiry: jest.fn(),
}));
jest.mock('~/server/services/Config', () => ({
getCustomEndpointConfig: jest.fn().mockResolvedValue({
apiKey: 'test-key',
baseURL: 'https://test.com',
headers: { 'x-user': '{{LIBRECHAT_USER_ID}}', 'x-email': '{{LIBRECHAT_USER_EMAIL}}' },
models: { default: ['test-model'] },
}),
}));
jest.mock('~/server/services/ModelService', () => ({
fetchModels: jest.fn(),
}));
jest.mock('~/app/clients/OpenAIClient', () => {
return jest.fn().mockImplementation(() => ({
options: {},
}));
});
jest.mock('~/server/utils', () => ({
isUserProvided: jest.fn().mockReturnValue(false),
}));
jest.mock('~/cache/getLogStores', () =>
jest.fn().mockReturnValue({
get: jest.fn(),
}),
);
describe('custom/initializeClient', () => {
const mockRequest = {
body: { endpoint: 'test-endpoint' },
user: { id: 'user-123', email: 'test@example.com' },
app: { locals: {} },
};
const mockResponse = {};
beforeEach(() => {
jest.clearAllMocks();
});
it('calls resolveHeaders with headers and user', async () => {
const { resolveHeaders } = require('@librechat/api');
await initializeClient({ req: mockRequest, res: mockResponse, optionsOnly: true });
expect(resolveHeaders).toHaveBeenCalledWith(
{ 'x-user': '{{LIBRECHAT_USER_ID}}', 'x-email': '{{LIBRECHAT_USER_EMAIL}}' },
{ id: 'user-123', email: 'test@example.com' },
);
});
it('throws if endpoint config is missing', async () => {
const { getCustomEndpointConfig } = require('~/server/services/Config');
getCustomEndpointConfig.mockResolvedValueOnce(null);
await expect(
initializeClient({ req: mockRequest, res: mockResponse, optionsOnly: true }),
).rejects.toThrow('Config not found for the test-endpoint custom endpoint.');
});
it('throws if user is missing', async () => {
await expect(
initializeClient({
req: { ...mockRequest, user: undefined },
res: mockResponse,
optionsOnly: true,
}),
).rejects.toThrow("Cannot read properties of undefined (reading 'id')");
});
});
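The spec above pins down the new two-argument call shape: resolveHeaders(headers, user). Because the spec mocks resolveHeaders, the substitution itself is only implied by the placeholder names; a hedged illustration of the presumed behavior:

const { resolveHeaders } = require('@librechat/api');

// Presumed substitution (inferred from the placeholder names, not from source):
const headers = resolveHeaders(
  { 'x-user': '{{LIBRECHAT_USER_ID}}', 'x-email': '{{LIBRECHAT_USER_EMAIL}}' },
  { id: 'user-123', email: 'test@example.com' },
);
// Expected shape: { 'x-user': 'user-123', 'x-email': 'test@example.com' }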

View File

@@ -1,7 +1,8 @@
const fs = require('fs');
const path = require('path');
const { getGoogleConfig, isEnabled } = require('@librechat/api');
const { EModelEndpoint, AuthKeys } = require('librechat-data-provider');
const { getUserKey, checkUserKeyExpiry } = require('~/server/services/UserService');
const { getLLMConfig } = require('~/server/services/Endpoints/google/llm');
const { isEnabled } = require('~/server/utils');
const { GoogleClient } = require('~/app');
const initializeClient = async ({ req, res, endpointOption, overrideModel, optionsOnly }) => {
@@ -16,9 +17,21 @@ const initializeClient = async ({ req, res, endpointOption, overrideModel, optio
}
let serviceKey = {};
try {
serviceKey = require('~/data/auth.json');
} catch (e) {
if (process.env.GOOGLE_SERVICE_KEY_FILE_PATH) {
const serviceKeyPath =
process.env.GOOGLE_SERVICE_KEY_FILE_PATH ||
path.join(__dirname, '../../../../..', 'data', 'auth.json');
const absolutePath = path.isAbsolute(serviceKeyPath)
? serviceKeyPath
: path.resolve(serviceKeyPath);
const fileContent = fs.readFileSync(absolutePath, 'utf8');
serviceKey = JSON.parse(fileContent);
} else {
serviceKey = require('~/data/auth.json');
}
} catch (_e) {
// Do nothing
}
@@ -58,14 +71,14 @@ const initializeClient = async ({ req, res, endpointOption, overrideModel, optio
if (optionsOnly) {
clientOptions = Object.assign(
{
modelOptions: endpointOption.model_parameters,
modelOptions: endpointOption?.model_parameters ?? {},
},
clientOptions,
);
if (overrideModel) {
clientOptions.modelOptions.model = overrideModel;
}
return getLLMConfig(credentials, clientOptions);
return getGoogleConfig(credentials, clientOptions);
}
const client = new GoogleClient(credentials, clientOptions);

View File

@@ -1,41 +0,0 @@
const { removeNullishValues } = require('librechat-data-provider');
const generateArtifactsPrompt = require('~/app/clients/prompts/artifacts');
const buildOptions = (endpoint, parsedBody) => {
const {
modelLabel,
chatGptLabel,
promptPrefix,
agentOptions,
tools = [],
iconURL,
greeting,
spec,
maxContextTokens,
artifacts,
...modelOptions
} = parsedBody;
const endpointOption = removeNullishValues({
endpoint,
tools: tools
.map((tool) => tool?.pluginKey ?? tool)
.filter((toolName) => typeof toolName === 'string'),
modelLabel,
chatGptLabel,
promptPrefix,
agentOptions,
iconURL,
greeting,
spec,
maxContextTokens,
modelOptions,
});
if (typeof artifacts === 'string') {
endpointOption.artifactsPrompt = generateArtifactsPrompt({ endpoint, artifacts });
}
return endpointOption;
};
module.exports = buildOptions;

View File

@@ -1,7 +0,0 @@
const buildOptions = require('./build');
const initializeClient = require('./initialize');
module.exports = {
buildOptions,
initializeClient,
};

View File

@@ -1,134 +0,0 @@
const {
EModelEndpoint,
resolveHeaders,
mapModelToAzureConfig,
} = require('librechat-data-provider');
const { isEnabled, isUserProvided, getAzureCredentials } = require('@librechat/api');
const { getUserKeyValues, checkUserKeyExpiry } = require('~/server/services/UserService');
const { PluginsClient } = require('~/app');
const initializeClient = async ({ req, res, endpointOption }) => {
const {
PROXY,
OPENAI_API_KEY,
AZURE_API_KEY,
PLUGINS_USE_AZURE,
OPENAI_REVERSE_PROXY,
AZURE_OPENAI_BASEURL,
OPENAI_SUMMARIZE,
DEBUG_PLUGINS,
} = process.env;
const { key: expiresAt, model: modelName } = req.body;
const contextStrategy = isEnabled(OPENAI_SUMMARIZE) ? 'summarize' : null;
let useAzure = isEnabled(PLUGINS_USE_AZURE);
let endpoint = useAzure ? EModelEndpoint.azureOpenAI : EModelEndpoint.openAI;
/** @type {false | TAzureConfig} */
const azureConfig = req.app.locals[EModelEndpoint.azureOpenAI];
useAzure = useAzure || azureConfig?.plugins;
if (useAzure && endpoint !== EModelEndpoint.azureOpenAI) {
endpoint = EModelEndpoint.azureOpenAI;
}
const credentials = {
[EModelEndpoint.openAI]: OPENAI_API_KEY,
[EModelEndpoint.azureOpenAI]: AZURE_API_KEY,
};
const baseURLOptions = {
[EModelEndpoint.openAI]: OPENAI_REVERSE_PROXY,
[EModelEndpoint.azureOpenAI]: AZURE_OPENAI_BASEURL,
};
const userProvidesKey = isUserProvided(credentials[endpoint]);
const userProvidesURL = isUserProvided(baseURLOptions[endpoint]);
let userValues = null;
if (expiresAt && (userProvidesKey || userProvidesURL)) {
checkUserKeyExpiry(expiresAt, endpoint);
userValues = await getUserKeyValues({ userId: req.user.id, name: endpoint });
}
let apiKey = userProvidesKey ? userValues?.apiKey : credentials[endpoint];
let baseURL = userProvidesURL ? userValues?.baseURL : baseURLOptions[endpoint];
const clientOptions = {
contextStrategy,
debug: isEnabled(DEBUG_PLUGINS),
reverseProxyUrl: baseURL ? baseURL : null,
proxy: PROXY ?? null,
req,
res,
...endpointOption,
};
if (useAzure && azureConfig) {
const { modelGroupMap, groupMap } = azureConfig;
const {
azureOptions,
baseURL,
headers = {},
serverless,
} = mapModelToAzureConfig({
modelName,
modelGroupMap,
groupMap,
});
clientOptions.reverseProxyUrl = baseURL ?? clientOptions.reverseProxyUrl;
clientOptions.headers = resolveHeaders({ ...headers, ...(clientOptions.headers ?? {}) });
clientOptions.titleConvo = azureConfig.titleConvo;
clientOptions.titleModel = azureConfig.titleModel;
clientOptions.titleMethod = azureConfig.titleMethod ?? 'completion';
const azureRate = modelName.includes('gpt-4') ? 30 : 17;
clientOptions.streamRate = azureConfig.streamRate ?? azureRate;
const groupName = modelGroupMap[modelName].group;
clientOptions.addParams = azureConfig.groupMap[groupName].addParams;
clientOptions.dropParams = azureConfig.groupMap[groupName].dropParams;
clientOptions.forcePrompt = azureConfig.groupMap[groupName].forcePrompt;
apiKey = azureOptions.azureOpenAIApiKey;
clientOptions.azure = !serverless && azureOptions;
if (serverless === true) {
clientOptions.defaultQuery = azureOptions.azureOpenAIApiVersion
? { 'api-version': azureOptions.azureOpenAIApiVersion }
: undefined;
clientOptions.headers['api-key'] = apiKey;
}
} else if (useAzure || (apiKey && apiKey.includes('{"azure') && !clientOptions.azure)) {
clientOptions.azure = userProvidesKey ? JSON.parse(userValues.apiKey) : getAzureCredentials();
apiKey = clientOptions.azure.azureOpenAIApiKey;
}
/** @type {undefined | TBaseEndpoint} */
const pluginsConfig = req.app.locals[EModelEndpoint.gptPlugins];
if (!useAzure && pluginsConfig) {
clientOptions.streamRate = pluginsConfig.streamRate;
}
/** @type {undefined | TBaseEndpoint} */
const allConfig = req.app.locals.all;
if (allConfig) {
clientOptions.streamRate = allConfig.streamRate;
}
if (!apiKey) {
throw new Error(`${endpoint} API key not provided. Please provide it again.`);
}
const client = new PluginsClient(apiKey, clientOptions);
return {
client,
azure: clientOptions.azure,
openAIApiKey: apiKey,
};
};
module.exports = initializeClient;

View File

@@ -1,410 +0,0 @@
// gptPlugins/initializeClient.spec.js
jest.mock('~/cache/getLogStores');
const { EModelEndpoint, ErrorTypes, validateAzureGroups } = require('librechat-data-provider');
const { getUserKey, getUserKeyValues } = require('~/server/services/UserService');
const initializeClient = require('./initialize');
const { PluginsClient } = require('~/app');
// Mock getUserKey since it's the only function we want to mock
jest.mock('~/server/services/UserService', () => ({
getUserKey: jest.fn(),
getUserKeyValues: jest.fn(),
checkUserKeyExpiry: jest.requireActual('~/server/services/UserService').checkUserKeyExpiry,
}));
describe('gptPlugins/initializeClient', () => {
// Set up environment variables
const originalEnvironment = process.env;
const app = {
locals: {},
};
const validAzureConfigs = [
{
group: 'librechat-westus',
apiKey: 'WESTUS_API_KEY',
instanceName: 'librechat-westus',
version: '2023-12-01-preview',
models: {
'gpt-4-vision-preview': {
deploymentName: 'gpt-4-vision-preview',
version: '2024-02-15-preview',
},
'gpt-3.5-turbo': {
deploymentName: 'gpt-35-turbo',
},
'gpt-3.5-turbo-1106': {
deploymentName: 'gpt-35-turbo-1106',
},
'gpt-4': {
deploymentName: 'gpt-4',
},
'gpt-4-1106-preview': {
deploymentName: 'gpt-4-1106-preview',
},
},
},
{
group: 'librechat-eastus',
apiKey: 'EASTUS_API_KEY',
instanceName: 'librechat-eastus',
deploymentName: 'gpt-4-turbo',
version: '2024-02-15-preview',
models: {
'gpt-4-turbo': true,
},
baseURL: 'https://eastus.example.com',
additionalHeaders: {
'x-api-key': 'x-api-key-value',
},
},
{
group: 'mistral-inference',
apiKey: 'AZURE_MISTRAL_API_KEY',
baseURL:
'https://Mistral-large-vnpet-serverless.region.inference.ai.azure.com/v1/chat/completions',
serverless: true,
models: {
'mistral-large': true,
},
},
{
group: 'llama-70b-chat',
apiKey: 'AZURE_LLAMA2_70B_API_KEY',
baseURL:
'https://Llama-2-70b-chat-qmvyb-serverless.region.inference.ai.azure.com/v1/chat/completions',
serverless: true,
models: {
'llama-70b-chat': true,
},
},
];
const { modelNames, modelGroupMap, groupMap } = validateAzureGroups(validAzureConfigs);
beforeEach(() => {
jest.resetModules(); // Clears the cache
process.env = { ...originalEnvironment }; // Make a copy
});
afterAll(() => {
process.env = originalEnvironment; // Restore original env vars
});
test('should initialize PluginsClient with OpenAI API key and default options', async () => {
process.env.OPENAI_API_KEY = 'test-openai-api-key';
process.env.PLUGINS_USE_AZURE = 'false';
process.env.DEBUG_PLUGINS = 'false';
process.env.OPENAI_SUMMARIZE = 'false';
const req = {
body: { key: null },
user: { id: '123' },
app,
};
const res = {};
const endpointOption = { modelOptions: { model: 'default-model' } };
const { client, openAIApiKey } = await initializeClient({ req, res, endpointOption });
expect(openAIApiKey).toBe('test-openai-api-key');
expect(client).toBeInstanceOf(PluginsClient);
});
test('should initialize PluginsClient with Azure credentials when PLUGINS_USE_AZURE is true', async () => {
process.env.AZURE_API_KEY = 'test-azure-api-key';
(process.env.AZURE_OPENAI_API_INSTANCE_NAME = 'some-value'),
(process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME = 'some-value'),
(process.env.AZURE_OPENAI_API_VERSION = 'some-value'),
(process.env.AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME = 'some-value'),
(process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME = 'some-value'),
(process.env.PLUGINS_USE_AZURE = 'true');
process.env.DEBUG_PLUGINS = 'false';
process.env.OPENAI_SUMMARIZE = 'false';
const req = {
body: { key: null },
user: { id: '123' },
app,
};
const res = {};
const endpointOption = { modelOptions: { model: 'test-model' } };
const { client, azure } = await initializeClient({ req, res, endpointOption });
expect(azure.azureOpenAIApiKey).toBe('test-azure-api-key');
expect(client).toBeInstanceOf(PluginsClient);
});
test('should use the debug option when DEBUG_PLUGINS is enabled', async () => {
process.env.OPENAI_API_KEY = 'test-openai-api-key';
process.env.DEBUG_PLUGINS = 'true';
const req = {
body: { key: null },
user: { id: '123' },
app,
};
const res = {};
const endpointOption = { modelOptions: { model: 'default-model' } };
const { client } = await initializeClient({ req, res, endpointOption });
expect(client.options.debug).toBe(true);
});
test('should set contextStrategy to summarize when OPENAI_SUMMARIZE is enabled', async () => {
process.env.OPENAI_API_KEY = 'test-openai-api-key';
process.env.OPENAI_SUMMARIZE = 'true';
const req = {
body: { key: null },
user: { id: '123' },
app,
};
const res = {};
const endpointOption = { modelOptions: { model: 'default-model' } };
const { client } = await initializeClient({ req, res, endpointOption });
expect(client.options.contextStrategy).toBe('summarize');
});
// ... additional tests for reverseProxyUrl, proxy, user-provided keys, etc.
test('should throw an error if no API keys are provided in the environment', async () => {
// Clear the environment variables for API keys
delete process.env.OPENAI_API_KEY;
delete process.env.AZURE_API_KEY;
const req = {
body: { key: null },
user: { id: '123' },
app,
};
const res = {};
const endpointOption = { modelOptions: { model: 'default-model' } };
await expect(initializeClient({ req, res, endpointOption })).rejects.toThrow(
`${EModelEndpoint.openAI} API key not provided.`,
);
});
// Additional tests for gptPlugins/initializeClient.spec.js
// ... (previous test setup code)
test('should handle user-provided OpenAI keys and check expiry', async () => {
process.env.OPENAI_API_KEY = 'user_provided';
process.env.PLUGINS_USE_AZURE = 'false';
const futureDate = new Date(Date.now() + 10000).toISOString();
const req = {
body: { key: futureDate },
user: { id: '123' },
app,
};
const res = {};
const endpointOption = { modelOptions: { model: 'default-model' } };
getUserKeyValues.mockResolvedValue({ apiKey: 'test-user-provided-openai-api-key' });
const { openAIApiKey } = await initializeClient({ req, res, endpointOption });
expect(openAIApiKey).toBe('test-user-provided-openai-api-key');
});
test('should handle user-provided Azure keys and check expiry', async () => {
process.env.AZURE_API_KEY = 'user_provided';
process.env.PLUGINS_USE_AZURE = 'true';
const futureDate = new Date(Date.now() + 10000).toISOString();
const req = {
body: { key: futureDate },
user: { id: '123' },
app,
};
const res = {};
const endpointOption = { modelOptions: { model: 'test-model' } };
getUserKeyValues.mockResolvedValue({
apiKey: JSON.stringify({
azureOpenAIApiKey: 'test-user-provided-azure-api-key',
azureOpenAIApiDeploymentName: 'test-deployment',
}),
});
const { azure } = await initializeClient({ req, res, endpointOption });
expect(azure.azureOpenAIApiKey).toBe('test-user-provided-azure-api-key');
});
test('should throw an error if the user-provided key has expired', async () => {
process.env.OPENAI_API_KEY = 'user_provided';
process.env.PLUGINS_USE_AZURE = 'FALSE';
const expiresAt = new Date(Date.now() - 10000).toISOString(); // Expired
const req = {
body: { key: expiresAt },
user: { id: '123' },
app,
};
const res = {};
const endpointOption = { modelOptions: { model: 'default-model' } };
await expect(initializeClient({ req, res, endpointOption })).rejects.toThrow(
/expired_user_key/,
);
});
test('should throw an error if the user-provided Azure key is invalid JSON', async () => {
process.env.AZURE_API_KEY = 'user_provided';
process.env.PLUGINS_USE_AZURE = 'true';
const req = {
body: { key: new Date(Date.now() + 10000).toISOString() },
user: { id: '123' },
app,
};
const res = {};
const endpointOption = { modelOptions: { model: 'default-model' } };
// Simulate an invalid JSON string returned from getUserKey
getUserKey.mockResolvedValue('invalid-json');
getUserKeyValues.mockImplementation(() => {
let userValues = getUserKey();
try {
userValues = JSON.parse(userValues);
} catch (e) {
throw new Error(
JSON.stringify({
type: ErrorTypes.INVALID_USER_KEY,
}),
);
}
return userValues;
});
await expect(initializeClient({ req, res, endpointOption })).rejects.toThrow(
/invalid_user_key/,
);
});
test('should correctly handle the presence of a reverse proxy', async () => {
process.env.OPENAI_REVERSE_PROXY = 'http://reverse.proxy';
process.env.PROXY = 'http://proxy';
process.env.OPENAI_API_KEY = 'test-openai-api-key';
const req = {
body: { key: null },
user: { id: '123' },
app,
};
const res = {};
const endpointOption = { modelOptions: { model: 'default-model' } };
const { client } = await initializeClient({ req, res, endpointOption });
expect(client.options.reverseProxyUrl).toBe('http://reverse.proxy');
expect(client.options.proxy).toBe('http://proxy');
});
test('should throw an error when user-provided values are not valid JSON', async () => {
process.env.OPENAI_API_KEY = 'user_provided';
const req = {
body: { key: new Date(Date.now() + 10000).toISOString(), endpoint: 'openAI' },
user: { id: '123' },
app,
};
const res = {};
const endpointOption = {};
// Mock getUserKey to return a non-JSON string
getUserKey.mockResolvedValue('not-a-json');
getUserKeyValues.mockImplementation(() => {
let userValues = getUserKey();
try {
userValues = JSON.parse(userValues);
} catch (e) {
throw new Error(
JSON.stringify({
type: ErrorTypes.INVALID_USER_KEY,
}),
);
}
return userValues;
});
await expect(initializeClient({ req, res, endpointOption })).rejects.toThrow(
/invalid_user_key/,
);
});
test('should initialize client correctly for Azure OpenAI with valid configuration', async () => {
const req = {
body: {
key: null,
endpoint: EModelEndpoint.gptPlugins,
model: modelNames[0],
},
user: { id: '123' },
app: {
locals: {
[EModelEndpoint.azureOpenAI]: {
plugins: true,
modelNames,
modelGroupMap,
groupMap,
},
},
},
};
const res = {};
const endpointOption = {};
const client = await initializeClient({ req, res, endpointOption });
expect(client.client.options.azure).toBeDefined();
});
test('should initialize client with default options when certain env vars are not set', async () => {
delete process.env.OPENAI_SUMMARIZE;
process.env.OPENAI_API_KEY = 'some-api-key';
const req = {
body: { key: null, endpoint: EModelEndpoint.gptPlugins },
user: { id: '123' },
app,
};
const res = {};
const endpointOption = {};
const client = await initializeClient({ req, res, endpointOption });
expect(client.client.options.contextStrategy).toBe(null);
});
test('should correctly use user-provided apiKey and baseURL when provided', async () => {
process.env.OPENAI_API_KEY = 'user_provided';
process.env.OPENAI_REVERSE_PROXY = 'user_provided';
const req = {
body: {
key: new Date(Date.now() + 10000).toISOString(),
endpoint: 'openAI',
},
user: {
id: '123',
},
app,
};
const res = {};
const endpointOption = {};
getUserKeyValues.mockResolvedValue({
apiKey: 'test',
baseURL: 'https://user-provided-url.com',
});
const result = await initializeClient({ req, res, endpointOption });
expect(result.openAIApiKey).toBe('test');
expect(result.client.options.reverseProxyUrl).toBe('https://user-provided-url.com');
});
});

View File

@@ -0,0 +1,58 @@
const { Providers } = require('@librechat/agents');
const { EModelEndpoint } = require('librechat-data-provider');
const initAnthropic = require('~/server/services/Endpoints/anthropic/initialize');
const getBedrockOptions = require('~/server/services/Endpoints/bedrock/options');
const initOpenAI = require('~/server/services/Endpoints/openAI/initialize');
const initCustom = require('~/server/services/Endpoints/custom/initialize');
const initGoogle = require('~/server/services/Endpoints/google/initialize');
const { getCustomEndpointConfig } = require('~/server/services/Config');
const providerConfigMap = {
[Providers.XAI]: initCustom,
[Providers.OLLAMA]: initCustom,
[Providers.DEEPSEEK]: initCustom,
[Providers.OPENROUTER]: initCustom,
[EModelEndpoint.openAI]: initOpenAI,
[EModelEndpoint.google]: initGoogle,
[EModelEndpoint.azureOpenAI]: initOpenAI,
[EModelEndpoint.anthropic]: initAnthropic,
[EModelEndpoint.bedrock]: getBedrockOptions,
};
/**
 * Get the provider configuration, and any provider override, for the given provider string
* @param {string} provider - The provider string
* @returns {Promise<{
* getOptions: Function,
* overrideProvider?: string,
* customEndpointConfig?: TEndpoint
* }>}
*/
async function getProviderConfig(provider) {
let getOptions = providerConfigMap[provider];
let overrideProvider;
/** @type {TEndpoint | undefined} */
let customEndpointConfig;
if (!getOptions && providerConfigMap[provider.toLowerCase()] != null) {
overrideProvider = provider.toLowerCase();
getOptions = providerConfigMap[overrideProvider];
} else if (!getOptions) {
customEndpointConfig = await getCustomEndpointConfig(provider);
if (!customEndpointConfig) {
throw new Error(`Provider ${provider} not supported`);
}
getOptions = initCustom;
overrideProvider = Providers.OPENAI;
}
return {
getOptions,
overrideProvider,
customEndpointConfig,
};
}
module.exports = {
getProviderConfig,
};
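This new module centralizes the provider lookup that initializeAgent previously inlined (see the agents/agent.js diff above). Usage, sketched from that call site:

const { getProviderConfig } = require('~/server/services/Endpoints');

// Unknown provider strings fall back to the custom (OpenAI-compatible)
// initializer when a matching custom endpoint config exists.
const { getOptions, overrideProvider } = await getProviderConfig(agent.provider);
if (overrideProvider) {
  agent.provider = overrideProvider;
}
const options = await getOptions({ req, res, endpointOption, optionsOnly: true });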

View File

@@ -1,11 +1,7 @@
const {
ErrorTypes,
EModelEndpoint,
resolveHeaders,
mapModelToAzureConfig,
} = require('librechat-data-provider');
const { ErrorTypes, EModelEndpoint, mapModelToAzureConfig } = require('librechat-data-provider');
const {
isEnabled,
resolveHeaders,
isUserProvided,
getOpenAIConfig,
getAzureCredentials,
@@ -84,7 +80,10 @@ const initializeClient = async ({
});
clientOptions.reverseProxyUrl = baseURL ?? clientOptions.reverseProxyUrl;
clientOptions.headers = resolveHeaders({ ...headers, ...(clientOptions.headers ?? {}) });
clientOptions.headers = resolveHeaders(
{ ...headers, ...(clientOptions.headers ?? {}) },
req.user,
);
clientOptions.titleConvo = azureConfig.titleConvo;
clientOptions.titleModel = azureConfig.titleModel;
@@ -139,7 +138,7 @@ const initializeClient = async ({
}
if (optionsOnly) {
const modelOptions = endpointOption.model_parameters;
const modelOptions = endpointOption?.model_parameters ?? {};
modelOptions.model = modelName;
clientOptions = Object.assign({ modelOptions }, clientOptions);
clientOptions.modelOptions.user = req.user.id;

View File

@@ -1,10 +1,11 @@
const fs = require('fs');
const path = require('path');
const axios = require('axios');
const { logger } = require('@librechat/data-schemas');
const { EModelEndpoint } = require('librechat-data-provider');
const { generateShortLivedToken } = require('~/server/services/AuthService');
const { getBufferMetadata } = require('~/server/utils');
const paths = require('~/config/paths');
const { logger } = require('~/config');
/**
* Saves a file to a specified output path with a new filename.
@@ -206,7 +207,7 @@ const deleteLocalFile = async (req, file) => {
const cleanFilepath = file.filepath.split('?')[0];
if (file.embedded && process.env.RAG_API_URL) {
const jwtToken = req.headers.authorization.split(' ')[1];
const jwtToken = generateShortLivedToken(req.user.id);
axios.delete(`${process.env.RAG_API_URL}/documents`, {
headers: {
Authorization: `Bearer ${jwtToken}`,

View File

@@ -4,6 +4,7 @@ const FormData = require('form-data');
const { logAxiosError } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { FileSources } = require('librechat-data-provider');
const { generateShortLivedToken } = require('~/server/services/AuthService');
/**
* Deletes a file from the vector database. This function takes a file object, constructs the full path, and
@@ -23,7 +24,8 @@ const deleteVectors = async (req, file) => {
return;
}
try {
const jwtToken = req.headers.authorization.split(' ')[1];
const jwtToken = generateShortLivedToken(req.user.id);
return await axios.delete(`${process.env.RAG_API_URL}/documents`, {
headers: {
Authorization: `Bearer ${jwtToken}`,
@@ -70,7 +72,7 @@ async function uploadVectors({ req, file, file_id, entity_id }) {
}
try {
const jwtToken = req.headers.authorization.split(' ')[1];
const jwtToken = generateShortLivedToken(req.user.id);
const formData = new FormData();
formData.append('file_id', file_id);
formData.append('file', fs.createReadStream(file.path));

View File

@@ -55,7 +55,9 @@ const processFiles = async (files, fileIds) => {
}
if (!fileIds) {
return await Promise.all(promises);
const results = await Promise.all(promises);
// Filter out null results from failed updateFileUsage calls
return results.filter((result) => result != null);
}
for (let file_id of fileIds) {
@@ -67,7 +69,9 @@ const processFiles = async (files, fileIds) => {
}
// TODO: calculate token cost when image is first uploaded
return await Promise.all(promises);
const results = await Promise.all(promises);
// Filter out null results from failed updateFileUsage calls
return results.filter((result) => result != null);
};
/**

View File

@@ -0,0 +1,208 @@
// Mock the updateFileUsage function before importing the actual processFiles
jest.mock('~/models/File', () => ({
updateFileUsage: jest.fn(),
}));
// Mock winston and logger configuration to avoid dependency issues
jest.mock('~/config', () => ({
logger: {
info: jest.fn(),
warn: jest.fn(),
debug: jest.fn(),
error: jest.fn(),
},
}));
// Mock all other dependencies that might cause issues
jest.mock('librechat-data-provider', () => ({
isUUID: { parse: jest.fn() },
megabyte: 1024 * 1024,
FileContext: { message_attachment: 'message_attachment' },
FileSources: { local: 'local' },
EModelEndpoint: { assistants: 'assistants' },
EToolResources: { file_search: 'file_search' },
mergeFileConfig: jest.fn(),
removeNullishValues: jest.fn((obj) => obj),
isAssistantsEndpoint: jest.fn(),
}));
jest.mock('~/server/services/Files/images', () => ({
convertImage: jest.fn(),
resizeAndConvert: jest.fn(),
resizeImageBuffer: jest.fn(),
}));
jest.mock('~/server/controllers/assistants/v2', () => ({
addResourceFileId: jest.fn(),
deleteResourceFileId: jest.fn(),
}));
jest.mock('~/models/Agent', () => ({
addAgentResourceFile: jest.fn(),
removeAgentResourceFiles: jest.fn(),
}));
jest.mock('~/server/controllers/assistants/helpers', () => ({
getOpenAIClient: jest.fn(),
}));
jest.mock('~/server/services/Tools/credentials', () => ({
loadAuthValues: jest.fn(),
}));
jest.mock('~/server/services/Config', () => ({
checkCapability: jest.fn(),
}));
jest.mock('~/server/utils/queue', () => ({
LB_QueueAsyncCall: jest.fn(),
}));
jest.mock('./strategies', () => ({
getStrategyFunctions: jest.fn(),
}));
jest.mock('~/server/utils', () => ({
determineFileType: jest.fn(),
}));
// Import the actual processFiles function after all mocks are set up
const { processFiles } = require('./process');
const { updateFileUsage } = require('~/models/File');
describe('processFiles', () => {
beforeEach(() => {
jest.clearAllMocks();
});
describe('null filtering functionality', () => {
it('should filter out null results from updateFileUsage when files do not exist', async () => {
const mockFiles = [
{ file_id: 'existing-file-1' },
{ file_id: 'non-existent-file' },
{ file_id: 'existing-file-2' },
];
// Mock updateFileUsage to return null for non-existent files
updateFileUsage.mockImplementation(({ file_id }) => {
if (file_id === 'non-existent-file') {
return Promise.resolve(null); // Simulate file not found in the database
}
return Promise.resolve({ file_id, usage: 1 });
});
const result = await processFiles(mockFiles);
expect(updateFileUsage).toHaveBeenCalledTimes(3);
expect(result).toEqual([
{ file_id: 'existing-file-1', usage: 1 },
{ file_id: 'existing-file-2', usage: 1 },
]);
// Critical test - ensure no null values in result
expect(result).not.toContain(null);
expect(result).not.toContain(undefined);
expect(result.length).toBe(2); // Only valid files should be returned
});
it('should return empty array when all updateFileUsage calls return null', async () => {
const mockFiles = [{ file_id: 'non-existent-1' }, { file_id: 'non-existent-2' }];
// All updateFileUsage calls return null
updateFileUsage.mockResolvedValue(null);
const result = await processFiles(mockFiles);
expect(updateFileUsage).toHaveBeenCalledTimes(2);
expect(result).toEqual([]);
expect(result).not.toContain(null);
expect(result.length).toBe(0);
});
it('should work correctly when all files exist', async () => {
const mockFiles = [{ file_id: 'file-1' }, { file_id: 'file-2' }];
updateFileUsage.mockImplementation(({ file_id }) => {
return Promise.resolve({ file_id, usage: 1 });
});
const result = await processFiles(mockFiles);
expect(result).toEqual([
{ file_id: 'file-1', usage: 1 },
{ file_id: 'file-2', usage: 1 },
]);
expect(result).not.toContain(null);
expect(result.length).toBe(2);
});
it('should handle fileIds parameter and filter nulls correctly', async () => {
const mockFiles = [{ file_id: 'file-1' }];
const mockFileIds = ['file-2', 'non-existent-file'];
updateFileUsage.mockImplementation(({ file_id }) => {
if (file_id === 'non-existent-file') {
return Promise.resolve(null);
}
return Promise.resolve({ file_id, usage: 1 });
});
const result = await processFiles(mockFiles, mockFileIds);
expect(result).toEqual([
{ file_id: 'file-1', usage: 1 },
{ file_id: 'file-2', usage: 1 },
]);
expect(result).not.toContain(null);
expect(result).not.toContain(undefined);
expect(result.length).toBe(2);
});
it('should handle duplicate file_ids correctly', async () => {
const mockFiles = [
{ file_id: 'duplicate-file' },
{ file_id: 'duplicate-file' }, // Duplicate should be ignored
{ file_id: 'unique-file' },
];
updateFileUsage.mockImplementation(({ file_id }) => {
return Promise.resolve({ file_id, usage: 1 });
});
const result = await processFiles(mockFiles);
// Should only call updateFileUsage twice (duplicate ignored)
expect(updateFileUsage).toHaveBeenCalledTimes(2);
expect(result).toEqual([
{ file_id: 'duplicate-file', usage: 1 },
{ file_id: 'unique-file', usage: 1 },
]);
expect(result.length).toBe(2);
});
});
describe('edge cases', () => {
it('should handle empty files array', async () => {
const result = await processFiles([]);
expect(result).toEqual([]);
expect(updateFileUsage).not.toHaveBeenCalled();
});
it('should handle mixed null and undefined returns from updateFileUsage', async () => {
const mockFiles = [{ file_id: 'file-1' }, { file_id: 'file-2' }, { file_id: 'file-3' }];
updateFileUsage.mockImplementation(({ file_id }) => {
if (file_id === 'file-1') return Promise.resolve(null);
if (file_id === 'file-2') return Promise.resolve(undefined);
return Promise.resolve({ file_id, usage: 1 });
});
const result = await processFiles(mockFiles);
expect(result).toEqual([{ file_id: 'file-3', usage: 1 }]);
expect(result).not.toContain(null);
expect(result).not.toContain(undefined);
expect(result.length).toBe(1);
});
});
});
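Assuming the spec file sits next to process.js (e.g. process.spec.js), the suite can be run in isolation via Jest's test-path pattern:

npx jest process.spec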


@@ -1,5 +1,9 @@
const { FileSources } = require('librechat-data-provider');
- const { uploadMistralOCR, uploadAzureMistralOCR } = require('@librechat/api');
+ const {
+ uploadMistralOCR,
+ uploadAzureMistralOCR,
+ uploadGoogleVertexMistralOCR,
+ } = require('@librechat/api');
const {
getFirebaseURL,
prepareImageURL,
@@ -222,6 +226,26 @@ const azureMistralOCRStrategy = () => ({
handleFileUpload: uploadAzureMistralOCR,
});
const vertexMistralOCRStrategy = () => ({
/** @type {typeof saveFileFromURL | null} */
saveURL: null,
/** @type {typeof getLocalFileURL | null} */
getFileURL: null,
/** @type {typeof saveLocalBuffer | null} */
saveBuffer: null,
/** @type {typeof processLocalAvatar | null} */
processAvatar: null,
/** @type {typeof uploadLocalImage | null} */
handleImageUpload: null,
/** @type {typeof prepareImagesLocal | null} */
prepareImagePayload: null,
/** @type {typeof deleteLocalFile | null} */
deleteFile: null,
/** @type {typeof getLocalFileStream | null} */
getDownloadStream: null,
handleFileUpload: uploadGoogleVertexMistralOCR,
});
// Strategy Selector
const getStrategyFunctions = (fileSource) => {
if (fileSource === FileSources.firebase) {
@@ -244,6 +268,8 @@ const getStrategyFunctions = (fileSource) => {
return mistralOCRStrategy();
} else if (fileSource === FileSources.azure_mistral_ocr) {
return azureMistralOCRStrategy();
} else if (fileSource === FileSources.vertexai_mistral_ocr) {
return vertexMistralOCRStrategy();
} else {
throw new Error('Invalid file source');
}
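Illustrative use of the selector with the new source; only handleFileUpload is non-null for the OCR strategies, and the argument shape is taken from the upload call sites earlier in this diff:

const { handleFileUpload } = getStrategyFunctions(FileSources.vertexai_mistral_ocr);
if (handleFileUpload) {
  // Delegates to uploadGoogleVertexMistralOCR for this source.
  await handleFileUpload({ req, file, file_id, entity_id });
}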


@@ -1,3 +1,6 @@
+ const { sleep } = require('@librechat/agents');
+ const { sendEvent } = require('@librechat/api');
+ const { logger } = require('@librechat/data-schemas');
const {
Constants,
StepTypes,
@@ -8,9 +11,8 @@ const {
} = require('librechat-data-provider');
const { retrieveAndProcessFile } = require('~/server/services/Files/process');
const { processRequiredActions } = require('~/server/services/ToolService');
- const { createOnProgress, sendMessage, sleep } = require('~/server/utils');
const { processMessages } = require('~/server/services/Threads');
- const { logger } = require('~/config');
+ const { createOnProgress } = require('~/server/utils');
/**
* Implements the StreamRunManager functionality for managing the streaming
@@ -126,7 +128,7 @@ class StreamRunManager {
conversationId: this.finalMessage.conversationId,
};
- sendMessage(this.res, contentData);
+ sendEvent(this.res, contentData);
}
/* <------------------ Misc. Helpers ------------------> */
@@ -302,7 +304,7 @@ class StreamRunManager {
for (const d of delta[key]) {
if (typeof d === 'object' && !Object.prototype.hasOwnProperty.call(d, 'index')) {
- logger.warn('Expected an object with an \'index\' for array updates but got:', d);
+ logger.warn("Expected an object with an 'index' for array updates but got:", d);
continue;
}


@@ -1,9 +1,9 @@
+ const { logger } = require('@librechat/data-schemas');
- const { CacheKeys, processMCPEnv } = require('librechat-data-provider');
+ const { CacheKeys } = require('librechat-data-provider');
+ const { findToken, updateToken, createToken, deleteTokens } = require('~/models');
const { getMCPManager, getFlowStateManager } = require('~/config');
const { getCachedTools, setCachedTools } = require('./Config');
const { getLogStores } = require('~/cache');
- const { findToken, updateToken, createToken, deleteTokens } = require('~/models');
/**
* Initialize MCP servers
@@ -30,7 +30,6 @@ async function initializeMCP(app) {
createToken,
deleteTokens,
},
- processMCPEnv,
});
delete app.locals.mcpConfig;


@@ -41,6 +41,7 @@ async function loadDefaultInterface(config, configDefaults, roleName = SystemRol
sidePanel: interfaceConfig?.sidePanel ?? defaults.sidePanel,
privacyPolicy: interfaceConfig?.privacyPolicy ?? defaults.privacyPolicy,
termsOfService: interfaceConfig?.termsOfService ?? defaults.termsOfService,
mcpServers: interfaceConfig?.mcpServers ?? defaults.mcpServers,
bookmarks: interfaceConfig?.bookmarks ?? defaults.bookmarks,
memories: shouldDisableMemories ? false : (interfaceConfig?.memories ?? defaults.memories),
prompts: interfaceConfig?.prompts ?? defaults.prompts,
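With mcpServers flowing through the interface defaults, deployments can override the MCP servers label shown in the UI. A sketch of the relevant fragment, expressed as a TS object (the key names are inferred from the startupConfig reads in MCPSelect further down; the actual librechat.yaml schema may differ):

const interfaceConfig = {
  mcpServers: {
    placeholder: 'Company MCP Servers',
  },
};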


@@ -7,9 +7,9 @@ const {
defaultAssistantsVersion,
defaultAgentCapabilities,
} = require('librechat-data-provider');
+ const { sendEvent } = require('@librechat/api');
const { Providers } = require('@librechat/agents');
const partialRight = require('lodash/partialRight');
- const { sendMessage } = require('./streamResponse');
/** Helper function to escape special characters in regex
* @param {string} string - The string to escape.
@@ -37,7 +37,7 @@ const createOnProgress = (
basePayload.text = basePayload.text + chunk;
const payload = Object.assign({}, basePayload, rest);
- sendMessage(res, payload);
+ sendEvent(res, payload);
if (_onProgress) {
_onProgress(payload);
}
@@ -50,7 +50,7 @@ const createOnProgress = (
const sendIntermediateMessage = (res, payload, extraTokens = '') => {
basePayload.text = basePayload.text + extraTokens;
const message = Object.assign({}, basePayload, payload);
- sendMessage(res, message);
+ sendEvent(res, message);
if (i === 0) {
basePayload.initial = false;
}


@@ -1,11 +1,9 @@
- const streamResponse = require('./streamResponse');
const removePorts = require('./removePorts');
const countTokens = require('./countTokens');
const handleText = require('./handleText');
const sendEmail = require('./sendEmail');
const queue = require('./queue');
const files = require('./files');
- const math = require('./math');
/**
* Check if email configuration is set
@@ -28,7 +26,6 @@ function checkEmailConfig() {
}
module.exports = {
- ...streamResponse,
checkEmailConfig,
...handleText,
countTokens,
@@ -36,5 +33,4 @@ module.exports = {
sendEmail,
...files,
...queue,
- math,
};


@@ -118,7 +118,7 @@ class CustomOpenIDStrategy extends OpenIDStrategy {
*/
const exchangeAccessTokenIfNeeded = async (config, accessToken, sub, fromCache = false) => {
const tokensCache = getLogStores(CacheKeys.OPENID_EXCHANGED_TOKENS);
- const onBehalfFlowRequired = isEnabled(process.env.OPENID_ON_BEHALF_FLOW_FOR_USERINFRO_REQUIRED);
+ const onBehalfFlowRequired = isEnabled(process.env.OPENID_ON_BEHALF_FLOW_FOR_USERINFO_REQUIRED);
if (onBehalfFlowRequired) {
if (fromCache) {
const cachedToken = await tokensCache.get(sub);
@@ -130,7 +130,7 @@ const exchangeAccessTokenIfNeeded = async (config, accessToken, sub, fromCache =
config,
'urn:ietf:params:oauth:grant-type:jwt-bearer',
{
- scope: process.env.OPENID_ON_BEHALF_FLOW_USERINFRO_SCOPE || 'user.read',
+ scope: process.env.OPENID_ON_BEHALF_FLOW_USERINFO_SCOPE || 'user.read',
assertion: accessToken,
requested_token_use: 'on_behalf_of',
},


@@ -1503,7 +1503,6 @@
* @property {boolean|{userProvide: boolean}} [anthropic] - Flag to indicate if Anthropic endpoint is user provided, or its configuration.
* @property {boolean|{userProvide: boolean}} [google] - Flag to indicate if Google endpoint is user provided, or its configuration.
* @property {boolean|{userProvide: boolean, userProvideURL: boolean, name: string}} [custom] - Custom Endpoint configuration.
- * @property {boolean|GptPlugins} [gptPlugins] - Configuration for GPT plugins.
* @memberof typedefs
*/


@@ -1,11 +1,9 @@
- const loadYaml = require('./loadYaml');
const tokenHelpers = require('./tokens');
const deriveBaseURL = require('./deriveBaseURL');
const extractBaseURL = require('./extractBaseURL');
const findMessageContent = require('./findMessageContent');
module.exports = {
- loadYaml,
deriveBaseURL,
extractBaseURL,
...tokenHelpers,


@@ -1,13 +0,0 @@
const fs = require('fs');
const yaml = require('js-yaml');
function loadYaml(filepath) {
try {
let fileContents = fs.readFileSync(filepath, 'utf8');
return yaml.load(fileContents);
} catch (e) {
return e;
}
}
module.exports = loadYaml;


@@ -0,0 +1,37 @@
import { createContext, useContext, useState, ReactNode } from 'react';
interface ActivePanelContextType {
active: string | undefined;
setActive: (id: string) => void;
}
const ActivePanelContext = createContext<ActivePanelContextType | undefined>(undefined);
export function ActivePanelProvider({
children,
defaultActive,
}: {
children: ReactNode;
defaultActive?: string;
}) {
const [active, _setActive] = useState<string | undefined>(defaultActive);
const setActive = (id: string) => {
localStorage.setItem('side:active-panel', id);
_setActive(id);
};
return (
<ActivePanelContext.Provider value={{ active, setActive }}>
{children}
</ActivePanelContext.Provider>
);
}
export function useActivePanel() {
const context = useContext(ActivePanelContext);
if (context === undefined) {
throw new Error('useActivePanel must be used within an ActivePanelProvider');
}
return context;
}
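A hypothetical consumer, to show the intended wiring; the component and panel id here are invented:

function PanelSwitcher() {
  const { active, setActive } = useActivePanel();
  return (
    <button onClick={() => setActive('mcp-servers')}>
      {active ?? 'no panel selected'}
    </button>
  );
}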


@@ -40,41 +40,40 @@ export function AgentPanelProvider({ children }: { children: React.ReactNode })
agent_id: agent_id || '',
})) || [];
- const groupedTools =
- tools?.reduce(
- (acc, tool) => {
- if (tool.tool_id.includes(Constants.mcp_delimiter)) {
- const [_toolName, serverName] = tool.tool_id.split(Constants.mcp_delimiter);
- const groupKey = `${serverName.toLowerCase()}`;
- if (!acc[groupKey]) {
- acc[groupKey] = {
- tool_id: groupKey,
- metadata: {
- name: `${serverName}`,
- pluginKey: groupKey,
- description: `${localize('com_ui_tool_collection_prefix')} ${serverName}`,
- icon: tool.metadata.icon || '',
- } as TPlugin,
- agent_id: agent_id || '',
- tools: [],
- };
- }
- acc[groupKey].tools?.push({
- tool_id: tool.tool_id,
- metadata: tool.metadata,
- agent_id: agent_id || '',
- });
- } else {
- acc[tool.tool_id] = {
- tool_id: tool.tool_id,
- metadata: tool.metadata,
- agent_id: agent_id || '',
- };
- }
- return acc;
- },
- {} as Record<string, AgentToolType & { tools?: AgentToolType[] }>,
- ) || {};
+ const groupedTools = tools?.reduce(
+ (acc, tool) => {
+ if (tool.tool_id.includes(Constants.mcp_delimiter)) {
+ const [_toolName, serverName] = tool.tool_id.split(Constants.mcp_delimiter);
+ const groupKey = `${serverName.toLowerCase()}`;
+ if (!acc[groupKey]) {
+ acc[groupKey] = {
+ tool_id: groupKey,
+ metadata: {
+ name: `${serverName}`,
+ pluginKey: groupKey,
+ description: `${localize('com_ui_tool_collection_prefix')} ${serverName}`,
+ icon: tool.metadata.icon || '',
+ } as TPlugin,
+ agent_id: agent_id || '',
+ tools: [],
+ };
+ }
+ acc[groupKey].tools?.push({
+ tool_id: tool.tool_id,
+ metadata: tool.metadata,
+ agent_id: agent_id || '',
+ });
+ } else {
+ acc[tool.tool_id] = {
+ tool_id: tool.tool_id,
+ metadata: tool.metadata,
+ agent_id: agent_id || '',
+ };
+ }
+ return acc;
+ },
+ {} as Record<string, AgentToolType & { tools?: AgentToolType[] }>,
+ );
const value = {
action,


@@ -1,6 +1,7 @@
import React, { createContext, useContext } from 'react';
import { Tools, LocalStorageKeys } from 'librechat-data-provider';
import { useMCPSelect, useToolToggle, useCodeApiKeyForm, useSearchApiKeyForm } from '~/hooks';
import { useGetStartupConfig } from '~/data-provider';
interface BadgeRowContextType {
conversationId?: string | null;
@@ -10,6 +11,7 @@ interface BadgeRowContextType {
fileSearch: ReturnType<typeof useToolToggle>;
codeApiKeyForm: ReturnType<typeof useCodeApiKeyForm>;
searchApiKeyForm: ReturnType<typeof useSearchApiKeyForm>;
startupConfig: ReturnType<typeof useGetStartupConfig>['data'];
}
const BadgeRowContext = createContext<BadgeRowContextType | undefined>(undefined);
@@ -28,6 +30,9 @@ interface BadgeRowProviderProps {
}
export default function BadgeRowProvider({ children, conversationId }: BadgeRowProviderProps) {
/** Startup config */
const { data: startupConfig } = useGetStartupConfig();
/** MCPSelect hook */
const mcpSelect = useMCPSelect({ conversationId });
@@ -73,6 +78,7 @@ export default function BadgeRowProvider({ children, conversationId }: BadgeRowP
mcpSelect,
webSearch,
fileSearch,
startupConfig,
conversationId,
codeApiKeyForm,
codeInterpreter,


@@ -1,6 +1,7 @@
export { default as AssistantsProvider } from './AssistantsContext';
export { default as AgentsProvider } from './AgentsContext';
export { default as ToastProvider } from './ToastContext';
export * from './ActivePanelContext';
export * from './AgentPanelContext';
export * from './ChatContext';
export * from './ShareContext';


@@ -1,26 +1,13 @@
- import {
- AuthorizationTypeEnum,
- AuthTypeEnum,
- TokenExchangeMethodEnum,
- } from 'librechat-data-provider';
import { MCPForm } from '~/common/types';
export const defaultMCPFormValues: MCPForm = {
- type: AuthTypeEnum.None,
- saved_auth_fields: false,
- api_key: '',
- authorization_type: AuthorizationTypeEnum.Basic,
- custom_auth_header: '',
- oauth_client_id: '',
- oauth_client_secret: '',
- authorization_url: '',
- client_url: '',
- scope: '',
- token_exchange_method: TokenExchangeMethodEnum.DefaultPost,
name: '',
description: '',
url: '',
tools: [],
icon: '',
trust: false,
+ customHeaders: [],
+ requestTimeout: undefined,
+ connectionTimeout: undefined,
};


@@ -1,5 +1,5 @@
import { RefObject } from 'react';
- import { FileSources, EModelEndpoint } from 'librechat-data-provider';
+ import { FileSources, EModelEndpoint, TPlugin } from 'librechat-data-provider';
import type { UseMutationResult } from '@tanstack/react-query';
import type * as InputNumberPrimitive from 'rc-input-number';
import type { SetterOrUpdater, RecoilState } from 'recoil';
@@ -167,13 +167,27 @@ export type ActionAuthForm = {
token_exchange_method: t.TokenExchangeMethodEnum;
};
- export type MCPForm = ActionAuthForm & {
- name?: string;
+ export type MCPForm = MCPMetadata;
+ export type MCP = {
+ mcp_id: string;
+ metadata: MCPMetadata;
+ } & ({ assistant_id: string; agent_id?: never } | { assistant_id?: never; agent_id: string });
+ export type MCPMetadata = {
+ name: string;
description?: string;
- url?: string;
- tools?: string[];
+ url: string;
+ tools?: TPlugin[];
icon?: string;
trust?: boolean;
+ customHeaders?: Array<{
+ id: string;
+ name: string;
+ value: string;
+ }>;
+ requestTimeout?: number;
+ connectionTimeout?: number;
};
export type ActionWithNullableMetadata = Omit<t.Action, 'metadata'> & {
@@ -219,11 +233,11 @@ export type AgentPanelContextType = {
mcps?: t.MCP[];
setMcp: React.Dispatch<React.SetStateAction<t.MCP | undefined>>;
setMcps: React.Dispatch<React.SetStateAction<t.MCP[] | undefined>>;
- groupedTools: Record<string, t.AgentToolType & { tools?: t.AgentToolType[] }>;
tools: t.AgentToolType[];
activePanel?: string;
setActivePanel: React.Dispatch<React.SetStateAction<Panel>>;
setCurrentAgentId: React.Dispatch<React.SetStateAction<string | undefined>>;
+ groupedTools?: Record<string, t.AgentToolType & { tools?: t.AgentToolType[] }>;
agent_id?: string;
};
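An example value satisfying the new MCPMetadata shape above (all field values invented):

const example: MCPMetadata = {
  name: 'filesystem',
  description: 'Local filesystem MCP server',
  url: 'http://localhost:3001/sse',
  customHeaders: [{ id: 'h1', name: 'X-API-Key', value: 'secret' }],
  requestTimeout: 30_000,
  connectionTimeout: 10_000,
};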

View File

@@ -68,6 +68,7 @@ export default function ExportAndShareMenu({
return (
<>
<DropdownPopup
portal={true}
menuId={menuId}
focusLoop={true}
unmountOnHide={true}


@@ -4,9 +4,9 @@ import {
supportsFiles,
mergeFileConfig,
isAgentsEndpoint,
- EndpointFileConfig,
fileConfig as defaultFileConfig,
} from 'librechat-data-provider';
+ import type { EndpointFileConfig } from 'librechat-data-provider';
import { useGetFileConfig } from '~/data-provider';
import AttachFileMenu from './AttachFileMenu';
import { useChatContext } from '~/Providers';
@@ -14,22 +14,25 @@ import { useChatContext } from '~/Providers';
function AttachFileChat({ disableInputs }: { disableInputs: boolean }) {
const { conversation } = useChatContext();
const conversationId = conversation?.conversationId ?? Constants.NEW_CONVO;
- const { endpoint: _endpoint, endpointType } = conversation ?? { endpoint: null };
- const isAgents = useMemo(() => isAgentsEndpoint(_endpoint), [_endpoint]);
+ const { endpoint, endpointType } = conversation ?? { endpoint: null };
+ const isAgents = useMemo(() => isAgentsEndpoint(endpoint), [endpoint]);
const { data: fileConfig = defaultFileConfig } = useGetFileConfig({
select: (data) => mergeFileConfig(data),
});
- const endpointFileConfig = fileConfig.endpoints[_endpoint ?? ''] as
- | EndpointFileConfig
- | undefined;
- const endpointSupportsFiles: boolean = supportsFiles[endpointType ?? _endpoint ?? ''] ?? false;
+ const endpointFileConfig = fileConfig.endpoints[endpoint ?? ''] as EndpointFileConfig | undefined;
+ const endpointSupportsFiles: boolean = supportsFiles[endpointType ?? endpoint ?? ''] ?? false;
const isUploadDisabled = (disableInputs || endpointFileConfig?.disabled) ?? false;
if (isAgents || (endpointSupportsFiles && !isUploadDisabled)) {
- return <AttachFileMenu disabled={disableInputs} conversationId={conversationId} />;
+ return (
+ <AttachFileMenu
+ disabled={disableInputs}
+ conversationId={conversationId}
+ endpointFileConfig={endpointFileConfig}
+ />
+ );
}
return null;


@@ -2,6 +2,7 @@ import { useSetRecoilState } from 'recoil';
import * as Ariakit from '@ariakit/react';
import React, { useRef, useState, useMemo } from 'react';
import { FileSearch, ImageUpIcon, TerminalSquareIcon, FileType2Icon } from 'lucide-react';
import type { EndpointFileConfig } from 'librechat-data-provider';
import { FileUpload, TooltipAnchor, DropdownPopup, AttachmentIcon } from '~/components';
import { EToolResources, EModelEndpoint } from 'librechat-data-provider';
import { useGetEndpointsQuery } from '~/data-provider';
@@ -12,9 +13,10 @@ import { cn } from '~/utils';
interface AttachFileMenuProps {
conversationId: string;
disabled?: boolean | null;
endpointFileConfig?: EndpointFileConfig;
}
- const AttachFileMenu = ({ disabled, conversationId }: AttachFileMenuProps) => {
+ const AttachFileMenu = ({ disabled, conversationId, endpointFileConfig }: AttachFileMenuProps) => {
const localize = useLocalize();
const isUploadDisabled = disabled ?? false;
const inputRef = useRef<HTMLInputElement>(null);
@@ -24,6 +26,7 @@ const AttachFileMenu = ({ disabled, conversationId }: AttachFileMenuProps) => {
const { data: endpointsConfig } = useGetEndpointsQuery();
const { handleFileChange } = useFileHandling({
overrideEndpoint: EModelEndpoint.agents,
overrideEndpointFileConfig: endpointFileConfig,
});
/** TODO: Ephemeral Agent Capabilities


@@ -2,8 +2,7 @@ import React, { memo, useCallback, useState } from 'react';
import { SettingsIcon } from 'lucide-react';
import { Constants } from 'librechat-data-provider';
import { useUpdateUserPluginsMutation } from 'librechat-data-provider/react-query';
- import type { TUpdateUserPlugins } from 'librechat-data-provider';
- import type { McpServerInfo } from '~/hooks/Plugins/useMCPSelect';
+ import type { TUpdateUserPlugins, TPlugin } from 'librechat-data-provider';
import MCPConfigDialog, { type ConfigFieldDetail } from '~/components/ui/MCPConfigDialog';
import { useToastContext, useBadgeRowContext } from '~/Providers';
import MultiSelect from '~/components/ui/MultiSelect';
@@ -18,11 +17,11 @@ const getBaseMCPPluginKey = (fullPluginKey: string): string => {
function MCPSelect() {
const localize = useLocalize();
const { showToast } = useToastContext();
- const { mcpSelect } = useBadgeRowContext();
+ const { mcpSelect, startupConfig } = useBadgeRowContext();
const { mcpValues, setMCPValues, mcpServerNames, mcpToolDetails, isPinned } = mcpSelect;
const [isConfigModalOpen, setIsConfigModalOpen] = useState(false);
- const [selectedToolForConfig, setSelectedToolForConfig] = useState<McpServerInfo | null>(null);
+ const [selectedToolForConfig, setSelectedToolForConfig] = useState<TPlugin | null>(null);
const updateUserPluginsMutation = useUpdateUserPluginsMutation({
onSuccess: () => {
@@ -129,6 +128,8 @@ function MCPSelect() {
return null;
}
const placeholderText =
startupConfig?.interface?.mcpServers?.placeholder || localize('com_ui_mcp_servers');
return (
<>
<MultiSelect
@@ -138,7 +139,7 @@ function MCPSelect() {
defaultSelectedValues={mcpValues ?? []}
renderSelectedValues={renderSelectedValues}
renderItemContent={renderItemContent}
- placeholder={localize('com_ui_mcp_servers')}
+ placeholder={placeholderText}
popoverClassName="min-w-fit"
className="badge-icon min-w-fit"
selectIcon={<MCPIcon className="icon-md text-text-primary" />}


@@ -11,6 +11,7 @@ interface MCPSubMenuProps {
mcpValues?: string[];
mcpServerNames: string[];
handleMCPToggle: (serverName: string) => void;
placeholder?: string;
}
const MCPSubMenu = ({
@@ -19,11 +20,13 @@ const MCPSubMenu = ({
mcpServerNames,
setIsMCPPinned,
handleMCPToggle,
placeholder,
...props
}: MCPSubMenuProps) => {
const localize = useLocalize();
const menuStore = Ariakit.useMenuStore({
focusLoop: true,
showTimeout: 100,
placement: 'right',
});
@@ -33,12 +36,18 @@ const MCPSubMenu = ({
<Ariakit.MenuItem
{...props}
render={
- <Ariakit.MenuButton className="flex w-full cursor-pointer items-center justify-between rounded-lg p-2 hover:bg-surface-hover" />
+ <Ariakit.MenuButton
+ onClick={(e: React.MouseEvent<HTMLButtonElement>) => {
+ e.stopPropagation();
+ menuStore.toggle();
+ }}
+ className="flex w-full cursor-pointer items-center justify-between rounded-lg p-2 hover:bg-surface-hover"
+ />
}
>
<div className="flex items-center gap-2">
<MCPIcon className="icon-md" />
- <span>{localize('com_ui_mcp_servers')}</span>
+ <span>{placeholder || localize('com_ui_mcp_servers')}</span>
<ChevronRight className="ml-auto h-3 w-3" />
</div>
<button
@@ -60,10 +69,8 @@ const MCPSubMenu = ({
</button>
</Ariakit.MenuItem>
<Ariakit.Menu
- gutter={-4}
- shift={-8}
- unmountOnHide
portal={true}
+ unmountOnHide={true}
className={cn(
'animate-popover-left z-50 ml-3 flex min-w-[200px] flex-col rounded-xl',
'border border-border-light bg-surface-secondary p-1 shadow-lg',


@@ -18,8 +18,15 @@ const ToolsDropdown = ({ disabled }: ToolsDropdownProps) => {
const localize = useLocalize();
const isDisabled = disabled ?? false;
const [isPopoverActive, setIsPopoverActive] = useState(false);
- const { webSearch, codeInterpreter, fileSearch, mcpSelect, searchApiKeyForm, codeApiKeyForm } =
- useBadgeRowContext();
+ const {
+ webSearch,
+ mcpSelect,
+ fileSearch,
+ startupConfig,
+ codeApiKeyForm,
+ codeInterpreter,
+ searchApiKeyForm,
+ } = useBadgeRowContext();
const { setIsDialogOpen: setIsCodeDialogOpen, menuTriggerRef: codeMenuTriggerRef } =
codeApiKeyForm;
const { setIsDialogOpen: setIsSearchDialogOpen, menuTriggerRef: searchMenuTriggerRef } =
@@ -89,18 +96,10 @@ const ToolsDropdown = ({ disabled }: ToolsDropdownProps) => {
[mcpSelect],
);
- const dropdownItems = useMemo(() => {
- const items: MenuItemProps[] = [
- {
- render: () => (
- <div className="px-3 py-2 text-xs font-semibold text-text-secondary">
- {localize('com_ui_tools')}
- </div>
- ),
- hideOnClick: false,
- },
- ];
+ const mcpPlaceholder = startupConfig?.interface?.mcpServers?.placeholder;
+ const dropdownItems = useMemo(() => {
+ const items: MenuItemProps[] = [];
items.push({
onClick: handleFileSearchToggle,
hideOnClick: false,
@@ -246,8 +245,9 @@ const ToolsDropdown = ({ disabled }: ToolsDropdownProps) => {
<MCPSubMenu
{...props}
mcpValues={mcpValues}
- mcpServerNames={mcpServerNames}
isMCPPinned={isMCPPinned}
+ placeholder={mcpPlaceholder}
+ mcpServerNames={mcpServerNames}
setIsMCPPinned={setIsMCPPinned}
handleMCPToggle={handleMCPToggle}
/>
@@ -262,6 +262,7 @@ const ToolsDropdown = ({ disabled }: ToolsDropdownProps) => {
canRunCode,
isMCPPinned,
isCodePinned,
mcpPlaceholder,
mcpServerNames,
isSearchPinned,
setIsMCPPinned,


@@ -1,14 +1,33 @@
- import { useState, useEffect } from 'react';
- import { X, ArrowDownToLine, PanelLeftOpen, PanelLeftClose } from 'lucide-react';
+ import { useState, useEffect, useCallback, useRef } from 'react';
+ import { X, ArrowDownToLine, PanelLeftOpen, PanelLeftClose, RotateCcw } from 'lucide-react';
import { Button, OGDialog, OGDialogContent, TooltipAnchor } from '~/components';
import { useLocalize } from '~/hooks';
const getQualityStyles = (quality: string): string => {
if (quality === 'high') {
return 'bg-green-100 text-green-800';
}
if (quality === 'low') {
return 'bg-orange-100 text-orange-800';
}
return 'bg-gray-100 text-gray-800';
};
export default function DialogImage({ isOpen, onOpenChange, src = '', downloadImage, args }) {
const localize = useLocalize();
const [isPromptOpen, setIsPromptOpen] = useState(false);
const [imageSize, setImageSize] = useState<string | null>(null);
- const getImageSize = async (url: string) => {
+ // Zoom and pan state
+ const [zoom, setZoom] = useState(1);
+ const [panX, setPanX] = useState(0);
+ const [panY, setPanY] = useState(0);
+ const [isDragging, setIsDragging] = useState(false);
+ const [dragStart, setDragStart] = useState({ x: 0, y: 0 });
+ const containerRef = useRef<HTMLDivElement>(null);
+ const getImageSize = useCallback(async (url: string) => {
try {
const response = await fetch(url, { method: 'HEAD' });
const contentLength = response.headers.get('Content-Length');
@@ -25,7 +44,7 @@ export default function DialogImage({ isOpen, onOpenChange, src = '', downloadIm
console.error('Error getting image size:', error);
return null;
}
- };
+ }, []);
const formatFileSize = (bytes: number): string => {
if (bytes === 0) return '0 Bytes';
@@ -37,11 +56,129 @@ export default function DialogImage({ isOpen, onOpenChange, src = '', downloadIm
return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i];
};
const getImageMaxWidth = () => {
// On mobile (when panel overlays), use full width minus padding
// On desktop, account for the side panel width
if (isPromptOpen) {
return window.innerWidth >= 640 ? 'calc(100vw - 22rem)' : 'calc(100vw - 2rem)';
}
return 'calc(100vw - 2rem)';
};
const resetZoom = useCallback(() => {
setZoom(1);
setPanX(0);
setPanY(0);
}, []);
const getCursor = () => {
if (zoom <= 1) return 'default';
return isDragging ? 'grabbing' : 'grab';
};
const handleDoubleClick = useCallback(() => {
if (zoom > 1) {
resetZoom();
} else {
// Zoom in to 2x on double click when at normal zoom
setZoom(2);
}
}, [zoom, resetZoom]);
const handleWheel = useCallback(
(e: React.WheelEvent<HTMLDivElement>) => {
e.preventDefault();
if (!containerRef.current) return;
const rect = containerRef.current.getBoundingClientRect();
const mouseX = e.clientX - rect.left;
const mouseY = e.clientY - rect.top;
// Calculate zoom factor
const zoomFactor = e.deltaY > 0 ? 0.9 : 1.1;
const newZoom = Math.min(Math.max(zoom * zoomFactor, 1), 5);
if (newZoom === zoom) return;
// If zooming back to 1, reset pan to center the image
if (newZoom === 1) {
setZoom(1);
setPanX(0);
setPanY(0);
return;
}
// Calculate the zoom center relative to the current viewport
const containerCenterX = rect.width / 2;
const containerCenterY = rect.height / 2;
// Calculate new pan position to zoom towards mouse cursor
const zoomRatio = newZoom / zoom;
const deltaX = (mouseX - containerCenterX - panX) * (zoomRatio - 1);
const deltaY = (mouseY - containerCenterY - panY) * (zoomRatio - 1);
setZoom(newZoom);
setPanX(panX - deltaX);
setPanY(panY - deltaY);
},
[zoom, panX, panY],
);
const handleMouseDown = useCallback(
(e: React.MouseEvent<HTMLDivElement>) => {
e.preventDefault();
if (zoom <= 1) return;
setIsDragging(true);
setDragStart({
x: e.clientX - panX,
y: e.clientY - panY,
});
},
[zoom, panX, panY],
);
const handleMouseMove = useCallback(
(e: React.MouseEvent<HTMLDivElement>) => {
if (!isDragging || zoom <= 1) return;
const newPanX = e.clientX - dragStart.x;
const newPanY = e.clientY - dragStart.y;
setPanX(newPanX);
setPanY(newPanY);
},
[isDragging, dragStart, zoom],
);
const handleMouseUp = useCallback(() => {
setIsDragging(false);
}, []);
+ useEffect(() => {
+ const onKey = (e: KeyboardEvent) => e.key === 'Escape' && resetZoom();
+ document.addEventListener('keydown', onKey);
+ return () => document.removeEventListener('keydown', onKey);
+ }, [resetZoom]);
useEffect(() => {
if (isOpen && src) {
getImageSize(src).then(setImageSize);
+ resetZoom();
}
- }, [isOpen, src]);
+ }, [isOpen, src, getImageSize, resetZoom]);
// Ensure image is centered when zoom changes to 1
useEffect(() => {
if (zoom === 1) {
setPanX(0);
setPanY(0);
}
}, [zoom]);
// Reset pan when panel opens/closes to maintain centering
useEffect(() => {
if (zoom === 1) {
setPanX(0);
setPanY(0);
}
}, [isPromptOpen, zoom]);
return (
<OGDialog open={isOpen} onOpenChange={onOpenChange}>
@@ -52,7 +189,7 @@ export default function DialogImage({ isOpen, onOpenChange, src = '', downloadIm
overlayClassName="bg-surface-primary opacity-95 z-50"
>
<div
- className={`absolute left-0 top-0 z-10 flex items-center justify-between p-4 transition-all duration-500 ease-in-out ${isPromptOpen ? 'right-80' : 'right-0'}`}
+ className={`ease-[cubic-bezier(0.175,0.885,0.32,1.275)] absolute left-0 top-0 z-10 flex items-center justify-between p-3 transition-all duration-500 sm:p-4 ${isPromptOpen ? 'right-0 sm:right-80' : 'right-0'}`}
>
<TooltipAnchor
description={localize('com_ui_close')}
@@ -62,11 +199,21 @@ export default function DialogImage({ isOpen, onOpenChange, src = '', downloadIm
variant="ghost"
className="h-10 w-10 p-0 hover:bg-surface-hover"
>
- <X className="size-6" />
+ <X className="size-7 sm:size-6" />
</Button>
}
/>
- <div className="flex items-center gap-2">
+ <div className="flex items-center gap-1 sm:gap-2">
{zoom > 1 && (
<TooltipAnchor
description={localize('com_ui_reset_zoom')}
render={
<Button onClick={resetZoom} variant="ghost" className="h-10 w-10 p-0">
<RotateCcw className="size-6" />
</Button>
}
/>
)}
<TooltipAnchor
description={localize('com_ui_download')}
render={
@@ -88,9 +235,9 @@ export default function DialogImage({ isOpen, onOpenChange, src = '', downloadIm
className="h-10 w-10 p-0"
>
{isPromptOpen ? (
- <PanelLeftOpen className="size-6" />
+ <PanelLeftOpen className="size-7 sm:size-6" />
) : (
- <PanelLeftClose className="size-6" />
+ <PanelLeftClose className="size-7 sm:size-6" />
)}
</Button>
}
@@ -100,36 +247,81 @@ export default function DialogImage({ isOpen, onOpenChange, src = '', downloadIm
{/* Main content area with image */}
<div
- className={`flex h-full transition-all duration-500 ease-in-out ${isPromptOpen ? 'mr-80' : 'mr-0'}`}
+ className={`ease-[cubic-bezier(0.175,0.885,0.32,1.275)] flex h-full transition-all duration-500 ${isPromptOpen ? 'mr-0 sm:mr-80' : 'mr-0'}`}
>
- <div className="flex flex-1 items-center justify-center px-4 pb-4 pt-20">
- <img
- src={src}
- alt="Image"
- className="max-h-full max-w-full object-contain"
- />
+ <div
+ ref={containerRef}
+ className="flex flex-1 items-center justify-center px-2 pb-4 pt-16 sm:px-4 sm:pt-20"
+ onWheel={handleWheel}
+ onMouseDown={handleMouseDown}
+ onMouseMove={handleMouseMove}
+ onMouseUp={handleMouseUp}
+ onMouseLeave={handleMouseUp}
+ onDoubleClick={handleDoubleClick}
+ style={{
+ cursor: getCursor(),
+ overflow: zoom > 1 ? 'hidden' : 'visible',
+ minHeight: 0, // Allow flexbox to shrink
+ }}
+ >
+ <div
+ className="flex items-center justify-center transition-transform duration-100 ease-out"
+ style={{
+ maxHeight: 'calc(100vh - 6rem)',
+ maxWidth: '100%',
+ transform: `translate(${panX}px, ${panY}px) scale(${zoom})`,
+ transformOrigin: 'center center',
+ width: '100%',
+ height: '100%',
+ display: 'flex',
+ alignItems: 'center',
+ justifyContent: 'center',
+ }}
+ >
+ <img
+ src={src}
+ alt="Image"
+ className="block object-contain"
+ style={{
+ maxHeight: 'calc(100vh - 8rem)',
+ maxWidth: getImageMaxWidth(),
+ width: 'auto',
+ height: 'auto',
+ }}
+ />
+ </div>
</div>
</div>
{/* Side Panel */}
<div
- className={`shadow-l-lg fixed right-0 top-0 z-20 h-full w-80 transform rounded-l-2xl border-l border-border-light bg-surface-primary transition-transform duration-500 ease-in-out ${
+ className={`sm:shadow-l-lg ease-[cubic-bezier(0.175,0.885,0.32,1.275)] fixed right-0 top-0 z-20 h-full w-full transform border-l border-border-light bg-surface-primary shadow-2xl backdrop-blur-sm transition-transform duration-500 sm:w-80 sm:rounded-l-2xl ${
isPromptOpen ? 'translate-x-0' : 'translate-x-full'
}`}
>
- <div className="h-full overflow-y-auto p-6">
- <div className="mb-4">
+ {/* Mobile pull handle - removed for cleaner look */}
+ <div className="h-full overflow-y-auto p-4 sm:p-6">
{/* Mobile close button */}
<div className="mb-4 flex items-center justify-between sm:hidden">
<h3 className="text-lg font-semibold text-text-primary">
{localize('com_ui_image_details')}
</h3>
<Button
onClick={() => setIsPromptOpen(false)}
variant="ghost"
className="h-12 w-12 p-0"
>
<X className="size-6" />
</Button>
</div>
<div className="mb-4 hidden sm:block">
<h3 className="mb-2 text-lg font-semibold text-text-primary">
{localize('com_ui_image_details')}
</h3>
<div className="mb-4 h-px bg-border-medium"></div>
</div>
- <div className="space-y-6">
+ <div className="space-y-4 sm:space-y-6">
{/* Prompt Section */}
<div>
<h4 className="mb-2 text-sm font-medium text-text-primary">
@@ -157,13 +349,7 @@ export default function DialogImage({ isOpen, onOpenChange, src = '', downloadIm
<div className="flex items-center justify-between">
<span className="text-sm text-text-primary">{localize('com_ui_quality')}:</span>
<span
- className={`rounded px-2 py-1 text-xs font-medium capitalize ${
- args?.quality === 'high'
- ? 'bg-green-100 text-green-800'
- : args?.quality === 'low'
- ? 'bg-orange-100 text-orange-800'
- : 'bg-gray-100 text-gray-800'
- }`}
+ className={`rounded px-2 py-1 text-xs font-medium capitalize ${getQualityStyles(args?.quality || '')}`}
>
{args?.quality || 'Standard'}
</span>
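The extracted getQualityStyles helper could equally be a lookup table; a sketch of that variant, not what this change ships:

const QUALITY_STYLES: Record<string, string> = {
  high: 'bg-green-100 text-green-800',
  low: 'bg-orange-100 text-orange-800',
};

const getQualityStyles = (quality: string): string =>
  QUALITY_STYLES[quality] ?? 'bg-gray-100 text-gray-800';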


@@ -168,7 +168,7 @@ export default function AgentConfig({
const visibleToolIds = new Set(selectedToolIds);
// Check what group parent tools should be shown if any subtool is present
- Object.entries(allTools).forEach(([toolId, toolObj]) => {
+ Object.entries(allTools ?? {}).forEach(([toolId, toolObj]) => {
if (toolObj.tools?.length) {
// if any subtool of this group is selected, ensure group parent tool rendered
if (toolObj.tools.some((st) => selectedToolIds.includes(st.tool_id))) {
@@ -299,6 +299,7 @@ export default function AgentConfig({
<div className="mb-1">
{/* // Render all visible IDs (including groups with subtools selected) */}
{[...visibleToolIds].map((toolId, i) => {
if (!allTools) return null;
const tool = allTools[toolId];
if (!tool) return null;
return (


@@ -7,7 +7,6 @@ import VersionPanel from './Version/VersionPanel';
import { useChatContext } from '~/Providers';
import ActionsPanel from './ActionsPanel';
import AgentPanel from './AgentPanel';
- import MCPPanel from './MCPPanel';
import { Panel } from '~/common';
export default function AgentPanelSwitch() {
@@ -54,8 +53,5 @@ function AgentPanelSwitchWithContext() {
if (activePanel === Panel.version) {
return <VersionPanel />;
}
- if (activePanel === Panel.mcp) {
- return <MCPPanel />;
- }
return <AgentPanel agentsConfig={agentsConfig} endpointsConfig={endpointsConfig} />;
}


@@ -19,7 +19,7 @@ export default function AgentTool({
allTools,
}: {
tool: string;
- allTools: Record<string, AgentToolType & { tools?: AgentToolType[] }>;
+ allTools?: Record<string, AgentToolType & { tools?: AgentToolType[] }>;
agent_id?: string;
}) {
const [isHovering, setIsHovering] = useState(false);
@@ -30,8 +30,10 @@ export default function AgentTool({
const { showToast } = useToastContext();
const updateUserPlugins = useUpdateUserPluginsMutation();
const { getValues, setValue } = useFormContext<AgentForm>();
if (!allTools) {
return null;
}
const currentTool = allTools[tool];
const getSelectedTools = () => {
if (!currentTool?.tools) return [];
const formTools = getValues('tools') || [];

Some files were not shown because too many files have changed in this diff.