Compare commits

..

21 Commits

Author SHA1 Message Date
Danny Avila
f7777a2723 v0.7.8 (#7287)
*  v0.7.8

* chore: bump data-provider to v0.7.82

* chore: update CONFIG_VERSION to 1.2.5

* chore: bump librechat-mcp version to 1.2.2

* chore: bump @librechat/data-schemas version to 0.0.7
2025-05-08 13:28:40 -04:00
github-actions[bot]
e5b234bc72 📜 docs: Unreleased Changelog (#7214)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-05-08 13:09:30 -04:00
Josh Nichols
4f2ed46450 🐋 feat: Add python to Dockerfile for increased MCP compatibility (#7270)
Without this, it's not possible to run any MCPs that use Python, only Node.

So, add these packages to enable MCP servers that use `uvx`, similar to what
the documentation already describes for `npx`.
2025-05-08 12:32:12 -04:00
Danny Avila
66093b1eb3 💬 refactor: MCP Chat Visibility Option, Google Rates, Remove OpenAPI Plugins (#7286)
* fix: Update Gemini 2.5 Pro Preview Model Name in Token Values

* refactor: Update DeleteButton to close menu when deletion is successful

* refactor: Add unmountOnHide prop to DropdownPopup in multiple components

* chore: linting

* chore: linting

* feat: Add `chatMenu` option for MCP Servers to control visibility in MCPSelect dropdown

* refactor: Update loadManifestTools to return combined tool manifest with MCP tools first

* chore: remove deprecated openapi plugins

* chore: linting

* chore(AgentClient): linting, remove unnecessary `checkVisionRequest` logger

* refactor(AuthService): change logoutUser logging from error to debug level

* chore: new Gemini models token values and rates

* chore(AskController): linting
2025-05-08 12:12:36 -04:00
Danny Avila
d7390d24ec 🔄 fix: Ollama Think Tag Edge Case with Tools (#7275) 2025-05-07 17:49:42 -04:00
Danny Avila
71105cd49c 🔄 fix: Assistants Endpoint & Minor Issues (#7274)
* 🔄 fix: Include usage in stream options for OpenAI and Azure endpoints

* fix: Agents support for Azure serverless endpoints

* fix: Refactor condition for assistants and azureAssistants endpoint handling

* AWS Titan via Bedrock: model doesn't support system messages, Closes #6456

* fix: Add EndpointSchemaKey type to endpoint parameters in buildDefaultConvo and ensure assistantId is always defined

* fix: Handle new conversation state for assistants endpoint in finalHandler

* fix: Add spec and iconURL parameters to `saveAssistantMessage` to persist modelSpec fields

* fix: Handle assistant unlinking even if no valid files to delete

* chore: move type definitions from callbacks.js to typedefs.js

* chore: Add StandardGraph typedef to typedefs.js

* chore: Update parameter type for graph in ModelEndHandler to StandardGraph

---------

Co-authored-by: Andres Restrepo <andres@enric.ai>
2025-05-07 17:11:33 -04:00
Marlon
3606349a0f 📝 docs: Update .env.example Google models (#7254)
This pull request updates the GOOGLE_MODELS and GOOGLE_TITLE_MODEL examples in the .env.example file to reflect the currently available models on Google AI Studio (Gemini API) and Vertex AI.
Many of the models previously listed in the example file have since been deprecated or are no longer the primary recommended versions. This discrepancy could confuse new users setting up the project, leading them to select non-functional or outdated model identifiers and resulting in errors or suboptimal performance.
The changes in this PR ensure that:
- The model lists for both Gemini API (AI Studio) and Vertex AI are synchronized with the current offerings.
- New users have a more accurate and reliable starting point when configuring their environment.
- The likelihood of encountering issues due to deprecated model names during initial setup is significantly reduced.
2025-05-07 11:19:06 -04:00
glowforge-opensource
e3e796293c 🔍 feat: Additional Tavily API Tool Parameters (#7232)
* feat: expose additional Tavily API parameters for tool

The following parameters are part of the Tavily API but were previously not exposed for agents to use via the tool. Now they are. The source documentation is here: https://docs.tavily.com/documentation/api-reference/endpoint/search

include_raw_content - returns the full text of found web pages (default is false)
include_domains - limit search to this list of domains (default is none)
exclude_domains - exclude this list of domains from search (default is none)
topic - enum of "general", "news", or "finance" (default is "general")
time_range - enum of "day", "week", "month", or "year" (default unlimited)
days - number of days to search (default is 7, but only applicable to topic == "news")
include_image_descriptions - include a description of the image in the search results (default is false)

It is a little odd that they have both time_range and days, but there it is.

I have noticed that this change requires a little bit of care in prompting to make sure that it doesn't use "news" when you wanted "general". I've attempted to hint at that in the tool description.
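
For context, a minimal sketch of how these parameters map onto a Tavily search request. The field names come from the API docs linked above; the endpoint URL and API-key handling are assumptions for illustration, not part of this change:

```js
// Sketch: a Tavily search request using the newly exposed parameters.
async function searchTavily(query) {
  const response = await fetch('https://api.tavily.com/search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      api_key: process.env.TAVILY_API_KEY,
      query,
      topic: 'news',              // 'general' | 'news' | 'finance' (default 'general')
      time_range: 'week',         // 'day' | 'week' | 'month' | 'year' (default unlimited)
      days: 7,                    // only applies when topic === 'news'
      include_raw_content: true,  // return the full text of found pages
      include_image_descriptions: false,
      include_domains: ['github.com'],
      exclude_domains: [],
    }),
  });
  return response.json();
}
```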

* correct lint error

* more lint

---------

Co-authored-by: Michael Natkin <michaeln@glowforge.com>
2025-05-06 22:50:11 -04:00
Danny Avila
7c4c3a8796 🔄 fix: URL Param Race Condition and File Draft Persistence (#7257)
* chore(useAutoSave): linting

* fix: files attached during streaming disappear when stream finishes

* fix(useQueryParams): query parameter processing race condition with submission handling, add JSDocs to all functions/hooks

* test(useQueryParams): add comprehensive tests for query parameter handling and submission logic
2025-05-06 22:49:12 -04:00
andresgit
20c9f1a783 🎨 style: Improve KaTeX Rendering for LaTeX Equations (#7223) 2025-05-06 10:50:09 -04:00
Danny Avila
8e1012c5aa 🛡️ fix: Deep Clone MCPOptions for User MCP Connections (#7247)
* Fix: Prevent side effects in `processMCPEnv` by deep cloning MCPOptions

The `processMCPEnv` function was modifying the original `MCPOptions` object, leading to unintended side effects where `LIBRECHAT_USER_ID` could be incorrectly shared across different users. This commit addresses this issue by performing a deep clone of the `MCPOptions` object before processing, ensuring that modifications are isolated and do not affect other users.
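
For illustration, a minimal sketch of the pattern described above. The real `processMCPEnv` lives in the MCP package; the body and placeholder name here are simplified assumptions, not the actual implementation:

```js
// Sketch: clone the shared MCPOptions before injecting per-user values, so the
// substitution can't leak one user's LIBRECHAT_USER_ID into another's config.
function processMCPEnv(rawOptions, userId) {
  // deep clone; a shallow spread would still share nested env objects
  const options = structuredClone(rawOptions);
  for (const [key, value] of Object.entries(options.env ?? {})) {
    if (typeof value === 'string') {
      options.env[key] = value.replace('{{LIBRECHAT_USER_ID}}', userId);
    }
  }
  return options; // isolated copy; the original stays untouched
}
```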

* ci: Add tests for processMCPEnv to ensure deep cloning, user ID isolation and environment variable processing

---------

Co-authored-by: Alex C <viennadd@users.noreply.github.com>
2025-05-06 10:29:05 -04:00
Danny Avila
7c92cef2b7 🔖 fix: Custom Headers for Initial MCP SSE Connection (#7246)
* refactor: add custom  to  as workaround to include custom headers to the initial connection request

* chore: bump MCP client version to 1.2.1 in package-lock and package.json for librechat-mcp
2025-05-06 10:14:17 -04:00
Danny Avila
4fbb81c774 🔄 fix: o-Series Model Regex for System Messages (#7245)
* fix: no system message only for o1-preview and o1-mini

* chore(OpenAIClient): linting

* fix: update regex to include o1-preview and o1-mini in noSystemModelRegex

* refactor: rename variable for consistency with AgentClient
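
In effect (regex taken from the OpenAIClient diff further down; model names are just examples):

```js
// Only o1-preview and o1-mini are treated as lacking system-message support.
const noSystemModelRegex = /\b(o1-preview|o1-mini)\b/i;

noSystemModelRegex.test('o1-preview'); // true  -> instructions folded into the last user message
noSystemModelRegex.test('o1-mini');    // true
noSystemModelRegex.test('o3-mini');    // false -> system message sent as usual
```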

---------

Co-authored-by: Andres <9771158+andresgit@users.noreply.github.com>
2025-05-06 08:40:00 -04:00
Marco Beretta
fc6e14efe2 feat: Enhance form submission for touch screens (#7198)
*  feat: Enhance form submission for touch screens

* chore: add comment

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* chore: add comment

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* chore: linting in AnthropicClient

* chore: Add anthropic model outputs for Claude 3.7

* refactor: Simplify touch-screen detection in message submission

* fix: Correct button rendering order for chat collapse/expand icons

* Revert "refactor: Simplify touch-screen detection in message submission"

This reverts commit 8638442a4c.

* refactor: Improve touchscreen detection for focus handling in ChatForm and useFocusChatEffect

* chore: EditMessage linting

* refactor: Reorder dropdown items in ExportAndShareMenu

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Danny Avila <danny@librechat.ai>
2025-05-05 09:23:38 -04:00
Danny Avila
6e663b2480 🛠️ fix: Conversation Navigation State (#7210)
* refactor: Enhance initial conversation query condition for better state management and prevent unused network requests

* fix: Add Prettier plugin to ESLint configuration

* chore: linting and typing in convos.spec.ts

* fix: add back fresh data fetching and improve error handling for conversation navigation

* fix: set conversation only with conversation state change intent, to prevent double queries for messages
2025-05-04 10:44:40 -04:00
matt burnett
ddb2141eac 🧰 chore: ESLint configuration to enforce Prettier formatting rules (#7186) 2025-05-02 15:13:31 -04:00
Danny Avila
37b50736bc 🔧 fix: Google Gemma Support & OpenAI Reasoning Instructions (#7196)
* 🔄 chore: Update @langchain/google-vertexai to version 0.2.5 in package.json and package-lock.json

* chore: temp remove agents

* 🔄 chore: Update @langchain/google-genai to version 0.2.5 in package.json and package-lock.json

* 🔄 chore: Update @langchain/community to version 0.3.42 in package.json and package-lock.json

* 🔄 chore: Add license information for @langchain/textsplitters in package-lock.json

* 🔄 chore: Update @langchain/core to version 0.3.51 in package.json and package-lock.json

* 🔄 chore: Update openai dependency to version 4.96.2 in package.json and package-lock.json

* chore: @librechat/agents to v2.4.30

* fix: streaming condition in ModelEndHandler to account for boundModel `disableStreaming` setting

* fix: update regex for noSystemModel and refactor message handling in AgentClient

* feat: Google Gemma models

* chore: remove unnecessary empty JSX fragment in PopoverButtons component
2025-05-02 15:11:50 -04:00
Danny Avila
5d6d13efe8 🌿 refactor: Unmount Fork Popover on Hide for Performance (#7189) 2025-05-02 02:43:59 -04:00
Danny Avila
5efad8f646 📦 chore: Bump Package Security (#7183)
* 🔄 chore: bump supertest to 7.1.0, resolves CVE-2025-46653

* 🔄 chore: update vite to version 6.3.4 and add fdir, picomatch, and tinyglobby as dev dependencies

* 🔄 chore: npm audit fix: remove unused dependencies fdir, picomatch, and tinyglobby from package-lock.json
2025-05-01 15:02:51 -04:00
Danny Avila
9a7f763714 🔄 refactor: Artifact Visibility Management (#7181)
* fix: Reset artifacts on unmount and remove useIdChangeEffect hook

* feat: Replace SVG icons with Lucide icons for improved consistency

* fix: Refactor artifact reset logic on unmount and conversation change

* refactor: Rename artifactsVisible to artifactsVisibility for consistency

* feat: Replace custom SVG icons with Lucide icons for improved consistency

* feat: Add visibleArtifacts atom for managing visibility state

* feat: Implement debounced visibility state management for artifacts

* refactor: Add useIdChangeEffect hook to reset visible artifacts on conversation ID change

* refactor: Remove unnecessary dependency from useMemo in TextPart component

* refactor: Enhance artifact visibility management by incorporating location checks for search path

* refactor: Improve transition effects for artifact visibility in Artifacts component

* chore: Remove preprocessCodeArtifacts function and related tests

* fix: Update regex for detecting enclosed artifacts in latest message

* refactor: Update artifact visibility checks to be more generic (not just search)

* chore: Enhance artifact visibility logging

* refactor: Extract closeArtifacts function to improve button click handling

* refactor: remove nested logic from use artifacts effect

* refactor: Update regex for detecting enclosed artifacts to handle new line variations
2025-05-01 14:40:39 -04:00
github-actions[bot]
e6e7935fd8 📜 docs: CHANGELOG for release v0.7.8-rc1 (#7153)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-04-30 08:54:43 -04:00
90 changed files with 1653 additions and 1606 deletions

View File

@@ -142,12 +142,12 @@ GOOGLE_KEY=user_provided
# GOOGLE_AUTH_HEADER=true
# Gemini API (AI Studio)
# GOOGLE_MODELS=gemini-2.5-pro-exp-03-25,gemini-2.0-flash-exp,gemini-2.0-flash-thinking-exp-1219,gemini-exp-1121,gemini-exp-1114,gemini-1.5-flash-latest,gemini-1.0-pro,gemini-1.0-pro-001,gemini-1.0-pro-latest,gemini-1.0-pro-vision-latest,gemini-1.5-pro-latest,gemini-pro,gemini-pro-vision
# GOOGLE_MODELS=gemini-2.5-pro-preview-05-06,gemini-2.5-flash-preview-04-17,gemini-2.0-flash-001,gemini-2.0-flash-exp,gemini-2.0-flash-lite-001,gemini-1.5-pro-002,gemini-1.5-flash-002
# Vertex AI
# GOOGLE_MODELS=gemini-1.5-flash-preview-0514,gemini-1.5-pro-preview-0514,gemini-1.0-pro-vision-001,gemini-1.0-pro-002,gemini-1.0-pro-001,gemini-pro-vision,gemini-1.0-pro
# GOOGLE_MODELS=gemini-2.5-pro-preview-05-06,gemini-2.5-flash-preview-04-17,gemini-2.0-flash-001,gemini-2.0-flash-exp,gemini-2.0-flash-lite-001,gemini-1.5-pro-002,gemini-1.5-flash-002
# GOOGLE_TITLE_MODEL=gemini-pro
# GOOGLE_TITLE_MODEL=gemini-2.0-flash-lite-001
# GOOGLE_LOC=us-central1

View File

@@ -3,8 +3,29 @@
All notable changes to this project will be documented in this file.
## [Unreleased]
### 🔧 Fixes
- 🔧 fix: Google Gemma Support & OpenAI Reasoning Instructions by **@danny-avila** in [#7196](https://github.com/danny-avila/LibreChat/pull/7196)
- 🛠️ fix: Conversation Navigation State by **@danny-avila** in [#7210](https://github.com/danny-avila/LibreChat/pull/7210)
### ⚙️ Other Changes
- 📜 docs: CHANGELOG for release v0.7.8-rc1 by **@github-actions[bot]** in [#7153](https://github.com/danny-avila/LibreChat/pull/7153)
- 🔄 refactor: Artifact Visibility Management by **@danny-avila** in [#7181](https://github.com/danny-avila/LibreChat/pull/7181)
- 📦 chore: Bump Package Security by **@danny-avila** in [#7183](https://github.com/danny-avila/LibreChat/pull/7183)
- 🌿 refactor: Unmount Fork Popover on Hide for Better Performance by **@danny-avila** in [#7189](https://github.com/danny-avila/LibreChat/pull/7189)
- 🧰 chore: ESLint configuration to enforce Prettier formatting rules by **@mawburn** in [#7186](https://github.com/danny-avila/LibreChat/pull/7186)
---
## [v0.7.8-rc1] -
Changes from v0.7.7 to v0.7.8-rc1.
### ✨ New Features
- 🔍 feat: Mistral OCR API / Upload Files as Text by **@danny-avila** in [#6274](https://github.com/danny-avila/LibreChat/pull/6274)
@@ -136,7 +157,12 @@ All notable changes to this project will be documented in this file.
- 🧭 refactor: Modernize Nav/Header by **@danny-avila** in [#7094](https://github.com/danny-avila/LibreChat/pull/7094)
- 🪶 refactor: Chat Input Focus for Conversation Navigations & ChatForm Optimizations by **@danny-avila** in [#7100](https://github.com/danny-avila/LibreChat/pull/7100)
- 🔃 refactor: Streamline Navigation, Message Loading UX by **@danny-avila** in [#7118](https://github.com/danny-avila/LibreChat/pull/7118)
- 📜 docs: Unreleased changelog by **@github-actions[bot]** in [#6265](https://github.com/danny-avila/LibreChat/pull/6265)
[See full release details][release-v0.7.8-rc1]
[release-v0.7.8-rc1]: https://github.com/danny-avila/LibreChat/releases/tag/v0.7.8-rc1
---

View File

@@ -1,10 +1,11 @@
# v0.7.8-rc1
# v0.7.8
# Base node image
FROM node:20-alpine AS node
# Install jemalloc
RUN apk add --no-cache jemalloc
RUN apk add --no-cache python3 py3-pip uv
# Set environment variable to use jemalloc
ENV LD_PRELOAD=/usr/lib/libjemalloc.so.2

View File

@@ -1,5 +1,5 @@
# Dockerfile.multi
# v0.7.8-rc1
# v0.7.8
# Base for all builds
FROM node:20-alpine AS base-min

View File

@@ -396,13 +396,13 @@ class AnthropicClient extends BaseClient {
const formattedMessages = orderedMessages.map((message, i) => {
const formattedMessage = this.useMessages
? formatMessage({
message,
endpoint: EModelEndpoint.anthropic,
})
message,
endpoint: EModelEndpoint.anthropic,
})
: {
author: message.isCreatedByUser ? this.userLabel : this.assistantLabel,
content: message?.content ?? message.text,
};
author: message.isCreatedByUser ? this.userLabel : this.assistantLabel,
content: message?.content ?? message.text,
};
const needsTokenCount = this.contextStrategy && !orderedMessages[i].tokenCount;
/* If tokens were never counted, or, is a Vision request and the message has files, count again */
@@ -680,7 +680,7 @@ class AnthropicClient extends BaseClient {
}
getCompletion() {
logger.debug('AnthropicClient doesn\'t use getCompletion (all handled in sendCompletion)');
logger.debug("AnthropicClient doesn't use getCompletion (all handled in sendCompletion)");
}
/**
@@ -888,7 +888,7 @@ class AnthropicClient extends BaseClient {
}
getBuildMessagesOptions() {
logger.debug('AnthropicClient doesn\'t use getBuildMessagesOptions');
logger.debug("AnthropicClient doesn't use getBuildMessagesOptions");
}
getEncoding() {

View File

@@ -63,15 +63,15 @@ class BaseClient {
}
setOptions() {
throw new Error('Method \'setOptions\' must be implemented.');
throw new Error("Method 'setOptions' must be implemented.");
}
async getCompletion() {
throw new Error('Method \'getCompletion\' must be implemented.');
throw new Error("Method 'getCompletion' must be implemented.");
}
async sendCompletion() {
throw new Error('Method \'sendCompletion\' must be implemented.');
throw new Error("Method 'sendCompletion' must be implemented.");
}
getSaveOptions() {
@@ -237,11 +237,11 @@ class BaseClient {
const userMessage = opts.isEdited
? this.currentMessages[this.currentMessages.length - 2]
: this.createUserMessage({
messageId: userMessageId,
parentMessageId,
conversationId,
text: message,
});
messageId: userMessageId,
parentMessageId,
conversationId,
text: message,
});
if (typeof opts?.getReqData === 'function') {
opts.getReqData({

View File

@@ -140,8 +140,7 @@ class GoogleClient extends BaseClient {
this.options.attachments?.then((attachments) => this.checkVisionRequest(attachments));
/** @type {boolean} Whether using a "GenerativeAI" Model */
this.isGenerativeModel =
this.modelOptions.model.includes('gemini') || this.modelOptions.model.includes('learnlm');
this.isGenerativeModel = /gemini|learnlm|gemma/.test(this.modelOptions.model);
this.maxContextTokens =
this.options.maxContextTokens ??

View File

@@ -475,7 +475,9 @@ class OpenAIClient extends BaseClient {
promptPrefix = this.augmentedPrompt + promptPrefix;
}
if (promptPrefix && this.isOmni !== true) {
const noSystemModelRegex = /\b(o1-preview|o1-mini)\b/i.test(this.modelOptions.model);
if (promptPrefix && !noSystemModelRegex) {
promptPrefix = `Instructions:\n${promptPrefix.trim()}`;
instructions = {
role: 'system',
@@ -503,7 +505,7 @@ class OpenAIClient extends BaseClient {
};
/** EXPERIMENTAL */
if (promptPrefix && this.isOmni === true) {
if (promptPrefix && noSystemModelRegex) {
const lastUserMessageIndex = payload.findLastIndex((message) => message.role === 'user');
if (lastUserMessageIndex !== -1) {
if (Array.isArray(payload[lastUserMessageIndex].content)) {
@@ -1227,9 +1229,9 @@ ${convo}
opts.baseURL = this.langchainProxy
? constructAzureURL({
baseURL: this.langchainProxy,
azureOptions: this.azure,
})
baseURL: this.langchainProxy,
azureOptions: this.azure,
})
: this.azureEndpoint.split(/(?<!\/)\/(chat|completion)\//)[0];
opts.defaultQuery = { 'api-version': this.azure.azureOpenAIApiVersion };
@@ -1283,6 +1285,14 @@ ${convo}
modelOptions.messages[0].role = 'user';
}
if (
(this.options.endpoint === EModelEndpoint.openAI ||
this.options.endpoint === EModelEndpoint.azureOpenAI) &&
modelOptions.stream === true
) {
modelOptions.stream_options = { include_usage: true };
}
if (this.options.addParams && typeof this.options.addParams === 'object') {
const addParams = { ...this.options.addParams };
modelOptions = {
@@ -1385,12 +1395,6 @@ ${convo}
...modelOptions,
stream: true,
};
if (
this.options.endpoint === EModelEndpoint.openAI ||
this.options.endpoint === EModelEndpoint.azureOpenAI
) {
params.stream_options = { include_usage: true };
}
const stream = await openai.beta.chat.completions
.stream(params)
.on('abort', () => {

View File

@@ -43,9 +43,39 @@ class TavilySearchResults extends Tool {
.boolean()
.optional()
.describe('Whether to include answers in the search results. Default is False.'),
// include_raw_content: z.boolean().optional().describe('Whether to include raw content in the search results. Default is False.'),
// include_domains: z.array(z.string()).optional().describe('A list of domains to specifically include in the search results.'),
// exclude_domains: z.array(z.string()).optional().describe('A list of domains to specifically exclude from the search results.'),
include_raw_content: z
.boolean()
.optional()
.describe('Whether to include raw content in the search results. Default is False.'),
include_domains: z
.array(z.string())
.optional()
.describe('A list of domains to specifically include in the search results.'),
exclude_domains: z
.array(z.string())
.optional()
.describe('A list of domains to specifically exclude from the search results.'),
topic: z
.enum(['general', 'news', 'finance'])
.optional()
.describe(
'The category of the search. Use news ONLY if query SPECIFCALLY mentions the word "news".',
),
time_range: z
.enum(['day', 'week', 'month', 'year', 'd', 'w', 'm', 'y'])
.optional()
.describe('The time range back from the current date to filter results.'),
days: z
.number()
.min(1)
.optional()
.describe('Number of days back from the current date to include. Only if topic is news.'),
include_image_descriptions: z
.boolean()
.optional()
.describe(
'When include_images is true, also add a descriptive text for each image. Default is false.',
),
});
}

View File

@@ -1,30 +0,0 @@
const { loadSpecs } = require('./loadSpecs');
function transformSpec(input) {
return {
name: input.name_for_human,
pluginKey: input.name_for_model,
description: input.description_for_human,
icon: input?.logo_url ?? 'https://placehold.co/70x70.png',
// TODO: add support for authentication
isAuthRequired: 'false',
authConfig: [],
};
}
async function addOpenAPISpecs(availableTools) {
try {
const specs = (await loadSpecs({})).map(transformSpec);
if (specs.length > 0) {
return [...specs, ...availableTools];
}
return availableTools;
} catch (error) {
return availableTools;
}
}
module.exports = {
transformSpec,
addOpenAPISpecs,
};

View File

@@ -1,76 +0,0 @@
const { addOpenAPISpecs, transformSpec } = require('./addOpenAPISpecs');
const { loadSpecs } = require('./loadSpecs');
const { createOpenAPIPlugin } = require('../dynamic/OpenAPIPlugin');
jest.mock('./loadSpecs');
jest.mock('../dynamic/OpenAPIPlugin');
describe('transformSpec', () => {
it('should transform input spec to a desired format', () => {
const input = {
name_for_human: 'Human Name',
name_for_model: 'Model Name',
description_for_human: 'Human Description',
logo_url: 'https://example.com/logo.png',
};
const expectedOutput = {
name: 'Human Name',
pluginKey: 'Model Name',
description: 'Human Description',
icon: 'https://example.com/logo.png',
isAuthRequired: 'false',
authConfig: [],
};
expect(transformSpec(input)).toEqual(expectedOutput);
});
it('should use default icon if logo_url is not provided', () => {
const input = {
name_for_human: 'Human Name',
name_for_model: 'Model Name',
description_for_human: 'Human Description',
};
const expectedOutput = {
name: 'Human Name',
pluginKey: 'Model Name',
description: 'Human Description',
icon: 'https://placehold.co/70x70.png',
isAuthRequired: 'false',
authConfig: [],
};
expect(transformSpec(input)).toEqual(expectedOutput);
});
});
describe('addOpenAPISpecs', () => {
it('should add specs to available tools', async () => {
const availableTools = ['Tool1', 'Tool2'];
const specs = [
{
name_for_human: 'Human Name',
name_for_model: 'Model Name',
description_for_human: 'Human Description',
logo_url: 'https://example.com/logo.png',
},
];
loadSpecs.mockResolvedValue(specs);
createOpenAPIPlugin.mockReturnValue('Plugin');
const result = await addOpenAPISpecs(availableTools);
expect(result).toEqual([...specs.map(transformSpec), ...availableTools]);
});
it('should return available tools if specs loading fails', async () => {
const availableTools = ['Tool1', 'Tool2'];
loadSpecs.mockRejectedValue(new Error('Failed to load specs'));
const result = await addOpenAPISpecs(availableTools);
expect(result).toEqual(availableTools);
});
});

View File

@@ -24,7 +24,6 @@ const { primeFiles: primeCodeFiles } = require('~/server/services/Files/Code/pro
const { createFileSearchTool, primeFiles: primeSearchFiles } = require('./fileSearch');
const { loadAuthValues } = require('~/server/services/Tools/credentials');
const { createMCPTool } = require('~/server/services/MCP');
const { loadSpecs } = require('./loadSpecs');
const { logger } = require('~/config');
const mcpToolPattern = new RegExp(`^.+${Constants.mcp_delimiter}.+$`);
@@ -232,7 +231,6 @@ const loadTools = async ({
/** @type {Record<string, string>} */
const toolContextMap = {};
const remainingTools = [];
const appTools = options.req?.app?.locals?.availableTools ?? {};
for (const tool of tools) {
@@ -292,30 +290,6 @@ const loadTools = async ({
requestedTools[tool] = toolInstance;
continue;
}
if (functions === true) {
remainingTools.push(tool);
}
}
let specs = null;
if (useSpecs === true && functions === true && remainingTools.length > 0) {
specs = await loadSpecs({
llm: model,
user,
message: options.message,
memory: options.memory,
signal: options.signal,
tools: remainingTools,
map: true,
verbose: false,
});
}
for (const tool of remainingTools) {
if (specs && specs[tool]) {
requestedTools[tool] = specs[tool];
}
}
if (returnMap) {

View File

@@ -1,117 +0,0 @@
const fs = require('fs');
const path = require('path');
const { z } = require('zod');
const { logger } = require('~/config');
const { createOpenAPIPlugin } = require('~/app/clients/tools/dynamic/OpenAPIPlugin');
// The minimum Manifest definition
const ManifestDefinition = z.object({
schema_version: z.string().optional(),
name_for_human: z.string(),
name_for_model: z.string(),
description_for_human: z.string(),
description_for_model: z.string(),
auth: z.object({}).optional(),
api: z.object({
// Spec URL or can be the filename of the OpenAPI spec yaml file,
// located in api\app\clients\tools\.well-known\openapi
url: z.string(),
type: z.string().optional(),
is_user_authenticated: z.boolean().nullable().optional(),
has_user_authentication: z.boolean().nullable().optional(),
}),
// use to override any params that the LLM will consistently get wrong
params: z.object({}).optional(),
logo_url: z.string().optional(),
contact_email: z.string().optional(),
legal_info_url: z.string().optional(),
});
function validateJson(json) {
try {
return ManifestDefinition.parse(json);
} catch (error) {
logger.debug('[validateJson] manifest parsing error', error);
return false;
}
}
// omit the LLM to return the well known jsons as objects
async function loadSpecs({ llm, user, message, tools = [], map = false, memory, signal }) {
const directoryPath = path.join(__dirname, '..', '.well-known');
let files = [];
for (let i = 0; i < tools.length; i++) {
const filePath = path.join(directoryPath, tools[i] + '.json');
try {
// If the access Promise is resolved, it means that the file exists
// Then we can add it to the files array
await fs.promises.access(filePath, fs.constants.F_OK);
files.push(tools[i] + '.json');
} catch (err) {
logger.error(`[loadSpecs] File ${tools[i] + '.json'} does not exist`, err);
}
}
if (files.length === 0) {
files = (await fs.promises.readdir(directoryPath)).filter(
(file) => path.extname(file) === '.json',
);
}
const validJsons = [];
const constructorMap = {};
logger.debug('[validateJson] files', files);
for (const file of files) {
if (path.extname(file) === '.json') {
const filePath = path.join(directoryPath, file);
const fileContent = await fs.promises.readFile(filePath, 'utf8');
const json = JSON.parse(fileContent);
if (!validateJson(json)) {
logger.debug('[validateJson] Invalid json', json);
continue;
}
if (llm && map) {
constructorMap[json.name_for_model] = async () =>
await createOpenAPIPlugin({
data: json,
llm,
message,
memory,
signal,
user,
});
continue;
}
if (llm) {
validJsons.push(createOpenAPIPlugin({ data: json, llm }));
continue;
}
validJsons.push(json);
}
}
if (map) {
return constructorMap;
}
const plugins = (await Promise.all(validJsons)).filter((plugin) => plugin);
// logger.debug('[validateJson] plugins', plugins);
// logger.debug(plugins[0].name);
return plugins;
}
module.exports = {
loadSpecs,
validateJson,
ManifestDefinition,
};

View File

@@ -1,101 +0,0 @@
const fs = require('fs');
const { validateJson, loadSpecs, ManifestDefinition } = require('./loadSpecs');
const { createOpenAPIPlugin } = require('../dynamic/OpenAPIPlugin');
jest.mock('../dynamic/OpenAPIPlugin');
describe('ManifestDefinition', () => {
it('should validate correct json', () => {
const json = {
name_for_human: 'Test',
name_for_model: 'Test',
description_for_human: 'Test',
description_for_model: 'Test',
api: {
url: 'http://test.com',
},
};
expect(() => ManifestDefinition.parse(json)).not.toThrow();
});
it('should not validate incorrect json', () => {
const json = {
name_for_human: 'Test',
name_for_model: 'Test',
description_for_human: 'Test',
description_for_model: 'Test',
api: {
url: 123, // incorrect type
},
};
expect(() => ManifestDefinition.parse(json)).toThrow();
});
});
describe('validateJson', () => {
it('should return parsed json if valid', () => {
const json = {
name_for_human: 'Test',
name_for_model: 'Test',
description_for_human: 'Test',
description_for_model: 'Test',
api: {
url: 'http://test.com',
},
};
expect(validateJson(json)).toEqual(json);
});
it('should return false if json is not valid', () => {
const json = {
name_for_human: 'Test',
name_for_model: 'Test',
description_for_human: 'Test',
description_for_model: 'Test',
api: {
url: 123, // incorrect type
},
};
expect(validateJson(json)).toEqual(false);
});
});
describe('loadSpecs', () => {
beforeEach(() => {
jest.spyOn(fs.promises, 'readdir').mockResolvedValue(['test.json']);
jest.spyOn(fs.promises, 'readFile').mockResolvedValue(
JSON.stringify({
name_for_human: 'Test',
name_for_model: 'Test',
description_for_human: 'Test',
description_for_model: 'Test',
api: {
url: 'http://test.com',
},
}),
);
createOpenAPIPlugin.mockResolvedValue({});
});
afterEach(() => {
jest.restoreAllMocks();
});
it('should return plugins', async () => {
const plugins = await loadSpecs({ llm: true, verbose: false });
expect(plugins).toHaveLength(1);
expect(createOpenAPIPlugin).toHaveBeenCalledTimes(1);
});
it('should return constructorMap if map is true', async () => {
const plugins = await loadSpecs({ llm: {}, map: true, verbose: false });
expect(plugins).toHaveProperty('Test');
expect(createOpenAPIPlugin).not.toHaveBeenCalled();
});
});

View File

@@ -111,10 +111,15 @@ const tokenValues = Object.assign(
/* cohere doesn't have rates for the older command models,
so this was from https://artificialanalysis.ai/models/command-light/providers */
command: { prompt: 0.38, completion: 0.38 },
gemma: { prompt: 0, completion: 0 }, // https://ai.google.dev/pricing
'gemma-2': { prompt: 0, completion: 0 }, // https://ai.google.dev/pricing
'gemma-3': { prompt: 0, completion: 0 }, // https://ai.google.dev/pricing
'gemma-3-27b': { prompt: 0, completion: 0 }, // https://ai.google.dev/pricing
'gemini-2.0-flash-lite': { prompt: 0.075, completion: 0.3 },
'gemini-2.0-flash': { prompt: 0.1, completion: 0.4 },
'gemini-2.0': { prompt: 0, completion: 0 }, // https://ai.google.dev/pricing
'gemini-2.5-pro-preview-03-25': { prompt: 1.25, completion: 10 },
'gemini-2.5-pro': { prompt: 1.25, completion: 10 },
'gemini-2.5-flash': { prompt: 0.15, completion: 3.5 },
'gemini-2.5': { prompt: 0, completion: 0 }, // Free for a period of time
'gemini-1.5-flash-8b': { prompt: 0.075, completion: 0.3 },
'gemini-1.5-flash': { prompt: 0.15, completion: 0.6 },

View File

@@ -488,6 +488,9 @@ describe('getCacheMultiplier', () => {
describe('Google Model Tests', () => {
const googleModels = [
'gemini-2.5-pro-preview-05-06',
'gemini-2.5-flash-preview-04-17',
'gemini-2.5-exp',
'gemini-2.0-flash-lite-preview-02-05',
'gemini-2.0-flash-001',
'gemini-2.0-flash-exp',
@@ -525,6 +528,9 @@ describe('Google Model Tests', () => {
it('should map to the correct model keys', () => {
const expected = {
'gemini-2.5-pro-preview-05-06': 'gemini-2.5-pro',
'gemini-2.5-flash-preview-04-17': 'gemini-2.5-flash',
'gemini-2.5-exp': 'gemini-2.5',
'gemini-2.0-flash-lite-preview-02-05': 'gemini-2.0-flash-lite',
'gemini-2.0-flash-001': 'gemini-2.0-flash',
'gemini-2.0-flash-exp': 'gemini-2.0-flash',

View File

@@ -1,6 +1,6 @@
{
"name": "@librechat/backend",
"version": "v0.7.8-rc1",
"version": "v0.7.8",
"description": "",
"scripts": {
"start": "echo 'please run this from the root directory'",
@@ -43,12 +43,12 @@
"@google/generative-ai": "^0.23.0",
"@googleapis/youtube": "^20.0.0",
"@keyv/redis": "^4.3.3",
"@langchain/community": "^0.3.39",
"@langchain/core": "^0.3.43",
"@langchain/google-genai": "^0.2.2",
"@langchain/google-vertexai": "^0.2.3",
"@langchain/community": "^0.3.42",
"@langchain/core": "^0.3.51",
"@langchain/google-genai": "^0.2.5",
"@langchain/google-vertexai": "^0.2.5",
"@langchain/textsplitters": "^0.1.0",
"@librechat/agents": "^2.4.22",
"@librechat/agents": "^2.4.314",
"@librechat/data-schemas": "*",
"@waylaidwanderer/fetch-event-source": "^3.0.1",
"axios": "^1.8.2",
@@ -90,7 +90,7 @@
"nanoid": "^3.3.7",
"nodemailer": "^6.9.15",
"ollama": "^0.5.0",
"openai": "^4.47.1",
"openai": "^4.96.2",
"openai-chat-tokens": "^0.2.8",
"openid-client": "^5.4.2",
"passport": "^0.6.0",
@@ -116,6 +116,6 @@
"jest": "^29.7.0",
"mongodb-memory-server": "^10.1.3",
"nodemon": "^3.0.3",
"supertest": "^7.0.0"
"supertest": "^7.1.0"
}
}

View File

@@ -228,7 +228,7 @@ const AskController = async (req, res, next, initializeClient, addTitle) => {
if (!client?.skipSaveUserMessage && latestUserMessage) {
await saveMessage(req, latestUserMessage, {
context: 'api/server/controllers/AskController.js - don\'t skip saving user message',
context: "api/server/controllers/AskController.js - don't skip saving user message",
});
}

View File

@@ -1,5 +1,4 @@
const { CacheKeys, AuthType } = require('librechat-data-provider');
const { addOpenAPISpecs } = require('~/app/clients/tools/util/addOpenAPISpecs');
const { getToolkitKey } = require('~/server/services/ToolService');
const { getCustomConfig } = require('~/server/services/Config');
const { availableTools } = require('~/app/clients/tools');
@@ -70,7 +69,7 @@ const getAvailablePluginsController = async (req, res) => {
);
}
let plugins = await addOpenAPISpecs(authenticatedPlugins);
let plugins = authenticatedPlugins;
if (includedTools.length > 0) {
plugins = plugins.filter((plugin) => includedTools.includes(plugin.pluginKey));
@@ -106,11 +105,11 @@ const getAvailableTools = async (req, res) => {
return;
}
const pluginManifest = availableTools;
let pluginManifest = availableTools;
const customConfig = await getCustomConfig();
if (customConfig?.mcpServers != null) {
const mcpManager = getMCPManager();
await mcpManager.loadManifestTools(pluginManifest);
pluginManifest = await mcpManager.loadManifestTools(pluginManifest);
}
/** @type {TPlugin[]} */

View File

@@ -14,15 +14,6 @@ const { loadAuthValues } = require('~/server/services/Tools/credentials');
const { saveBase64Image } = require('~/server/services/Files/process');
const { logger, sendEvent } = require('~/config');
/** @typedef {import('@librechat/agents').Graph} Graph */
/** @typedef {import('@librechat/agents').EventHandler} EventHandler */
/** @typedef {import('@librechat/agents').ModelEndData} ModelEndData */
/** @typedef {import('@librechat/agents').ToolEndData} ToolEndData */
/** @typedef {import('@librechat/agents').ToolEndCallback} ToolEndCallback */
/** @typedef {import('@librechat/agents').ChatModelStreamHandler} ChatModelStreamHandler */
/** @typedef {import('@librechat/agents').ContentAggregatorResult['aggregateContent']} ContentAggregator */
/** @typedef {import('@librechat/agents').GraphEvents} GraphEvents */
class ModelEndHandler {
/**
* @param {Array<UsageMetadata>} collectedUsage
@@ -38,7 +29,7 @@ class ModelEndHandler {
* @param {string} event
* @param {ModelEndData | undefined} data
* @param {Record<string, unknown> | undefined} metadata
* @param {Graph} graph
* @param {StandardGraph} graph
* @returns
*/
handle(event, data, metadata, graph) {
@@ -61,7 +52,10 @@ class ModelEndHandler {
}
this.collectedUsage.push(usage);
if (!graph.clientOptions?.disableStreaming) {
const streamingDisabled = !!(
graph.clientOptions?.disableStreaming || graph?.boundModel?.disableStreaming
);
if (!streamingDisabled) {
return;
}
if (!data.output.content) {

View File

@@ -58,7 +58,7 @@ const payloadParser = ({ req, agent, endpoint }) => {
const legacyContentEndpoints = new Set([KnownEndpoints.groq, KnownEndpoints.deepseek]);
const noSystemModelRegex = [/\b(o\d)\b/gi];
const noSystemModelRegex = [/\b(o1-preview|o1-mini|amazon\.titan-text)\b/gi];
// const { processMemory, memoryInstructions } = require('~/server/services/Endpoints/agents/memory');
// const { getFormattedMemories } = require('~/models/Memory');
@@ -148,19 +148,13 @@ class AgentClient extends BaseClient {
* @param {MongoFile[]} attachments
*/
checkVisionRequest(attachments) {
logger.info(
'[api/server/controllers/agents/client.js #checkVisionRequest] not implemented',
attachments,
);
// if (!attachments) {
// return;
// }
// const availableModels = this.options.modelsConfig?.[this.options.endpoint];
// if (!availableModels) {
// return;
// }
// let visionRequestDetected = false;
// for (const file of attachments) {
// if (file?.type?.includes('image')) {
@@ -171,13 +165,11 @@ class AgentClient extends BaseClient {
// if (!visionRequestDetected) {
// return;
// }
// this.isVisionModel = validateVisionModel({ model: this.modelOptions.model, availableModels });
// if (this.isVisionModel) {
// delete this.modelOptions.stop;
// return;
// }
// for (const model of availableModels) {
// if (!validateVisionModel({ model, availableModels })) {
// continue;
@@ -187,14 +179,12 @@ class AgentClient extends BaseClient {
// delete this.modelOptions.stop;
// return;
// }
// if (!availableModels.includes(this.defaultVisionModel)) {
// return;
// }
// if (!validateVisionModel({ model: this.defaultVisionModel, availableModels })) {
// return;
// }
// this.modelOptions.model = this.defaultVisionModel;
// this.isVisionModel = true;
// delete this.modelOptions.stop;
@@ -728,12 +718,14 @@ class AgentClient extends BaseClient {
}
if (noSystemMessages === true && systemContent?.length) {
let latestMessage = _messages.pop().content;
const latestMessageContent = _messages.pop().content;
if (typeof latestMessage !== 'string') {
latestMessage = latestMessage[0].text;
latestMessageContent[0].text = [systemContent, latestMessageContent[0].text].join('\n');
_messages.push(new HumanMessage({ content: latestMessageContent }));
} else {
const text = [systemContent, latestMessageContent].join('\n');
_messages.push(new HumanMessage(text));
}
latestMessage = [systemContent, latestMessage].join('\n');
_messages.push(new HumanMessage(latestMessage));
}
let messages = _messages;

View File

@@ -119,7 +119,7 @@ const chatV1 = async (req, res) => {
} else if (/Files.*are invalid/.test(error.message)) {
const errorMessage = `Files are invalid, or may not have uploaded yet.${
endpoint === EModelEndpoint.azureAssistants
? ' If using Azure OpenAI, files are only available in the region of the assistant\'s model at the time of upload.'
? " If using Azure OpenAI, files are only available in the region of the assistant's model at the time of upload."
: ''
}`;
return sendResponse(req, res, messageData, errorMessage);
@@ -379,8 +379,8 @@ const chatV1 = async (req, res) => {
body.additional_instructions ? `${body.additional_instructions}\n` : ''
}The user has uploaded ${imageCount} image${pluralized}.
Use the \`${ImageVisionTool.function.name}\` tool to retrieve ${
plural ? '' : 'a '
}detailed text description${pluralized} for ${plural ? 'each' : 'the'} image${pluralized}.`;
plural ? '' : 'a '
}detailed text description${pluralized} for ${plural ? 'each' : 'the'} image${pluralized}.`;
return files;
};
@@ -576,6 +576,8 @@ const chatV1 = async (req, res) => {
thread_id,
model: assistant_id,
endpoint,
spec: endpointOption.spec,
iconURL: endpointOption.iconURL,
};
sendMessage(res, {

View File

@@ -428,6 +428,8 @@ const chatV2 = async (req, res) => {
thread_id,
model: assistant_id,
endpoint,
spec: endpointOption.spec,
iconURL: endpointOption.iconURL,
};
sendMessage(res, {

View File

@@ -21,6 +21,7 @@ const { getOpenAIClient } = require('~/server/controllers/assistants/helpers');
const { loadAuthValues } = require('~/server/services/Tools/credentials');
const { refreshS3FileUrls } = require('~/server/services/Files/S3/crud');
const { getFiles, batchUpdateFiles } = require('~/models/File');
const { getAssistant } = require('~/models/Assistant');
const { getAgent } = require('~/models/Agent');
const { getLogStores } = require('~/cache');
const { logger } = require('~/config');
@@ -94,7 +95,7 @@ router.delete('/', async (req, res) => {
});
}
/* Handle entity unlinking even if no valid files to delete */
/* Handle agent unlinking even if no valid files to delete */
if (req.body.agent_id && req.body.tool_resource && dbFiles.length === 0) {
const agent = await getAgent({
id: req.body.agent_id,
@@ -104,7 +105,21 @@ router.delete('/', async (req, res) => {
const agentFiles = files.filter((f) => toolResourceFiles.includes(f.file_id));
await processDeleteRequest({ req, files: agentFiles });
res.status(200).json({ message: 'File associations removed successfully' });
res.status(200).json({ message: 'File associations removed successfully from agent' });
return;
}
/* Handle assistant unlinking even if no valid files to delete */
if (req.body.assistant_id && req.body.tool_resource && dbFiles.length === 0) {
const assistant = await getAssistant({
id: req.body.assistant_id,
});
const toolResourceFiles = assistant.tool_resources?.[req.body.tool_resource]?.file_ids ?? [];
const assistantFiles = files.filter((f) => toolResourceFiles.includes(f.file_id));
await processDeleteRequest({ req, files: assistantFiles });
res.status(200).json({ message: 'File associations removed successfully from assistant' });
return;
}

View File

@@ -56,7 +56,7 @@ const logoutUser = async (req, refreshToken) => {
try {
req.session.destroy();
} catch (destroyErr) {
logger.error('[logoutUser] Failed to destroy session.', destroyErr);
logger.debug('[logoutUser] Failed to destroy session.', destroyErr);
}
return { status: 200, message: 'Logout successful' };

View File

@@ -233,6 +233,13 @@ const initializeAgentOptions = async ({
endpointOption: _endpointOption,
});
if (
agent.endpoint === EModelEndpoint.azureOpenAI &&
options.llmConfig?.azureOpenAIApiInstanceName == null
) {
agent.provider = Providers.OPENAI;
}
if (options.provider != null) {
agent.provider = options.provider;
}

View File

@@ -3,7 +3,6 @@ const generateArtifactsPrompt = require('~/app/clients/prompts/artifacts');
const { getAssistant } = require('~/models/Assistant');
const buildOptions = async (endpoint, parsedBody) => {
const { promptPrefix, assistant_id, iconURL, greeting, spec, artifacts, ...modelOptions } =
parsedBody;
const endpointOption = removeNullishValues({

View File

@@ -132,6 +132,8 @@ async function saveUserMessage(req, params) {
* @param {string} params.endpoint - The conversation endpoint
* @param {string} params.parentMessageId - The latest user message that triggered this response.
* @param {string} [params.instructions] - Optional: from preset for `instructions` field.
* @param {string} [params.spec] - Optional: Model spec identifier.
* @param {string} [params.iconURL]
* Overrides the instructions of the assistant.
* @param {string} [params.promptPrefix] - Optional: from preset for `additional_instructions` field.
* @return {Promise<Run>} A promise that resolves to the created run object.
@@ -154,6 +156,8 @@ async function saveAssistantMessage(req, params) {
text: params.text,
unfinished: false,
// tokenCount,
iconURL: params.iconURL,
spec: params.spec,
});
await saveConvo(
@@ -165,6 +169,8 @@ async function saveAssistantMessage(req, params) {
instructions: params.instructions,
assistant_id: params.assistant_id,
model: params.model,
iconURL: params.iconURL,
spec: params.spec,
},
{ context: 'api/server/services/Threads/manage.js #saveAssistantMessage' },
);

View File

@@ -43,6 +43,60 @@
* @memberof typedefs
*/
/**
* @exports Graph
* @typedef {import('@librechat/agents').Graph} Graph
* @memberof typedefs
*/
/**
* @exports StandardGraph
* @typedef {import('@librechat/agents').StandardGraph} StandardGraph
* @memberof typedefs
*/
/**
* @exports EventHandler
* @typedef {import('@librechat/agents').EventHandler} EventHandler
* @memberof typedefs
*/
/**
* @exports ModelEndData
* @typedef {import('@librechat/agents').ModelEndData} ModelEndData
* @memberof typedefs
*/
/**
* @exports ToolEndData
* @typedef {import('@librechat/agents').ToolEndData} ToolEndData
* @memberof typedefs
*/
/**
* @exports ToolEndCallback
* @typedef {import('@librechat/agents').ToolEndCallback} ToolEndCallback
* @memberof typedefs
*/
/**
* @exports ChatModelStreamHandler
* @typedef {import('@librechat/agents').ChatModelStreamHandler} ChatModelStreamHandler
* @memberof typedefs
*/
/**
* @exports ContentAggregator
* @typedef {import('@librechat/agents').ContentAggregatorResult['aggregateContent']} ContentAggregator
* @memberof typedefs
*/
/**
* @exports GraphEvents
* @typedef {import('@librechat/agents').GraphEvents} GraphEvents
* @memberof typedefs
*/
/**
* @exports AgentRun
* @typedef {import('@librechat/agents').Run} AgentRun
@@ -97,12 +151,6 @@
* @memberof typedefs
*/
/**
* @exports ToolEndData
* @typedef {import('@librechat/agents').ToolEndData} ToolEndData
* @memberof typedefs
*/
/**
* @exports BaseMessage
* @typedef {import('@langchain/core/messages').BaseMessage} BaseMessage

View File

@@ -60,10 +60,16 @@ const cohereModels = {
const googleModels = {
/* Max I/O is combined so we subtract the amount from max response tokens for actual total */
gemma: 8196,
'gemma-2': 32768,
'gemma-3': 32768,
'gemma-3-27b': 131072,
gemini: 30720, // -2048 from max
'gemini-pro-vision': 12288,
'gemini-exp': 2000000,
'gemini-2.5': 1000000, // 1M input tokens, 64k output tokens
'gemini-2.5-pro': 1000000,
'gemini-2.5-flash': 1000000,
'gemini-2.0': 2000000,
'gemini-2.0-flash': 1000000,
'gemini-2.0-flash-lite': 1000000,
@@ -235,12 +241,15 @@ const modelMaxOutputs = {
system_default: 1024,
};
/** Outputs from https://docs.anthropic.com/en/docs/about-claude/models/all-models#model-names */
const anthropicMaxOutputs = {
'claude-3-haiku': 4096,
'claude-3-sonnet': 4096,
'claude-3-opus': 4096,
'claude-3.5-sonnet': 8192,
'claude-3-5-sonnet': 8192,
'claude-3.7-sonnet': 128000,
'claude-3-7-sonnet': 128000,
};
const maxOutputTokensMap = {

View File

@@ -1,6 +1,6 @@
{
"name": "@librechat/frontend",
"version": "v0.7.8-rc1",
"version": "v0.7.8",
"description": "",
"type": "module",
"scripts": {
@@ -141,7 +141,7 @@
"tailwindcss": "^3.4.1",
"ts-jest": "^29.2.5",
"typescript": "^5.3.3",
"vite": "^6.2.5",
"vite": "^6.3.4",
"vite-plugin-compression2": "^1.3.3",
"vite-plugin-node-polyfills": "^0.23.0",
"vite-plugin-pwa": "^0.21.2"

View File

@@ -2,6 +2,7 @@ import React, { useEffect, useCallback, useRef, useState } from 'react';
import throttle from 'lodash/throttle';
import { visit } from 'unist-util-visit';
import { useSetRecoilState } from 'recoil';
import { useLocation } from 'react-router-dom';
import type { Pluggable } from 'unified';
import type { Artifact } from '~/common';
import { useMessageContext, useArtifactContext } from '~/Providers';
@@ -45,6 +46,7 @@ export function Artifact({
children: React.ReactNode | { props: { children: React.ReactNode } };
node: unknown;
}) {
const location = useLocation();
const { messageId } = useMessageContext();
const { getNextIndex, resetCounter } = useArtifactContext();
const artifactIndex = useRef(getNextIndex(false)).current;
@@ -86,6 +88,10 @@ export function Artifact({
lastUpdateTime: now,
};
if (!location.pathname.includes('/c/')) {
return setArtifact(currentArtifact);
}
setArtifacts((prevArtifacts) => {
if (
prevArtifacts?.[artifactKey] != null &&
@@ -110,6 +116,7 @@ export function Artifact({
props.identifier,
messageId,
artifactIndex,
location.pathname,
]);
useEffect(() => {

View File

@@ -1,15 +1,52 @@
import { useSetRecoilState, useResetRecoilState } from 'recoil';
import { useEffect, useRef } from 'react';
import debounce from 'lodash/debounce';
import { useLocation } from 'react-router-dom';
import { useRecoilState, useSetRecoilState, useResetRecoilState } from 'recoil';
import type { Artifact } from '~/common';
import FilePreview from '~/components/Chat/Input/Files/FilePreview';
import { getFileType, logger } from '~/utils';
import { useLocalize } from '~/hooks';
import { getFileType } from '~/utils';
import store from '~/store';
const ArtifactButton = ({ artifact }: { artifact: Artifact | null }) => {
const localize = useLocalize();
const setVisible = useSetRecoilState(store.artifactsVisible);
const location = useLocation();
const setVisible = useSetRecoilState(store.artifactsVisibility);
const [artifacts, setArtifacts] = useRecoilState(store.artifactsState);
const setCurrentArtifactId = useSetRecoilState(store.currentArtifactId);
const resetCurrentArtifactId = useResetRecoilState(store.currentArtifactId);
const [visibleArtifacts, setVisibleArtifacts] = useRecoilState(store.visibleArtifacts);
const debouncedSetVisibleRef = useRef(
debounce((artifactToSet: Artifact) => {
logger.log(
'artifacts_visibility',
'Setting artifact to visible state from Artifact button',
artifactToSet,
);
setVisibleArtifacts((prev) => ({
...prev,
[artifactToSet.id]: artifactToSet,
}));
}, 750),
);
useEffect(() => {
if (artifact == null || artifact?.id == null || artifact.id === '') {
return;
}
if (!location.pathname.includes('/c/')) {
return;
}
const debouncedSetVisible = debouncedSetVisibleRef.current;
debouncedSetVisible(artifact);
return () => {
debouncedSetVisible.cancel();
};
}, [artifact, location.pathname]);
if (artifact === null || artifact === undefined) {
return null;
}
@@ -20,8 +57,14 @@ const ArtifactButton = ({ artifact }: { artifact: Artifact | null }) => {
<button
type="button"
onClick={() => {
if (!location.pathname.includes('/c/')) {
return;
}
resetCurrentArtifactId();
setVisible(true);
if (artifacts?.[artifact.id] == null) {
setArtifacts(visibleArtifacts);
}
setTimeout(() => {
setCurrentArtifactId(artifact.id);
}, 15);

View File

@@ -1,7 +1,7 @@
import { useRef, useState, useEffect } from 'react';
import { RefreshCw } from 'lucide-react';
import { useSetRecoilState } from 'recoil';
import * as Tabs from '@radix-ui/react-tabs';
import { ArrowLeft, ChevronLeft, ChevronRight, RefreshCw, X } from 'lucide-react';
import type { SandpackPreviewRef, CodeEditorRef } from '@codesandbox/sandpack-react';
import useArtifacts from '~/hooks/Artifacts/useArtifacts';
import DownloadArtifact from './DownloadArtifact';
@@ -18,7 +18,7 @@ export default function Artifacts() {
const previewRef = useRef<SandpackPreviewRef>();
const [isVisible, setIsVisible] = useState(false);
const [isRefreshing, setIsRefreshing] = useState(false);
const setArtifactsVisible = useSetRecoilState(store.artifactsVisible);
const setArtifactsVisible = useSetRecoilState(store.artifactsVisibility);
useEffect(() => {
setIsVisible(true);
@@ -48,37 +48,26 @@ export default function Artifacts() {
setTimeout(() => setIsRefreshing(false), 750);
};
const closeArtifacts = () => {
setIsVisible(false);
setTimeout(() => setArtifactsVisible(false), 300);
};
return (
<Tabs.Root value={activeTab} onValueChange={setActiveTab} asChild>
{/* Main Parent */}
<div className="flex h-full w-full items-center justify-center">
{/* Main Container */}
<div
className={`flex h-full w-full flex-col overflow-hidden border border-border-medium bg-surface-primary text-xl text-text-primary shadow-xl transition-all duration-300 ease-in-out ${
isVisible
? 'translate-x-0 scale-100 opacity-100'
: 'translate-x-full scale-95 opacity-0'
className={`flex h-full w-full flex-col overflow-hidden border border-border-medium bg-surface-primary text-xl text-text-primary shadow-xl transition-all duration-500 ease-in-out ${
isVisible ? 'scale-100 opacity-100 blur-0' : 'scale-105 opacity-0 blur-sm'
}`}
>
{/* Header */}
<div className="flex items-center justify-between border-b border-border-medium bg-surface-primary-alt p-2">
<div className="flex items-center">
<button
className="mr-2 text-text-secondary"
onClick={() => {
setIsVisible(false);
setTimeout(() => setArtifactsVisible(false), 300);
}}
>
<svg
xmlns="http://www.w3.org/2000/svg"
width="16"
height="16"
fill="currentColor"
viewBox="0 0 256 256"
>
<path d="M224,128a8,8,0,0,1-8,8H59.31l58.35,58.34a8,8,0,0,1-11.32,11.32l-72-72a8,8,0,0,1,0-11.32l72-72a8,8,0,0,1,11.32,11.32L59.31,120H216A8,8,0,0,1,224,128Z" />
</svg>
<button className="mr-2 text-text-secondary" onClick={closeArtifacts}>
<ArrowLeft className="h-4 w-4" />
</button>
<h3 className="truncate text-sm text-text-primary">{currentArtifact.title}</h3>
</div>
@@ -118,22 +107,8 @@ export default function Artifacts() {
{localize('com_ui_code')}
</Tabs.Trigger>
</Tabs.List>
<button
className="text-text-secondary"
onClick={() => {
setIsVisible(false);
setTimeout(() => setArtifactsVisible(false), 300);
}}
>
<svg
xmlns="http://www.w3.org/2000/svg"
width="16"
height="16"
fill="currentColor"
viewBox="0 0 256 256"
>
<path d="M205.66,194.34a8,8,0,0,1-11.32,11.32L128,139.31,61.66,205.66a8,8,0,0,1-11.32-11.32L116.69,128,50.34,61.66A8,8,0,0,1,61.66,50.34L128,116.69l66.34-66.35a8,8,0,0,1,11.32,11.32L139.31,128Z" />
</svg>
<button className="text-text-secondary" onClick={closeArtifacts}>
<X className="h-4 w-4" />
</button>
</div>
</div>
@@ -149,29 +124,13 @@ export default function Artifacts() {
<div className="flex items-center justify-between border-t border-border-medium bg-surface-primary-alt p-2 text-sm text-text-secondary">
<div className="flex items-center">
<button onClick={() => cycleArtifact('prev')} className="mr-2 text-text-secondary">
<svg
xmlns="http://www.w3.org/2000/svg"
width="16"
height="16"
fill="currentColor"
viewBox="0 0 256 256"
>
<path d="M165.66,202.34a8,8,0,0,1-11.32,11.32l-80-80a8,8,0,0,1,0-11.32l80-80a8,8,0,0,1,11.32,11.32L91.31,128Z" />
</svg>
<ChevronLeft className="h-4 w-4" />
</button>
<span className="text-xs">{`${currentIndex + 1} / ${
orderedArtifactIds.length
}`}</span>
<button onClick={() => cycleArtifact('next')} className="ml-2 text-text-secondary">
<svg
xmlns="http://www.w3.org/2000/svg"
width="16"
height="16"
fill="currentColor"
viewBox="0 0 256 256"
>
<path d="M181.66,133.66l-80,80a8,8,0,0,1-11.32-11.32L164.69,128,90.34,53.66a8,8,0,0,1,11.32-11.32l80,80A8,8,0,0,1,181.66,133.66Z" />
</svg>
<ChevronRight className="h-4 w-4" />
</button>
</div>
<div className="flex items-center gap-2">

View File

@@ -35,7 +35,7 @@ export const CodeMarkdown = memo(
const [userScrolled, setUserScrolled] = useState(false);
const currentContent = content;
const rehypePlugins = [
[rehypeKatex, { output: 'mathml' }],
[rehypeKatex],
[
rehypeHighlight,
{

View File

@@ -44,15 +44,6 @@ export default function ExportAndShareMenu({
};
const dropdownItems: t.MenuItemProps[] = [
{
label: localize('com_endpoint_export'),
onClick: exportHandler,
icon: <Upload className="icon-md mr-2 text-text-secondary" />,
/** NOTE: THE FOLLOWING PROPS ARE REQUIRED FOR MENU ITEMS THAT OPEN DIALOGS */
hideOnClick: false,
ref: exportButtonRef,
render: (props) => <button {...props} />,
},
{
label: localize('com_ui_share'),
onClick: shareHandler,
@@ -63,6 +54,15 @@ export default function ExportAndShareMenu({
ref: shareButtonRef,
render: (props) => <button {...props} />,
},
{
label: localize('com_endpoint_export'),
onClick: exportHandler,
icon: <Upload className="icon-md mr-2 text-text-secondary" />,
/** NOTE: THE FOLLOWING PROPS ARE REQUIRED FOR MENU ITEMS THAT OPEN DIALOGS */
hideOnClick: false,
ref: exportButtonRef,
render: (props) => <button {...props} />,
},
];
return (
@@ -70,6 +70,7 @@ export default function ExportAndShareMenu({
<DropdownPopup
menuId={menuId}
focusLoop={true}
unmountOnHide={true}
isOpen={isPopoverActive}
setIsOpen={setIsPopoverActive}
trigger={
@@ -81,7 +82,7 @@ export default function ExportAndShareMenu({
aria-label="Export options"
className="inline-flex size-10 flex-shrink-0 items-center justify-center rounded-xl border border-border-light bg-transparent text-text-primary transition-all ease-in-out hover:bg-surface-tertiary disabled:pointer-events-none disabled:opacity-50 radix-state-open:bg-surface-tertiary"
>
<Upload
<Share2
className="icon-md text-text-secondary"
aria-hidden="true"
focusable="false"

View File

@@ -108,6 +108,10 @@ const ChatForm = memo(({ index = 0 }: { index?: number }) => {
);
const handleContainerClick = useCallback(() => {
/** Check if the device is a touchscreen */
if (window.matchMedia?.('(pointer: coarse)').matches) {
return;
}
textAreaRef.current?.focus();
}, []);
@@ -126,6 +130,7 @@ const ChatForm = memo(({ index = 0 }: { index?: number }) => {
});
const { submitMessage, submitPrompt } = useSubmitMessage();
const handleKeyUp = useHandleKeyUp({
index,
textAreaRef,


@@ -41,9 +41,9 @@ const CollapseChat = ({
)}
>
{isCollapsed ? (
<ChevronDown className="h-full w-full" />
) : (
<ChevronUp className="h-full w-full" />
) : (
<ChevronDown className="h-full w-full" />
)}
</button>
}


@@ -119,6 +119,7 @@ const AttachFile = ({ disabled }: AttachFileProps) => {
isOpen={isPopoverActive}
setIsOpen={setIsPopoverActive}
modal={true}
unmountOnHide={true}
trigger={menuTrigger}
items={dropdownItems}
iconClassName="mr-0"


@@ -31,7 +31,8 @@ function MCPSelect({ conversationId }: { conversationId?: string | null }) {
select: (data) => {
const serverNames = new Set<string>();
data.forEach((tool) => {
if (tool.pluginKey.includes(Constants.mcp_delimiter)) {
const isMCP = tool.pluginKey.includes(Constants.mcp_delimiter);
if (isMCP && tool.chatMenu !== false) {
const parts = tool.pluginKey.split(Constants.mcp_delimiter);
serverNames.add(parts[parts.length - 1]);
}


@@ -44,7 +44,7 @@ export default function PopoverButtons({
const endpoint = overrideEndpoint ?? endpointType ?? _endpoint ?? '';
const model = overrideModel ?? _model;
const isGenerativeModel = model?.toLowerCase().includes('gemini') ?? false;
const isGenerativeModel = /gemini|learnlm|gemma/.test(model ?? '') ?? false;
const isChatModel = (!isGenerativeModel && model?.toLowerCase().includes('chat')) ?? false;
const isTextModel = !isGenerativeModel && !isChatModel && /code|text/.test(model ?? '');
@@ -133,7 +133,6 @@ export default function PopoverButtons({
</Button>
))}
</div>
{/* eslint-disable-next-line @typescript-eslint/no-unnecessary-condition */}
{disabled ? null : (
<div className="flex w-[150px] items-center justify-end">
{additionalButtons[settingsView].map((button, index) => (


@@ -160,6 +160,7 @@ const BookmarkMenu: FC = () => {
focusLoop={true}
menuId={menuId}
isOpen={isMenuOpen}
unmountOnHide={true}
setIsOpen={setIsMenuOpen}
keyPrefix={`${conversationId}-bookmark-`}
trigger={


@@ -113,9 +113,9 @@ const EditMessage = ({
messages.map((msg) =>
msg.messageId === messageId
? {
...msg,
text: data.text,
}
...msg,
text: data.text,
}
: msg,
),
);


@@ -184,7 +184,7 @@ const Markdown = memo(({ content = '', isLatestMessage }: TContentProps) => {
const rehypePlugins = useMemo(
() => [
[rehypeKatex, { output: 'mathml' }],
[rehypeKatex],
[
rehypeHighlight,
{


@@ -13,7 +13,7 @@ import { langSubset } from '~/utils';
const MarkdownLite = memo(
({ content = '', codeExecution = true }: { content?: string; codeExecution?: boolean }) => {
const rehypePlugins: PluggableList = [
[rehypeKatex, { output: 'mathml' }],
[rehypeKatex],
[
rehypeHighlight,
{


@@ -35,7 +35,7 @@ const TextPart = memo(({ text, isCreatedByUser, showCursor }: TextPartProps) =>
} else {
return <>{text}</>;
}
}, [isCreatedByUser, enableUserMsgMarkdown, text, showCursorState, isLatestMessage]);
}, [isCreatedByUser, enableUserMsgMarkdown, text, isLatestMessage]);
return (
<div


@@ -12,7 +12,7 @@ import store from '~/store';
export default function Presentation({ children }: { children: React.ReactNode }) {
const artifacts = useRecoilValue(store.artifactsState);
const artifactsVisible = useRecoilValue(store.artifactsVisible);
const artifactsVisibility = useRecoilValue(store.artifactsVisibility);
const setFilesToDelete = useSetFilesToDelete();
@@ -64,7 +64,7 @@ export default function Presentation({ children }: { children: React.ReactNode }
fullPanelCollapse={fullCollapse}
defaultCollapsed={defaultCollapsed}
artifacts={
artifactsVisible === true && Object.keys(artifacts ?? {}).length > 0 ? (
artifactsVisibility === true && Object.keys(artifacts ?? {}).length > 0 ? (
<EditorProvider>
<Artifacts />
</EditorProvider>


@@ -235,10 +235,11 @@ function ConvoOptions({
<DeleteButton
title={title ?? ''}
retainView={retainView}
conversationId={conversationId ?? ''}
showDeleteDialog={showDeleteDialog}
setShowDeleteDialog={setShowDeleteDialog}
triggerRef={deleteButtonRef}
setMenuOpen={setIsPopoverActive}
showDeleteDialog={showDeleteDialog}
conversationId={conversationId ?? ''}
setShowDeleteDialog={setShowDeleteDialog}
/>
)}
</>


@@ -4,13 +4,12 @@ import { useQueryClient } from '@tanstack/react-query';
import { useParams, useNavigate } from 'react-router-dom';
import type { TMessage } from 'librechat-data-provider';
import {
Label,
OGDialog,
OGDialogTitle,
OGDialogContent,
OGDialogHeader,
Button,
Spinner,
OGDialog,
OGDialogTitle,
OGDialogHeader,
OGDialogContent,
} from '~/components';
import { useDeleteConversationMutation } from '~/data-provider';
import { useLocalize, useNewConvo } from '~/hooks';
@@ -24,14 +23,17 @@ type DeleteButtonProps = {
showDeleteDialog?: boolean;
setShowDeleteDialog?: (value: boolean) => void;
triggerRef?: React.RefObject<HTMLButtonElement>;
setMenuOpen?: React.Dispatch<React.SetStateAction<boolean>>;
};
export function DeleteConversationDialog({
setShowDeleteDialog,
conversationId,
setMenuOpen,
retainView,
title,
}: {
setMenuOpen?: React.Dispatch<React.SetStateAction<boolean>>;
setShowDeleteDialog: (value: boolean) => void;
conversationId: string;
retainView: () => void;
@@ -51,6 +53,7 @@ export function DeleteConversationDialog({
newConversation();
navigate('/c/new', { replace: true });
}
setMenuOpen?.(false);
retainView();
},
onError: () => {
@@ -98,6 +101,7 @@ export default function DeleteButton({
conversationId,
retainView,
title,
setMenuOpen,
showDeleteDialog,
setShowDeleteDialog,
triggerRef,
@@ -115,6 +119,7 @@ export default function DeleteButton({
<DeleteConversationDialog
setShowDeleteDialog={setShowDeleteDialog}
conversationId={conversationId}
setMenuOpen={setMenuOpen}
retainView={retainView}
title={title}
/>


@@ -95,6 +95,7 @@ const PopoverButton: React.FC<PopoverButtonProps> = ({
gutter={16}
className="z-[999] w-80 rounded-md border border-border-medium bg-surface-secondary p-4 text-text-primary shadow-md"
portal={true}
unmountOnHide={true}
>
<div className="space-y-2">
<p className="flex flex-col gap-2 text-sm text-text-secondary">
@@ -179,33 +180,38 @@ export default function Fork({
return (
<>
<Ariakit.PopoverAnchor store={popoverStore}>
<button
className={cn(
'hover-button active rounded-md p-1 text-gray-500 hover:bg-gray-100 hover:text-gray-500 dark:text-gray-400/70 dark:hover:bg-gray-700 dark:hover:text-gray-200 disabled:dark:hover:text-gray-400 md:invisible md:group-hover:visible',
'data-[state=open]:active focus:opacity-100 data-[state=open]:bg-gray-100 data-[state=open]:text-gray-500 data-[state=open]:dark:bg-gray-700 data-[state=open]:dark:text-gray-200',
!isLast ? 'data-[state=open]:opacity-100 md:opacity-0 md:group-hover:opacity-100' : '',
)}
onClick={(e) => {
if (rememberGlobal) {
e.preventDefault();
forkConvo.mutate({
messageId,
splitAtTarget,
conversationId,
option: forkSetting,
latestMessageId,
});
} else {
popoverStore.toggle();
}
}}
type="button"
aria-label={localize('com_ui_fork')}
>
<GitFork className="h-4 w-4 hover:text-gray-500 dark:hover:bg-gray-700 dark:hover:text-gray-200 disabled:dark:hover:text-gray-400" />
</button>
</Ariakit.PopoverAnchor>
<Ariakit.PopoverAnchor
store={popoverStore}
render={
<button
className={cn(
'hover-button active rounded-md p-1 text-gray-500 hover:bg-gray-100 hover:text-gray-500 dark:text-gray-400/70 dark:hover:bg-gray-700 dark:hover:text-gray-200 disabled:dark:hover:text-gray-400 md:invisible md:group-hover:visible',
'data-[state=open]:active focus:opacity-100 data-[state=open]:bg-gray-100 data-[state=open]:text-gray-500 data-[state=open]:dark:bg-gray-700 data-[state=open]:dark:text-gray-200',
!isLast
? 'data-[state=open]:opacity-100 md:opacity-0 md:group-hover:opacity-100'
: '',
)}
onClick={(e) => {
if (rememberGlobal) {
e.preventDefault();
forkConvo.mutate({
messageId,
splitAtTarget,
conversationId,
option: forkSetting,
latestMessageId,
});
} else {
popoverStore.toggle();
}
}}
type="button"
aria-label={localize('com_ui_fork')}
>
<GitFork className="h-4 w-4 hover:text-gray-500 dark:hover:bg-gray-700 dark:hover:text-gray-200 disabled:dark:hover:text-gray-400" />
</button>
}
/>
<Ariakit.Popover
store={popoverStore}
gutter={5}
@@ -216,6 +222,7 @@ export default function Fork({
zIndex: 50,
}}
portal={true}
unmountOnHide={true}
>
<div className="flex h-8 w-full items-center justify-center text-sm text-text-primary">
{localize(activeSetting)}
@@ -240,6 +247,7 @@ export default function Fork({
gutter={19}
className="z-[999] w-80 rounded-md border border-border-medium bg-surface-secondary p-4 text-text-primary shadow-md"
portal={true}
unmountOnHide={true}
>
<div className="flex flex-col gap-2 space-y-2 text-sm text-text-secondary">
<span>{localize('com_ui_fork_info_1')}</span>
@@ -336,6 +344,7 @@ export default function Fork({
gutter={32}
className="z-[999] w-80 select-none rounded-md border border-border-medium bg-surface-secondary p-4 text-text-primary shadow-md"
portal={true}
unmountOnHide={true}
>
<div className="space-y-2">
<p className="text-sm text-text-secondary">{localize('com_ui_fork_info_start')}</p>
@@ -386,6 +395,7 @@ export default function Fork({
gutter={14}
className="z-[999] w-80 rounded-md border border-border-medium bg-surface-secondary p-4 text-text-primary shadow-md"
portal={true}
unmountOnHide={true}
>
<div className="space-y-2">
<p className="text-sm text-text-secondary">{localize('com_ui_fork_info_remember')}</p>


@@ -34,10 +34,7 @@ function getOpenAIColor(_model: string | null | undefined) {
function getGoogleIcon(model: string | null | undefined, size: number) {
if (model?.toLowerCase().includes('code') === true) {
return <CodeyIcon size={size * 0.75} />;
} else if (
model?.toLowerCase().includes('gemini') === true ||
model?.toLowerCase().includes('learnlm') === true
) {
} else if (/gemini|learnlm|gemma/.test(model?.toLowerCase() ?? '')) {
return <GeminiIcon size={size * 0.7} />;
} else {
return <PaLMIcon size={size * 0.7} />;
@@ -52,6 +49,8 @@ function getGoogleModelName(model: string | null | undefined) {
model?.toLowerCase().includes('learnlm') === true
) {
return 'Gemini';
} else if (model?.toLowerCase().includes('gemma') === true) {
return 'Gemma';
} else {
return 'PaLM2';
}


@@ -66,7 +66,7 @@ const AdminSettings = () => {
const [confirmAdminUseChange, setConfirmAdminUseChange] = useState<{
newValue: boolean;
callback: (value: boolean) => void;
} | null>(null);
} | null>(null);
const { mutate, isLoading } = useUpdatePromptPermissionsMutation({
onSuccess: () => {
showToast({ status: 'success', message: localize('com_ui_saved') });
@@ -166,6 +166,7 @@ const AdminSettings = () => {
<div className="flex items-center gap-2">
<span className="font-medium">{localize('com_ui_role_select')}:</span>
<DropdownPopup
unmountOnHide={true}
menuId="prompt-role-dropdown"
isOpen={isRoleMenuOpen}
setIsOpen={setIsRoleMenuOpen}
@@ -191,11 +192,11 @@ const AdminSettings = () => {
setValue={setValue}
{...(selectedRole === SystemRoles.ADMIN && promptPerm === Permissions.USE
? {
confirmChange: (
newValue: boolean,
onChange: (value: boolean) => void,
) => setConfirmAdminUseChange({ newValue, callback: onChange }),
}
confirmChange: (
newValue: boolean,
onChange: (value: boolean) => void,
) => setConfirmAdminUseChange({ newValue, callback: onChange }),
}
: {})}
/>
{selectedRole === SystemRoles.ADMIN && promptPerm === Permissions.USE && (


@@ -146,7 +146,7 @@ export default function VariableForm({
remarkPlugins={[supersub, remarkGfm, [remarkMath, { singleDollarTextMath: true }]]}
rehypePlugins={[
/** @ts-ignore */
[rehypeKatex, { output: 'mathml' }],
[rehypeKatex],
/** @ts-ignore */
[rehypeHighlight, { ignoreMissing: true }],
]}


@@ -59,7 +59,7 @@ const PromptDetails = ({ group }: { group?: TPromptGroup }) => {
]}
rehypePlugins={[
/** @ts-ignore */
[rehypeKatex, { output: 'mathml' }],
[rehypeKatex],
/** @ts-ignore */
[rehypeHighlight, { ignoreMissing: true }],
]}


@@ -43,7 +43,7 @@ const PromptEditor: React.FC<Props> = ({ name, isEditing, setIsEditing }) => {
}, [isEditing, prompt]);
const rehypePlugins: PluggableList = [
[rehypeKatex, { output: 'mathml' }],
[rehypeKatex],
[
rehypeHighlight,
{


@@ -157,6 +157,7 @@ const AdminSettings = () => {
<div className="flex items-center gap-2">
<span className="font-medium">{localize('com_ui_role_select')}:</span>
<DropdownPopup
unmountOnHide={true}
menuId="role-dropdown"
isOpen={isRoleMenuOpen}
setIsOpen={setIsRoleMenuOpen}


@@ -30,6 +30,7 @@ import type {
SharedLinksResponse,
} from 'librechat-data-provider';
import type { ConversationCursorData } from '~/utils/convos';
import { findConversationInInfinite } from '~/utils';
export const useGetPresetsQuery = (
config?: UseQueryOptions<TPreset[]>,
@@ -68,14 +69,13 @@ export const useGetConvoIdQuery = (
[QueryKeys.conversation, id],
() => {
// Try to find in all fetched infinite pages
const convosQuery = queryClient.getQueryData<InfiniteData<ConversationCursorData>>([
QueryKeys.allConversations,
]);
const found = convosQuery?.pages
.flatMap((page) => page.conversations)
.find((c) => c.conversationId === id);
const convosQuery = queryClient.getQueryData<InfiniteData<ConversationCursorData>>(
[QueryKeys.allConversations],
{ exact: false },
);
const found = findConversationInInfinite(convosQuery, id);
if (found) {
if (found && found.messages != null) {
return found;
}
// Otherwise, fetch from API


@@ -1,9 +1,9 @@
import { useMemo, useState, useEffect, useRef } from 'react';
import { Constants } from 'librechat-data-provider';
import { useRecoilState, useRecoilValue, useResetRecoilState } from 'recoil';
import { getLatestText, logger } from '~/utils';
import { useChatContext } from '~/Providers';
import { getKey } from '~/utils/artifacts';
import { getLatestText } from '~/utils';
import store from '~/store';
export default function useArtifacts() {
@@ -37,16 +37,20 @@ export default function useArtifacts() {
hasEnclosedArtifactRef.current = false;
};
if (
conversation &&
conversation.conversationId !== prevConversationIdRef.current &&
conversation?.conversationId !== prevConversationIdRef.current &&
prevConversationIdRef.current != null
) {
resetState();
} else if (conversation && conversation.conversationId === Constants.NEW_CONVO) {
} else if (conversation?.conversationId === Constants.NEW_CONVO) {
resetState();
}
prevConversationIdRef.current = conversation?.conversationId ?? null;
}, [conversation, resetArtifacts, resetCurrentArtifactId]);
/** Resets artifacts when unmounting */
return () => {
logger.log('artifacts_visibility', 'Unmounting artifacts');
resetState();
};
}, [conversation?.conversationId, resetArtifacts, resetCurrentArtifactId]);
useEffect(() => {
if (orderedArtifactIds.length > 0) {
@@ -56,30 +60,39 @@ export default function useArtifacts() {
}, [setCurrentArtifactId, orderedArtifactIds]);
useEffect(() => {
if (isSubmitting && orderedArtifactIds.length > 0 && latestMessage) {
const latestArtifactId = orderedArtifactIds[orderedArtifactIds.length - 1];
const latestArtifact = artifacts?.[latestArtifactId];
if (!isSubmitting) {
return;
}
if (orderedArtifactIds.length === 0) {
return;
}
if (latestMessage == null) {
return;
}
const latestArtifactId = orderedArtifactIds[orderedArtifactIds.length - 1];
const latestArtifact = artifacts?.[latestArtifactId];
if (latestArtifact?.content === lastContentRef.current) {
return;
}
if (latestArtifact?.content !== lastContentRef.current) {
setCurrentArtifactId(latestArtifactId);
lastContentRef.current = latestArtifact?.content ?? null;
setCurrentArtifactId(latestArtifactId);
lastContentRef.current = latestArtifact?.content ?? null;
const latestMessageText = getLatestText(latestMessage);
const hasEnclosedArtifact = /:::artifact[\s\S]*?(```|:::)\s*$/.test(
latestMessageText.trim(),
);
const latestMessageText = getLatestText(latestMessage);
const hasEnclosedArtifact =
/:::artifact(?:\{[^}]*\})?(?:\s|\n)*(?:```[\s\S]*?```(?:\s|\n)*)?:::/m.test(
latestMessageText.trim(),
);
if (hasEnclosedArtifact && !hasEnclosedArtifactRef.current) {
setActiveTab('preview');
hasEnclosedArtifactRef.current = true;
hasAutoSwitchedToCodeRef.current = false;
} else if (!hasEnclosedArtifactRef.current && !hasAutoSwitchedToCodeRef.current) {
const artifactStartContent = latestArtifact?.content?.slice(0, 50) ?? '';
if (artifactStartContent.length > 0 && latestMessageText.includes(artifactStartContent)) {
setActiveTab('code');
hasAutoSwitchedToCodeRef.current = true;
}
}
if (hasEnclosedArtifact && !hasEnclosedArtifactRef.current) {
setActiveTab('preview');
hasEnclosedArtifactRef.current = true;
hasAutoSwitchedToCodeRef.current = false;
} else if (!hasEnclosedArtifactRef.current && !hasAutoSwitchedToCodeRef.current) {
const artifactStartContent = latestArtifact?.content?.slice(0, 50) ?? '';
if (artifactStartContent.length > 0 && latestMessageText.includes(artifactStartContent)) {
setActiveTab('code');
hasAutoSwitchedToCodeRef.current = true;
}
}
}, [setCurrentArtifactId, isSubmitting, orderedArtifactIds, artifacts, latestMessage]);


@@ -11,7 +11,10 @@ export default function useFocusChatEffect(textAreaRef: React.RefObject<HTMLText
'conversation',
`Focusing textarea on location state change: ${location.pathname}`,
);
textAreaRef.current?.focus();
/** Check if the device is not a touchscreen */
if (!window.matchMedia?.('(pointer: coarse)').matches) {
textAreaRef.current?.focus();
}
navigate(`${location.pathname}${location.search ?? ''}`, { replace: true, state: {} });
}
}, [navigate, textAreaRef, location.pathname, location.state?.focusChat, location.search]);


@@ -4,18 +4,18 @@ import { logger } from '~/utils';
import store from '~/store';
/**
* Hook to reset artifacts when the conversation ID changes
* Hook to reset visible artifacts when the conversation ID changes
* @param conversationId - The current conversation ID
*/
export default function useIdChangeEffect(conversationId: string) {
const lastConvoId = useRef<string | null>(null);
const resetArtifacts = useResetRecoilState(store.artifactsState);
const resetVisibleArtifacts = useResetRecoilState(store.visibleArtifacts);
useEffect(() => {
if (conversationId !== lastConvoId.current) {
logger.log('conversation', 'Conversation ID change');
resetArtifacts();
resetVisibleArtifacts();
}
lastConvoId.current = conversationId;
}, [conversationId, resetArtifacts]);
}, [conversationId, resetVisibleArtifacts]);
}


@@ -1,7 +1,7 @@
import { useSetRecoilState } from 'recoil';
import { useNavigate } from 'react-router-dom';
import { useQueryClient } from '@tanstack/react-query';
import { QueryKeys, Constants } from 'librechat-data-provider';
import { QueryKeys, Constants, dataService } from 'librechat-data-provider';
import type { TConversation, TEndpointsConfig, TModelsConfig } from 'librechat-data-provider';
import { buildDefaultConvo, getDefaultEndpoint, getEndpointField, logger } from '~/utils';
import store from '~/store';
@@ -14,6 +14,27 @@ const useNavigateToConvo = (index = 0) => {
const clearAllLatestMessages = store.useClearLatestMessages(`useNavigateToConvo ${index}`);
const { hasSetConversation, setConversation } = store.useCreateConversationAtom(index);
const fetchFreshData = async (conversation?: Partial<TConversation>) => {
const conversationId = conversation?.conversationId;
if (!conversationId) {
return;
}
try {
const data = await queryClient.fetchQuery([QueryKeys.conversation, conversationId], () =>
dataService.getConversationById(conversationId),
);
logger.log('conversation', 'Fetched fresh conversation data', data);
setConversation(data);
navigate(`/c/${conversationId ?? Constants.NEW_CONVO}`, { state: { focusChat: true } });
} catch (error) {
console.error('Error fetching conversation data on navigation', error);
if (conversation) {
setConversation(conversation as TConversation);
navigate(`/c/${conversationId}`, { state: { focusChat: true } });
}
}
};
const navigateToConvo = (
conversation?: TConversation | null,
options?: {
@@ -58,9 +79,14 @@ const useNavigateToConvo = (index = 0) => {
});
}
clearAllConversations(true);
setConversation(convo);
queryClient.setQueryData([QueryKeys.messages, currentConvoId], []);
navigate(`/c/${convo.conversationId ?? Constants.NEW_CONVO}`, { state: { focusChat: true } });
if (convo.conversationId !== Constants.NEW_CONVO && convo.conversationId) {
queryClient.invalidateQueries([QueryKeys.conversation, convo.conversationId]);
fetchFreshData(convo);
} else {
setConversation(convo);
navigate(`/c/${convo.conversationId ?? Constants.NEW_CONVO}`, { state: { focusChat: true } });
}
};
return {


@@ -75,9 +75,9 @@ export const useAutoSave = ({
const { fileToRecover, fileIdToRecover } = fileData
? { fileToRecover: fileData, fileIdToRecover: fileId }
: {
fileToRecover: tempFileData,
fileIdToRecover: (tempFileData?.temp_file_id ?? '') || fileId,
};
fileToRecover: tempFileData,
fileIdToRecover: (tempFileData?.temp_file_id ?? '') || fileId,
};
if (fileToRecover) {
setFiles((currentFiles) => {
@@ -188,7 +188,7 @@ export const useAutoSave = ({
`${LocalStorageKeys.TEXT_DRAFT}${Constants.PENDING_CONVO}`,
);
// Clear the pending draft, if it exists, and save the current draft to the new conversationId;
// Clear the pending text draft, if it exists, and save the current draft to the new conversationId;
// otherwise, save the current text area value to the new conversationId
localStorage.removeItem(`${LocalStorageKeys.TEXT_DRAFT}${Constants.PENDING_CONVO}`);
if (pendingDraft) {
@@ -199,6 +199,21 @@ export const useAutoSave = ({
encodeBase64(textAreaRef.current.value),
);
}
const pendingFileDraft = localStorage.getItem(
`${LocalStorageKeys.FILES_DRAFT}${Constants.PENDING_CONVO}`,
);
if (pendingFileDraft) {
localStorage.setItem(
`${LocalStorageKeys.FILES_DRAFT}${conversationId}`,
pendingFileDraft,
);
localStorage.removeItem(`${LocalStorageKeys.FILES_DRAFT}${Constants.PENDING_CONVO}`);
const filesDraft = JSON.parse(pendingFileDraft || '[]') as string[];
if (filesDraft.length > 0) {
restoreFiles(conversationId);
}
}
} else if (currentConversationId != null && currentConversationId) {
saveText(currentConversationId);
}


@@ -0,0 +1,489 @@
// useQueryParams.spec.ts
jest.mock('recoil', () => {
const originalModule = jest.requireActual('recoil');
return {
...originalModule,
atom: jest.fn().mockImplementation((config) => ({
key: config.key,
default: config.default,
})),
useRecoilValue: jest.fn(),
};
});
// Move mock store definition after the mocks
jest.mock('~/store', () => ({
modularChat: { key: 'modularChat', default: false },
availableTools: { key: 'availableTools', default: [] },
}));
import { renderHook, act } from '@testing-library/react';
import { useSearchParams } from 'react-router-dom';
import { useQueryClient } from '@tanstack/react-query';
import { useRecoilValue } from 'recoil';
import useQueryParams from './useQueryParams';
import { useChatContext, useChatFormContext } from '~/Providers';
import useSubmitMessage from '~/hooks/Messages/useSubmitMessage';
import useDefaultConvo from '~/hooks/Conversations/useDefaultConvo';
import store from '~/store';
// Other mocks
jest.mock('react-router-dom', () => ({
useSearchParams: jest.fn(),
}));
jest.mock('@tanstack/react-query', () => ({
useQueryClient: jest.fn(),
}));
jest.mock('~/Providers', () => ({
useChatContext: jest.fn(),
useChatFormContext: jest.fn(),
}));
jest.mock('~/hooks/Messages/useSubmitMessage', () => ({
__esModule: true,
default: jest.fn(),
}));
jest.mock('~/hooks/Conversations/useDefaultConvo', () => ({
__esModule: true,
default: jest.fn(),
}));
jest.mock('~/utils', () => ({
getConvoSwitchLogic: jest.fn(() => ({
template: {},
shouldSwitch: false,
isNewModular: false,
newEndpointType: null,
isCurrentModular: false,
isExistingConversation: false,
})),
getModelSpecIconURL: jest.fn(() => 'icon-url'),
removeUnavailableTools: jest.fn((preset) => preset),
logger: { log: jest.fn() },
}));
// Mock the tQueryParamsSchema
jest.mock('librechat-data-provider', () => ({
...jest.requireActual('librechat-data-provider'),
tQueryParamsSchema: {
shape: {
model: { parse: jest.fn((value) => value) },
endpoint: { parse: jest.fn((value) => value) },
temperature: { parse: jest.fn((value) => value) },
// Add other schema shapes as needed
},
},
isAgentsEndpoint: jest.fn(() => false),
isAssistantsEndpoint: jest.fn(() => false),
QueryKeys: { startupConfig: 'startupConfig', endpoints: 'endpoints' },
EModelEndpoint: { custom: 'custom', assistants: 'assistants', agents: 'agents' },
}));
// Mock global window.history
global.window = Object.create(window);
global.window.history = {
replaceState: jest.fn(),
pushState: jest.fn(),
go: jest.fn(),
back: jest.fn(),
forward: jest.fn(),
length: 1,
scrollRestoration: 'auto',
state: null,
};
describe('useQueryParams', () => {
// Setup common mocks before each test
beforeEach(() => {
jest.useFakeTimers();
// Reset mock for window.history.replaceState
jest.spyOn(window.history, 'replaceState').mockClear();
// Create mocks for all dependencies
const mockSearchParams = new URLSearchParams();
(useSearchParams as jest.Mock).mockReturnValue([mockSearchParams, jest.fn()]);
const mockQueryClient = {
getQueryData: jest.fn().mockImplementation((key) => {
if (key === 'startupConfig') {
return { modelSpecs: { list: [] } };
}
if (key === 'endpoints') {
return {};
}
return null;
}),
};
(useQueryClient as jest.Mock).mockReturnValue(mockQueryClient);
(useRecoilValue as jest.Mock).mockImplementation((atom) => {
if (atom === store.modularChat) return false;
if (atom === store.availableTools) return [];
return null;
});
const mockConversation = { model: null, endpoint: null };
const mockNewConversation = jest.fn();
(useChatContext as jest.Mock).mockReturnValue({
conversation: mockConversation,
newConversation: mockNewConversation,
});
const mockMethods = {
setValue: jest.fn(),
getValues: jest.fn().mockReturnValue(''),
handleSubmit: jest.fn((callback) => () => callback({ text: 'test message' })),
};
(useChatFormContext as jest.Mock).mockReturnValue(mockMethods);
const mockSubmitMessage = jest.fn();
(useSubmitMessage as jest.Mock).mockReturnValue({
submitMessage: mockSubmitMessage,
});
const mockGetDefaultConversation = jest.fn().mockReturnValue({});
(useDefaultConvo as jest.Mock).mockReturnValue(mockGetDefaultConversation);
});
afterEach(() => {
jest.clearAllMocks();
jest.useRealTimers();
});
// Helper function to set URL parameters for testing
const setUrlParams = (params: Record<string, string>) => {
const searchParams = new URLSearchParams();
Object.entries(params).forEach(([key, value]) => {
searchParams.set(key, value);
});
(useSearchParams as jest.Mock).mockReturnValue([searchParams, jest.fn()]);
};
// Test cases remain the same
it('should process query parameters on initial render', () => {
// Setup
const mockSetValue = jest.fn();
const mockTextAreaRef = {
current: {
focus: jest.fn(),
setSelectionRange: jest.fn(),
} as unknown as HTMLTextAreaElement,
};
(useChatFormContext as jest.Mock).mockReturnValue({
setValue: mockSetValue,
getValues: jest.fn().mockReturnValue(''),
handleSubmit: jest.fn((callback) => () => callback({ text: 'test message' })),
});
// Mock startup config to allow processing
(useQueryClient as jest.Mock).mockReturnValue({
getQueryData: jest.fn().mockReturnValue({ modelSpecs: { list: [] } }),
});
setUrlParams({ q: 'hello world' });
// Execute
renderHook(() => useQueryParams({ textAreaRef: mockTextAreaRef }));
// Advance timer to trigger interval
act(() => {
jest.advanceTimersByTime(100);
});
// Assert
expect(mockSetValue).toHaveBeenCalledWith(
'text',
'hello world',
expect.objectContaining({ shouldValidate: true }),
);
expect(window.history.replaceState).toHaveBeenCalled();
});
it('should auto-submit message when submit=true and no settings to apply', () => {
// Setup
const mockSetValue = jest.fn();
const mockHandleSubmit = jest.fn((callback) => () => callback({ text: 'test message' }));
const mockSubmitMessage = jest.fn();
const mockTextAreaRef = {
current: {
focus: jest.fn(),
setSelectionRange: jest.fn(),
} as unknown as HTMLTextAreaElement,
};
(useChatFormContext as jest.Mock).mockReturnValue({
setValue: mockSetValue,
getValues: jest.fn().mockReturnValue(''),
handleSubmit: mockHandleSubmit,
});
(useSubmitMessage as jest.Mock).mockReturnValue({
submitMessage: mockSubmitMessage,
});
// Mock startup config to allow processing
(useQueryClient as jest.Mock).mockReturnValue({
getQueryData: jest.fn().mockReturnValue({ modelSpecs: { list: [] } }),
});
setUrlParams({ q: 'hello world', submit: 'true' });
// Execute
renderHook(() => useQueryParams({ textAreaRef: mockTextAreaRef }));
// Advance timer to trigger interval
act(() => {
jest.advanceTimersByTime(100);
});
// Assert
expect(mockSetValue).toHaveBeenCalledWith(
'text',
'hello world',
expect.objectContaining({ shouldValidate: true }),
);
expect(mockHandleSubmit).toHaveBeenCalled();
expect(mockSubmitMessage).toHaveBeenCalled();
});
it('should defer submission when settings need to be applied first', () => {
// Setup
const mockSetValue = jest.fn();
const mockHandleSubmit = jest.fn((callback) => () => callback({ text: 'test message' }));
const mockSubmitMessage = jest.fn();
const mockNewConversation = jest.fn();
const mockTextAreaRef = {
current: {
focus: jest.fn(),
setSelectionRange: jest.fn(),
} as unknown as HTMLTextAreaElement,
};
// Mock getQueryData to return array format for startupConfig
const mockGetQueryData = jest.fn().mockImplementation((key) => {
if (Array.isArray(key) && key[0] === 'startupConfig') {
return { modelSpecs: { list: [] } };
}
if (key === 'startupConfig') {
return { modelSpecs: { list: [] } };
}
return null;
});
(useChatFormContext as jest.Mock).mockReturnValue({
setValue: mockSetValue,
getValues: jest.fn().mockReturnValue(''),
handleSubmit: mockHandleSubmit,
});
(useSubmitMessage as jest.Mock).mockReturnValue({
submitMessage: mockSubmitMessage,
});
(useChatContext as jest.Mock).mockReturnValue({
conversation: { model: null, endpoint: null },
newConversation: mockNewConversation,
});
(useQueryClient as jest.Mock).mockReturnValue({
getQueryData: mockGetQueryData,
});
setUrlParams({ q: 'hello world', submit: 'true', model: 'gpt-4' });
// Execute
const { rerender } = renderHook(() => useQueryParams({ textAreaRef: mockTextAreaRef }));
// First interval tick should process params but not submit
act(() => {
jest.advanceTimersByTime(100);
});
// Assert initial state
expect(mockGetQueryData).toHaveBeenCalledWith(expect.anything());
expect(mockNewConversation).toHaveBeenCalled();
expect(mockSubmitMessage).not.toHaveBeenCalled(); // Not submitted yet
// Now mock conversation update to trigger settings application check
(useChatContext as jest.Mock).mockReturnValue({
conversation: { model: 'gpt-4', endpoint: null },
newConversation: mockNewConversation,
});
// Re-render to trigger the effect that watches for settings
rerender();
// Now the message should be submitted
expect(mockSetValue).toHaveBeenCalledWith(
'text',
'hello world',
expect.objectContaining({ shouldValidate: true }),
);
expect(mockHandleSubmit).toHaveBeenCalled();
expect(mockSubmitMessage).toHaveBeenCalled();
});
it('should submit after timeout if settings never get applied', () => {
// Setup
const mockSetValue = jest.fn();
const mockHandleSubmit = jest.fn((callback) => () => callback({ text: 'test message' }));
const mockSubmitMessage = jest.fn();
const mockNewConversation = jest.fn();
const mockTextAreaRef = {
current: {
focus: jest.fn(),
setSelectionRange: jest.fn(),
} as unknown as HTMLTextAreaElement,
};
(useChatFormContext as jest.Mock).mockReturnValue({
setValue: mockSetValue,
getValues: jest.fn().mockReturnValue(''),
handleSubmit: mockHandleSubmit,
});
(useSubmitMessage as jest.Mock).mockReturnValue({
submitMessage: mockSubmitMessage,
});
(useChatContext as jest.Mock).mockReturnValue({
conversation: { model: null, endpoint: null },
newConversation: mockNewConversation,
});
// Mock startup config to allow processing
(useQueryClient as jest.Mock).mockReturnValue({
getQueryData: jest.fn().mockImplementation((key) => {
if (Array.isArray(key) && key[0] === 'startupConfig') {
return { modelSpecs: { list: [] } };
}
if (key === 'startupConfig') {
return { modelSpecs: { list: [] } };
}
return null;
}),
});
setUrlParams({ q: 'hello world', submit: 'true', model: 'non-existent-model' });
// Execute
renderHook(() => useQueryParams({ textAreaRef: mockTextAreaRef }));
// First interval tick should process params but not submit
act(() => {
jest.advanceTimersByTime(100);
});
// Assert initial state
expect(mockSubmitMessage).not.toHaveBeenCalled(); // Not submitted yet
// Let the timeout happen naturally
act(() => {
// Advance timer to trigger the timeout in the hook
jest.advanceTimersByTime(3000); // MAX_SETTINGS_WAIT_MS
});
// Now the message should be submitted due to timeout
expect(mockSubmitMessage).toHaveBeenCalled();
});
it('should mark as submitted when no submit parameter is present', () => {
// Setup
const mockSetValue = jest.fn();
const mockHandleSubmit = jest.fn((callback) => () => callback({ text: 'test message' }));
const mockSubmitMessage = jest.fn();
const mockTextAreaRef = {
current: {
focus: jest.fn(),
setSelectionRange: jest.fn(),
} as unknown as HTMLTextAreaElement,
};
(useChatFormContext as jest.Mock).mockReturnValue({
setValue: mockSetValue,
getValues: jest.fn().mockReturnValue(''),
handleSubmit: mockHandleSubmit,
});
(useSubmitMessage as jest.Mock).mockReturnValue({
submitMessage: mockSubmitMessage,
});
// Mock startup config to allow processing
(useQueryClient as jest.Mock).mockReturnValue({
getQueryData: jest.fn().mockReturnValue({ modelSpecs: { list: [] } }),
});
setUrlParams({ model: 'gpt-4' }); // No submit=true
// Execute
renderHook(() => useQueryParams({ textAreaRef: mockTextAreaRef }));
// First interval tick should process params
act(() => {
jest.advanceTimersByTime(100);
});
// Assert initial state - submission should be marked as handled
expect(mockSubmitMessage).not.toHaveBeenCalled();
// Try to advance timer past the timeout
act(() => {
jest.advanceTimersByTime(4000);
});
// Submission still shouldn't happen
expect(mockSubmitMessage).not.toHaveBeenCalled();
});
it('should handle empty query parameters', () => {
// Setup
const mockSetValue = jest.fn();
const mockHandleSubmit = jest.fn();
const mockSubmitMessage = jest.fn();
// Force replaceState to be called
window.history.replaceState = jest.fn();
(useChatFormContext as jest.Mock).mockReturnValue({
setValue: mockSetValue,
getValues: jest.fn().mockReturnValue(''),
handleSubmit: mockHandleSubmit,
});
(useSubmitMessage as jest.Mock).mockReturnValue({
submitMessage: mockSubmitMessage,
});
// Mock startup config to allow processing
(useQueryClient as jest.Mock).mockReturnValue({
getQueryData: jest.fn().mockReturnValue({ modelSpecs: { list: [] } }),
});
setUrlParams({}); // Empty params
const mockTextAreaRef = {
current: {
focus: jest.fn(),
setSelectionRange: jest.fn(),
} as unknown as HTMLTextAreaElement,
};
// Execute
renderHook(() => useQueryParams({ textAreaRef: mockTextAreaRef }));
act(() => {
jest.advanceTimersByTime(100);
});
// Assert
expect(mockSetValue).not.toHaveBeenCalled();
expect(mockHandleSubmit).not.toHaveBeenCalled();
expect(mockSubmitMessage).not.toHaveBeenCalled();
expect(window.history.replaceState).toHaveBeenCalled();
});
});


@@ -17,6 +17,10 @@ import { useChatContext, useChatFormContext } from '~/Providers';
import useSubmitMessage from '~/hooks/Messages/useSubmitMessage';
import store from '~/store';
/**
* Parses query parameter values, converting strings to their appropriate types.
* Handles boolean strings, numbers, and preserves regular strings.
*/
const parseQueryValue = (value: string) => {
if (value === 'true') {
return true;
@@ -30,6 +34,11 @@ const parseQueryValue = (value: string) => {
return value;
};
/**
* Processes and validates URL query parameters using schema definitions.
* Extracts valid settings based on tQueryParamsSchema and handles special endpoint cases
* for assistants and agents.
*/
const processValidSettings = (queryParams: Record<string, string>) => {
const validSettings = {} as TPreset;
@@ -64,6 +73,11 @@ const processValidSettings = (queryParams: Record<string, string>) => {
return validSettings;
};
/**
* Hook that processes URL query parameters to initialize chat with specified settings and prompt.
* Handles model switching, prompt auto-filling, and optional auto-submission with race condition protection.
* Supports immediate or deferred submission based on whether settings need to be applied first.
*/
export default function useQueryParams({
textAreaRef,
}: {
@@ -71,7 +85,15 @@ export default function useQueryParams({
}) {
const maxAttempts = 50;
const attemptsRef = useRef(0);
const MAX_SETTINGS_WAIT_MS = 3000;
const processedRef = useRef(false);
const pendingSubmitRef = useRef(false);
const settingsAppliedRef = useRef(false);
const submissionHandledRef = useRef(false);
const promptTextRef = useRef<string | null>(null);
const validSettingsRef = useRef<TPreset | null>(null);
const settingsTimeoutRef = useRef<ReturnType<typeof setTimeout> | null>(null);
const methods = useChatFormContext();
const [searchParams] = useSearchParams();
const getDefaultConversation = useDefaultConvo();
@@ -82,6 +104,11 @@ export default function useQueryParams({
const queryClient = useQueryClient();
const { conversation, newConversation } = useChatContext();
/**
* Applies settings from URL query parameters to create a new conversation.
* Handles model spec lookup, endpoint normalization, and conversation switching logic.
* Ensures tools compatibility and preserves existing conversation when appropriate.
*/
const newQueryConvo = useCallback(
(_newPreset?: TPreset) => {
if (!_newPreset) {
@@ -181,6 +208,85 @@ export default function useQueryParams({
],
);
/**
* Checks if all settings from URL parameters have been successfully applied to the conversation.
* Compares values from validSettings against the current conversation state, handling special properties.
* Returns true only when all relevant settings match the target values.
*/
const areSettingsApplied = useCallback(() => {
if (!validSettingsRef.current || !conversation) {
return false;
}
for (const [key, value] of Object.entries(validSettingsRef.current)) {
if (['presetOverride', 'iconURL', 'spec', 'modelLabel'].includes(key)) {
continue;
}
if (conversation[key] !== value) {
return false;
}
}
return true;
}, [conversation]);
/**
* Processes message submission exactly once, preventing duplicate submissions.
* Sets the prompt text, submits the message, and cleans up URL parameters afterward.
* Has internal guards to ensure it only executes once regardless of how many times it's called.
*/
const processSubmission = useCallback(() => {
if (submissionHandledRef.current || !pendingSubmitRef.current || !promptTextRef.current) {
return;
}
submissionHandledRef.current = true;
pendingSubmitRef.current = false;
methods.setValue('text', promptTextRef.current, { shouldValidate: true });
methods.handleSubmit((data) => {
if (data.text?.trim()) {
submitMessage(data);
const newUrl = window.location.pathname;
window.history.replaceState({}, '', newUrl);
console.log('Message submitted with conversation state:', conversation);
}
})();
}, [methods, submitMessage, conversation]);
useEffect(() => {
// Only proceed if we've already processed URL parameters but haven't yet handled submission
if (
!processedRef.current ||
submissionHandledRef.current ||
settingsAppliedRef.current ||
!validSettingsRef.current ||
!conversation
) {
return;
}
const allSettingsApplied = areSettingsApplied();
if (allSettingsApplied) {
settingsAppliedRef.current = true;
if (pendingSubmitRef.current) {
if (settingsTimeoutRef.current) {
clearTimeout(settingsTimeoutRef.current);
settingsTimeoutRef.current = null;
}
console.log('Settings fully applied, processing submission');
processSubmission();
}
}
}, [conversation, processSubmission, areSettingsApplied]);
useEffect(() => {
const processQueryParams = () => {
const queryParams: Record<string, string> = {};
@@ -217,31 +323,68 @@ export default function useQueryParams({
if (!startupConfig) {
return;
}
const { decodedPrompt, validSettings, shouldAutoSubmit } = processQueryParams();
const currentText = methods.getValues('text');
/** Clean up URL parameters after successful processing */
const { decodedPrompt, validSettings, shouldAutoSubmit } = processQueryParams();
if (!shouldAutoSubmit) {
submissionHandledRef.current = true;
}
/** Mark processing as complete and clean up as needed */
const success = () => {
const newUrl = window.location.pathname;
window.history.replaceState({}, '', newUrl);
processedRef.current = true;
console.log('Parameters processed successfully');
clearInterval(intervalId);
// Only clean URL if there's no pending submission
if (!pendingSubmitRef.current) {
const newUrl = window.location.pathname;
window.history.replaceState({}, '', newUrl);
}
};
if (!currentText && decodedPrompt) {
methods.setValue('text', decodedPrompt, { shouldValidate: true });
textAreaRef.current.focus();
textAreaRef.current.setSelectionRange(decodedPrompt.length, decodedPrompt.length);
// Store settings for later comparison
if (Object.keys(validSettings).length > 0) {
validSettingsRef.current = validSettings;
}
// Save the prompt text for later use if needed
if (decodedPrompt) {
promptTextRef.current = decodedPrompt;
}
// Handle auto-submission
if (shouldAutoSubmit && decodedPrompt) {
if (Object.keys(validSettings).length > 0) {
// Settings are changing, defer submission
pendingSubmitRef.current = true;
// Set a timeout to handle the case where settings might never fully apply
settingsTimeoutRef.current = setTimeout(() => {
if (!submissionHandledRef.current && pendingSubmitRef.current) {
console.warn(
'Settings application timeout reached, proceeding with submission anyway',
);
processSubmission();
}
}, MAX_SETTINGS_WAIT_MS);
} else {
methods.setValue('text', decodedPrompt, { shouldValidate: true });
textAreaRef.current.focus();
textAreaRef.current.setSelectionRange(decodedPrompt.length, decodedPrompt.length);
// Auto-submit if the submit parameter is true
if (shouldAutoSubmit) {
methods.handleSubmit((data) => {
if (data.text?.trim()) {
submitMessage(data);
}
})();
}
} else if (decodedPrompt) {
methods.setValue('text', decodedPrompt, { shouldValidate: true });
textAreaRef.current.focus();
textAreaRef.current.setSelectionRange(decodedPrompt.length, decodedPrompt.length);
} else {
submissionHandledRef.current = true;
}
if (Object.keys(validSettings).length > 0) {
@@ -253,6 +396,18 @@ export default function useQueryParams({
return () => {
clearInterval(intervalId);
if (settingsTimeoutRef.current) {
clearTimeout(settingsTimeoutRef.current);
}
};
}, [searchParams, methods, textAreaRef, newQueryConvo, newConversation, submitMessage]);
}, [
searchParams,
methods,
textAreaRef,
newQueryConvo,
newConversation,
submitMessage,
queryClient,
processSubmission,
]);
}


@@ -55,8 +55,12 @@ export default function useSideNavLinks({
const links: NavLink[] = [];
if (
isAssistantsEndpoint(endpoint) &&
endpointsConfig?.[EModelEndpoint.assistants] &&
endpointsConfig[EModelEndpoint.assistants].disableBuilder !== true &&
((endpoint === EModelEndpoint.assistants &&
endpointsConfig?.[EModelEndpoint.assistants] &&
endpointsConfig[EModelEndpoint.assistants].disableBuilder !== true) ||
(endpoint === EModelEndpoint.azureAssistants &&
endpointsConfig?.[EModelEndpoint.azureAssistants] &&
endpointsConfig[EModelEndpoint.azureAssistants].disableBuilder !== true)) &&
keyProvided
) {
links.push({


@@ -467,6 +467,14 @@ export default function useEventHandlers({
[QueryKeys.messages, conversation.conversationId],
finalMessages,
);
} else if (
isAssistantsEndpoint(submissionConvo.endpoint) &&
(!submissionConvo.conversationId || submissionConvo.conversationId === Constants.NEW_CONVO)
) {
queryClient.setQueryData<TMessage[]>(
[QueryKeys.messages, conversation.conversationId],
[...currentMessages],
);
}
const isNewConvo = conversation.conversationId !== submissionConvo.conversationId;


@@ -5,6 +5,7 @@ import App from './App';
import './style.css';
import './mobile.css';
import { ApiErrorBoundaryProvider } from './hooks/ApiErrorBoundaryContext';
import 'katex/dist/katex.min.css';
const container = document.getElementById('root');
const root = createRoot(container);


@@ -43,7 +43,8 @@ export default function ChatRoute() {
refetchOnMount: 'always',
});
const initialConvoQuery = useGetConvoIdQuery(conversationId, {
enabled: isAuthenticated && conversationId !== Constants.NEW_CONVO,
enabled:
isAuthenticated && conversationId !== Constants.NEW_CONVO && !hasSetConversation.current,
});
const endpointsQuery = useGetEndpointsQuery({ enabled: isAuthenticated });
const assistantListMap = useAssistantListMap();


@@ -32,13 +32,28 @@ export const currentArtifactId = atom<string | null>({
] as const,
});
export const artifactsVisible = atom<boolean>({
key: 'artifactsVisible',
export const artifactsVisibility = atom<boolean>({
key: 'artifactsVisibility',
default: true,
effects: [
({ onSet, node }) => {
onSet(async (newValue) => {
logger.log('artifacts', 'Recoil Effect: Setting artifactsVisible', {
logger.log('artifacts', 'Recoil Effect: Setting artifactsVisibility', {
key: node.key,
newValue,
});
});
},
] as const,
});
export const visibleArtifacts = atom<Record<string, Artifact | undefined> | null>({
key: 'visibleArtifacts',
default: null,
effects: [
({ onSet, node }) => {
onSet(async (newValue) => {
logger.log('artifacts', 'Recoil Effect: Setting `visibleArtifacts`', {
key: node.key,
newValue,
});


@@ -1,94 +0,0 @@
import { preprocessCodeArtifacts } from './artifacts';
describe('preprocessCodeArtifacts', () => {
test('should return non-string inputs unchanged', () => {
expect(preprocessCodeArtifacts(123 as unknown as string)).toBe('');
expect(preprocessCodeArtifacts(null as unknown as string)).toBe('');
expect(preprocessCodeArtifacts(undefined)).toBe('');
expect(preprocessCodeArtifacts({} as unknown as string)).toEqual('');
});
test('should remove <thinking> tags and their content', () => {
const input = '<thinking>This should be removed</thinking>Some content';
const expected = 'Some content';
expect(preprocessCodeArtifacts(input)).toBe(expected);
});
test('should remove unclosed <thinking> tags and their content', () => {
const input = '<thinking>This should be removed\nSome content';
const expected = '';
expect(preprocessCodeArtifacts(input)).toBe(expected);
});
test('should remove artifact headers up to and including empty code block', () => {
const input = ':::artifact{identifier="test"}\n```\n```\nSome content';
const expected = ':::artifact{identifier="test"}\n```\n```\nSome content';
expect(preprocessCodeArtifacts(input)).toBe(expected);
});
test('should keep artifact headers when followed by empty code block and content', () => {
const input = ':::artifact{identifier="test"}\n```\n```\nSome content';
const expected = ':::artifact{identifier="test"}\n```\n```\nSome content';
expect(preprocessCodeArtifacts(input)).toBe(expected);
});
test('should handle multiple artifact headers correctly', () => {
const input = ':::artifact{id="1"}\n```\n```\n:::artifact{id="2"}\n```\ncode\n```\nContent';
const expected = ':::artifact{id="1"}\n```\n```\n:::artifact{id="2"}\n```\ncode\n```\nContent';
expect(preprocessCodeArtifacts(input)).toBe(expected);
});
test('should handle complex input with multiple patterns', () => {
const input = `
<thinking>Remove this</thinking>
Some text
:::artifact{id="1"}
\`\`\`
\`\`\`
<thinking>And this</thinking>
:::artifact{id="2"}
\`\`\`
keep this code
\`\`\`
More text
`;
const expected = `
Some text
:::artifact{id="1"}
\`\`\`
\`\`\`
:::artifact{id="2"}
\`\`\`
keep this code
\`\`\`
More text
`;
expect(preprocessCodeArtifacts(input)).toBe(expected);
});
test('should remove artifact headers without code blocks', () => {
const input = ':::artifact{identifier="test"}\nSome content without code block';
const expected = '';
expect(preprocessCodeArtifacts(input)).toBe(expected);
});
test('should remove artifact headers up to incomplete code block', () => {
const input = ':::artifact{identifier="react-cal';
const expected = '';
expect(preprocessCodeArtifacts(input)).toBe(expected);
});
test('should keep artifact headers when any character follows code block', () => {
const input = ':::artifact{identifier="react-calculator"}\n```t';
const expected = ':::artifact{identifier="react-calculator"}\n```t';
expect(preprocessCodeArtifacts(input)).toBe(expected);
});
test('should keep artifact headers when whitespace follows code block', () => {
const input = ':::artifact{identifier="react-calculator"}\n``` ';
const expected = ':::artifact{identifier="react-calculator"}\n``` ';
expect(preprocessCodeArtifacts(input)).toBe(expected);
});
});


@@ -214,23 +214,3 @@ export const sharedFiles = {
</html>
`,
};
export function preprocessCodeArtifacts(text?: string): string {
if (typeof text !== 'string') {
return '';
}
// Remove <thinking> tags and their content
text = text.replace(/<thinking>[\s\S]*?<\/thinking>|<thinking>[\s\S]*/g, '');
// Process artifact headers
const regex = /(^|\n)(:::artifact[\s\S]*?(?:```[\s\S]*?```|$))/g;
return text.replace(regex, (match, newline, artifactBlock) => {
if (artifactBlock.includes('```') === true) {
// Keep artifact headers with code blocks (empty or not)
return newline + artifactBlock;
}
// Remove artifact headers without code blocks, but keep the newline
return newline;
});
}


@@ -4,7 +4,7 @@ import {
isAssistantsEndpoint,
isAgentsEndpoint,
} from 'librechat-data-provider';
import type { TConversation } from 'librechat-data-provider';
import type { TConversation, EndpointSchemaKey } from 'librechat-data-provider';
import { getLocalStorageItems } from './localStorage';
const buildDefaultConvo = ({
@@ -51,8 +51,8 @@ const buildDefaultConvo = ({
}
const convo = parseConvo({
endpoint,
endpointType,
endpoint: endpoint as EndpointSchemaKey,
endpointType: endpointType as EndpointSchemaKey,
conversation: lastConversationSetup,
possibleValues: {
models: possibleModels,
@@ -68,7 +68,7 @@ const buildDefaultConvo = ({
};
// Ensures assistant_id is always defined
const assistantId = convo?.assistant_id ?? '';
const assistantId = convo?.assistant_id ?? conversation?.assistant_id ?? '';
const defaultAssistantId = lastConversationSetup?.assistant_id ?? '';
if (isAssistantsEndpoint(endpoint) && !defaultAssistantId && assistantId) {
defaultConvo.assistant_id = assistantId;


@@ -431,14 +431,14 @@ describe('Conversation Utilities', () => {
pageParams: [],
};
const newConvo = makeConversation('new');
const updated = addConversationToInfinitePages(data, newConvo);
const updated = addConversationToInfinitePages(data, newConvo as TConversation);
expect(updated.pages[0].conversations[0].conversationId).toBe('new');
expect(updated.pages[0].conversations[1].conversationId).toBe('1');
});
it('creates new InfiniteData if data is undefined', () => {
const newConvo = makeConversation('new');
const updated = addConversationToInfinitePages(undefined, newConvo);
const updated = addConversationToInfinitePages(undefined, newConvo as TConversation);
expect(updated.pages[0].conversations[0].conversationId).toBe('new');
expect(updated.pageParams).toEqual([undefined]);
});
@@ -531,12 +531,12 @@ describe('Conversation Utilities', () => {
it('stores model for endpoint', () => {
const conversation = {
conversationId: '1',
endpoint: 'openai',
endpoint: 'openAI',
model: 'gpt-3',
};
storeEndpointSettings(conversation as any);
const stored = JSON.parse(localStorage.getItem('lastModel') || '{}');
expect([undefined, 'gpt-3']).toContain(stored.openai);
expect([undefined, 'gpt-3']).toContain(stored.openAI);
});
it('stores secondaryModel for gptPlugins endpoint', () => {
@@ -574,14 +574,14 @@ describe('Conversation Utilities', () => {
conversationId: 'a',
updatedAt: '2024-01-01T12:00:00Z',
createdAt: '2024-01-01T10:00:00Z',
endpoint: 'openai',
endpoint: 'openAI',
model: 'gpt-3',
title: 'Conversation A',
} as TConversation;
convoB = {
conversationId: 'b',
updatedAt: '2024-01-02T12:00:00Z',
endpoint: 'openai',
endpoint: 'openAI',
model: 'gpt-3',
} as TConversation;
queryClient.setQueryData(['allConversations'], {


@@ -280,11 +280,11 @@ export function updateConvoFieldsInfinite(
pages: data.pages.map((page, pi) =>
pi === pageIdx
? {
...page,
conversations: page.conversations.map((c, ci) =>
ci === convoIdx ? { ...c, ...updatedConversation } : c,
),
}
...page,
conversations: page.conversations.map((c, ci) =>
ci === convoIdx ? { ...c, ...updatedConversation } : c,
),
}
: page,
),
};

View File

@@ -103,12 +103,15 @@ export function processPlugins(
export function mapToolCalls(toolCalls: t.ToolCallResults = []): {
[key: string]: t.ToolCallResult[] | undefined;
} {
return toolCalls.reduce((acc, call) => {
const key = `${call.messageId}_${call.partIndex ?? 0}_${call.blockIndex ?? 0}_${call.toolId}`;
const array = acc[key] ?? [];
array.push(call);
acc[key] = array;
return toolCalls.reduce(
(acc, call) => {
const key = `${call.messageId}_${call.partIndex ?? 0}_${call.blockIndex ?? 0}_${call.toolId}`;
const array = acc[key] ?? [];
array.push(call);
acc[key] = array;
return acc;
}, {} as { [key: string]: t.ToolCallResult[] | undefined });
return acc;
},
{} as { [key: string]: t.ToolCallResult[] | undefined },
);
}

View File

@@ -1,3 +1,3 @@
// v0.7.8-rc1
// v0.7.8
// See .env.test.example for an example of the '.env.test' file.
require('dotenv').config({ path: './e2e/.env.test' });


@@ -4,6 +4,7 @@ import typescriptEslintEslintPlugin from '@typescript-eslint/eslint-plugin';
import { fixupConfigRules, fixupPluginRules } from '@eslint/compat';
// import perfectionist from 'eslint-plugin-perfectionist';
import reactHooks from 'eslint-plugin-react-hooks';
import prettier from 'eslint-plugin-prettier';
import tsParser from '@typescript-eslint/parser';
import importPlugin from 'eslint-plugin-import';
import { FlatCompat } from '@eslint/eslintrc';
@@ -62,6 +63,7 @@ export default [
'import/parsers': tsParser,
i18next,
// perfectionist,
prettier: fixupPluginRules(prettier),
},
languageOptions: {
@@ -101,6 +103,7 @@ export default [
},
rules: {
'prettier/prettier': 'error',
'react/react-in-jsx-scope': 'off',
'@typescript-eslint/ban-ts-comment': [
@@ -121,28 +124,6 @@ export default [
// Also disable the core no-unused-vars rule globally.
'no-unused-vars': 'warn',
indent: ['error', 2, { SwitchCase: 1 }],
'max-len': [
'error',
{
code: 120,
ignoreStrings: true,
ignoreTemplateLiterals: true,
ignoreComments: true,
},
],
'linebreak-style': 0,
curly: ['error', 'all'],
semi: ['error', 'always'],
'object-curly-spacing': ['error', 'always'],
'no-multiple-empty-lines': [
'error',
{
max: 1,
},
],
'no-trailing-spaces': 'error',
'comma-dangle': ['error', 'always-multiline'],
'no-console': 'off',
'import/no-cycle': 'error',
'import/no-self-import': 'error',
@@ -153,8 +134,6 @@ export default [
'no-restricted-syntax': 'off',
'react/prop-types': 'off',
'react/display-name': 'off',
quotes: ['error', 'single'],
'key-spacing': ['error', { beforeColon: false, afterColon: true }],
// 'perfectionist/sort-imports': [
// 'error',
@@ -380,4 +359,4 @@ export default [
},
},
},
];
];

package-lock.json (generated, 1041 changes): diff suppressed because it is too large

@@ -1,6 +1,6 @@
{
"name": "LibreChat",
"version": "v0.7.8-rc1",
"version": "v0.7.8",
"description": "",
"workspaces": [
"api",


@@ -1,6 +1,6 @@
{
"name": "librechat-data-provider",
"version": "0.7.81",
"version": "0.7.82",
"description": "data services for librechat apps",
"main": "dist/index.js",
"module": "dist/index.es.js",


@@ -1,4 +1,4 @@
import { StdioOptionsSchema } from '../src/mcp';
import { StdioOptionsSchema, processMCPEnv, MCPOptions } from '../src/mcp';
describe('Environment Variable Extraction (MCP)', () => {
const originalEnv = process.env;
@@ -49,4 +49,129 @@ describe('Environment Variable Extraction (MCP)', () => {
expect(result.env).toBeUndefined();
});
});
describe('processMCPEnv', () => {
it('should create a deep clone of the input object', () => {
const originalObj: MCPOptions = {
command: 'node',
args: ['server.js'],
env: {
API_KEY: '${TEST_API_KEY}',
PLAIN_VALUE: 'plain-value',
},
};
const result = processMCPEnv(originalObj);
// Verify it's not the same object reference
expect(result).not.toBe(originalObj);
// Modify the result and ensure original is unchanged
if ('env' in result && result.env) {
result.env.API_KEY = 'modified-value';
}
expect(originalObj.env?.API_KEY).toBe('${TEST_API_KEY}');
});
it('should process environment variables in env field', () => {
const obj: MCPOptions = {
command: 'node',
args: ['server.js'],
env: {
API_KEY: '${TEST_API_KEY}',
ANOTHER_KEY: '${ANOTHER_SECRET}',
PLAIN_VALUE: 'plain-value',
NON_EXISTENT: '${NON_EXISTENT_VAR}',
},
};
const result = processMCPEnv(obj);
expect('env' in result && result.env).toEqual({
API_KEY: 'test-api-key-value',
ANOTHER_KEY: 'another-secret-value',
PLAIN_VALUE: 'plain-value',
NON_EXISTENT: '${NON_EXISTENT_VAR}',
});
});
it('should process user ID in headers field', () => {
const userId = 'test-user-123';
const obj: MCPOptions = {
type: 'sse',
url: 'https://example.com',
headers: {
Authorization: '${TEST_API_KEY}',
'User-Id': '{{LIBRECHAT_USER_ID}}',
'Content-Type': 'application/json',
},
};
const result = processMCPEnv(obj, userId);
expect('headers' in result && result.headers).toEqual({
Authorization: 'test-api-key-value',
'User-Id': 'test-user-123',
'Content-Type': 'application/json',
});
});
it('should handle null or undefined input', () => {
// @ts-ignore - Testing null/undefined handling
expect(processMCPEnv(null)).toBeNull();
// @ts-ignore - Testing null/undefined handling
expect(processMCPEnv(undefined)).toBeUndefined();
});
it('should not modify objects without env or headers', () => {
const obj: MCPOptions = {
command: 'node',
args: ['server.js'],
timeout: 5000,
};
const result = processMCPEnv(obj);
expect(result).toEqual(obj);
expect(result).not.toBe(obj); // Still a different object (deep clone)
});
it('should ensure different users with same starting config get separate values', () => {
// Create a single base configuration
const baseConfig: MCPOptions = {
type: 'sse',
url: 'https://example.com',
headers: {
'User-Id': '{{LIBRECHAT_USER_ID}}',
'API-Key': '${TEST_API_KEY}',
},
};
// Process for two different users
const user1Id = 'user-123';
const user2Id = 'user-456';
const resultUser1 = processMCPEnv(baseConfig, user1Id);
const resultUser2 = processMCPEnv(baseConfig, user2Id);
// Verify each has the correct user ID
expect('headers' in resultUser1 && resultUser1.headers?.['User-Id']).toBe(user1Id);
expect('headers' in resultUser2 && resultUser2.headers?.['User-Id']).toBe(user2Id);
// Verify they're different objects
expect(resultUser1).not.toBe(resultUser2);
// Modify one result and ensure it doesn't affect the other
if ('headers' in resultUser1 && resultUser1.headers) {
resultUser1.headers['User-Id'] = 'modified-user';
}
// Original config should be unchanged
expect(baseConfig.headers?.['User-Id']).toBe('{{LIBRECHAT_USER_ID}}');
// Second user's config should be unchanged
expect('headers' in resultUser2 && resultUser2.headers?.['User-Id']).toBe(user2Id);
});
});
});

View File

@@ -865,6 +865,7 @@ export const visionModels = [
'llava-13b',
'gemini-pro-vision',
'claude-3',
'gemma',
'gemini-exp',
'gemini-1.5',
'gemini-2.0',
@@ -1227,9 +1228,9 @@ export enum TTSProviders {
/** Enum for app-wide constants */
export enum Constants {
/** Key for the app's version. */
VERSION = 'v0.7.8-rc1',
VERSION = 'v0.7.8',
/** Key for the Custom Config's version (librechat.yaml). */
CONFIG_VERSION = '1.2.4',
CONFIG_VERSION = '1.2.5',
/** Standard value for the first message's `parentMessageId` value, to indicate no parent exists. */
NO_PARENT = '00000000-0000-0000-0000-000000000000',
/** Standard value for the initial conversationId before a request is sent */

View File

@@ -5,6 +5,8 @@ const BaseOptionsSchema = z.object({
iconPath: z.string().optional(),
timeout: z.number().optional(),
initTimeout: z.number().optional(),
/** Controls visibility in chat dropdown menu (MCPSelect) */
chatMenu: z.boolean().optional(),
});
export const StdioOptionsSchema = BaseOptionsSchema.extend({
@@ -96,28 +98,30 @@ export type MCPOptions = z.infer<typeof MCPOptionsSchema>;
* @param {string} [userId] - The user ID
* @returns {MCPOptions} - The processed object with environment variables replaced
*/
export function processMCPEnv(obj: MCPOptions, userId?: string): MCPOptions {
export function processMCPEnv(obj: Readonly<MCPOptions>, userId?: string): MCPOptions {
if (obj === null || obj === undefined) {
return obj;
}
if ('env' in obj && obj.env) {
const newObj: MCPOptions = structuredClone(obj);
if ('env' in newObj && newObj.env) {
const processedEnv: Record<string, string> = {};
for (const [key, value] of Object.entries(obj.env)) {
for (const [key, value] of Object.entries(newObj.env)) {
processedEnv[key] = extractEnvVariable(value);
}
obj.env = processedEnv;
} else if ('headers' in obj && obj.headers) {
newObj.env = processedEnv;
} else if ('headers' in newObj && newObj.headers) {
const processedHeaders: Record<string, string> = {};
for (const [key, value] of Object.entries(obj.headers)) {
for (const [key, value] of Object.entries(newObj.headers)) {
if (value === '{{LIBRECHAT_USER_ID}}' && userId != null && userId) {
processedHeaders[key] = userId;
continue;
}
processedHeaders[key] = extractEnvVariable(value);
}
obj.headers = processedHeaders;
newObj.headers = processedHeaders;
}
return obj;
return newObj;
}
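
A sketch of how the refactored `processMCPEnv` is expected to behave, mirroring the new spec cases above; the import path matches the spec file's relative import, and the env variable name is illustrative:

```ts
import { processMCPEnv, MCPOptions } from '../src/mcp';

const baseConfig: MCPOptions = {
  type: 'sse',
  url: 'https://example.com',
  headers: {
    Authorization: '${MY_API_KEY}', // resolved from process.env via extractEnvVariable
    'User-Id': '{{LIBRECHAT_USER_ID}}', // replaced with the per-request user ID
  },
};

// Because the function now works on a structuredClone, one shared config can be
// specialized per user without mutating the original object.
const forUserA = processMCPEnv(baseConfig, 'user-123');
const forUserB = processMCPEnv(baseConfig, 'user-456');
// baseConfig.headers['User-Id'] is still '{{LIBRECHAT_USER_ID}}'
```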

View File

@@ -275,6 +275,8 @@ export const getResponseSender = (endpointOption: t.TEndpointOption): string =>
return modelLabel;
} else if (model && (model.includes('gemini') || model.includes('learnlm'))) {
return 'Gemini';
} else if (model?.toLowerCase().includes('gemma') === true) {
return 'Gemma';
} else if (model && model.includes('code')) {
return 'Codey';
}

View File

@@ -417,6 +417,7 @@ export const tPluginSchema = z.object({
icon: z.string().optional(),
authConfig: z.array(tPluginAuthConfigSchema).optional(),
authenticated: z.boolean().optional(),
chatMenu: z.boolean().optional(),
isButton: z.boolean().optional(),
toolkit: z.boolean().optional(),
});
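
The `chatMenu` flag added to `tPluginSchema` is what lets the MCP manager's manifest entries (see the MCPManager change further down) carry visibility information to the client. A hypothetical entry shaped by this schema might look like:

```ts
// Field names follow tPluginSchema; the values are hypothetical.
const mcpManifestEntry = {
  name: 'search',
  pluginKey: 'search_mcp_my-server', // `${tool.name}${mcp_delimiter}${serverName}`
  description: 'Search tool exposed by an MCP server',
  icon: '/assets/my-server-icon.png',
  chatMenu: false, // hidden from the MCPSelect dropdown, still callable by agents
};
```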

View File

@@ -1,6 +1,6 @@
{
"name": "@librechat/data-schemas",
"version": "0.0.6",
"version": "0.0.7",
"description": "Mongoose schemas and models for LibreChat",
"type": "module",
"main": "dist/index.cjs",

View File

@@ -1,6 +1,6 @@
{
"name": "librechat-mcp",
"version": "1.2.0",
"version": "1.2.2",
"type": "commonjs",
"description": "MCP services for LibreChat",
"main": "dist/index.js",

View File

@@ -68,7 +68,7 @@ export class MCPConnection extends EventEmitter {
this.client = new Client(
{
name: 'librechat-mcp-client',
version: '1.2.0',
version: '1.2.2',
},
{
capabilities: {},
@@ -159,6 +159,15 @@ export class MCPConnection extends EventEmitter {
headers: options.headers,
signal: abortController.signal,
},
eventSourceInit: {
fetch: (url, init) => {
const headers = new Headers(Object.assign({}, init?.headers, options.headers));
return fetch(url, {
...init,
headers,
});
},
},
});
transport.onclose = () => {
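
The added `eventSourceInit.fetch` forwards the configured `options.headers` to the SSE connection itself, which the transport's default EventSource request would otherwise not include. A small check of the merge it relies on (header values are illustrative; plain-object headers are assumed, as in the transport options):

```ts
const initHeaders = { Accept: 'text/event-stream' };
const optionsHeaders = { Authorization: 'Bearer test-token', 'User-Id': 'user-123' };

// Mirrors `new Headers(Object.assign({}, init?.headers, options.headers))`:
// configured headers are layered on top of whatever the client already set.
const merged = new Headers(Object.assign({}, initHeaders, optionsHeaders));
console.log(merged.get('Authorization')); // 'Bearer test-token'
console.log(merged.get('Accept')); // 'text/event-stream'
```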

View File

@@ -370,7 +370,9 @@ export class MCPManager {
/**
* Loads tools from all app-level connections into the manifest.
*/
public async loadManifestTools(manifestTools: t.LCToolManifest): Promise<void> {
public async loadManifestTools(manifestTools: t.LCToolManifest): Promise<t.LCToolManifest> {
const mcpTools: t.LCManifestTool[] = [];
for (const [serverName, connection] of this.connections.entries()) {
try {
if (connection.isConnected() !== true) {
@@ -383,17 +385,24 @@ export class MCPManager {
const tools = await connection.fetchTools();
for (const tool of tools) {
const pluginKey = `${tool.name}${CONSTANTS.mcp_delimiter}${serverName}`;
manifestTools.push({
const manifestTool: t.LCManifestTool = {
name: tool.name,
pluginKey,
description: tool.description ?? '',
icon: connection.iconPath,
});
};
const config = this.mcpConfigs[serverName];
if (config?.chatMenu === false) {
manifestTool.chatMenu = false;
}
mcpTools.push(manifestTool);
}
} catch (error) {
this.logger.error(`[MCP][${serverName}] Error fetching tools for manifest:`, error);
}
}
return [...mcpTools, ...manifestTools];
}
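
`loadManifestTools` now returns a combined manifest with MCP tools ahead of the existing entries, and copies `chatMenu: false` from a server's config onto that server's tools. A sketch of how a consumer might use the returned manifest to build the chat dropdown (the import path is an assumption):

```ts
import type { TPlugin } from 'librechat-data-provider'; // LCToolManifest is TPlugin[]

// Only tools not explicitly opted out with `chatMenu: false` appear in MCPSelect;
// everything in the manifest remains available for agent tool selection.
function visibleInChatMenu(manifest: TPlugin[]): TPlugin[] {
  return manifest.filter((tool) => tool.chatMenu !== false);
}
```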
/**

View File

@@ -33,7 +33,7 @@ export interface LCFunctionTool {
}
export type LCAvailableTools = Record<string, LCFunctionTool>;
export type LCManifestTool = TPlugin;
export type LCToolManifest = TPlugin[];
export interface MCPPrompt {
name: string;
@@ -84,7 +84,10 @@ export type FormattedContent =
};
};
export type FormattedContentResult = [string | FormattedContent[], undefined | { content: FormattedContent[] }];
export type FormattedContentResult = [
string | FormattedContent[],
undefined | { content: FormattedContent[] },
];
export type ImageFormatter = (item: ImageContent) => FormattedContent;