Compare commits

13 Commits: refactor/a ... add-model-

Commits:

- 4fe600990c
- 543b617e1c
- c827fdd10e
- ac608ded46
- 0e00f357a6
- c465d7b732
- aba0a93d1d
- a49b2b2833
- e0ebb7097e
- 9a79635012
- ce19abc968
- 49cd3894aa
- da4aa37493
MODEL_SPEC_FOLDERS.md (new file, 147 lines)

@@ -0,0 +1,147 @@
# Model Spec Subfolder Support

This enhancement adds the ability to organize model specs into subfolders/categories for a clearer, easier-to-navigate model selection experience.

## Feature Overview

Model specs can now be grouped into folders by adding an optional `folder` field to each spec. This keeps related models together, making it easier for users to find and select the appropriate model for their needs.

## Configuration

### Basic Usage

Add a `folder` field to any model spec in your `librechat.yaml`:

```yaml
modelSpecs:
  list:
    - name: "gpt4_turbo"
      label: "GPT-4 Turbo"
      folder: "OpenAI Models" # This spec will appear under the "OpenAI Models" folder
      preset:
        endpoint: "openAI"
        model: "gpt-4-turbo-preview"
```
### Folder Structure

- **With Folder**: Model specs with the `folder` field will be grouped under that folder name
- **Without Folder**: Model specs without the `folder` field appear at the root level
- **Multiple Folders**: You can create as many folders as needed to organize your models
- **Alphabetical Sorting**: Folders are sorted alphabetically, and specs within folders are sorted by their `order` field or label

### Example Configuration

```yaml
modelSpecs:
  list:
    # OpenAI Models Category
    - name: "gpt4_turbo"
      label: "GPT-4 Turbo"
      folder: "OpenAI Models"
      preset:
        endpoint: "openAI"
        model: "gpt-4-turbo-preview"

    - name: "gpt35_turbo"
      label: "GPT-3.5 Turbo"
      folder: "OpenAI Models"
      preset:
        endpoint: "openAI"
        model: "gpt-3.5-turbo"

    # Anthropic Models Category
    - name: "claude3_opus"
      label: "Claude 3 Opus"
      folder: "Anthropic Models"
      preset:
        endpoint: "anthropic"
        model: "claude-3-opus-20240229"

    # Root-level model (no folder)
    - name: "quick_chat"
      label: "Quick Chat"
      preset:
        endpoint: "openAI"
        model: "gpt-3.5-turbo"
```
## UI Features

### Folder Display

- Folders are displayed with expand/collapse functionality
- Folder icons change between open and closed states
- Indentation shows the hierarchy clearly

### Search Integration

- When searching for models, the folder path is shown for context
- Search works across all models regardless of folder structure, as sketched below
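A minimal sketch of this folder-agnostic search, assuming a flat list of specs; the helper name `searchModelSpecs` is illustrative rather than the actual implementation, and `TModelSpec` is shown in simplified form:

```typescript
interface TModelSpec {
  name: string;
  label: string;
  folder?: string; // optional folder/category added by this feature
  order?: number;
}

// Match against label and name across every spec, ignoring folder boundaries,
// and return the folder alongside each hit so the UI can show it for context.
function searchModelSpecs(specs: TModelSpec[], query: string) {
  const q = query.toLowerCase();
  return specs
    .filter((spec) => spec.label.toLowerCase().includes(q) || spec.name.toLowerCase().includes(q))
    .map((spec) => ({ spec, folderPath: spec.folder ?? '' }));
}
```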
### User Experience

- Folders start expanded by default for easy access
- Click on a folder header to expand or collapse it
- The selected model is highlighted with a checkmark
- Folder state is preserved during the session

## Benefits

1. **Better Organization**: Group related models together (e.g., by provider, capability, or use case)
2. **Improved Navigation**: Users can quickly find models in organized categories
3. **Scalability**: Handles large numbers of model specs without overwhelming the UI
4. **Backward Compatible**: Existing configurations without folders continue to work
5. **Flexible Structure**: Mix foldered and non-foldered specs as needed
## Use Cases

### By Provider

```yaml
folder: "OpenAI Models"
folder: "Anthropic Models"
folder: "Google Models"
```

### By Capability

```yaml
folder: "Vision Models"
folder: "Code Models"
folder: "Creative Writing"
```

### By Performance Tier

```yaml
folder: "Premium Models"
folder: "Standard Models"
folder: "Budget Models"
```

### By Department/Team

```yaml
folder: "Engineering Team"
folder: "Marketing Team"
folder: "Research Team"
```
## Implementation Details

### Type Changes

- Added an optional `folder?: string` field to the `TModelSpec` type
- Updated `tModelSpecSchema` to validate the new `folder` field (see the sketch below)
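A minimal sketch of what this type/schema change amounts to, assuming `tModelSpecSchema` is a zod schema as in `librechat-data-provider`; the fields other than `folder` are abbreviated placeholders, not the full schema:

```typescript
import { z } from 'zod';

export const tModelSpecSchema = z.object({
  name: z.string(),
  label: z.string(),
  folder: z.string().optional(), // new: groups the spec under a named folder in the model selector
  order: z.number().optional(),
  preset: z.record(z.unknown()), // placeholder for the full preset schema
});

export type TModelSpec = z.infer<typeof tModelSpecSchema>;
```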
### Components

- Created a `ModelSpecFolder` component for rendering the folder structure (sketched below)
- Updated `ModelSelector` to use folder-aware rendering
- Enhanced search results to show folder context
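A minimal sketch of what a `ModelSpecFolder` component could look like; the props, markup, and icons here are illustrative assumptions, not the actual component:

```tsx
import { useState } from 'react';
import type { TModelSpec } from 'librechat-data-provider'; // shape sketched above

interface ModelSpecFolderProps {
  name: string;
  specs: TModelSpec[]; // specs assigned to this folder
  selectedName?: string;
  onSelect: (spec: TModelSpec) => void;
}

export function ModelSpecFolder({ name, specs, selectedName, onSelect }: ModelSpecFolderProps) {
  // Folders start expanded by default; component state keeps the open/closed state for the session
  const [isOpen, setIsOpen] = useState(true);

  return (
    <div>
      <button type="button" onClick={() => setIsOpen((open) => !open)}>
        {isOpen ? '▼' : '▶'} {name}
      </button>
      {isOpen &&
        specs.map((spec) => (
          <button
            type="button"
            key={spec.name}
            style={{ paddingLeft: 16 }} // indentation shows the hierarchy
            onClick={() => onSelect(spec)}
          >
            {spec.label} {spec.name === selectedName ? '✓' : ''}
          </button>
        ))}
    </div>
  );
}
```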
### Behavior

- Folders are collapsible with state management
- Models are sorted within folders by `order`, then label
- Root-level models appear after all folders (see the grouping sketch below)
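A minimal sketch of the grouping and ordering rules described above; the helper name `groupSpecsByFolder` is illustrative, and only the rules themselves (alphabetical folders, `order`/label sorting, root-level specs after folders) come from this document:

```typescript
function groupSpecsByFolder(specs: TModelSpec[]) {
  const folders = new Map<string, TModelSpec[]>();
  const rootSpecs: TModelSpec[] = [];

  for (const spec of specs) {
    if (spec.folder) {
      const group = folders.get(spec.folder) ?? [];
      group.push(spec);
      folders.set(spec.folder, group);
    } else {
      rootSpecs.push(spec);
    }
  }

  // Within a folder: sort by `order` when present, otherwise by label
  const byOrderThenLabel = (a: TModelSpec, b: TModelSpec) =>
    (a.order ?? Number.MAX_SAFE_INTEGER) - (b.order ?? Number.MAX_SAFE_INTEGER) ||
    a.label.localeCompare(b.label);

  const sortedFolders = [...folders.entries()]
    .sort(([a], [b]) => a.localeCompare(b)) // folders alphabetically
    .map(([name, group]) => ({ name, specs: group.sort(byOrderThenLabel) }));

  // Root-level models render after all folders
  return { folders: sortedFolders, rootSpecs: rootSpecs.sort(byOrderThenLabel) };
}
```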
## Migration

No migration is needed: the feature is fully backward compatible. Existing model specs without the `folder` field will continue to work and appear at the root level.

## See Also

- `librechat.example.subfolder.yaml` - Complete example configuration
- GitHub Issue #9165 - Original feature request

@@ -11,7 +11,6 @@ const {
|
||||
Constants,
|
||||
} = require('librechat-data-provider');
|
||||
const { getMessages, saveMessage, updateMessage, saveConvo, getConvo } = require('~/models');
|
||||
const { getAppConfig } = require('~/server/services/Config');
|
||||
const { checkBalance } = require('~/models/balanceMethods');
|
||||
const { truncateToolCallOutputs } = require('./prompts');
|
||||
const { getFiles } = require('~/models/File');
|
||||
@@ -113,15 +112,13 @@ class BaseClient {
|
||||
* If a correction to the token usage is needed, the method should return an object with the corrected token counts.
|
||||
* Should only be used if `recordCollectedUsage` was not used instead.
|
||||
* @param {string} [model]
|
||||
* @param {AppConfig['balance']} [balance]
|
||||
* @param {number} promptTokens
|
||||
* @param {number} completionTokens
|
||||
* @returns {Promise<void>}
|
||||
*/
|
||||
async recordTokenUsage({ model, balance, promptTokens, completionTokens }) {
|
||||
async recordTokenUsage({ model, promptTokens, completionTokens }) {
|
||||
logger.debug('[BaseClient] `recordTokenUsage` not implemented.', {
|
||||
model,
|
||||
balance,
|
||||
promptTokens,
|
||||
completionTokens,
|
||||
});
|
||||
@@ -190,7 +187,8 @@ class BaseClient {
|
||||
this.user = user;
|
||||
const saveOptions = this.getSaveOptions();
|
||||
this.abortController = opts.abortController ?? new AbortController();
|
||||
const conversationId = overrideConvoId ?? opts.conversationId ?? crypto.randomUUID();
|
||||
const requestConvoId = overrideConvoId ?? opts.conversationId;
|
||||
const conversationId = requestConvoId ?? crypto.randomUUID();
|
||||
const parentMessageId = opts.parentMessageId ?? Constants.NO_PARENT;
|
||||
const userMessageId =
|
||||
overrideUserMessageId ?? opts.overrideParentMessageId ?? crypto.randomUUID();
|
||||
@@ -215,11 +213,12 @@ class BaseClient {
|
||||
...opts,
|
||||
user,
|
||||
head,
|
||||
saveOptions,
|
||||
userMessageId,
|
||||
requestConvoId,
|
||||
conversationId,
|
||||
parentMessageId,
|
||||
userMessageId,
|
||||
responseMessageId,
|
||||
saveOptions,
|
||||
};
|
||||
}
|
||||
|
||||
@@ -238,11 +237,12 @@ class BaseClient {
|
||||
const {
|
||||
user,
|
||||
head,
|
||||
saveOptions,
|
||||
userMessageId,
|
||||
requestConvoId,
|
||||
conversationId,
|
||||
parentMessageId,
|
||||
userMessageId,
|
||||
responseMessageId,
|
||||
saveOptions,
|
||||
} = await this.setMessageOptions(opts);
|
||||
|
||||
const userMessage = opts.isEdited
|
||||
@@ -264,7 +264,8 @@ class BaseClient {
|
||||
}
|
||||
|
||||
if (typeof opts?.onStart === 'function') {
|
||||
opts.onStart(userMessage, responseMessageId);
|
||||
const isNewConvo = !requestConvoId && parentMessageId === Constants.NO_PARENT;
|
||||
opts.onStart(userMessage, responseMessageId, isNewConvo);
|
||||
}
|
||||
|
||||
return {
|
||||
@@ -570,7 +571,6 @@ class BaseClient {
|
||||
}
|
||||
|
||||
async sendMessage(message, opts = {}) {
|
||||
const appConfig = await getAppConfig({ role: this.options.req?.user?.role });
|
||||
/** @type {Promise<TMessage>} */
|
||||
let userMessagePromise;
|
||||
const { user, head, isEdited, conversationId, responseMessageId, saveOptions, userMessage } =
|
||||
@@ -657,7 +657,7 @@ class BaseClient {
|
||||
}
|
||||
}
|
||||
|
||||
const balance = appConfig?.balance;
|
||||
const balance = this.options.req?.app?.locals?.balance;
|
||||
if (
|
||||
balance?.enabled &&
|
||||
supportsBalanceCheck[this.options.endpointType ?? this.options.endpoint]
|
||||
@@ -756,7 +756,6 @@ class BaseClient {
|
||||
completionTokens = responseMessage.tokenCount;
|
||||
await this.recordTokenUsage({
|
||||
usage,
|
||||
balance,
|
||||
promptTokens,
|
||||
completionTokens,
|
||||
model: responseMessage.model,
|
||||
|
||||
@@ -34,14 +34,13 @@ const {
|
||||
const { extractBaseURL, getModelMaxTokens, getModelMaxOutputTokens } = require('~/utils');
|
||||
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
|
||||
const { addSpaceIfNeeded, sleep } = require('~/server/utils');
|
||||
const { getAppConfig } = require('~/server/services/Config');
|
||||
const { spendTokens } = require('~/models/spendTokens');
|
||||
const { handleOpenAIErrors } = require('./tools/util');
|
||||
const { createLLM, RunManager } = require('./llm');
|
||||
const { summaryBuffer } = require('./memory');
|
||||
const { runTitleChain } = require('./chains');
|
||||
const { tokenSplit } = require('./document');
|
||||
const BaseClient = require('./BaseClient');
|
||||
const { createLLM } = require('./llm');
|
||||
const { logger } = require('~/config');
|
||||
|
||||
class OpenAIClient extends BaseClient {
|
||||
@@ -619,6 +618,10 @@ class OpenAIClient extends BaseClient {
|
||||
temperature = 0.2,
|
||||
max_tokens,
|
||||
streaming,
|
||||
context,
|
||||
tokenBuffer,
|
||||
initialMessageCount,
|
||||
conversationId,
|
||||
}) {
|
||||
const modelOptions = {
|
||||
modelName: modelName ?? model,
|
||||
@@ -663,12 +666,22 @@ class OpenAIClient extends BaseClient {
|
||||
configOptions.httpsAgent = new HttpsProxyAgent(this.options.proxy);
|
||||
}
|
||||
|
||||
const { req, res, debug } = this.options;
|
||||
const runManager = new RunManager({ req, res, debug, abortController: this.abortController });
|
||||
this.runManager = runManager;
|
||||
|
||||
const llm = createLLM({
|
||||
modelOptions,
|
||||
configOptions,
|
||||
openAIApiKey: this.apiKey,
|
||||
azure: this.azure,
|
||||
streaming,
|
||||
callbacks: runManager.createCallbacks({
|
||||
context,
|
||||
tokenBuffer,
|
||||
conversationId: this.conversationId ?? conversationId,
|
||||
initialMessageCount,
|
||||
}),
|
||||
});
|
||||
|
||||
return llm;
|
||||
@@ -689,7 +702,6 @@ class OpenAIClient extends BaseClient {
|
||||
* In case of failure, it will return the default title, "New Chat".
|
||||
*/
|
||||
async titleConvo({ text, conversationId, responseText = '' }) {
|
||||
const appConfig = await getAppConfig({ role: this.options.req?.user?.role });
|
||||
this.conversationId = conversationId;
|
||||
|
||||
if (this.options.attachments) {
|
||||
@@ -718,7 +730,8 @@ class OpenAIClient extends BaseClient {
|
||||
max_tokens: 16,
|
||||
};
|
||||
|
||||
const azureConfig = appConfig?.endpoints?.[EModelEndpoint.azureOpenAI];
|
||||
/** @type {TAzureConfig | undefined} */
|
||||
const azureConfig = this.options?.req?.app?.locals?.[EModelEndpoint.azureOpenAI];
|
||||
|
||||
const resetTitleOptions = !!(
|
||||
(this.azure && azureConfig) ||
|
||||
@@ -1107,7 +1120,6 @@ ${convo}
|
||||
}
|
||||
|
||||
async chatCompletion({ payload, onProgress, abortController = null }) {
|
||||
const appConfig = await getAppConfig({ role: this.options.req?.user?.role });
|
||||
let error = null;
|
||||
let intermediateReply = [];
|
||||
const errorCallback = (err) => (error = err);
|
||||
@@ -1153,7 +1165,8 @@ ${convo}
|
||||
opts.fetchOptions.agent = new HttpsProxyAgent(this.options.proxy);
|
||||
}
|
||||
|
||||
const azureConfig = appConfig?.endpoints?.[EModelEndpoint.azureOpenAI];
|
||||
/** @type {TAzureConfig | undefined} */
|
||||
const azureConfig = this.options?.req?.app?.locals?.[EModelEndpoint.azureOpenAI];
|
||||
|
||||
if (
|
||||
(this.azure && this.isVisionModel && azureConfig) ||
|
||||
api/app/clients/callbacks/createStartHandler.js (new file, 95 lines)

@@ -0,0 +1,95 @@
const { promptTokensEstimate } = require('openai-chat-tokens');
const { EModelEndpoint, supportsBalanceCheck } = require('librechat-data-provider');
const { formatFromLangChain } = require('~/app/clients/prompts');
const { getBalanceConfig } = require('~/server/services/Config');
const { checkBalance } = require('~/models/balanceMethods');
const { logger } = require('~/config');

const createStartHandler = ({
  context,
  conversationId,
  tokenBuffer = 0,
  initialMessageCount,
  manager,
}) => {
  return async (_llm, _messages, runId, parentRunId, extraParams) => {
    const { invocation_params } = extraParams;
    const { model, functions, function_call } = invocation_params;
    const messages = _messages[0].map(formatFromLangChain);

    logger.debug(`[createStartHandler] handleChatModelStart: ${context}`, {
      model,
      function_call,
    });

    if (context !== 'title') {
      logger.debug(`[createStartHandler] handleChatModelStart: ${context}`, {
        functions,
      });
    }

    const payload = { messages };
    let prelimPromptTokens = 1;

    if (functions) {
      payload.functions = functions;
      prelimPromptTokens += 2;
    }

    if (function_call) {
      payload.function_call = function_call;
      prelimPromptTokens -= 5;
    }

    prelimPromptTokens += promptTokensEstimate(payload);
    logger.debug('[createStartHandler]', {
      prelimPromptTokens,
      tokenBuffer,
    });
    prelimPromptTokens += tokenBuffer;

    try {
      const balance = await getBalanceConfig();
      if (balance?.enabled && supportsBalanceCheck[EModelEndpoint.openAI]) {
        const generations =
          initialMessageCount && messages.length > initialMessageCount
            ? messages.slice(initialMessageCount)
            : null;
        await checkBalance({
          req: manager.req,
          res: manager.res,
          txData: {
            user: manager.user,
            tokenType: 'prompt',
            amount: prelimPromptTokens,
            debug: manager.debug,
            generations,
            model,
            endpoint: EModelEndpoint.openAI,
          },
        });
      }
    } catch (err) {
      logger.error(`[createStartHandler][${context}] checkBalance error`, err);
      manager.abortController.abort();
      if (context === 'summary' || context === 'plugins') {
        manager.addRun(runId, { conversationId, error: err.message });
        throw new Error(err);
      }
      return;
    }

    manager.addRun(runId, {
      model,
      messages,
      functions,
      function_call,
      runId,
      parentRunId,
      conversationId,
      prelimPromptTokens,
    });
  };
};

module.exports = createStartHandler;
api/app/clients/callbacks/index.js (new file, 5 lines)
@@ -0,0 +1,5 @@
const createStartHandler = require('./createStartHandler');

module.exports = {
  createStartHandler,
};
api/app/clients/llm/RunManager.js (new file, 105 lines)
@@ -0,0 +1,105 @@
const { createStartHandler } = require('~/app/clients/callbacks');
const { spendTokens } = require('~/models/spendTokens');
const { logger } = require('~/config');

class RunManager {
  constructor(fields) {
    const { req, res, abortController, debug } = fields;
    this.abortController = abortController;
    this.user = req.user.id;
    this.req = req;
    this.res = res;
    this.debug = debug;
    this.runs = new Map();
    this.convos = new Map();
  }

  addRun(runId, runData) {
    if (!this.runs.has(runId)) {
      this.runs.set(runId, runData);
      if (runData.conversationId) {
        this.convos.set(runData.conversationId, runId);
      }
      return runData;
    } else {
      const existingData = this.runs.get(runId);
      const update = { ...existingData, ...runData };
      this.runs.set(runId, update);
      if (update.conversationId) {
        this.convos.set(update.conversationId, runId);
      }
      return update;
    }
  }

  removeRun(runId) {
    if (this.runs.has(runId)) {
      this.runs.delete(runId);
    } else {
      logger.error(`[api/app/clients/llm/RunManager] Run with ID ${runId} does not exist.`);
    }
  }

  getAllRuns() {
    return Array.from(this.runs.values());
  }

  getRunById(runId) {
    return this.runs.get(runId);
  }

  getRunByConversationId(conversationId) {
    const runId = this.convos.get(conversationId);
    return { run: this.runs.get(runId), runId };
  }

  createCallbacks(metadata) {
    return [
      {
        handleChatModelStart: createStartHandler({ ...metadata, manager: this }),
        handleLLMEnd: async (output, runId, _parentRunId) => {
          const { llmOutput, ..._output } = output;
          logger.debug(`[RunManager] handleLLMEnd: ${JSON.stringify(metadata)}`, {
            runId,
            _parentRunId,
            llmOutput,
          });

          if (metadata.context !== 'title') {
            logger.debug('[RunManager] handleLLMEnd:', {
              output: _output,
            });
          }

          const { tokenUsage } = output.llmOutput;
          const run = this.getRunById(runId);
          this.removeRun(runId);

          const txData = {
            user: this.user,
            model: run?.model ?? 'gpt-3.5-turbo',
            ...metadata,
          };

          await spendTokens(txData, tokenUsage);
        },
        handleLLMError: async (err) => {
          logger.error(`[RunManager] handleLLMError: ${JSON.stringify(metadata)}`, err);
          if (metadata.context === 'title') {
            return;
          } else if (metadata.context === 'plugins') {
            throw new Error(err);
          }
          const { conversationId } = metadata;
          const { run } = this.getRunByConversationId(conversationId);
          if (run && run.error) {
            const { error } = run;
            throw new Error(error);
          }
        },
      },
    ];
  }
}

module.exports = RunManager;
@@ -1,7 +1,9 @@
const createLLM = require('./createLLM');
const RunManager = require('./RunManager');
const createCoherePayload = require('./createCoherePayload');

module.exports = {
  createLLM,
  RunManager,
  createCoherePayload,
};
@@ -2,14 +2,6 @@ const { Constants } = require('librechat-data-provider');
|
||||
const { initializeFakeClient } = require('./FakeClient');
|
||||
|
||||
jest.mock('~/db/connect');
|
||||
jest.mock('~/server/services/Config', () => ({
|
||||
getAppConfig: jest.fn().mockResolvedValue({
|
||||
// Default app config for tests
|
||||
paths: { uploads: '/tmp' },
|
||||
fileStrategy: 'local',
|
||||
memory: { disabled: false },
|
||||
}),
|
||||
}));
|
||||
jest.mock('~/models', () => ({
|
||||
User: jest.fn(),
|
||||
Key: jest.fn(),
|
||||
@@ -587,6 +579,8 @@ describe('BaseClient', () => {
|
||||
expect(onStart).toHaveBeenCalledWith(
|
||||
expect.objectContaining({ text: 'Hello, world!' }),
|
||||
expect.any(String),
|
||||
/** `isNewConvo` */
|
||||
true,
|
||||
);
|
||||
});
|
||||
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
const manifest = require('./manifest');
|
||||
const availableTools = require('./manifest.json');
|
||||
|
||||
// Structured Tools
|
||||
const DALLE3 = require('./structured/DALLE3');
|
||||
@@ -13,8 +13,23 @@ const TraversaalSearch = require('./structured/TraversaalSearch');
|
||||
const createOpenAIImageTools = require('./structured/OpenAIImageTools');
|
||||
const TavilySearchResults = require('./structured/TavilySearchResults');
|
||||
|
||||
/** @type {Record<string, TPlugin | undefined>} */
|
||||
const manifestToolMap = {};
|
||||
|
||||
/** @type {Array<TPlugin>} */
|
||||
const toolkits = [];
|
||||
|
||||
availableTools.forEach((tool) => {
|
||||
manifestToolMap[tool.pluginKey] = tool;
|
||||
if (tool.toolkit === true) {
|
||||
toolkits.push(tool);
|
||||
}
|
||||
});
|
||||
|
||||
module.exports = {
|
||||
...manifest,
|
||||
toolkits,
|
||||
availableTools,
|
||||
manifestToolMap,
|
||||
// Structured Tools
|
||||
DALLE3,
|
||||
FluxAPI,
|
||||
|
||||
@@ -1,20 +0,0 @@
|
||||
const availableTools = require('./manifest.json');
|
||||
|
||||
/** @type {Record<string, TPlugin | undefined>} */
|
||||
const manifestToolMap = {};
|
||||
|
||||
/** @type {Array<TPlugin>} */
|
||||
const toolkits = [];
|
||||
|
||||
availableTools.forEach((tool) => {
|
||||
manifestToolMap[tool.pluginKey] = tool;
|
||||
if (tool.toolkit === true) {
|
||||
toolkits.push(tool);
|
||||
}
|
||||
});
|
||||
|
||||
module.exports = {
|
||||
toolkits,
|
||||
availableTools,
|
||||
manifestToolMap,
|
||||
};
|
||||
@@ -5,10 +5,10 @@ const fetch = require('node-fetch');
|
||||
const { v4: uuidv4 } = require('uuid');
|
||||
const { ProxyAgent } = require('undici');
|
||||
const { Tool } = require('@langchain/core/tools');
|
||||
const { logger } = require('@librechat/data-schemas');
|
||||
const { getImageBasename } = require('@librechat/api');
|
||||
const { FileContext, ContentTypes } = require('librechat-data-provider');
|
||||
const { getImageBasename } = require('~/server/services/Files/images');
|
||||
const extractBaseURL = require('~/utils/extractBaseURL');
|
||||
const logger = require('~/config/winston');
|
||||
|
||||
const displayMessage =
|
||||
"DALL-E displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";
|
||||
|
||||
@@ -1,16 +1,69 @@
|
||||
const { z } = require('zod');
|
||||
const axios = require('axios');
|
||||
const { v4 } = require('uuid');
|
||||
const OpenAI = require('openai');
|
||||
const FormData = require('form-data');
|
||||
const { ProxyAgent } = require('undici');
|
||||
const { tool } = require('@langchain/core/tools');
|
||||
const { logAxiosError } = require('@librechat/api');
|
||||
const { logger } = require('@librechat/data-schemas');
|
||||
const { logAxiosError, oaiToolkit } = require('@librechat/api');
|
||||
const { ContentTypes, EImageOutputType } = require('librechat-data-provider');
|
||||
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
|
||||
const extractBaseURL = require('~/utils/extractBaseURL');
|
||||
const { extractBaseURL } = require('~/utils');
|
||||
const { getFiles } = require('~/models/File');
|
||||
|
||||
/** Default descriptions for image generation tool */
|
||||
const DEFAULT_IMAGE_GEN_DESCRIPTION = `
|
||||
Generates high-quality, original images based solely on text, not using any uploaded reference images.
|
||||
|
||||
When to use \`image_gen_oai\`:
|
||||
- To create entirely new images from detailed text descriptions that do NOT reference any image files.
|
||||
|
||||
When NOT to use \`image_gen_oai\`:
|
||||
- If the user has uploaded any images and requests modifications, enhancements, or remixing based on those uploads → use \`image_edit_oai\` instead.
|
||||
|
||||
Generated image IDs will be returned in the response, so you can refer to them in future requests made to \`image_edit_oai\`.
|
||||
`.trim();
|
||||
|
||||
/** Default description for image editing tool */
|
||||
const DEFAULT_IMAGE_EDIT_DESCRIPTION =
|
||||
`Generates high-quality, original images based on text and one or more uploaded/referenced images.
|
||||
|
||||
When to use \`image_edit_oai\`:
|
||||
- The user wants to modify, extend, or remix one **or more** uploaded images, either:
|
||||
- Previously generated, or in the current request (both to be included in the \`image_ids\` array).
|
||||
- Always when the user refers to uploaded images for editing, enhancement, remixing, style transfer, or combining elements.
|
||||
- Any current or existing images are to be used as visual guides.
|
||||
- If there are any files in the current request, they are more likely than not expected as references for image edit requests.
|
||||
|
||||
When NOT to use \`image_edit_oai\`:
|
||||
- Brand-new generations that do not rely on an existing image → use \`image_gen_oai\` instead.
|
||||
|
||||
Both generated and referenced image IDs will be returned in the response, so you can refer to them in future requests made to \`image_edit_oai\`.
|
||||
`.trim();
|
||||
|
||||
/** Default prompt descriptions */
|
||||
const DEFAULT_IMAGE_GEN_PROMPT_DESCRIPTION = `Describe the image you want in detail.
|
||||
Be highly specific—break your idea into layers:
|
||||
(1) main concept and subject,
|
||||
(2) composition and position,
|
||||
(3) lighting and mood,
|
||||
(4) style, medium, or camera details,
|
||||
(5) important features (age, expression, clothing, etc.),
|
||||
(6) background.
|
||||
Use positive, descriptive language and specify what should be included, not what to avoid.
|
||||
List number and characteristics of people/objects, and mention style/technical requirements (e.g., "DSLR photo, 85mm lens, golden hour").
|
||||
Do not reference any uploaded images—use for new image creation from text only.`;
|
||||
|
||||
const DEFAULT_IMAGE_EDIT_PROMPT_DESCRIPTION = `Describe the changes, enhancements, or new ideas to apply to the uploaded image(s).
|
||||
Be highly specific—break your request into layers:
|
||||
(1) main concept or transformation,
|
||||
(2) specific edits/replacements or composition guidance,
|
||||
(3) desired style, mood, or technique,
|
||||
(4) features/items to keep, change, or add (such as objects, people, clothing, lighting, etc.).
|
||||
Use positive, descriptive language and clarify what should be included or changed, not what to avoid.
|
||||
Always base this prompt on the most recently uploaded reference images.`;
|
||||
|
||||
const displayMessage =
|
||||
"The tool displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.";
|
||||
|
||||
@@ -38,6 +91,22 @@ function returnValue(value) {
|
||||
return value;
|
||||
}
|
||||
|
||||
const getImageGenDescription = () => {
|
||||
return process.env.IMAGE_GEN_OAI_DESCRIPTION || DEFAULT_IMAGE_GEN_DESCRIPTION;
|
||||
};
|
||||
|
||||
const getImageEditDescription = () => {
|
||||
return process.env.IMAGE_EDIT_OAI_DESCRIPTION || DEFAULT_IMAGE_EDIT_DESCRIPTION;
|
||||
};
|
||||
|
||||
const getImageGenPromptDescription = () => {
|
||||
return process.env.IMAGE_GEN_OAI_PROMPT_DESCRIPTION || DEFAULT_IMAGE_GEN_PROMPT_DESCRIPTION;
|
||||
};
|
||||
|
||||
const getImageEditPromptDescription = () => {
|
||||
return process.env.IMAGE_EDIT_OAI_PROMPT_DESCRIPTION || DEFAULT_IMAGE_EDIT_PROMPT_DESCRIPTION;
|
||||
};
|
||||
|
||||
function createAbortHandler() {
|
||||
return function () {
|
||||
logger.debug('[ImageGenOAI] Image generation aborted');
|
||||
@@ -52,9 +121,7 @@ function createAbortHandler() {
|
||||
* @param {string} fields.IMAGE_GEN_OAI_API_KEY - The OpenAI API key
|
||||
* @param {boolean} [fields.override] - Whether to override the API key check, necessary for app initialization
|
||||
* @param {MongoFile[]} [fields.imageFiles] - The images to be used for editing
|
||||
* @param {string} [fields.imageOutputType] - The image output type configuration
|
||||
* @param {string} [fields.fileStrategy] - The file storage strategy
|
||||
* @returns {Array<ReturnType<tool>>} - Array of image tools
|
||||
* @returns {Array} - Array of image tools
|
||||
*/
|
||||
function createOpenAIImageTools(fields = {}) {
|
||||
/** @type {boolean} Used to initialize the Tool without necessary variables. */
|
||||
@@ -64,8 +131,8 @@ function createOpenAIImageTools(fields = {}) {
|
||||
throw new Error('This tool is only available for agents.');
|
||||
}
|
||||
const { req } = fields;
|
||||
const imageOutputType = fields.imageOutputType || EImageOutputType.PNG;
|
||||
const appFileStrategy = fields.fileStrategy;
|
||||
const imageOutputType = req?.app.locals.imageOutputType || EImageOutputType.PNG;
|
||||
const appFileStrategy = req?.app.locals.fileStrategy;
|
||||
|
||||
const getApiKey = () => {
|
||||
const apiKey = process.env.IMAGE_GEN_OAI_API_KEY ?? '';
|
||||
@@ -218,7 +285,46 @@ Error Message: ${error.message}`);
|
||||
];
|
||||
return [response, { content, file_ids }];
|
||||
},
|
||||
oaiToolkit.image_gen_oai,
|
||||
{
|
||||
name: 'image_gen_oai',
|
||||
description: getImageGenDescription(),
|
||||
schema: z.object({
|
||||
prompt: z.string().max(32000).describe(getImageGenPromptDescription()),
|
||||
background: z
|
||||
.enum(['transparent', 'opaque', 'auto'])
|
||||
.optional()
|
||||
.describe(
|
||||
'Sets transparency for the background. Must be one of transparent, opaque or auto (default). When transparent, the output format should be png or webp.',
|
||||
),
|
||||
/*
|
||||
n: z
|
||||
.number()
|
||||
.int()
|
||||
.min(1)
|
||||
.max(10)
|
||||
.optional()
|
||||
.describe('The number of images to generate. Must be between 1 and 10.'),
|
||||
output_compression: z
|
||||
.number()
|
||||
.int()
|
||||
.min(0)
|
||||
.max(100)
|
||||
.optional()
|
||||
.describe('The compression level (0-100%) for webp or jpeg formats. Defaults to 100.'),
|
||||
*/
|
||||
quality: z
|
||||
.enum(['auto', 'high', 'medium', 'low'])
|
||||
.optional()
|
||||
.describe('The quality of the image. One of auto (default), high, medium, or low.'),
|
||||
size: z
|
||||
.enum(['auto', '1024x1024', '1536x1024', '1024x1536'])
|
||||
.optional()
|
||||
.describe(
|
||||
'The size of the generated image. One of 1024x1024, 1536x1024 (landscape), 1024x1536 (portrait), or auto (default).',
|
||||
),
|
||||
}),
|
||||
responseFormat: 'content_and_artifact',
|
||||
},
|
||||
);
|
||||
|
||||
/**
|
||||
@@ -411,7 +517,48 @@ Error Message: ${error.message || 'Unknown error'}`);
|
||||
}
|
||||
}
|
||||
},
|
||||
oaiToolkit.image_edit_oai,
|
||||
{
|
||||
name: 'image_edit_oai',
|
||||
description: getImageEditDescription(),
|
||||
schema: z.object({
|
||||
image_ids: z
|
||||
.array(z.string())
|
||||
.min(1)
|
||||
.describe(
|
||||
`
|
||||
IDs (image ID strings) of previously generated or uploaded images that should guide the edit.
|
||||
|
||||
Guidelines:
|
||||
- If the user's request depends on any prior image(s), copy their image IDs into the \`image_ids\` array (in the same order the user refers to them).
|
||||
- Never invent or hallucinate IDs; only use IDs that are still visible in the conversation context.
|
||||
- If no earlier image is relevant, omit the field entirely.
|
||||
`.trim(),
|
||||
),
|
||||
prompt: z.string().max(32000).describe(getImageEditPromptDescription()),
|
||||
/*
|
||||
n: z
|
||||
.number()
|
||||
.int()
|
||||
.min(1)
|
||||
.max(10)
|
||||
.optional()
|
||||
.describe('The number of images to generate. Must be between 1 and 10. Defaults to 1.'),
|
||||
*/
|
||||
quality: z
|
||||
.enum(['auto', 'high', 'medium', 'low'])
|
||||
.optional()
|
||||
.describe(
|
||||
'The quality of the image. One of auto (default), high, medium, or low. High/medium/low only supported for gpt-image-1.',
|
||||
),
|
||||
size: z
|
||||
.enum(['auto', '1024x1024', '1536x1024', '1024x1536', '256x256', '512x512'])
|
||||
.optional()
|
||||
.describe(
|
||||
'The size of the generated images. For gpt-image-1: auto (default), 1024x1024, 1536x1024, 1024x1536. For dall-e-2: 256x256, 512x512, 1024x1024.',
|
||||
),
|
||||
}),
|
||||
responseFormat: 'content_and_artifact',
|
||||
},
|
||||
);
|
||||
|
||||
return [imageGenTool, imageEditTool];
|
||||
|
||||
@@ -1,9 +1,9 @@
|
||||
const { ytToolkit } = require('@librechat/api');
|
||||
const { z } = require('zod');
|
||||
const { tool } = require('@langchain/core/tools');
|
||||
const { youtube } = require('@googleapis/youtube');
|
||||
const { logger } = require('@librechat/data-schemas');
|
||||
const { YoutubeTranscript } = require('youtube-transcript');
|
||||
const { getApiKey } = require('./credentials');
|
||||
const { logger } = require('~/config');
|
||||
|
||||
function extractVideoId(url) {
|
||||
const rawIdRegex = /^[a-zA-Z0-9_-]{11}$/;
|
||||
@@ -29,7 +29,7 @@ function parseTranscript(transcriptResponse) {
|
||||
.map((entry) => entry.text.trim())
|
||||
.filter((text) => text)
|
||||
.join(' ')
|
||||
.replaceAll('&#39;', "'");
|
||||
.replaceAll('&#39;', '\'');
|
||||
}
|
||||
|
||||
function createYouTubeTools(fields = {}) {
|
||||
@@ -42,94 +42,160 @@ function createYouTubeTools(fields = {}) {
|
||||
auth: apiKey,
|
||||
});
|
||||
|
||||
const searchTool = tool(async ({ query, maxResults = 5 }) => {
|
||||
const response = await youtubeClient.search.list({
|
||||
part: 'snippet',
|
||||
q: query,
|
||||
type: 'video',
|
||||
maxResults: maxResults || 5,
|
||||
});
|
||||
const result = response.data.items.map((item) => ({
|
||||
title: item.snippet.title,
|
||||
description: item.snippet.description,
|
||||
url: `https://www.youtube.com/watch?v=${item.id.videoId}`,
|
||||
}));
|
||||
return JSON.stringify(result, null, 2);
|
||||
}, ytToolkit.youtube_search);
|
||||
const searchTool = tool(
|
||||
async ({ query, maxResults = 5 }) => {
|
||||
const response = await youtubeClient.search.list({
|
||||
part: 'snippet',
|
||||
q: query,
|
||||
type: 'video',
|
||||
maxResults: maxResults || 5,
|
||||
});
|
||||
const result = response.data.items.map((item) => ({
|
||||
title: item.snippet.title,
|
||||
description: item.snippet.description,
|
||||
url: `https://www.youtube.com/watch?v=${item.id.videoId}`,
|
||||
}));
|
||||
return JSON.stringify(result, null, 2);
|
||||
},
|
||||
{
|
||||
name: 'youtube_search',
|
||||
description: `Search for YouTube videos by keyword or phrase.
|
||||
- Required: query (search terms to find videos)
|
||||
- Optional: maxResults (number of videos to return, 1-50, default: 5)
|
||||
- Returns: List of videos with titles, descriptions, and URLs
|
||||
- Use for: Finding specific videos, exploring content, research
|
||||
Example: query="cooking pasta tutorials" maxResults=3`,
|
||||
schema: z.object({
|
||||
query: z.string().describe('Search query terms'),
|
||||
maxResults: z.number().int().min(1).max(50).optional().describe('Number of results (1-50)'),
|
||||
}),
|
||||
},
|
||||
);
|
||||
|
||||
const infoTool = tool(async ({ url }) => {
|
||||
const videoId = extractVideoId(url);
|
||||
if (!videoId) {
|
||||
throw new Error('Invalid YouTube URL or video ID');
|
||||
}
|
||||
const infoTool = tool(
|
||||
async ({ url }) => {
|
||||
const videoId = extractVideoId(url);
|
||||
if (!videoId) {
|
||||
throw new Error('Invalid YouTube URL or video ID');
|
||||
}
|
||||
|
||||
const response = await youtubeClient.videos.list({
|
||||
part: 'snippet,statistics',
|
||||
id: videoId,
|
||||
});
|
||||
const response = await youtubeClient.videos.list({
|
||||
part: 'snippet,statistics',
|
||||
id: videoId,
|
||||
});
|
||||
|
||||
if (!response.data.items?.length) {
|
||||
throw new Error('Video not found');
|
||||
}
|
||||
const video = response.data.items[0];
|
||||
if (!response.data.items?.length) {
|
||||
throw new Error('Video not found');
|
||||
}
|
||||
const video = response.data.items[0];
|
||||
|
||||
const result = {
|
||||
title: video.snippet.title,
|
||||
description: video.snippet.description,
|
||||
views: video.statistics.viewCount,
|
||||
likes: video.statistics.likeCount,
|
||||
comments: video.statistics.commentCount,
|
||||
};
|
||||
return JSON.stringify(result, null, 2);
|
||||
}, ytToolkit.youtube_info);
|
||||
const result = {
|
||||
title: video.snippet.title,
|
||||
description: video.snippet.description,
|
||||
views: video.statistics.viewCount,
|
||||
likes: video.statistics.likeCount,
|
||||
comments: video.statistics.commentCount,
|
||||
};
|
||||
return JSON.stringify(result, null, 2);
|
||||
},
|
||||
{
|
||||
name: 'youtube_info',
|
||||
description: `Get detailed metadata and statistics for a specific YouTube video.
|
||||
- Required: url (full YouTube URL or video ID)
|
||||
- Returns: Video title, description, view count, like count, comment count
|
||||
- Use for: Getting video metrics and basic metadata
|
||||
- DO NOT USE FOR VIDEO SUMMARIES, USE TRANSCRIPTS FOR COMPREHENSIVE ANALYSIS
|
||||
- Accepts both full URLs and video IDs
|
||||
Example: url="https://youtube.com/watch?v=abc123" or url="abc123"`,
|
||||
schema: z.object({
|
||||
url: z.string().describe('YouTube video URL or ID'),
|
||||
}),
|
||||
},
|
||||
);
|
||||
|
||||
const commentsTool = tool(async ({ url, maxResults = 10 }) => {
|
||||
const videoId = extractVideoId(url);
|
||||
if (!videoId) {
|
||||
throw new Error('Invalid YouTube URL or video ID');
|
||||
}
|
||||
const commentsTool = tool(
|
||||
async ({ url, maxResults = 10 }) => {
|
||||
const videoId = extractVideoId(url);
|
||||
if (!videoId) {
|
||||
throw new Error('Invalid YouTube URL or video ID');
|
||||
}
|
||||
|
||||
const response = await youtubeClient.commentThreads.list({
|
||||
part: 'snippet',
|
||||
videoId,
|
||||
maxResults: maxResults || 10,
|
||||
});
|
||||
const response = await youtubeClient.commentThreads.list({
|
||||
part: 'snippet',
|
||||
videoId,
|
||||
maxResults: maxResults || 10,
|
||||
});
|
||||
|
||||
const result = response.data.items.map((item) => ({
|
||||
author: item.snippet.topLevelComment.snippet.authorDisplayName,
|
||||
text: item.snippet.topLevelComment.snippet.textDisplay,
|
||||
likes: item.snippet.topLevelComment.snippet.likeCount,
|
||||
}));
|
||||
return JSON.stringify(result, null, 2);
|
||||
}, ytToolkit.youtube_comments);
|
||||
const result = response.data.items.map((item) => ({
|
||||
author: item.snippet.topLevelComment.snippet.authorDisplayName,
|
||||
text: item.snippet.topLevelComment.snippet.textDisplay,
|
||||
likes: item.snippet.topLevelComment.snippet.likeCount,
|
||||
}));
|
||||
return JSON.stringify(result, null, 2);
|
||||
},
|
||||
{
|
||||
name: 'youtube_comments',
|
||||
description: `Retrieve top-level comments from a YouTube video.
|
||||
- Required: url (full YouTube URL or video ID)
|
||||
- Optional: maxResults (number of comments, 1-50, default: 10)
|
||||
- Returns: Comment text, author names, like counts
|
||||
- Use for: Sentiment analysis, audience feedback, engagement review
|
||||
Example: url="abc123" maxResults=20`,
|
||||
schema: z.object({
|
||||
url: z.string().describe('YouTube video URL or ID'),
|
||||
maxResults: z
|
||||
.number()
|
||||
.int()
|
||||
.min(1)
|
||||
.max(50)
|
||||
.optional()
|
||||
.describe('Number of comments to retrieve'),
|
||||
}),
|
||||
},
|
||||
);
|
||||
|
||||
const transcriptTool = tool(async ({ url }) => {
|
||||
const videoId = extractVideoId(url);
|
||||
if (!videoId) {
|
||||
throw new Error('Invalid YouTube URL or video ID');
|
||||
}
|
||||
|
||||
try {
|
||||
try {
|
||||
const transcript = await YoutubeTranscript.fetchTranscript(videoId, { lang: 'en' });
|
||||
return parseTranscript(transcript);
|
||||
} catch (e) {
|
||||
logger.error(e);
|
||||
const transcriptTool = tool(
|
||||
async ({ url }) => {
|
||||
const videoId = extractVideoId(url);
|
||||
if (!videoId) {
|
||||
throw new Error('Invalid YouTube URL or video ID');
|
||||
}
|
||||
|
||||
try {
|
||||
const transcript = await YoutubeTranscript.fetchTranscript(videoId, { lang: 'de' });
|
||||
return parseTranscript(transcript);
|
||||
} catch (e) {
|
||||
logger.error(e);
|
||||
}
|
||||
try {
|
||||
const transcript = await YoutubeTranscript.fetchTranscript(videoId, { lang: 'en' });
|
||||
return parseTranscript(transcript);
|
||||
} catch (e) {
|
||||
logger.error(e);
|
||||
}
|
||||
|
||||
const transcript = await YoutubeTranscript.fetchTranscript(videoId);
|
||||
return parseTranscript(transcript);
|
||||
} catch (error) {
|
||||
throw new Error(`Failed to fetch transcript: ${error.message}`);
|
||||
}
|
||||
}, ytToolkit.youtube_transcript);
|
||||
try {
|
||||
const transcript = await YoutubeTranscript.fetchTranscript(videoId, { lang: 'de' });
|
||||
return parseTranscript(transcript);
|
||||
} catch (e) {
|
||||
logger.error(e);
|
||||
}
|
||||
|
||||
const transcript = await YoutubeTranscript.fetchTranscript(videoId);
|
||||
return parseTranscript(transcript);
|
||||
} catch (error) {
|
||||
throw new Error(`Failed to fetch transcript: ${error.message}`);
|
||||
}
|
||||
},
|
||||
{
|
||||
name: 'youtube_transcript',
|
||||
description: `Fetch and parse the transcript/captions of a YouTube video.
|
||||
- Required: url (full YouTube URL or video ID)
|
||||
- Returns: Full video transcript as plain text
|
||||
- Use for: Content analysis, summarization, translation reference
|
||||
- This is the "Go-to" tool for analyzing actual video content
|
||||
- Attempts to fetch English first, then German, then any available language
|
||||
Example: url="https://youtube.com/watch?v=abc123"`,
|
||||
schema: z.object({
|
||||
url: z.string().describe('YouTube video URL or ID'),
|
||||
}),
|
||||
},
|
||||
);
|
||||
|
||||
return [searchTool, infoTool, commentsTool, transcriptTool];
|
||||
}
|
||||
|
||||
@@ -1,9 +1,43 @@
|
||||
const DALLE3 = require('../DALLE3');
|
||||
const { ProxyAgent } = require('undici');
|
||||
|
||||
jest.mock('tiktoken');
|
||||
const processFileURL = jest.fn();
|
||||
|
||||
jest.mock('~/server/services/Files/images', () => ({
|
||||
getImageBasename: jest.fn().mockImplementation((url) => {
|
||||
const parts = url.split('/');
|
||||
const lastPart = parts.pop();
|
||||
const imageExtensionRegex = /\.(jpg|jpeg|png|gif|bmp|tiff|svg)$/i;
|
||||
if (imageExtensionRegex.test(lastPart)) {
|
||||
return lastPart;
|
||||
}
|
||||
return '';
|
||||
}),
|
||||
}));
|
||||
|
||||
jest.mock('fs', () => {
|
||||
return {
|
||||
existsSync: jest.fn(),
|
||||
mkdirSync: jest.fn(),
|
||||
promises: {
|
||||
writeFile: jest.fn(),
|
||||
readFile: jest.fn(),
|
||||
unlink: jest.fn(),
|
||||
},
|
||||
};
|
||||
});
|
||||
|
||||
jest.mock('path', () => {
|
||||
return {
|
||||
resolve: jest.fn(),
|
||||
join: jest.fn(),
|
||||
relative: jest.fn(),
|
||||
extname: jest.fn().mockImplementation((filename) => {
|
||||
return filename.slice(filename.lastIndexOf('.'));
|
||||
}),
|
||||
};
|
||||
});
|
||||
|
||||
describe('DALLE3 Proxy Configuration', () => {
|
||||
let originalEnv;
|
||||
|
||||
|
||||
@@ -1,8 +1,9 @@
|
||||
const OpenAI = require('openai');
|
||||
const { logger } = require('@librechat/data-schemas');
|
||||
const DALLE3 = require('../DALLE3');
|
||||
const logger = require('~/config/winston');
|
||||
|
||||
jest.mock('openai');
|
||||
|
||||
jest.mock('@librechat/data-schemas', () => {
|
||||
return {
|
||||
logger: {
|
||||
@@ -25,6 +26,25 @@ jest.mock('tiktoken', () => {
|
||||
|
||||
const processFileURL = jest.fn();
|
||||
|
||||
jest.mock('~/server/services/Files/images', () => ({
|
||||
getImageBasename: jest.fn().mockImplementation((url) => {
|
||||
// Split the URL by '/'
|
||||
const parts = url.split('/');
|
||||
|
||||
// Get the last part of the URL
|
||||
const lastPart = parts.pop();
|
||||
|
||||
// Check if the last part of the URL matches the image extension regex
|
||||
const imageExtensionRegex = /\.(jpg|jpeg|png|gif|bmp|tiff|svg)$/i;
|
||||
if (imageExtensionRegex.test(lastPart)) {
|
||||
return lastPart;
|
||||
}
|
||||
|
||||
// If the regex test fails, return an empty string
|
||||
return '';
|
||||
}),
|
||||
}));
|
||||
|
||||
const generate = jest.fn();
|
||||
OpenAI.mockImplementation(() => ({
|
||||
images: {
|
||||
|
||||
@@ -3,7 +3,7 @@ const { SerpAPI } = require('@langchain/community/tools/serpapi');
|
||||
const { Calculator } = require('@langchain/community/tools/calculator');
|
||||
const { mcpToolPattern, loadWebSearchAuth } = require('@librechat/api');
|
||||
const { EnvVar, createCodeExecutionTool, createSearchTool } = require('@librechat/agents');
|
||||
const { Tools, EToolResources, replaceSpecialVars } = require('librechat-data-provider');
|
||||
const { Tools, Constants, EToolResources, replaceSpecialVars } = require('librechat-data-provider');
|
||||
const {
|
||||
availableTools,
|
||||
manifestToolMap,
|
||||
@@ -24,9 +24,9 @@ const {
|
||||
const { primeFiles: primeCodeFiles } = require('~/server/services/Files/Code/process');
|
||||
const { createFileSearchTool, primeFiles: primeSearchFiles } = require('./fileSearch');
|
||||
const { getUserPluginAuthValue } = require('~/server/services/PluginService');
|
||||
const { createMCPTool, createMCPTools } = require('~/server/services/MCP');
|
||||
const { loadAuthValues } = require('~/server/services/Tools/credentials');
|
||||
const { getCachedTools } = require('~/server/services/Config');
|
||||
const { createMCPTool } = require('~/server/services/MCP');
|
||||
|
||||
/**
|
||||
* Validates the availability and authentication of tools for a user based on environment variables or user-specific plugin authentication values.
|
||||
@@ -121,33 +121,31 @@ const getAuthFields = (toolKey) => {
|
||||
|
||||
/**
|
||||
*
|
||||
* @param {object} params
|
||||
* @param {string} params.user
|
||||
* @param {Pick<Agent, 'id' | 'provider' | 'model'>} [params.agent]
|
||||
* @param {string} [params.model]
|
||||
* @param {EModelEndpoint} [params.endpoint]
|
||||
* @param {LoadToolOptions} [params.options]
|
||||
* @param {boolean} [params.useSpecs]
|
||||
* @param {Array<string>} params.tools
|
||||
* @param {boolean} [params.functions]
|
||||
* @param {boolean} [params.returnMap]
|
||||
* @param {AppConfig['webSearch']} [params.webSearch]
|
||||
* @param {AppConfig['fileStrategy']} [params.fileStrategy]
|
||||
* @param {AppConfig['imageOutputType']} [params.imageOutputType]
|
||||
* @param {object} object
|
||||
* @param {string} object.user
|
||||
* @param {Record<string, Record<string, string>>} [object.userMCPAuthMap]
|
||||
* @param {AbortSignal} [object.signal]
|
||||
* @param {Pick<Agent, 'id' | 'provider' | 'model'>} [object.agent]
|
||||
* @param {string} [object.model]
|
||||
* @param {EModelEndpoint} [object.endpoint]
|
||||
* @param {LoadToolOptions} [object.options]
|
||||
* @param {boolean} [object.useSpecs]
|
||||
* @param {Array<string>} object.tools
|
||||
* @param {boolean} [object.functions]
|
||||
* @param {boolean} [object.returnMap]
|
||||
* @returns {Promise<{ loadedTools: Tool[], toolContextMap: Object<string, any> } | Record<string,Tool>>}
|
||||
*/
|
||||
const loadTools = async ({
|
||||
user,
|
||||
agent,
|
||||
model,
|
||||
signal,
|
||||
endpoint,
|
||||
userMCPAuthMap,
|
||||
tools = [],
|
||||
options = {},
|
||||
functions = true,
|
||||
returnMap = false,
|
||||
webSearch,
|
||||
fileStrategy,
|
||||
imageOutputType,
|
||||
}) => {
|
||||
const toolConstructors = {
|
||||
flux: FluxAPI,
|
||||
@@ -206,8 +204,6 @@ const loadTools = async ({
|
||||
...authValues,
|
||||
isAgent: !!agent,
|
||||
req: options.req,
|
||||
imageOutputType,
|
||||
fileStrategy,
|
||||
imageFiles,
|
||||
});
|
||||
},
|
||||
@@ -223,7 +219,7 @@ const loadTools = async ({
|
||||
const imageGenOptions = {
|
||||
isAgent: !!agent,
|
||||
req: options.req,
|
||||
fileStrategy,
|
||||
fileStrategy: options.fileStrategy,
|
||||
processFileURL: options.processFileURL,
|
||||
returnMetadata: options.returnMetadata,
|
||||
uploadImageBuffer: options.uploadImageBuffer,
|
||||
@@ -239,6 +235,7 @@ const loadTools = async ({
|
||||
/** @type {Record<string, string>} */
|
||||
const toolContextMap = {};
|
||||
const cachedTools = (await getCachedTools({ userId: user, includeGlobal: true })) ?? {};
|
||||
const requestedMCPTools = {};
|
||||
|
||||
for (const tool of tools) {
|
||||
if (tool === Tools.execute_code) {
|
||||
@@ -280,10 +277,11 @@ const loadTools = async ({
|
||||
};
|
||||
continue;
|
||||
} else if (tool === Tools.web_search) {
|
||||
const webSearchConfig = options?.req?.app?.locals?.webSearch;
|
||||
const result = await loadWebSearchAuth({
|
||||
userId: user,
|
||||
loadAuthValues,
|
||||
webSearchConfig: webSearch,
|
||||
webSearchConfig,
|
||||
});
|
||||
const { onSearchResults, onGetHighlights } = options?.[Tools.web_search] ?? {};
|
||||
requestedTools[tool] = async () => {
|
||||
@@ -306,14 +304,35 @@ Current Date & Time: ${replaceSpecialVars({ text: '{{iso_datetime}}' })}
|
||||
};
|
||||
continue;
|
||||
} else if (tool && cachedTools && mcpToolPattern.test(tool)) {
|
||||
requestedTools[tool] = async () =>
|
||||
const [toolName, serverName] = tool.split(Constants.mcp_delimiter);
|
||||
if (toolName === Constants.mcp_all) {
|
||||
const currentMCPGenerator = async (index) =>
|
||||
createMCPTools({
|
||||
req: options.req,
|
||||
res: options.res,
|
||||
index,
|
||||
serverName,
|
||||
userMCPAuthMap,
|
||||
model: agent?.model ?? model,
|
||||
provider: agent?.provider ?? endpoint,
|
||||
signal,
|
||||
});
|
||||
requestedMCPTools[serverName] = [currentMCPGenerator];
|
||||
continue;
|
||||
}
|
||||
const currentMCPGenerator = async (index) =>
|
||||
createMCPTool({
|
||||
index,
|
||||
req: options.req,
|
||||
res: options.res,
|
||||
toolKey: tool,
|
||||
userMCPAuthMap,
|
||||
model: agent?.model ?? model,
|
||||
provider: agent?.provider ?? endpoint,
|
||||
signal,
|
||||
});
|
||||
requestedMCPTools[serverName] = requestedMCPTools[serverName] || [];
|
||||
requestedMCPTools[serverName].push(currentMCPGenerator);
|
||||
continue;
|
||||
}
|
||||
|
||||
@@ -353,6 +372,34 @@ Current Date & Time: ${replaceSpecialVars({ text: '{{iso_datetime}}' })}
|
||||
}
|
||||
|
||||
const loadedTools = (await Promise.all(toolPromises)).flatMap((plugin) => plugin || []);
|
||||
const mcpToolPromises = [];
|
||||
/** MCP server tools are initialized sequentially by server */
|
||||
let index = -1;
|
||||
for (const [serverName, generators] of Object.entries(requestedMCPTools)) {
|
||||
index++;
|
||||
for (const generator of generators) {
|
||||
try {
|
||||
if (generator && generators.length === 1) {
|
||||
mcpToolPromises.push(
|
||||
generator(index).catch((error) => {
|
||||
logger.error(`Error loading ${serverName} tools:`, error);
|
||||
return null;
|
||||
}),
|
||||
);
|
||||
continue;
|
||||
}
|
||||
const mcpTool = await generator(index);
|
||||
if (Array.isArray(mcpTool)) {
|
||||
loadedTools.push(...mcpTool);
|
||||
} else if (mcpTool) {
|
||||
loadedTools.push(mcpTool);
|
||||
}
|
||||
} catch (error) {
|
||||
logger.error(`Error loading MCP tool for server ${serverName}:`, error);
|
||||
}
|
||||
}
|
||||
}
|
||||
loadedTools.push(...(await Promise.all(mcpToolPromises)).flatMap((plugin) => plugin || []));
|
||||
return { loadedTools, toolContextMap };
|
||||
};
|
||||
|
||||
|
||||
@@ -9,27 +9,6 @@ const mockPluginService = {
|
||||
|
||||
jest.mock('~/server/services/PluginService', () => mockPluginService);
|
||||
|
||||
jest.mock('~/server/services/Config', () => ({
|
||||
getAppConfig: jest.fn().mockResolvedValue({
|
||||
// Default app config for tool tests
|
||||
paths: { uploads: '/tmp' },
|
||||
fileStrategy: 'local',
|
||||
filteredTools: [],
|
||||
includedTools: [],
|
||||
}),
|
||||
getCachedTools: jest.fn().mockResolvedValue({
|
||||
// Default cached tools for tests
|
||||
dalle: {
|
||||
type: 'function',
|
||||
function: {
|
||||
name: 'dalle',
|
||||
description: 'DALL-E image generation',
|
||||
parameters: {},
|
||||
},
|
||||
},
|
||||
}),
|
||||
}));
|
||||
|
||||
const { BaseLLM } = require('@langchain/openai');
|
||||
const { Calculator } = require('@langchain/community/tools/calculator');
|
||||
|
||||
|
||||
api/cache/getLogStores.js (vendored, 1 line changed)
@@ -31,7 +31,6 @@ const namespaces = {
|
||||
[CacheKeys.SAML_SESSION]: sessionCache(CacheKeys.SAML_SESSION),
|
||||
|
||||
[CacheKeys.ROLES]: standardCache(CacheKeys.ROLES),
|
||||
[CacheKeys.MCP_TOOLS]: standardCache(CacheKeys.MCP_TOOLS),
|
||||
[CacheKeys.CONFIG_STORE]: standardCache(CacheKeys.CONFIG_STORE),
|
||||
[CacheKeys.STATIC_CONFIG]: standardCache(CacheKeys.STATIC_CONFIG),
|
||||
[CacheKeys.PENDING_REQ]: standardCache(CacheKeys.PENDING_REQ),
|
||||
|
||||
@@ -2,7 +2,7 @@ const mongoose = require('mongoose');
|
||||
const crypto = require('node:crypto');
|
||||
const { logger } = require('@librechat/data-schemas');
|
||||
const { ResourceType, SystemRoles, Tools, actionDelimiter } = require('librechat-data-provider');
|
||||
const { GLOBAL_PROJECT_NAME, EPHEMERAL_AGENT_ID, mcp_delimiter } =
|
||||
const { GLOBAL_PROJECT_NAME, EPHEMERAL_AGENT_ID, mcp_all, mcp_delimiter } =
|
||||
require('librechat-data-provider').Constants;
|
||||
const {
|
||||
removeAgentFromAllProjects,
|
||||
@@ -78,6 +78,7 @@ const loadEphemeralAgent = async ({ req, agent_id, endpoint, model_parameters: _
|
||||
tools.push(Tools.web_search);
|
||||
}
|
||||
|
||||
const addedServers = new Set();
|
||||
if (mcpServers.size > 0) {
|
||||
for (const toolName of Object.keys(availableTools)) {
|
||||
if (!toolName.includes(mcp_delimiter)) {
|
||||
@@ -85,9 +86,17 @@ const loadEphemeralAgent = async ({ req, agent_id, endpoint, model_parameters: _
|
||||
}
|
||||
const mcpServer = toolName.split(mcp_delimiter)?.[1];
|
||||
if (mcpServer && mcpServers.has(mcpServer)) {
|
||||
addedServers.add(mcpServer);
|
||||
tools.push(toolName);
|
||||
}
|
||||
}
|
||||
|
||||
for (const mcpServer of mcpServers) {
|
||||
if (addedServers.has(mcpServer)) {
|
||||
continue;
|
||||
}
|
||||
tools.push(`${mcp_all}${mcp_delimiter}${mcpServer}`);
|
||||
}
|
||||
}
|
||||
|
||||
const instructions = req.body.promptPrefix;
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
const { logger } = require('@librechat/data-schemas');
|
||||
const { createTempChatExpirationDate } = require('@librechat/api');
|
||||
const { getAppConfig } = require('~/server/services/Config/app');
|
||||
const { getCustomConfig } = require('~/server/services/Config/getCustomConfig');
|
||||
const { getMessages, deleteMessages } = require('./Message');
|
||||
const { Conversation } = require('~/db/models');
|
||||
|
||||
@@ -102,10 +102,8 @@ module.exports = {
|
||||
|
||||
if (req?.body?.isTemporary) {
|
||||
try {
|
||||
const appConfig = await getAppConfig({
|
||||
role: req.user.role,
|
||||
});
|
||||
update.expiredAt = createTempChatExpirationDate(appConfig?.interfaceConfig);
|
||||
const customConfig = await getCustomConfig();
|
||||
update.expiredAt = createTempChatExpirationDate(customConfig);
|
||||
} catch (err) {
|
||||
logger.error('Error creating temporary chat expiration date:', err);
|
||||
logger.info(`---\`saveConvo\` context: ${metadata?.context}`);
|
||||
|
||||
@@ -13,9 +13,9 @@ const {
|
||||
saveConvo,
|
||||
getConvo,
|
||||
} = require('./Conversation');
|
||||
jest.mock('~/server/services/Config/app');
|
||||
jest.mock('~/server/services/Config/getCustomConfig');
|
||||
jest.mock('./Message');
|
||||
const { getAppConfig } = require('~/server/services/Config/app');
|
||||
const { getCustomConfig } = require('~/server/services/Config/getCustomConfig');
|
||||
const { getMessages, deleteMessages } = require('./Message');
|
||||
|
||||
const { Conversation } = require('~/db/models');
|
||||
@@ -118,9 +118,9 @@ describe('Conversation Operations', () => {
|
||||
|
||||
describe('isTemporary conversation handling', () => {
|
||||
it('should save a conversation with expiredAt when isTemporary is true', async () => {
|
||||
// Mock app config with 24 hour retention
|
||||
getAppConfig.mockResolvedValue({
|
||||
interfaceConfig: {
|
||||
// Mock custom config with 24 hour retention
|
||||
getCustomConfig.mockResolvedValue({
|
||||
interface: {
|
||||
temporaryChatRetention: 24,
|
||||
},
|
||||
});
|
||||
@@ -167,9 +167,9 @@ describe('Conversation Operations', () => {
|
||||
});
|
||||
|
||||
it('should use custom retention period from config', async () => {
|
||||
// Mock app config with 48 hour retention
|
||||
getAppConfig.mockResolvedValue({
|
||||
interfaceConfig: {
|
||||
// Mock custom config with 48 hour retention
|
||||
getCustomConfig.mockResolvedValue({
|
||||
interface: {
|
||||
temporaryChatRetention: 48,
|
||||
},
|
||||
});
|
||||
@@ -194,9 +194,9 @@ describe('Conversation Operations', () => {
|
||||
});
|
||||
|
||||
it('should handle minimum retention period (1 hour)', async () => {
|
||||
// Mock app config with less than minimum retention
|
||||
getAppConfig.mockResolvedValue({
|
||||
interfaceConfig: {
|
||||
// Mock custom config with less than minimum retention
|
||||
getCustomConfig.mockResolvedValue({
|
||||
interface: {
|
||||
temporaryChatRetention: 0.5, // Half hour - should be clamped to 1 hour
|
||||
},
|
||||
});
|
||||
@@ -221,9 +221,9 @@ describe('Conversation Operations', () => {
|
||||
});
|
||||
|
||||
it('should handle maximum retention period (8760 hours)', async () => {
|
||||
// Mock app config with more than maximum retention
|
||||
getAppConfig.mockResolvedValue({
|
||||
interfaceConfig: {
|
||||
// Mock custom config with more than maximum retention
|
||||
getCustomConfig.mockResolvedValue({
|
||||
interface: {
|
||||
temporaryChatRetention: 10000, // Should be clamped to 8760 hours
|
||||
},
|
||||
});
|
||||
@@ -247,9 +247,9 @@ describe('Conversation Operations', () => {
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle getAppConfig errors gracefully', async () => {
|
||||
// Mock getAppConfig to throw an error
|
||||
getAppConfig.mockRejectedValue(new Error('Config service unavailable'));
|
||||
it('should handle getCustomConfig errors gracefully', async () => {
|
||||
// Mock getCustomConfig to throw an error
|
||||
getCustomConfig.mockRejectedValue(new Error('Config service unavailable'));
|
||||
|
||||
mockReq.body = { isTemporary: true };
|
||||
|
||||
@@ -261,8 +261,8 @@ describe('Conversation Operations', () => {
|
||||
});
|
||||
|
||||
it('should use default retention when config is not provided', async () => {
|
||||
// Mock getAppConfig to return empty config
|
||||
getAppConfig.mockResolvedValue({});
|
||||
// Mock getCustomConfig to return empty config
|
||||
getCustomConfig.mockResolvedValue({});
|
||||
|
||||
mockReq.body = { isTemporary: true };
|
||||
|
||||
@@ -285,8 +285,8 @@ describe('Conversation Operations', () => {
|
||||
|
||||
it('should update expiredAt when saving existing temporary conversation', async () => {
|
||||
// First save a temporary conversation
|
||||
getAppConfig.mockResolvedValue({
|
||||
interfaceConfig: {
|
||||
getCustomConfig.mockResolvedValue({
|
||||
interface: {
|
||||
temporaryChatRetention: 24,
|
||||
},
|
||||
});
|
||||
|
||||
@@ -1,7 +1,7 @@
const { z } = require('zod');
const { logger } = require('@librechat/data-schemas');
const { createTempChatExpirationDate } = require('@librechat/api');
const { getAppConfig } = require('~/server/services/Config/app');
const { getCustomConfig } = require('~/server/services/Config/getCustomConfig');
const { Message } = require('~/db/models');

const idSchema = z.string().uuid();
@@ -57,10 +57,8 @@ async function saveMessage(req, params, metadata) {

if (req?.body?.isTemporary) {
try {
const appConfig = await getAppConfig({
role: req.user.role,
});
update.expiredAt = createTempChatExpirationDate(appConfig?.interfaceConfig);
const customConfig = await getCustomConfig();
update.expiredAt = createTempChatExpirationDate(customConfig);
} catch (err) {
logger.error('Error creating temporary chat expiration date:', err);
logger.info(`---\`saveMessage\` context: ${metadata?.context}`);
@@ -13,8 +13,8 @@ const {
|
||||
deleteMessagesSince,
|
||||
} = require('./Message');
|
||||
|
||||
jest.mock('~/server/services/Config/app');
|
||||
const { getAppConfig } = require('~/server/services/Config/app');
|
||||
jest.mock('~/server/services/Config/getCustomConfig');
|
||||
const { getCustomConfig } = require('~/server/services/Config/getCustomConfig');
|
||||
|
||||
/**
|
||||
* @type {import('mongoose').Model<import('@librechat/data-schemas').IMessage>}
|
||||
@@ -326,9 +326,9 @@ describe('Message Operations', () => {
|
||||
});
|
||||
|
||||
it('should save a message with expiredAt when isTemporary is true', async () => {
|
||||
// Mock app config with 24 hour retention
|
||||
getAppConfig.mockResolvedValue({
|
||||
interfaceConfig: {
|
||||
// Mock custom config with 24 hour retention
|
||||
getCustomConfig.mockResolvedValue({
|
||||
interface: {
|
||||
temporaryChatRetention: 24,
|
||||
},
|
||||
});
|
||||
@@ -375,9 +375,9 @@ describe('Message Operations', () => {
|
||||
});
|
||||
|
||||
it('should use custom retention period from config', async () => {
|
||||
// Mock app config with 48 hour retention
|
||||
getAppConfig.mockResolvedValue({
|
||||
interfaceConfig: {
|
||||
// Mock custom config with 48 hour retention
|
||||
getCustomConfig.mockResolvedValue({
|
||||
interface: {
|
||||
temporaryChatRetention: 48,
|
||||
},
|
||||
});
|
||||
@@ -402,9 +402,9 @@ describe('Message Operations', () => {
|
||||
});
|
||||
|
||||
it('should handle minimum retention period (1 hour)', async () => {
|
||||
// Mock app config with less than minimum retention
|
||||
getAppConfig.mockResolvedValue({
|
||||
interfaceConfig: {
|
||||
// Mock custom config with less than minimum retention
|
||||
getCustomConfig.mockResolvedValue({
|
||||
interface: {
|
||||
temporaryChatRetention: 0.5, // Half hour - should be clamped to 1 hour
|
||||
},
|
||||
});
|
||||
@@ -429,9 +429,9 @@ describe('Message Operations', () => {
|
||||
});
|
||||
|
||||
it('should handle maximum retention period (8760 hours)', async () => {
|
||||
// Mock app config with more than maximum retention
|
||||
getAppConfig.mockResolvedValue({
|
||||
interfaceConfig: {
|
||||
// Mock custom config with more than maximum retention
|
||||
getCustomConfig.mockResolvedValue({
|
||||
interface: {
|
||||
temporaryChatRetention: 10000, // Should be clamped to 8760 hours
|
||||
},
|
||||
});
|
||||
@@ -455,9 +455,9 @@ describe('Message Operations', () => {
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle getAppConfig errors gracefully', async () => {
|
||||
// Mock getAppConfig to throw an error
|
||||
getAppConfig.mockRejectedValue(new Error('Config service unavailable'));
|
||||
it('should handle getCustomConfig errors gracefully', async () => {
|
||||
// Mock getCustomConfig to throw an error
|
||||
getCustomConfig.mockRejectedValue(new Error('Config service unavailable'));
|
||||
|
||||
mockReq.body = { isTemporary: true };
|
||||
|
||||
@@ -469,8 +469,8 @@ describe('Message Operations', () => {
|
||||
});
|
||||
|
||||
it('should use default retention when config is not provided', async () => {
|
||||
// Mock getAppConfig to return empty config
|
||||
getAppConfig.mockResolvedValue({});
|
||||
// Mock getCustomConfig to return empty config
|
||||
getCustomConfig.mockResolvedValue({});
|
||||
|
||||
mockReq.body = { isTemporary: true };
|
||||
|
||||
@@ -493,8 +493,8 @@ describe('Message Operations', () => {
|
||||
|
||||
it('should not update expiredAt on message update', async () => {
|
||||
// First save a temporary message
|
||||
getAppConfig.mockResolvedValue({
|
||||
interfaceConfig: {
|
||||
getCustomConfig.mockResolvedValue({
|
||||
interface: {
|
||||
temporaryChatRetention: 24,
|
||||
},
|
||||
});
|
||||
@@ -520,8 +520,8 @@ describe('Message Operations', () => {
|
||||
|
||||
it('should preserve expiredAt when saving existing temporary message', async () => {
|
||||
// First save a temporary message
|
||||
getAppConfig.mockResolvedValue({
|
||||
interfaceConfig: {
|
||||
getCustomConfig.mockResolvedValue({
|
||||
interface: {
|
||||
temporaryChatRetention: 24,
|
||||
},
|
||||
});
|
||||
|
||||
@@ -1,4 +1,5 @@
const { logger } = require('@librechat/data-schemas');
const { getBalanceConfig } = require('~/server/services/Config');
const { getMultiplier, getCacheMultiplier } = require('./tx');
const { Transaction, Balance } = require('~/db/models');

@@ -186,10 +187,9 @@ async function createAutoRefillTransaction(txData) {

/**
* Static method to create a transaction and update the balance
* @param {txData} _txData - Transaction data.
* @param {txData} txData - Transaction data.
*/
async function createTransaction(_txData) {
const { balance, ...txData } = _txData;
async function createTransaction(txData) {
if (txData.rawAmount != null && isNaN(txData.rawAmount)) {
return;
}
@@ -199,6 +199,8 @@ async function createTransaction(_txData) {
calculateTokenValue(transaction);

await transaction.save();

const balance = await getBalanceConfig();
if (!balance?.enabled) {
return;
}
@@ -219,10 +221,9 @@ async function createTransaction(_txData) {

/**
* Static method to create a structured transaction and update the balance
* @param {txData} _txData - Transaction data.
* @param {txData} txData - Transaction data.
*/
async function createStructuredTransaction(_txData) {
const { balance, ...txData } = _txData;
async function createStructuredTransaction(txData) {
const transaction = new Transaction({
...txData,
endpointTokenConfig: txData.endpointTokenConfig,
@@ -232,6 +233,7 @@ async function createStructuredTransaction(_txData) {

await transaction.save();

const balance = await getBalanceConfig();
if (!balance?.enabled) {
return;
}
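With this change, `createTransaction` and `createStructuredTransaction` no longer read a `balance` flag out of `txData`; after saving the transaction they consult `getBalanceConfig()` themselves. A sketch of the new call shape, with illustrative field values:

```js
// Callers now omit `balance` from txData; whether the user's balance is updated
// is decided inside createTransaction via getBalanceConfig().
await createTransaction({
  user: userId,
  conversationId: 'test-convo',
  model: 'gpt-4',
  context: 'message',
  tokenType: 'completion',
  rawAmount: -100,
});
```

This is why the specs below drop `balance: { enabled: true }` from their fixtures and instead mock `getBalanceConfig`.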
@@ -1,11 +1,14 @@
|
||||
const mongoose = require('mongoose');
|
||||
const { MongoMemoryServer } = require('mongodb-memory-server');
|
||||
const { spendTokens, spendStructuredTokens } = require('./spendTokens');
|
||||
|
||||
const { getBalanceConfig } = require('~/server/services/Config');
|
||||
const { getMultiplier, getCacheMultiplier } = require('./tx');
|
||||
const { createTransaction } = require('./Transaction');
|
||||
const { Balance } = require('~/db/models');
|
||||
|
||||
// Mock the custom config module so we can control the balance flag.
|
||||
jest.mock('~/server/services/Config');
|
||||
|
||||
let mongoServer;
|
||||
beforeAll(async () => {
|
||||
mongoServer = await MongoMemoryServer.create();
|
||||
@@ -20,6 +23,8 @@ afterAll(async () => {
|
||||
|
||||
beforeEach(async () => {
|
||||
await mongoose.connection.dropDatabase();
|
||||
// Default: enable balance updates in tests.
|
||||
getBalanceConfig.mockResolvedValue({ enabled: true });
|
||||
});
|
||||
|
||||
describe('Regular Token Spending Tests', () => {
|
||||
@@ -36,7 +41,6 @@ describe('Regular Token Spending Tests', () => {
|
||||
model,
|
||||
context: 'test',
|
||||
endpointTokenConfig: null,
|
||||
balance: { enabled: true },
|
||||
};
|
||||
|
||||
const tokenUsage = {
|
||||
@@ -70,7 +74,6 @@ describe('Regular Token Spending Tests', () => {
|
||||
model,
|
||||
context: 'test',
|
||||
endpointTokenConfig: null,
|
||||
balance: { enabled: true },
|
||||
};
|
||||
|
||||
const tokenUsage = {
|
||||
@@ -101,7 +104,6 @@ describe('Regular Token Spending Tests', () => {
|
||||
model,
|
||||
context: 'test',
|
||||
endpointTokenConfig: null,
|
||||
balance: { enabled: true },
|
||||
};
|
||||
|
||||
const tokenUsage = {};
|
||||
@@ -126,7 +128,6 @@ describe('Regular Token Spending Tests', () => {
|
||||
model,
|
||||
context: 'test',
|
||||
endpointTokenConfig: null,
|
||||
balance: { enabled: true },
|
||||
};
|
||||
|
||||
const tokenUsage = { promptTokens: 100 };
|
||||
@@ -142,7 +143,8 @@ describe('Regular Token Spending Tests', () => {
|
||||
});
|
||||
|
||||
test('spendTokens should not update balance when balance feature is disabled', async () => {
|
||||
// Arrange: Balance config is now passed directly in txData
|
||||
// Arrange: Override the config to disable balance updates.
|
||||
getBalanceConfig.mockResolvedValue({ balance: { enabled: false } });
|
||||
const userId = new mongoose.Types.ObjectId();
|
||||
const initialBalance = 10000000;
|
||||
await Balance.create({ user: userId, tokenCredits: initialBalance });
|
||||
@@ -154,7 +156,6 @@ describe('Regular Token Spending Tests', () => {
|
||||
model,
|
||||
context: 'test',
|
||||
endpointTokenConfig: null,
|
||||
balance: { enabled: false },
|
||||
};
|
||||
|
||||
const tokenUsage = {
|
||||
@@ -185,7 +186,6 @@ describe('Structured Token Spending Tests', () => {
|
||||
model,
|
||||
context: 'message',
|
||||
endpointTokenConfig: null,
|
||||
balance: { enabled: true },
|
||||
};
|
||||
|
||||
const tokenUsage = {
|
||||
@@ -239,7 +239,6 @@ describe('Structured Token Spending Tests', () => {
|
||||
conversationId: 'test-convo',
|
||||
model,
|
||||
context: 'message',
|
||||
balance: { enabled: true },
|
||||
};
|
||||
|
||||
const tokenUsage = {
|
||||
@@ -272,7 +271,6 @@ describe('Structured Token Spending Tests', () => {
|
||||
conversationId: 'test-convo',
|
||||
model,
|
||||
context: 'message',
|
||||
balance: { enabled: true },
|
||||
};
|
||||
|
||||
const tokenUsage = {
|
||||
@@ -304,7 +302,6 @@ describe('Structured Token Spending Tests', () => {
|
||||
conversationId: 'test-convo',
|
||||
model,
|
||||
context: 'message',
|
||||
balance: { enabled: true },
|
||||
};
|
||||
|
||||
const tokenUsage = {};
|
||||
@@ -331,7 +328,6 @@ describe('Structured Token Spending Tests', () => {
|
||||
conversationId: 'test-convo',
|
||||
model,
|
||||
context: 'incomplete',
|
||||
balance: { enabled: true },
|
||||
};
|
||||
|
||||
const tokenUsage = {
|
||||
@@ -368,7 +364,6 @@ describe('NaN Handling Tests', () => {
|
||||
endpointTokenConfig: null,
|
||||
rawAmount: NaN,
|
||||
tokenType: 'prompt',
|
||||
balance: { enabled: true },
|
||||
};
|
||||
|
||||
// Act
|
||||
|
||||
@@ -24,15 +24,8 @@ const { getConvoTitle, getConvo, saveConvo, deleteConvos } = require('./Conversa
const { getPreset, getPresets, savePreset, deletePresets } = require('./Preset');
const { File } = require('~/db/models');

const seedDatabase = async () => {
await methods.initializeRoles();
await methods.seedDefaultRoles();
await methods.ensureDefaultCategories();
};

module.exports = {
...methods,
seedDatabase,
comparePassword,
findFileById,
createFile,
@@ -1,24 +0,0 @@
|
||||
const { logger } = require('@librechat/data-schemas');
|
||||
const { updateInterfacePermissions: updateInterfacePerms } = require('@librechat/api');
|
||||
const { getRoleByName, updateAccessPermissions } = require('./Role');
|
||||
|
||||
/**
|
||||
* Update interface permissions based on app configuration.
|
||||
* Must be done independently from loading the app config.
|
||||
* @param {AppConfig} appConfig
|
||||
*/
|
||||
async function updateInterfacePermissions(appConfig) {
|
||||
try {
|
||||
await updateInterfacePerms({
|
||||
appConfig,
|
||||
getRoleByName,
|
||||
updateAccessPermissions,
|
||||
});
|
||||
} catch (error) {
|
||||
logger.error('Error updating interface permissions:', error);
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = {
|
||||
updateInterfacePermissions,
|
||||
};
|
||||
@@ -5,7 +5,13 @@ const { createTransaction, createStructuredTransaction } = require('./Transactio
*
* @function
* @async
* @param {txData} txData - Transaction data.
* @param {Object} txData - Transaction data.
* @param {mongoose.Schema.Types.ObjectId} txData.user - The user ID.
* @param {String} txData.conversationId - The ID of the conversation.
* @param {String} txData.model - The model name.
* @param {String} txData.context - The context in which the transaction is made.
* @param {EndpointTokenConfig} [txData.endpointTokenConfig] - The current endpoint token config.
* @param {String} [txData.valueKey] - The value key (optional).
* @param {Object} tokenUsage - The number of tokens used.
* @param {Number} tokenUsage.promptTokens - The number of prompt tokens used.
* @param {Number} tokenUsage.completionTokens - The number of completion tokens used.
@@ -63,7 +69,13 @@ const spendTokens = async (txData, tokenUsage) => {
*
* @function
* @async
* @param {txData} txData - Transaction data.
* @param {Object} txData - Transaction data.
* @param {mongoose.Schema.Types.ObjectId} txData.user - The user ID.
* @param {String} txData.conversationId - The ID of the conversation.
* @param {String} txData.model - The model name.
* @param {String} txData.context - The context in which the transaction is made.
* @param {EndpointTokenConfig} [txData.endpointTokenConfig] - The current endpoint token config.
* @param {String} [txData.valueKey] - The value key (optional).
* @param {Object} tokenUsage - The number of tokens used.
* @param {Object} tokenUsage.promptTokens - The number of prompt tokens used.
* @param {Number} tokenUsage.promptTokens.input - The number of input tokens.
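Per the expanded JSDoc above, the expected call shapes are roughly as follows (identifiers and numbers are illustrative; structured prompt-token fields beyond the documented `input` are omitted here):

```js
// Regular spending: flat prompt/completion counts.
await spendTokens(
  { user: userId, conversationId: 'test-convo', model: 'gpt-3.5-turbo', context: 'test' },
  { promptTokens: 100, completionTokens: 50 },
);

// Structured spending: promptTokens is an object (only `input` is documented above).
await spendStructuredTokens(
  { user: userId, conversationId: 'test-convo', model: 'claude-3-5-sonnet', context: 'message' },
  { promptTokens: { input: 10 }, completionTokens: 50 },
);
```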
@@ -5,6 +5,7 @@ const { createTransaction, createAutoRefillTransaction } = require('./Transactio
|
||||
|
||||
require('~/db/models');
|
||||
|
||||
// Mock the logger to prevent console output during tests
|
||||
jest.mock('~/config', () => ({
|
||||
logger: {
|
||||
debug: jest.fn(),
|
||||
@@ -12,6 +13,10 @@ jest.mock('~/config', () => ({
|
||||
},
|
||||
}));
|
||||
|
||||
// Mock the Config service
|
||||
const { getBalanceConfig } = require('~/server/services/Config');
|
||||
jest.mock('~/server/services/Config');
|
||||
|
||||
describe('spendTokens', () => {
|
||||
let mongoServer;
|
||||
let userId;
|
||||
@@ -39,7 +44,8 @@ describe('spendTokens', () => {
|
||||
// Create a new user ID for each test
|
||||
userId = new mongoose.Types.ObjectId();
|
||||
|
||||
// Balance config is now passed directly in txData
|
||||
// Mock the balance config to be enabled by default
|
||||
getBalanceConfig.mockResolvedValue({ enabled: true });
|
||||
});
|
||||
|
||||
it('should create transactions for both prompt and completion tokens', async () => {
|
||||
@@ -54,7 +60,6 @@ describe('spendTokens', () => {
|
||||
conversationId: 'test-convo',
|
||||
model: 'gpt-3.5-turbo',
|
||||
context: 'test',
|
||||
balance: { enabled: true },
|
||||
};
|
||||
const tokenUsage = {
|
||||
promptTokens: 100,
|
||||
@@ -93,7 +98,6 @@ describe('spendTokens', () => {
|
||||
conversationId: 'test-convo',
|
||||
model: 'gpt-3.5-turbo',
|
||||
context: 'test',
|
||||
balance: { enabled: true },
|
||||
};
|
||||
const tokenUsage = {
|
||||
promptTokens: 100,
|
||||
@@ -123,7 +127,6 @@ describe('spendTokens', () => {
|
||||
conversationId: 'test-convo',
|
||||
model: 'gpt-3.5-turbo',
|
||||
context: 'test',
|
||||
balance: { enabled: true },
|
||||
};
|
||||
const tokenUsage = {};
|
||||
|
||||
@@ -135,7 +138,8 @@ describe('spendTokens', () => {
|
||||
});
|
||||
|
||||
it('should not update balance when the balance feature is disabled', async () => {
|
||||
// Balance is now passed directly in txData
|
||||
// Override configuration: disable balance updates
|
||||
getBalanceConfig.mockResolvedValue({ enabled: false });
|
||||
// Create a balance for the user
|
||||
await Balance.create({
|
||||
user: userId,
|
||||
@@ -147,7 +151,6 @@ describe('spendTokens', () => {
|
||||
conversationId: 'test-convo',
|
||||
model: 'gpt-3.5-turbo',
|
||||
context: 'test',
|
||||
balance: { enabled: false },
|
||||
};
|
||||
const tokenUsage = {
|
||||
promptTokens: 100,
|
||||
@@ -177,7 +180,6 @@ describe('spendTokens', () => {
|
||||
conversationId: 'test-convo',
|
||||
model: 'gpt-4', // Using a more expensive model
|
||||
context: 'test',
|
||||
balance: { enabled: true },
|
||||
};
|
||||
|
||||
// Spending more tokens than the user has balance for
|
||||
@@ -231,7 +233,6 @@ describe('spendTokens', () => {
|
||||
conversationId: 'test-convo-1',
|
||||
model: 'gpt-4',
|
||||
context: 'test',
|
||||
balance: { enabled: true },
|
||||
};
|
||||
|
||||
const tokenUsage1 = {
|
||||
@@ -251,7 +252,6 @@ describe('spendTokens', () => {
|
||||
conversationId: 'test-convo-2',
|
||||
model: 'gpt-4',
|
||||
context: 'test',
|
||||
balance: { enabled: true },
|
||||
};
|
||||
|
||||
const tokenUsage2 = {
|
||||
@@ -292,7 +292,6 @@ describe('spendTokens', () => {
|
||||
tokenType: 'completion',
|
||||
rawAmount: -100,
|
||||
context: 'test',
|
||||
balance: { enabled: true },
|
||||
});
|
||||
|
||||
console.log('Direct Transaction.create result:', directResult);
|
||||
@@ -317,7 +316,6 @@ describe('spendTokens', () => {
|
||||
conversationId: `test-convo-${model}`,
|
||||
model,
|
||||
context: 'test',
|
||||
balance: { enabled: true },
|
||||
};
|
||||
|
||||
const tokenUsage = {
|
||||
@@ -354,7 +352,6 @@ describe('spendTokens', () => {
|
||||
conversationId: 'test-convo-1',
|
||||
model: 'claude-3-5-sonnet',
|
||||
context: 'test',
|
||||
balance: { enabled: true },
|
||||
};
|
||||
|
||||
const tokenUsage1 = {
|
||||
@@ -378,7 +375,6 @@ describe('spendTokens', () => {
|
||||
conversationId: 'test-convo-2',
|
||||
model: 'claude-3-5-sonnet',
|
||||
context: 'test',
|
||||
balance: { enabled: true },
|
||||
};
|
||||
|
||||
const tokenUsage2 = {
|
||||
@@ -430,7 +426,6 @@ describe('spendTokens', () => {
|
||||
conversationId: 'test-convo',
|
||||
model: 'claude-3-5-sonnet', // Using a model that supports structured tokens
|
||||
context: 'test',
|
||||
balance: { enabled: true },
|
||||
};
|
||||
|
||||
// Spending more tokens than the user has balance for
|
||||
@@ -510,7 +505,6 @@ describe('spendTokens', () => {
|
||||
conversationId,
|
||||
user: userId,
|
||||
model: usage.model,
|
||||
balance: { enabled: true },
|
||||
};
|
||||
|
||||
// Calculate expected spend for this transaction
|
||||
@@ -623,7 +617,6 @@ describe('spendTokens', () => {
|
||||
tokenType: 'credits',
|
||||
context: 'concurrent-refill-test',
|
||||
rawAmount: refillAmount,
|
||||
balance: { enabled: true },
|
||||
}),
|
||||
);
|
||||
}
|
||||
@@ -690,7 +683,6 @@ describe('spendTokens', () => {
|
||||
conversationId: 'test-convo',
|
||||
model: 'claude-3-5-sonnet',
|
||||
context: 'test',
|
||||
balance: { enabled: true },
|
||||
};
|
||||
const tokenUsage = {
|
||||
promptTokens: {
|
||||
|
||||
@@ -4,11 +4,18 @@ const {
|
||||
getToolkitKey,
|
||||
checkPluginAuth,
|
||||
filterUniquePlugins,
|
||||
convertMCPToolToPlugin,
|
||||
convertMCPToolsToPlugins,
|
||||
} = require('@librechat/api');
|
||||
const { getCachedTools, getAppConfig } = require('~/server/services/Config');
|
||||
const {
|
||||
getCachedTools,
|
||||
setCachedTools,
|
||||
mergeUserTools,
|
||||
getCustomConfig,
|
||||
} = require('~/server/services/Config');
|
||||
const { loadAndFormatTools } = require('~/server/services/ToolService');
|
||||
const { availableTools, toolkits } = require('~/app/clients/tools');
|
||||
const { getMCPManager, getFlowStateManager } = require('~/config');
|
||||
const { getMCPManager } = require('~/config');
|
||||
const { getLogStores } = require('~/cache');
|
||||
|
||||
const getAvailablePluginsController = async (req, res) => {
|
||||
@@ -20,9 +27,9 @@ const getAvailablePluginsController = async (req, res) => {
|
||||
return;
|
||||
}
|
||||
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
/** @type {{ filteredTools: string[], includedTools: string[] }} */
|
||||
const { filteredTools = [], includedTools = [] } = appConfig;
|
||||
const { filteredTools = [], includedTools = [] } = req.app.locals;
|
||||
/** @type {import('@librechat/api').LCManifestTool[]} */
|
||||
const pluginManifest = availableTools;
|
||||
|
||||
const uniquePlugins = filterUniquePlugins(pluginManifest);
|
||||
@@ -48,45 +55,6 @@ const getAvailablePluginsController = async (req, res) => {
|
||||
}
|
||||
};
|
||||
|
||||
function createServerToolsCallback() {
|
||||
/**
|
||||
* @param {string} serverName
|
||||
* @param {TPlugin[] | null} serverTools
|
||||
*/
|
||||
return async function (serverName, serverTools) {
|
||||
try {
|
||||
const mcpToolsCache = getLogStores(CacheKeys.MCP_TOOLS);
|
||||
if (!serverName || !mcpToolsCache) {
|
||||
return;
|
||||
}
|
||||
await mcpToolsCache.set(serverName, serverTools);
|
||||
logger.debug(`MCP tools for ${serverName} added to cache.`);
|
||||
} catch (error) {
|
||||
logger.error('Error retrieving MCP tools from cache:', error);
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
function createGetServerTools() {
|
||||
/**
|
||||
* Retrieves cached server tools
|
||||
* @param {string} serverName
|
||||
* @returns {Promise<TPlugin[] | null>}
|
||||
*/
|
||||
return async function (serverName) {
|
||||
try {
|
||||
const mcpToolsCache = getLogStores(CacheKeys.MCP_TOOLS);
|
||||
if (!mcpToolsCache) {
|
||||
return null;
|
||||
}
|
||||
return await mcpToolsCache.get(serverName);
|
||||
} catch (error) {
|
||||
logger.error('Error retrieving MCP tools from cache:', error);
|
||||
return null;
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Retrieves and returns a list of available tools, either from a cache or by reading a plugin manifest file.
|
||||
*
|
||||
@@ -102,13 +70,18 @@ function createGetServerTools() {
|
||||
const getAvailableTools = async (req, res) => {
|
||||
try {
|
||||
const userId = req.user?.id;
|
||||
const appConfig = await getAppConfig();
|
||||
if (!userId) {
|
||||
logger.warn('[getAvailableTools] User ID not found in request');
|
||||
return res.status(401).json({ message: 'Unauthorized' });
|
||||
}
|
||||
const customConfig = await getCustomConfig();
|
||||
const cache = getLogStores(CacheKeys.CONFIG_STORE);
|
||||
const cachedToolsArray = await cache.get(CacheKeys.TOOLS);
|
||||
const cachedUserTools = await getCachedTools({ userId });
|
||||
|
||||
const mcpManager = getMCPManager();
|
||||
const userPlugins = convertMCPToolsToPlugins({ functionTools: cachedUserTools, mcpManager });
|
||||
const userPlugins =
|
||||
cachedUserTools != null
|
||||
? convertMCPToolsToPlugins({ functionTools: cachedUserTools, customConfig })
|
||||
: undefined;
|
||||
|
||||
if (cachedToolsArray != null && userPlugins != null) {
|
||||
const dedupedTools = filterUniquePlugins([...userPlugins, ...cachedToolsArray]);
|
||||
@@ -116,30 +89,51 @@ const getAvailableTools = async (req, res) => {
|
||||
return;
|
||||
}
|
||||
|
||||
/** @type {Record<string, FunctionTool> | null} Get tool definitions to filter which tools are actually available */
|
||||
let toolDefinitions = await getCachedTools({ includeGlobal: true });
|
||||
let prelimCachedTools;
|
||||
|
||||
// TODO: this is a temp fix until app config is refactored
|
||||
if (!toolDefinitions) {
|
||||
toolDefinitions = loadAndFormatTools({
|
||||
adminFilter: req.app.locals?.filteredTools,
|
||||
adminIncluded: req.app.locals?.includedTools,
|
||||
directory: req.app.locals?.paths.structuredTools,
|
||||
});
|
||||
prelimCachedTools = toolDefinitions;
|
||||
}
|
||||
|
||||
/** @type {import('@librechat/api').LCManifestTool[]} */
|
||||
let pluginManifest = availableTools;
|
||||
if (appConfig?.mcpConfig != null) {
|
||||
if (customConfig?.mcpServers != null) {
|
||||
try {
|
||||
const flowsCache = getLogStores(CacheKeys.FLOWS);
|
||||
const flowManager = flowsCache ? getFlowStateManager(flowsCache) : null;
|
||||
const serverToolsCallback = createServerToolsCallback();
|
||||
const getServerTools = createGetServerTools();
|
||||
const mcpTools = await mcpManager.loadManifestTools({
|
||||
flowManager,
|
||||
serverToolsCallback,
|
||||
getServerTools,
|
||||
});
|
||||
pluginManifest = [...mcpTools, ...pluginManifest];
|
||||
const mcpManager = getMCPManager();
|
||||
const mcpTools = await mcpManager.getAllToolFunctions(userId);
|
||||
prelimCachedTools = prelimCachedTools ?? {};
|
||||
for (const [toolKey, toolData] of Object.entries(mcpTools)) {
|
||||
const plugin = convertMCPToolToPlugin({
|
||||
toolKey,
|
||||
toolData,
|
||||
customConfig,
|
||||
});
|
||||
if (plugin) {
|
||||
pluginManifest.push(plugin);
|
||||
}
|
||||
prelimCachedTools[toolKey] = toolData;
|
||||
}
|
||||
await mergeUserTools({ userId, cachedUserTools, userTools: prelimCachedTools });
|
||||
} catch (error) {
|
||||
logger.error(
|
||||
'[getAvailableTools] Error loading MCP Tools, servers may still be initializing:',
|
||||
error,
|
||||
);
|
||||
}
|
||||
} else if (prelimCachedTools != null) {
|
||||
await setCachedTools(prelimCachedTools, { isGlobal: true });
|
||||
}
|
||||
|
||||
/** @type {TPlugin[]} */
|
||||
/** @type {TPlugin[]} Deduplicate and authenticate plugins */
|
||||
const uniquePlugins = filterUniquePlugins(pluginManifest);
|
||||
|
||||
const authenticatedPlugins = uniquePlugins.map((plugin) => {
|
||||
if (checkPluginAuth(plugin)) {
|
||||
return { ...plugin, authenticated: true };
|
||||
@@ -148,8 +142,7 @@ const getAvailableTools = async (req, res) => {
|
||||
}
|
||||
});
|
||||
|
||||
const toolDefinitions = (await getCachedTools({ includeGlobal: true })) || {};
|
||||
|
||||
/** Filter plugins based on availability and add MCP-specific auth config */
|
||||
const toolsOutput = [];
|
||||
for (const plugin of authenticatedPlugins) {
|
||||
const isToolDefined = toolDefinitions[plugin.pluginKey] !== undefined;
|
||||
@@ -165,41 +158,36 @@ const getAvailableTools = async (req, res) => {
|
||||
|
||||
const toolToAdd = { ...plugin };
|
||||
|
||||
if (!plugin.pluginKey.includes(Constants.mcp_delimiter)) {
|
||||
toolsOutput.push(toolToAdd);
|
||||
continue;
|
||||
}
|
||||
if (plugin.pluginKey.includes(Constants.mcp_delimiter)) {
|
||||
const parts = plugin.pluginKey.split(Constants.mcp_delimiter);
|
||||
const serverName = parts[parts.length - 1];
|
||||
const serverConfig = customConfig?.mcpServers?.[serverName];
|
||||
|
||||
const parts = plugin.pluginKey.split(Constants.mcp_delimiter);
|
||||
const serverName = parts[parts.length - 1];
|
||||
const serverConfig = appConfig?.mcpConfig?.[serverName];
|
||||
|
||||
if (!serverConfig?.customUserVars) {
|
||||
toolsOutput.push(toolToAdd);
|
||||
continue;
|
||||
}
|
||||
|
||||
const customVarKeys = Object.keys(serverConfig.customUserVars);
|
||||
|
||||
if (customVarKeys.length === 0) {
|
||||
toolToAdd.authConfig = [];
|
||||
toolToAdd.authenticated = true;
|
||||
} else {
|
||||
toolToAdd.authConfig = Object.entries(serverConfig.customUserVars).map(([key, value]) => ({
|
||||
authField: key,
|
||||
label: value.title || key,
|
||||
description: value.description || '',
|
||||
}));
|
||||
toolToAdd.authenticated = false;
|
||||
if (serverConfig?.customUserVars) {
|
||||
const customVarKeys = Object.keys(serverConfig.customUserVars);
|
||||
if (customVarKeys.length === 0) {
|
||||
toolToAdd.authConfig = [];
|
||||
toolToAdd.authenticated = true;
|
||||
} else {
|
||||
toolToAdd.authConfig = Object.entries(serverConfig.customUserVars).map(
|
||||
([key, value]) => ({
|
||||
authField: key,
|
||||
label: value.title || key,
|
||||
description: value.description || '',
|
||||
}),
|
||||
);
|
||||
toolToAdd.authenticated = false;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
toolsOutput.push(toolToAdd);
|
||||
}
|
||||
|
||||
const finalTools = filterUniquePlugins(toolsOutput);
|
||||
await cache.set(CacheKeys.TOOLS, finalTools);
|
||||
|
||||
const dedupedTools = filterUniquePlugins([...userPlugins, ...finalTools]);
|
||||
|
||||
const dedupedTools = filterUniquePlugins([...(userPlugins ?? []), ...finalTools]);
|
||||
res.status(200).json(dedupedTools);
|
||||
} catch (error) {
|
||||
logger.error('[getAvailableTools]', error);
|
||||
|
||||
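For MCP-backed plugins, the controller above maps each server's `customUserVars` entries onto the plugin's `authConfig`, and treats a server whose `customUserVars` object is empty as already authenticated. A small sketch of that mapping, with illustrative shapes:

```js
// customUserVars -> authConfig mapping used for MCP plugins in getAvailableTools.
const serverConfig = {
  customUserVars: {
    API_KEY: { title: 'API Key', description: 'Your API key' },
  },
};

const authConfig = Object.entries(serverConfig.customUserVars).map(([key, value]) => ({
  authField: key,
  label: value.title || key,
  description: value.description || '',
}));
// -> [{ authField: 'API_KEY', label: 'API Key', description: 'Your API key' }]
```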
@@ -1,7 +1,8 @@
|
||||
const { Constants } = require('librechat-data-provider');
|
||||
const { getCachedTools, getAppConfig } = require('~/server/services/Config');
|
||||
const { getCustomConfig, getCachedTools } = require('~/server/services/Config');
|
||||
const { getLogStores } = require('~/cache');
|
||||
|
||||
// Mock the dependencies
|
||||
jest.mock('@librechat/data-schemas', () => ({
|
||||
logger: {
|
||||
debug: jest.fn(),
|
||||
@@ -10,18 +11,20 @@ jest.mock('@librechat/data-schemas', () => ({
|
||||
}));
|
||||
|
||||
jest.mock('~/server/services/Config', () => ({
|
||||
getCustomConfig: jest.fn(),
|
||||
getCachedTools: jest.fn(),
|
||||
getAppConfig: jest.fn(),
|
||||
setCachedTools: jest.fn(),
|
||||
mergeUserTools: jest.fn(),
|
||||
}));
|
||||
|
||||
jest.mock('~/server/services/ToolService', () => ({
|
||||
getToolkitKey: jest.fn(),
|
||||
loadAndFormatTools: jest.fn(),
|
||||
}));
|
||||
|
||||
jest.mock('~/config', () => ({
|
||||
getMCPManager: jest.fn(() => ({
|
||||
loadManifestTools: jest.fn().mockResolvedValue([]),
|
||||
getRawConfig: jest.fn(),
|
||||
loadAllManifestTools: jest.fn().mockResolvedValue([]),
|
||||
})),
|
||||
getFlowStateManager: jest.fn(),
|
||||
}));
|
||||
@@ -35,74 +38,72 @@ jest.mock('~/cache', () => ({
|
||||
getLogStores: jest.fn(),
|
||||
}));
|
||||
|
||||
jest.mock('@librechat/api', () => ({
|
||||
getToolkitKey: jest.fn(),
|
||||
checkPluginAuth: jest.fn(),
|
||||
filterUniquePlugins: jest.fn(),
|
||||
convertMCPToolsToPlugins: jest.fn(),
|
||||
}));
|
||||
|
||||
// Import the actual module with the function we want to test
|
||||
const { getAvailableTools, getAvailablePluginsController } = require('./PluginController');
|
||||
const {
|
||||
filterUniquePlugins,
|
||||
checkPluginAuth,
|
||||
convertMCPToolsToPlugins,
|
||||
getToolkitKey,
|
||||
} = require('@librechat/api');
|
||||
const { loadAndFormatTools } = require('~/server/services/ToolService');
|
||||
|
||||
describe('PluginController', () => {
|
||||
let mockReq, mockRes, mockCache;
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
mockReq = { user: { id: 'test-user-id' } };
|
||||
mockReq = {
|
||||
user: { id: 'test-user-id' },
|
||||
app: {
|
||||
locals: {
|
||||
paths: { structuredTools: '/mock/path' },
|
||||
filteredTools: null,
|
||||
includedTools: null,
|
||||
},
|
||||
},
|
||||
};
|
||||
mockRes = { status: jest.fn().mockReturnThis(), json: jest.fn() };
|
||||
mockCache = { get: jest.fn(), set: jest.fn() };
|
||||
getLogStores.mockReturnValue(mockCache);
|
||||
|
||||
// Clear availableTools and toolkits arrays before each test
|
||||
require('~/app/clients/tools').availableTools.length = 0;
|
||||
require('~/app/clients/tools').toolkits.length = 0;
|
||||
});
|
||||
|
||||
describe('getAvailablePluginsController', () => {
|
||||
beforeEach(() => {
|
||||
getAppConfig.mockResolvedValue({
|
||||
filteredTools: [],
|
||||
includedTools: [],
|
||||
});
|
||||
mockReq.app = { locals: { filteredTools: [], includedTools: [] } };
|
||||
});
|
||||
|
||||
it('should use filterUniquePlugins to remove duplicate plugins', async () => {
|
||||
// Add plugins with duplicates to availableTools
|
||||
const mockPlugins = [
|
||||
{ name: 'Plugin1', pluginKey: 'key1', description: 'First' },
|
||||
{ name: 'Plugin1', pluginKey: 'key1', description: 'First duplicate' },
|
||||
{ name: 'Plugin2', pluginKey: 'key2', description: 'Second' },
|
||||
];
|
||||
|
||||
require('~/app/clients/tools').availableTools.push(...mockPlugins);
|
||||
|
||||
mockCache.get.mockResolvedValue(null);
|
||||
filterUniquePlugins.mockReturnValue(mockPlugins);
|
||||
checkPluginAuth.mockReturnValue(true);
|
||||
|
||||
await getAvailablePluginsController(mockReq, mockRes);
|
||||
|
||||
expect(filterUniquePlugins).toHaveBeenCalled();
|
||||
expect(mockRes.status).toHaveBeenCalledWith(200);
|
||||
// The response includes authenticated: true for each plugin when checkPluginAuth returns true
|
||||
expect(mockRes.json).toHaveBeenCalledWith([
|
||||
{ name: 'Plugin1', pluginKey: 'key1', description: 'First', authenticated: true },
|
||||
{ name: 'Plugin2', pluginKey: 'key2', description: 'Second', authenticated: true },
|
||||
]);
|
||||
const responseData = mockRes.json.mock.calls[0][0];
|
||||
expect(responseData).toHaveLength(2);
|
||||
expect(responseData[0].pluginKey).toBe('key1');
|
||||
expect(responseData[1].pluginKey).toBe('key2');
|
||||
});
|
||||
|
||||
it('should use checkPluginAuth to verify plugin authentication', async () => {
|
||||
// checkPluginAuth returns false for plugins without authConfig
|
||||
// so authenticated property won't be added
|
||||
const mockPlugin = { name: 'Plugin1', pluginKey: 'key1', description: 'First' };
|
||||
|
||||
require('~/app/clients/tools').availableTools.push(mockPlugin);
|
||||
mockCache.get.mockResolvedValue(null);
|
||||
filterUniquePlugins.mockReturnValue([mockPlugin]);
|
||||
checkPluginAuth.mockReturnValueOnce(true);
|
||||
|
||||
await getAvailablePluginsController(mockReq, mockRes);
|
||||
|
||||
expect(checkPluginAuth).toHaveBeenCalledWith(mockPlugin);
|
||||
const responseData = mockRes.json.mock.calls[0][0];
|
||||
expect(responseData[0].authenticated).toBe(true);
|
||||
// checkPluginAuth returns false, so authenticated property is not added
|
||||
expect(responseData[0].authenticated).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should return cached plugins when available', async () => {
|
||||
@@ -114,8 +115,7 @@ describe('PluginController', () => {
|
||||
|
||||
await getAvailablePluginsController(mockReq, mockRes);
|
||||
|
||||
expect(filterUniquePlugins).not.toHaveBeenCalled();
|
||||
expect(checkPluginAuth).not.toHaveBeenCalled();
|
||||
// When cache is hit, we return immediately without processing
|
||||
expect(mockRes.json).toHaveBeenCalledWith(cachedPlugins);
|
||||
});
|
||||
|
||||
@@ -125,13 +125,9 @@ describe('PluginController', () => {
|
||||
{ name: 'Plugin2', pluginKey: 'key2', description: 'Second' },
|
||||
];
|
||||
|
||||
getAppConfig.mockResolvedValue({
|
||||
filteredTools: [],
|
||||
includedTools: ['key1'],
|
||||
});
|
||||
require('~/app/clients/tools').availableTools.push(...mockPlugins);
|
||||
mockReq.app.locals.includedTools = ['key1'];
|
||||
mockCache.get.mockResolvedValue(null);
|
||||
filterUniquePlugins.mockReturnValue(mockPlugins);
|
||||
checkPluginAuth.mockReturnValue(false);
|
||||
|
||||
await getAvailablePluginsController(mockReq, mockRes);
|
||||
|
||||
@@ -145,67 +141,102 @@ describe('PluginController', () => {
|
||||
it('should use convertMCPToolsToPlugins for user-specific MCP tools', async () => {
|
||||
const mockUserTools = {
|
||||
[`tool1${Constants.mcp_delimiter}server1`]: {
|
||||
function: { name: 'tool1', description: 'Tool 1' },
|
||||
type: 'function',
|
||||
function: {
|
||||
name: `tool1${Constants.mcp_delimiter}server1`,
|
||||
description: 'Tool 1',
|
||||
parameters: { type: 'object', properties: {} },
|
||||
},
|
||||
},
|
||||
};
|
||||
const mockConvertedPlugins = [
|
||||
{
|
||||
name: 'tool1',
|
||||
pluginKey: `tool1${Constants.mcp_delimiter}server1`,
|
||||
description: 'Tool 1',
|
||||
},
|
||||
];
|
||||
|
||||
mockCache.get.mockResolvedValue(null);
|
||||
getCachedTools.mockResolvedValueOnce(mockUserTools);
|
||||
convertMCPToolsToPlugins.mockReturnValue(mockConvertedPlugins);
|
||||
filterUniquePlugins.mockImplementation((plugins) => plugins);
|
||||
getCustomConfig.mockResolvedValue(null);
|
||||
|
||||
// Mock second call to return tool definitions
|
||||
getCachedTools.mockResolvedValueOnce(mockUserTools);
|
||||
|
||||
await getAvailableTools(mockReq, mockRes);
|
||||
|
||||
expect(convertMCPToolsToPlugins).toHaveBeenCalledWith({
|
||||
functionTools: mockUserTools,
|
||||
mcpManager: expect.any(Object),
|
||||
});
|
||||
const responseData = mockRes.json.mock.calls[0][0];
|
||||
// convertMCPToolsToPlugins should have converted the tool
|
||||
expect(responseData.length).toBeGreaterThan(0);
|
||||
const convertedTool = responseData.find(
|
||||
(tool) => tool.pluginKey === `tool1${Constants.mcp_delimiter}server1`,
|
||||
);
|
||||
expect(convertedTool).toBeDefined();
|
||||
expect(convertedTool.name).toBe('tool1');
|
||||
});
|
||||
|
||||
it('should use filterUniquePlugins to deduplicate combined tools', async () => {
|
||||
const mockUserPlugins = [
|
||||
{ name: 'UserTool', pluginKey: 'user-tool', description: 'User tool' },
|
||||
];
|
||||
const mockManifestPlugins = [
|
||||
const mockUserTools = {
|
||||
'user-tool': {
|
||||
type: 'function',
|
||||
function: {
|
||||
name: 'user-tool',
|
||||
description: 'User tool',
|
||||
parameters: { type: 'object', properties: {} },
|
||||
},
|
||||
},
|
||||
};
|
||||
|
||||
const mockCachedPlugins = [
|
||||
{ name: 'user-tool', pluginKey: 'user-tool', description: 'Duplicate user tool' },
|
||||
{ name: 'ManifestTool', pluginKey: 'manifest-tool', description: 'Manifest tool' },
|
||||
];
|
||||
|
||||
mockCache.get.mockResolvedValue(mockManifestPlugins);
|
||||
getCachedTools.mockResolvedValueOnce({});
|
||||
convertMCPToolsToPlugins.mockReturnValue(mockUserPlugins);
|
||||
filterUniquePlugins.mockReturnValue([...mockUserPlugins, ...mockManifestPlugins]);
|
||||
mockCache.get.mockResolvedValue(mockCachedPlugins);
|
||||
getCachedTools.mockResolvedValueOnce(mockUserTools);
|
||||
getCustomConfig.mockResolvedValue(null);
|
||||
|
||||
// Mock second call to return tool definitions
|
||||
getCachedTools.mockResolvedValueOnce(mockUserTools);
|
||||
|
||||
await getAvailableTools(mockReq, mockRes);
|
||||
|
||||
// Should be called to deduplicate the combined array
|
||||
expect(filterUniquePlugins).toHaveBeenLastCalledWith([
|
||||
...mockUserPlugins,
|
||||
...mockManifestPlugins,
|
||||
]);
|
||||
const responseData = mockRes.json.mock.calls[0][0];
|
||||
// Should have deduplicated tools with same pluginKey
|
||||
const userToolCount = responseData.filter((tool) => tool.pluginKey === 'user-tool').length;
|
||||
expect(userToolCount).toBe(1);
|
||||
});
|
||||
|
||||
it('should use checkPluginAuth to verify authentication status', async () => {
|
||||
const mockPlugin = { name: 'Tool1', pluginKey: 'tool1', description: 'Tool 1' };
|
||||
// Add a plugin to availableTools that will be checked
|
||||
const mockPlugin = {
|
||||
name: 'Tool1',
|
||||
pluginKey: 'tool1',
|
||||
description: 'Tool 1',
|
||||
// No authConfig means checkPluginAuth returns false
|
||||
};
|
||||
|
||||
require('~/app/clients/tools').availableTools.push(mockPlugin);
|
||||
|
||||
mockCache.get.mockResolvedValue(null);
|
||||
getCachedTools.mockResolvedValue({});
|
||||
convertMCPToolsToPlugins.mockReturnValue([]);
|
||||
filterUniquePlugins.mockReturnValue([mockPlugin]);
|
||||
checkPluginAuth.mockReturnValue(true);
|
||||
getCachedTools.mockResolvedValue(null);
|
||||
getCustomConfig.mockResolvedValue(null);
|
||||
|
||||
// Mock getCachedTools second call to return tool definitions
|
||||
getCachedTools.mockResolvedValueOnce({}).mockResolvedValueOnce({ tool1: true });
|
||||
// Mock loadAndFormatTools to return tool definitions including our tool
|
||||
loadAndFormatTools.mockReturnValue({
|
||||
tool1: {
|
||||
type: 'function',
|
||||
function: {
|
||||
name: 'tool1',
|
||||
description: 'Tool 1',
|
||||
parameters: {},
|
||||
},
|
||||
},
|
||||
});
|
||||
|
||||
await getAvailableTools(mockReq, mockRes);
|
||||
|
||||
expect(checkPluginAuth).toHaveBeenCalledWith(mockPlugin);
|
||||
expect(mockRes.status).toHaveBeenCalledWith(200);
|
||||
const responseData = mockRes.json.mock.calls[0][0];
|
||||
expect(Array.isArray(responseData)).toBe(true);
|
||||
const tool = responseData.find((t) => t.pluginKey === 'tool1');
|
||||
expect(tool).toBeDefined();
|
||||
// checkPluginAuth returns false, so authenticated property is not added
|
||||
expect(tool.authenticated).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should use getToolkitKey for toolkit validation', async () => {
|
||||
@@ -216,84 +247,100 @@ describe('PluginController', () => {
|
||||
toolkit: true,
|
||||
};
|
||||
|
||||
mockCache.get.mockResolvedValue(null);
|
||||
getCachedTools.mockResolvedValue({});
|
||||
convertMCPToolsToPlugins.mockReturnValue([]);
|
||||
filterUniquePlugins.mockReturnValue([mockToolkit]);
|
||||
checkPluginAuth.mockReturnValue(false);
|
||||
getToolkitKey.mockReturnValue('toolkit1');
|
||||
require('~/app/clients/tools').availableTools.push(mockToolkit);
|
||||
|
||||
// Mock getCachedTools second call to return tool definitions
|
||||
getCachedTools.mockResolvedValueOnce({}).mockResolvedValueOnce({
|
||||
toolkit1_function: true,
|
||||
// Mock toolkits to have a mapping
|
||||
require('~/app/clients/tools').toolkits.push({
|
||||
name: 'Toolkit1',
|
||||
pluginKey: 'toolkit1',
|
||||
tools: ['toolkit1_function'],
|
||||
});
|
||||
|
||||
mockCache.get.mockResolvedValue(null);
|
||||
getCachedTools.mockResolvedValue(null);
|
||||
getCustomConfig.mockResolvedValue(null);
|
||||
|
||||
// Mock loadAndFormatTools to return tool definitions
|
||||
loadAndFormatTools.mockReturnValue({
|
||||
toolkit1_function: {
|
||||
type: 'function',
|
||||
function: {
|
||||
name: 'toolkit1_function',
|
||||
description: 'Toolkit function',
|
||||
parameters: {},
|
||||
},
|
||||
},
|
||||
});
|
||||
|
||||
await getAvailableTools(mockReq, mockRes);
|
||||
|
||||
expect(getToolkitKey).toHaveBeenCalled();
|
||||
expect(mockRes.status).toHaveBeenCalledWith(200);
|
||||
const responseData = mockRes.json.mock.calls[0][0];
|
||||
expect(Array.isArray(responseData)).toBe(true);
|
||||
const toolkit = responseData.find((t) => t.pluginKey === 'toolkit1');
|
||||
expect(toolkit).toBeDefined();
|
||||
});
|
||||
});
|
||||
|
||||
describe('plugin.icon behavior', () => {
|
||||
const callGetAvailableToolsWithMCPServer = async (serverConfig) => {
|
||||
const callGetAvailableToolsWithMCPServer = async (mcpServers) => {
|
||||
mockCache.get.mockResolvedValue(null);
|
||||
getCustomConfig.mockResolvedValue({ mcpServers });
|
||||
|
||||
const functionTools = {
|
||||
[`test-tool${Constants.mcp_delimiter}test-server`]: {
|
||||
function: { name: 'test-tool', description: 'A test tool' },
|
||||
type: 'function',
|
||||
function: {
|
||||
name: `test-tool${Constants.mcp_delimiter}test-server`,
|
||||
description: 'A test tool',
|
||||
parameters: { type: 'object', properties: {} },
|
||||
},
|
||||
},
|
||||
};
|
||||
|
||||
const mockConvertedPlugin = {
|
||||
name: 'test-tool',
|
||||
pluginKey: `test-tool${Constants.mcp_delimiter}test-server`,
|
||||
description: 'A test tool',
|
||||
icon: serverConfig?.iconPath,
|
||||
authenticated: true,
|
||||
authConfig: [],
|
||||
};
|
||||
|
||||
// Mock the MCP manager to return the server config
|
||||
// Mock the MCP manager to return tools
|
||||
const mockMCPManager = {
|
||||
loadManifestTools: jest.fn().mockResolvedValue([]),
|
||||
getRawConfig: jest.fn().mockReturnValue(serverConfig),
|
||||
getAllToolFunctions: jest.fn().mockResolvedValue(functionTools),
|
||||
};
|
||||
require('~/config').getMCPManager.mockReturnValue(mockMCPManager);
|
||||
|
||||
getCachedTools.mockResolvedValueOnce(functionTools);
|
||||
convertMCPToolsToPlugins.mockReturnValue([mockConvertedPlugin]);
|
||||
filterUniquePlugins.mockImplementation((plugins) => plugins);
|
||||
checkPluginAuth.mockReturnValue(true);
|
||||
getToolkitKey.mockReturnValue(undefined);
|
||||
getCachedTools.mockResolvedValueOnce({});
|
||||
|
||||
getCachedTools.mockResolvedValueOnce({
|
||||
[`test-tool${Constants.mcp_delimiter}test-server`]: true,
|
||||
});
|
||||
// Mock loadAndFormatTools to return empty object since these are MCP tools
|
||||
loadAndFormatTools.mockReturnValue({});
|
||||
|
||||
getCachedTools.mockResolvedValueOnce(functionTools);
|
||||
|
||||
await getAvailableTools(mockReq, mockRes);
|
||||
const responseData = mockRes.json.mock.calls[0][0];
|
||||
return responseData.find((tool) => tool.name === 'test-tool');
|
||||
return responseData.find(
|
||||
(tool) => tool.pluginKey === `test-tool${Constants.mcp_delimiter}test-server`,
|
||||
);
|
||||
};
|
||||
|
||||
it('should set plugin.icon when iconPath is defined', async () => {
|
||||
const serverConfig = {
|
||||
iconPath: '/path/to/icon.png',
|
||||
const mcpServers = {
|
||||
'test-server': {
|
||||
iconPath: '/path/to/icon.png',
|
||||
},
|
||||
};
|
||||
const testTool = await callGetAvailableToolsWithMCPServer(serverConfig);
|
||||
const testTool = await callGetAvailableToolsWithMCPServer(mcpServers);
|
||||
expect(testTool.icon).toBe('/path/to/icon.png');
|
||||
});
|
||||
|
||||
it('should set plugin.icon to undefined when iconPath is not defined', async () => {
|
||||
const serverConfig = {};
|
||||
const testTool = await callGetAvailableToolsWithMCPServer(serverConfig);
|
||||
const mcpServers = {
|
||||
'test-server': {},
|
||||
};
|
||||
const testTool = await callGetAvailableToolsWithMCPServer(mcpServers);
|
||||
expect(testTool.icon).toBeUndefined();
|
||||
});
|
||||
});
|
||||
|
||||
describe('helper function integration', () => {
|
||||
it('should properly handle MCP tools with custom user variables', async () => {
|
||||
const appConfig = {
|
||||
mcpConfig: {
|
||||
const customConfig = {
|
||||
mcpServers: {
|
||||
'test-server': {
|
||||
customUserVars: {
|
||||
API_KEY: { title: 'API Key', description: 'Your API key' },
|
||||
@@ -302,42 +349,35 @@ describe('PluginController', () => {
|
||||
},
|
||||
};
|
||||
|
||||
// We need to test the actual flow where MCP manager tools are included
|
||||
const mcpManagerTools = [
|
||||
{
|
||||
name: 'tool1',
|
||||
pluginKey: `tool1${Constants.mcp_delimiter}test-server`,
|
||||
description: 'Tool 1',
|
||||
authenticated: true,
|
||||
// Mock MCP tools returned by getAllToolFunctions
|
||||
const mcpToolFunctions = {
|
||||
[`tool1${Constants.mcp_delimiter}test-server`]: {
|
||||
type: 'function',
|
||||
function: {
|
||||
name: `tool1${Constants.mcp_delimiter}test-server`,
|
||||
description: 'Tool 1',
|
||||
parameters: {},
|
||||
},
|
||||
},
|
||||
];
|
||||
};
|
||||
|
||||
// Mock the MCP manager to return tools
|
||||
const mockMCPManager = {
|
||||
loadManifestTools: jest.fn().mockResolvedValue(mcpManagerTools),
|
||||
getRawConfig: jest.fn(),
|
||||
getAllToolFunctions: jest.fn().mockResolvedValue(mcpToolFunctions),
|
||||
};
|
||||
require('~/config').getMCPManager.mockReturnValue(mockMCPManager);
|
||||
|
||||
mockCache.get.mockResolvedValue(null);
|
||||
getAppConfig.mockResolvedValue(appConfig);
|
||||
getCustomConfig.mockResolvedValue(customConfig);
|
||||
|
||||
// First call returns user tools (empty in this case)
|
||||
getCachedTools.mockResolvedValueOnce({});
|
||||
|
||||
// Mock convertMCPToolsToPlugins to return empty array for user tools
|
||||
convertMCPToolsToPlugins.mockReturnValue([]);
|
||||
// Mock loadAndFormatTools to return empty object for MCP tools
|
||||
loadAndFormatTools.mockReturnValue({});
|
||||
|
||||
// Mock filterUniquePlugins to pass through
|
||||
filterUniquePlugins.mockImplementation((plugins) => plugins || []);
|
||||
|
||||
// Mock checkPluginAuth
|
||||
checkPluginAuth.mockReturnValue(true);
|
||||
|
||||
// Second call returns tool definitions
|
||||
getCachedTools.mockResolvedValueOnce({
|
||||
[`tool1${Constants.mcp_delimiter}test-server`]: true,
|
||||
});
|
||||
// Second call returns tool definitions including our MCP tool
|
||||
getCachedTools.mockResolvedValueOnce(mcpToolFunctions);
|
||||
|
||||
await getAvailableTools(mockReq, mockRes);
|
||||
|
||||
@@ -378,23 +418,23 @@ describe('PluginController', () => {
|
||||
it('should handle null cachedTools and cachedUserTools', async () => {
|
||||
mockCache.get.mockResolvedValue(null);
|
||||
getCachedTools.mockResolvedValue(null);
|
||||
convertMCPToolsToPlugins.mockReturnValue(undefined);
|
||||
filterUniquePlugins.mockImplementation((plugins) => plugins || []);
|
||||
getCustomConfig.mockResolvedValue(null);
|
||||
|
||||
// Mock loadAndFormatTools to return empty object when getCachedTools returns null
|
||||
loadAndFormatTools.mockReturnValue({});
|
||||
|
||||
await getAvailableTools(mockReq, mockRes);
|
||||
|
||||
expect(convertMCPToolsToPlugins).toHaveBeenCalledWith({
|
||||
functionTools: null,
|
||||
mcpManager: expect.any(Object),
|
||||
});
|
||||
// Should handle null values gracefully
|
||||
expect(mockRes.status).toHaveBeenCalledWith(200);
|
||||
});
|
||||
|
||||
it('should handle when getCachedTools returns undefined', async () => {
|
||||
mockCache.get.mockResolvedValue(null);
|
||||
getCachedTools.mockResolvedValue(undefined);
|
||||
convertMCPToolsToPlugins.mockReturnValue(undefined);
|
||||
filterUniquePlugins.mockImplementation((plugins) => plugins || []);
|
||||
checkPluginAuth.mockReturnValue(false);
|
||||
getCustomConfig.mockResolvedValue(null);
|
||||
|
||||
// Mock loadAndFormatTools to return empty object when getCachedTools returns undefined
|
||||
loadAndFormatTools.mockReturnValue({});
|
||||
|
||||
// Mock getCachedTools to return undefined for both calls
|
||||
getCachedTools.mockReset();
|
||||
@@ -402,36 +442,40 @@ describe('PluginController', () => {
|
||||
|
||||
await getAvailableTools(mockReq, mockRes);
|
||||
|
||||
expect(convertMCPToolsToPlugins).toHaveBeenCalledWith({
|
||||
functionTools: undefined,
|
||||
mcpManager: expect.any(Object),
|
||||
});
|
||||
// Should handle undefined values gracefully
|
||||
expect(mockRes.status).toHaveBeenCalledWith(200);
|
||||
});
|
||||
|
||||
it('should handle cachedToolsArray and userPlugins both being defined', async () => {
|
||||
const cachedTools = [{ name: 'CachedTool', pluginKey: 'cached-tool', description: 'Cached' }];
|
||||
// Use MCP delimiter for the user tool so convertMCPToolsToPlugins works
|
||||
const userTools = {
|
||||
'user-tool': { function: { name: 'user-tool', description: 'User tool' } },
|
||||
[`user-tool${Constants.mcp_delimiter}server1`]: {
|
||||
type: 'function',
|
||||
function: {
|
||||
name: `user-tool${Constants.mcp_delimiter}server1`,
|
||||
description: 'User tool',
|
||||
parameters: {},
|
||||
},
|
||||
},
|
||||
};
|
||||
const userPlugins = [{ name: 'UserTool', pluginKey: 'user-tool', description: 'User tool' }];
|
||||
|
||||
mockCache.get.mockResolvedValue(cachedTools);
|
||||
getCachedTools.mockResolvedValue(userTools);
|
||||
convertMCPToolsToPlugins.mockReturnValue(userPlugins);
|
||||
filterUniquePlugins.mockReturnValue([...userPlugins, ...cachedTools]);
|
||||
getCustomConfig.mockResolvedValue(null);
|
||||
|
||||
await getAvailableTools(mockReq, mockRes);
|
||||
|
||||
expect(mockRes.status).toHaveBeenCalledWith(200);
|
||||
expect(mockRes.json).toHaveBeenCalledWith([...userPlugins, ...cachedTools]);
|
||||
const responseData = mockRes.json.mock.calls[0][0];
|
||||
// Should have both cached and user tools
|
||||
expect(responseData.length).toBeGreaterThanOrEqual(2);
|
||||
});
|
||||
|
||||
it('should handle empty toolDefinitions object', async () => {
|
||||
mockCache.get.mockResolvedValue(null);
|
||||
getCachedTools.mockResolvedValueOnce({}).mockResolvedValueOnce({});
|
||||
convertMCPToolsToPlugins.mockReturnValue([]);
|
||||
filterUniquePlugins.mockImplementation((plugins) => plugins || []);
|
||||
checkPluginAuth.mockReturnValue(true);
|
||||
getCustomConfig.mockResolvedValue(null);
|
||||
|
||||
await getAvailableTools(mockReq, mockRes);
|
||||
|
||||
@@ -454,33 +498,10 @@ describe('PluginController', () => {
|
||||
},
|
||||
};
|
||||
|
||||
// Mock the MCP manager to return server config without customUserVars
|
||||
const mockMCPManager = {
|
||||
loadManifestTools: jest.fn().mockResolvedValue([]),
|
||||
getRawConfig: jest.fn().mockReturnValue(customConfig.mcpServers['test-server']),
|
||||
};
|
||||
require('~/config').getMCPManager.mockReturnValue(mockMCPManager);
|
||||
|
||||
mockCache.get.mockResolvedValue(null);
|
||||
getAppConfig.mockResolvedValue({
|
||||
mcpConfig: customConfig.mcpServers,
|
||||
filteredTools: [],
|
||||
includedTools: [],
|
||||
});
|
||||
getCustomConfig.mockResolvedValue(customConfig);
|
||||
getCachedTools.mockResolvedValueOnce(mockUserTools);
|
||||
|
||||
const mockPlugin = {
|
||||
name: 'tool1',
|
||||
pluginKey: `tool1${Constants.mcp_delimiter}test-server`,
|
||||
description: 'Tool 1',
|
||||
authenticated: true,
|
||||
authConfig: [],
|
||||
};
|
||||
|
||||
convertMCPToolsToPlugins.mockReturnValue([mockPlugin]);
|
||||
filterUniquePlugins.mockImplementation((plugins) => plugins);
|
||||
checkPluginAuth.mockReturnValue(true);
|
||||
|
||||
getCachedTools.mockResolvedValueOnce({
|
||||
[`tool1${Constants.mcp_delimiter}test-server`]: true,
|
||||
});
|
||||
@@ -493,11 +514,9 @@ describe('PluginController', () => {
      expect(responseData[0].authConfig).toEqual([]);
    });

    it('should handle undefined filteredTools and includedTools', async () => {
      getAppConfig.mockResolvedValue({});
    it('should handle req.app.locals with undefined filteredTools and includedTools', async () => {
      mockReq.app = { locals: {} };
      mockCache.get.mockResolvedValue(null);
      filterUniquePlugins.mockReturnValue([]);
      checkPluginAuth.mockReturnValue(false);

      await getAvailablePluginsController(mockReq, mockRes);

@@ -513,12 +532,24 @@ describe('PluginController', () => {
        toolkit: true,
      };

      // Ensure req.app.locals is properly mocked
      mockReq.app = {
        locals: {
          filteredTools: [],
          includedTools: [],
          paths: { structuredTools: '/mock/path' },
        },
      };

      // Add the toolkit to availableTools
      require('~/app/clients/tools').availableTools.push(mockToolkit);

      mockCache.get.mockResolvedValue(null);
      getCachedTools.mockResolvedValue({});
      convertMCPToolsToPlugins.mockReturnValue([]);
      filterUniquePlugins.mockReturnValue([mockToolkit]);
      checkPluginAuth.mockReturnValue(false);
      getToolkitKey.mockReturnValue(undefined);
      getCustomConfig.mockResolvedValue(null);

      // Mock loadAndFormatTools to return an empty object when toolDefinitions is null
      loadAndFormatTools.mockReturnValue({});

      // Mock getCachedTools second call to return null
      getCachedTools.mockResolvedValueOnce({}).mockResolvedValueOnce(null);

@@ -17,13 +17,11 @@ const { needsRefresh, getNewS3URL } = require('~/server/services/Files/S3/crud')
const { Tools, Constants, FileSources } = require('librechat-data-provider');
const { processDeleteRequest } = require('~/server/services/Files/process');
const { Transaction, Balance, User } = require('~/db/models');
const { getAppConfig } = require('~/server/services/Config');
const { deleteToolCalls } = require('~/models/ToolCall');
const { deleteAllSharedLinks } = require('~/models');
const { getMCPManager } = require('~/config');

const getUserController = async (req, res) => {
  const appConfig = await getAppConfig({ role: req.user?.role });
  /** @type {MongoUser} */
  const userData = req.user.toObject != null ? req.user.toObject() : { ...req.user };
  /**
@@ -33,7 +31,7 @@ const getUserController = async (req, res) => {
  delete userData.password;
  delete userData.totpSecret;
  delete userData.backupCodes;
  if (appConfig.fileStrategy === FileSources.s3 && userData.avatar) {
  if (req.app.locals.fileStrategy === FileSources.s3 && userData.avatar) {
    const avatarNeedsRefresh = needsRefresh(userData.avatar, 3600);
    if (!avatarNeedsRefresh) {
      return res.status(200).send(userData);
@@ -89,7 +87,6 @@ const deleteUserFiles = async (req) => {
};

const updateUserPluginsController = async (req, res) => {
  const appConfig = await getAppConfig({ role: req.user?.role });
  const { user } = req;
  const { pluginKey, action, auth, isEntityTool } = req.body;
  try {
@@ -134,7 +131,7 @@ const updateUserPluginsController = async (req, res) => {

    if (pluginKey === Tools.web_search) {
      /** @type {TCustomConfig['webSearch']} */
      const webSearchConfig = appConfig?.webSearch;
      const webSearchConfig = req.app.locals?.webSearch;
      keys = extractWebSearchEnvVars({
        keys: action === 'install' ? keys : webSearchKeys,
        config: webSearchConfig,

@@ -7,8 +7,6 @@ const {
  createRun,
  Tokenizer,
  checkAccess,
  hasCustomUserVars,
  getUserMCPAuthMap,
  memoryInstructions,
  formatContentStrings,
  createMemoryProcessor,
@@ -35,18 +33,13 @@ const {
  bedrockInputSchema,
  removeNullishValues,
} = require('librechat-data-provider');
const {
  findPluginAuthsByKeys,
  getFormattedMemories,
  deleteMemory,
  setMemory,
} = require('~/models');
const { addCacheControl, createContextHandlers } = require('~/app/clients/prompts');
const { initializeAgent } = require('~/server/services/Endpoints/agents/agent');
const { spendTokens, spendStructuredTokens } = require('~/models/spendTokens');
const { checkCapability, getAppConfig } = require('~/server/services/Config');
const { getFormattedMemories, deleteMemory, setMemory } = require('~/models');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const { getProviderConfig } = require('~/server/services/Endpoints');
const { checkCapability } = require('~/server/services/Config');
const BaseClient = require('~/app/clients/BaseClient');
const { getRoleByName } = require('~/models/Role');
const { loadAgent } = require('~/models/Agent');
@@ -453,8 +446,8 @@ class AgentClient extends BaseClient {
      );
      return;
    }
    const appConfig = await getAppConfig({ role: user.role });
    const memoryConfig = appConfig.memory;
    /** @type {TCustomConfig['memory']} */
    const memoryConfig = this.options.req?.app?.locals?.memory;
    if (!memoryConfig || memoryConfig.disabled === true) {
      return;
    }
@@ -462,7 +455,7 @@ class AgentClient extends BaseClient {
    /** @type {Agent} */
    let prelimAgent;
    const allowedProviders = new Set(
      appConfig?.endpoints?.[EModelEndpoint.agents]?.allowedProviders,
      this.options.req?.app?.locals?.[EModelEndpoint.agents]?.allowedProviders,
    );
    try {
      if (memoryConfig.agent?.id != null && memoryConfig.agent.id !== this.options.agent.id) {
@@ -584,8 +577,8 @@ class AgentClient extends BaseClient {
    if (this.processMemory == null) {
      return;
    }
    const appConfig = await getAppConfig({ role: this.options.req.user?.role });
    const memoryConfig = appConfig.memory;
    /** @type {TCustomConfig['memory']} */
    const memoryConfig = this.options.req?.app?.locals?.memory;
    const messageWindowSize = memoryConfig?.messageWindowSize ?? 5;

    let messagesToProcess = [...messages];
@@ -617,6 +610,7 @@ class AgentClient extends BaseClient {
|
||||
await this.chatCompletion({
|
||||
payload,
|
||||
onProgress: opts.onProgress,
|
||||
userMCPAuthMap: opts.userMCPAuthMap,
|
||||
abortController: opts.abortController,
|
||||
});
|
||||
return this.contentParts;
|
||||
@@ -626,15 +620,9 @@ class AgentClient extends BaseClient {
|
||||
* @param {Object} params
|
||||
* @param {string} [params.model]
|
||||
* @param {string} [params.context='message']
|
||||
* @param {AppConfig['balance']} [params.balance]
|
||||
* @param {UsageMetadata[]} [params.collectedUsage=this.collectedUsage]
|
||||
*/
|
||||
async recordCollectedUsage({
|
||||
model,
|
||||
balance,
|
||||
context = 'message',
|
||||
collectedUsage = this.collectedUsage,
|
||||
}) {
|
||||
async recordCollectedUsage({ model, context = 'message', collectedUsage = this.collectedUsage }) {
|
||||
if (!collectedUsage || !collectedUsage.length) {
|
||||
return;
|
||||
}
|
||||
@@ -656,7 +644,6 @@ class AgentClient extends BaseClient {
|
||||
|
||||
const txMetadata = {
|
||||
context,
|
||||
balance,
|
||||
conversationId: this.conversationId,
|
||||
user: this.user ?? this.options.req.user?.id,
|
||||
endpointTokenConfig: this.options.endpointTokenConfig,
|
||||
@@ -756,7 +743,13 @@ class AgentClient extends BaseClient {
|
||||
return currentMessageTokens > 0 ? currentMessageTokens : originalEstimate;
|
||||
}
|
||||
|
||||
async chatCompletion({ payload, abortController = null }) {
|
||||
/**
|
||||
* @param {object} params
|
||||
* @param {string | ChatCompletionMessageParam[]} params.payload
|
||||
* @param {Record<string, Record<string, string>>} [params.userMCPAuthMap]
|
||||
* @param {AbortController} [params.abortController]
|
||||
*/
|
||||
async chatCompletion({ payload, userMCPAuthMap, abortController = null }) {
|
||||
/** @type {Partial<GraphRunnableConfig>} */
|
||||
let config;
|
||||
/** @type {ReturnType<createRun>} */
|
||||
@@ -768,9 +761,8 @@ class AgentClient extends BaseClient {
|
||||
abortController = new AbortController();
|
||||
}
|
||||
|
||||
const appConfig = await getAppConfig({ role: this.options.req.user?.role });
|
||||
/** @type {AppConfig['endpoints']['agents']} */
|
||||
const agentsEConfig = appConfig.endpoints?.[EModelEndpoint.agents];
|
||||
/** @type {TCustomConfig['endpoints']['agents']} */
|
||||
const agentsEConfig = this.options.req.app.locals[EModelEndpoint.agents];
|
||||
|
||||
config = {
|
||||
configurable: {
|
||||
@@ -913,21 +905,9 @@ class AgentClient extends BaseClient {
|
||||
run.Graph.contentData = contentData;
|
||||
}
|
||||
|
||||
try {
|
||||
if (agent.tools?.length && hasCustomUserVars(appConfig)) {
|
||||
config.configurable.userMCPAuthMap = await getUserMCPAuthMap({
|
||||
userId: this.options.req.user.id,
|
||||
tools: agent.tools,
|
||||
findPluginAuthsByKeys,
|
||||
});
|
||||
}
|
||||
} catch (err) {
|
||||
logger.error(
|
||||
`[api/server/controllers/agents/client.js #chatCompletion] Error getting custom user vars for agent ${agent.id}`,
|
||||
err,
|
||||
);
|
||||
if (userMCPAuthMap != null) {
|
||||
config.configurable.userMCPAuthMap = userMCPAuthMap;
|
||||
}
|
||||
|
||||
await run.processStream({ messages }, config, {
|
||||
keepContent: i !== 0,
|
||||
tokenCounter: createTokenCounter(this.getEncoding()),
|
||||
@@ -1050,7 +1030,7 @@ class AgentClient extends BaseClient {
|
||||
this.artifactPromises.push(...attachments);
|
||||
}
|
||||
|
||||
await this.recordCollectedUsage({ context: 'message', balance: appConfig?.balance });
|
||||
await this.recordCollectedUsage({ context: 'message' });
|
||||
} catch (err) {
|
||||
logger.error(
|
||||
'[api/server/controllers/agents/client.js #chatCompletion] Error recording collected usage',
|
||||
@@ -1091,7 +1071,6 @@ class AgentClient extends BaseClient {
|
||||
}
|
||||
const { handleLLMEnd, collected: collectedMetadata } = createMetadataAggregator();
|
||||
const { req, res, agent } = this.options;
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
let endpoint = agent.endpoint;
|
||||
|
||||
/** @type {import('@librechat/agents').ClientOptions} */
|
||||
@@ -1099,13 +1078,11 @@ class AgentClient extends BaseClient {
|
||||
model: agent.model || agent.model_parameters.model,
|
||||
};
|
||||
|
||||
let titleProviderConfig = getProviderConfig({ provider: endpoint, appConfig });
|
||||
let titleProviderConfig = await getProviderConfig(endpoint);
|
||||
|
||||
/** @type {TEndpoint | undefined} */
|
||||
const endpointConfig =
|
||||
appConfig.endpoints?.all ??
|
||||
appConfig.endpoints?.[endpoint] ??
|
||||
titleProviderConfig.customEndpointConfig;
|
||||
req.app.locals.all ?? req.app.locals[endpoint] ?? titleProviderConfig.customEndpointConfig;
|
||||
if (!endpointConfig) {
|
||||
logger.warn(
|
||||
'[api/server/controllers/agents/client.js #titleConvo] Error getting endpoint config',
|
||||
@@ -1114,10 +1091,7 @@ class AgentClient extends BaseClient {
|
||||
|
||||
if (endpointConfig?.titleEndpoint && endpointConfig.titleEndpoint !== endpoint) {
|
||||
try {
|
||||
titleProviderConfig = getProviderConfig({
|
||||
provider: endpointConfig.titleEndpoint,
|
||||
appConfig,
|
||||
});
|
||||
titleProviderConfig = await getProviderConfig(endpointConfig.titleEndpoint);
|
||||
endpoint = endpointConfig.titleEndpoint;
|
||||
} catch (error) {
|
||||
logger.warn(
|
||||
@@ -1126,7 +1100,7 @@ class AgentClient extends BaseClient {
|
||||
);
|
||||
// Fall back to original provider config
|
||||
endpoint = agent.endpoint;
|
||||
titleProviderConfig = getProviderConfig({ provider: endpoint, appConfig });
|
||||
titleProviderConfig = await getProviderConfig(endpoint);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1230,10 +1204,9 @@ class AgentClient extends BaseClient {
|
||||
});
|
||||
|
||||
await this.recordCollectedUsage({
|
||||
collectedUsage,
|
||||
context: 'title',
|
||||
model: clientOptions.model,
|
||||
balance: appConfig?.balance,
|
||||
context: 'title',
|
||||
collectedUsage,
|
||||
}).catch((err) => {
|
||||
logger.error(
|
||||
'[api/server/controllers/agents/client.js #titleConvo] Error recording collected usage',
|
||||
@@ -1252,26 +1225,17 @@ class AgentClient extends BaseClient {
|
||||
* @param {object} params
|
||||
* @param {number} params.promptTokens
|
||||
* @param {number} params.completionTokens
|
||||
* @param {string} [params.model]
|
||||
* @param {OpenAIUsageMetadata} [params.usage]
|
||||
* @param {AppConfig['balance']} [params.balance]
|
||||
* @param {string} [params.model]
|
||||
* @param {string} [params.context='message']
|
||||
* @returns {Promise<void>}
|
||||
*/
|
||||
async recordTokenUsage({
|
||||
model,
|
||||
usage,
|
||||
balance,
|
||||
promptTokens,
|
||||
completionTokens,
|
||||
context = 'message',
|
||||
}) {
|
||||
async recordTokenUsage({ model, promptTokens, completionTokens, usage, context = 'message' }) {
|
||||
try {
|
||||
await spendTokens(
|
||||
{
|
||||
model,
|
||||
context,
|
||||
balance,
|
||||
conversationId: this.conversationId,
|
||||
user: this.user ?? this.options.req.user?.id,
|
||||
endpointTokenConfig: this.options.endpointTokenConfig,
|
||||
@@ -1288,7 +1252,6 @@ class AgentClient extends BaseClient {
|
||||
await spendTokens(
|
||||
{
|
||||
model,
|
||||
balance,
|
||||
context: 'reasoning',
|
||||
conversationId: this.conversationId,
|
||||
user: this.user ?? this.options.req.user?.id,
|
||||
|
||||
@@ -1,6 +1,5 @@
|
||||
const { Providers } = require('@librechat/agents');
|
||||
const { Constants, EModelEndpoint } = require('librechat-data-provider');
|
||||
const { getAppConfig } = require('~/server/services/Config');
|
||||
const AgentClient = require('./client');
|
||||
|
||||
jest.mock('@librechat/agents', () => ({
|
||||
@@ -11,10 +10,6 @@ jest.mock('@librechat/agents', () => ({
|
||||
}),
|
||||
}));
|
||||
|
||||
jest.mock('~/server/services/Config', () => ({
|
||||
getAppConfig: jest.fn(),
|
||||
}));
|
||||
|
||||
describe('AgentClient - titleConvo', () => {
|
||||
let client;
|
||||
let mockRun;
|
||||
@@ -44,21 +39,19 @@ describe('AgentClient - titleConvo', () => {
|
||||
},
|
||||
};
|
||||
|
||||
// Mock getAppConfig to return endpoint configurations
|
||||
getAppConfig.mockResolvedValue({
|
||||
endpoints: {
|
||||
[EModelEndpoint.openAI]: {
|
||||
// Match the agent endpoint
|
||||
titleModel: 'gpt-3.5-turbo',
|
||||
titlePrompt: 'Custom title prompt',
|
||||
titleMethod: 'structured',
|
||||
titlePromptTemplate: 'Template: {{content}}',
|
||||
},
|
||||
},
|
||||
});
|
||||
|
||||
// Mock request and response
|
||||
mockReq = {
|
||||
app: {
|
||||
locals: {
|
||||
[EModelEndpoint.openAI]: {
|
||||
// Match the agent endpoint
|
||||
titleModel: 'gpt-3.5-turbo',
|
||||
titlePrompt: 'Custom title prompt',
|
||||
titleMethod: 'structured',
|
||||
titlePromptTemplate: 'Template: {{content}}',
|
||||
},
|
||||
},
|
||||
},
|
||||
user: {
|
||||
id: 'user-123',
|
||||
},
|
||||
@@ -150,7 +143,7 @@ describe('AgentClient - titleConvo', () => {
|
||||
|
||||
it('should handle missing endpoint config gracefully', async () => {
|
||||
// Remove endpoint config
|
||||
getAppConfig.mockResolvedValue({ endpoints: {} });
|
||||
mockReq.app.locals[EModelEndpoint.openAI] = undefined;
|
||||
|
||||
const text = 'Test conversation text';
|
||||
const abortController = new AbortController();
|
||||
@@ -168,16 +161,7 @@ describe('AgentClient - titleConvo', () => {
|
||||
|
||||
it('should use agent model when titleModel is not provided', async () => {
|
||||
// Remove titleModel from config
|
||||
getAppConfig.mockResolvedValue({
|
||||
endpoints: {
|
||||
[EModelEndpoint.openAI]: {
|
||||
titlePrompt: 'Custom title prompt',
|
||||
titleMethod: 'structured',
|
||||
titlePromptTemplate: 'Template: {{content}}',
|
||||
// titleModel is omitted
|
||||
},
|
||||
},
|
||||
});
|
||||
delete mockReq.app.locals[EModelEndpoint.openAI].titleModel;
|
||||
|
||||
const text = 'Test conversation text';
|
||||
const abortController = new AbortController();
|
||||
@@ -189,16 +173,7 @@ describe('AgentClient - titleConvo', () => {
|
||||
});
|
||||
|
||||
it('should not use titleModel when it equals CURRENT_MODEL constant', async () => {
|
||||
getAppConfig.mockResolvedValue({
|
||||
endpoints: {
|
||||
[EModelEndpoint.openAI]: {
|
||||
titleModel: Constants.CURRENT_MODEL,
|
||||
titlePrompt: 'Custom title prompt',
|
||||
titleMethod: 'structured',
|
||||
titlePromptTemplate: 'Template: {{content}}',
|
||||
},
|
||||
},
|
||||
});
|
||||
mockReq.app.locals[EModelEndpoint.openAI].titleModel = Constants.CURRENT_MODEL;
|
||||
|
||||
const text = 'Test conversation text';
|
||||
const abortController = new AbortController();
|
||||
@@ -270,17 +245,10 @@ describe('AgentClient - titleConvo', () => {
|
||||
process.env.ANTHROPIC_API_KEY = 'test-api-key';
|
||||
|
||||
// Add titleEndpoint to the config
|
||||
getAppConfig.mockResolvedValue({
|
||||
endpoints: {
|
||||
[EModelEndpoint.openAI]: {
|
||||
titleModel: 'gpt-3.5-turbo',
|
||||
titleEndpoint: EModelEndpoint.anthropic,
|
||||
titleMethod: 'structured',
|
||||
titlePrompt: 'Custom title prompt',
|
||||
titlePromptTemplate: 'Custom template',
|
||||
},
|
||||
},
|
||||
});
|
||||
mockReq.app.locals[EModelEndpoint.openAI].titleEndpoint = EModelEndpoint.anthropic;
|
||||
mockReq.app.locals[EModelEndpoint.openAI].titleMethod = 'structured';
|
||||
mockReq.app.locals[EModelEndpoint.openAI].titlePrompt = 'Custom title prompt';
|
||||
mockReq.app.locals[EModelEndpoint.openAI].titlePromptTemplate = 'Custom template';
|
||||
|
||||
const text = 'Test conversation text';
|
||||
const abortController = new AbortController();
|
||||
@@ -306,17 +274,19 @@ describe('AgentClient - titleConvo', () => {
|
||||
});
|
||||
|
||||
it('should use all config when endpoint config is missing', async () => {
|
||||
// Set 'all' config without endpoint-specific config
|
||||
getAppConfig.mockResolvedValue({
|
||||
endpoints: {
|
||||
all: {
|
||||
titleModel: 'gpt-4o-mini',
|
||||
titlePrompt: 'All config title prompt',
|
||||
titleMethod: 'completion',
|
||||
titlePromptTemplate: 'All config template: {{content}}',
|
||||
},
|
||||
},
|
||||
});
|
||||
// Remove endpoint-specific config
|
||||
delete mockReq.app.locals[EModelEndpoint.openAI].titleModel;
|
||||
delete mockReq.app.locals[EModelEndpoint.openAI].titlePrompt;
|
||||
delete mockReq.app.locals[EModelEndpoint.openAI].titleMethod;
|
||||
delete mockReq.app.locals[EModelEndpoint.openAI].titlePromptTemplate;
|
||||
|
||||
// Set 'all' config
|
||||
mockReq.app.locals.all = {
|
||||
titleModel: 'gpt-4o-mini',
|
||||
titlePrompt: 'All config title prompt',
|
||||
titleMethod: 'completion',
|
||||
titlePromptTemplate: 'All config template: {{content}}',
|
||||
};
|
||||
|
||||
const text = 'Test conversation text';
|
||||
const abortController = new AbortController();
|
||||
@@ -339,22 +309,18 @@ describe('AgentClient - titleConvo', () => {
|
||||
|
||||
it('should prioritize all config over endpoint config for title settings', async () => {
|
||||
// Set both endpoint and 'all' config
|
||||
getAppConfig.mockResolvedValue({
|
||||
endpoints: {
|
||||
[EModelEndpoint.openAI]: {
|
||||
titleModel: 'gpt-3.5-turbo',
|
||||
titlePrompt: 'Endpoint title prompt',
|
||||
titleMethod: 'structured',
|
||||
// titlePromptTemplate is omitted to test fallback
|
||||
},
|
||||
all: {
|
||||
titleModel: 'gpt-4o-mini',
|
||||
titlePrompt: 'All config title prompt',
|
||||
titleMethod: 'completion',
|
||||
titlePromptTemplate: 'All config template',
|
||||
},
|
||||
},
|
||||
});
|
||||
mockReq.app.locals[EModelEndpoint.openAI].titleModel = 'gpt-3.5-turbo';
|
||||
mockReq.app.locals[EModelEndpoint.openAI].titlePrompt = 'Endpoint title prompt';
|
||||
mockReq.app.locals[EModelEndpoint.openAI].titleMethod = 'structured';
|
||||
// Remove titlePromptTemplate from endpoint config to test fallback
|
||||
delete mockReq.app.locals[EModelEndpoint.openAI].titlePromptTemplate;
|
||||
|
||||
mockReq.app.locals.all = {
|
||||
titleModel: 'gpt-4o-mini',
|
||||
titlePrompt: 'All config title prompt',
|
||||
titleMethod: 'completion',
|
||||
titlePromptTemplate: 'All config template',
|
||||
};
|
||||
|
||||
const text = 'Test conversation text';
|
||||
const abortController = new AbortController();
|
||||
@@ -380,19 +346,18 @@ describe('AgentClient - titleConvo', () => {
|
||||
const originalApiKey = process.env.ANTHROPIC_API_KEY;
|
||||
process.env.ANTHROPIC_API_KEY = 'test-anthropic-key';
|
||||
|
||||
// Remove endpoint-specific config to test 'all' config
|
||||
delete mockReq.app.locals[EModelEndpoint.openAI];
|
||||
|
||||
// Set comprehensive 'all' config with all new title options
|
||||
getAppConfig.mockResolvedValue({
|
||||
endpoints: {
|
||||
all: {
|
||||
titleConvo: true,
|
||||
titleModel: 'claude-3-haiku-20240307',
|
||||
titleMethod: 'completion', // Testing the new default method
|
||||
titlePrompt: 'Generate a concise, descriptive title for this conversation',
|
||||
titlePromptTemplate: 'Conversation summary: {{content}}',
|
||||
titleEndpoint: EModelEndpoint.anthropic, // Should switch provider to Anthropic
|
||||
},
|
||||
},
|
||||
});
|
||||
mockReq.app.locals.all = {
|
||||
titleConvo: true,
|
||||
titleModel: 'claude-3-haiku-20240307',
|
||||
titleMethod: 'completion', // Testing the new default method
|
||||
titlePrompt: 'Generate a concise, descriptive title for this conversation',
|
||||
titlePromptTemplate: 'Conversation summary: {{content}}',
|
||||
titleEndpoint: EModelEndpoint.anthropic, // Should switch provider to Anthropic
|
||||
};
|
||||
|
||||
const text = 'Test conversation about AI and machine learning';
|
||||
const abortController = new AbortController();
|
||||
@@ -437,17 +402,16 @@ describe('AgentClient - titleConvo', () => {
|
||||
// Clear previous calls
|
||||
mockRun.generateTitle.mockClear();
|
||||
|
||||
// Remove endpoint config
|
||||
delete mockReq.app.locals[EModelEndpoint.openAI];
|
||||
|
||||
// Set 'all' config with specific titleMethod
|
||||
getAppConfig.mockResolvedValue({
|
||||
endpoints: {
|
||||
all: {
|
||||
titleModel: 'gpt-4o-mini',
|
||||
titleMethod: method,
|
||||
titlePrompt: `Testing ${method} method`,
|
||||
titlePromptTemplate: `Template for ${method}: {{content}}`,
|
||||
},
|
||||
},
|
||||
});
|
||||
mockReq.app.locals.all = {
|
||||
titleModel: 'gpt-4o-mini',
|
||||
titleMethod: method,
|
||||
titlePrompt: `Testing ${method} method`,
|
||||
titlePromptTemplate: `Template for ${method}: {{content}}`,
|
||||
};
|
||||
|
||||
const text = `Test conversation for ${method} method`;
|
||||
const abortController = new AbortController();
|
||||
@@ -491,36 +455,32 @@ describe('AgentClient - titleConvo', () => {
|
||||
// Set up Azure endpoint with serverless config
|
||||
mockAgent.endpoint = EModelEndpoint.azureOpenAI;
|
||||
mockAgent.provider = EModelEndpoint.azureOpenAI;
|
||||
getAppConfig.mockResolvedValue({
|
||||
endpoints: {
|
||||
[EModelEndpoint.azureOpenAI]: {
|
||||
titleConvo: true,
|
||||
titleModel: 'grok-3',
|
||||
titleMethod: 'completion',
|
||||
titlePrompt: 'Azure serverless title prompt',
|
||||
streamRate: 35,
|
||||
modelGroupMap: {
|
||||
mockReq.app.locals[EModelEndpoint.azureOpenAI] = {
|
||||
titleConvo: true,
|
||||
titleModel: 'grok-3',
|
||||
titleMethod: 'completion',
|
||||
titlePrompt: 'Azure serverless title prompt',
|
||||
streamRate: 35,
|
||||
modelGroupMap: {
|
||||
'grok-3': {
|
||||
group: 'Azure AI Foundry',
|
||||
deploymentName: 'grok-3',
|
||||
},
|
||||
},
|
||||
groupMap: {
|
||||
'Azure AI Foundry': {
|
||||
apiKey: '${AZURE_API_KEY}',
|
||||
baseURL: 'https://test.services.ai.azure.com/models',
|
||||
version: '2024-05-01-preview',
|
||||
serverless: true,
|
||||
models: {
|
||||
'grok-3': {
|
||||
group: 'Azure AI Foundry',
|
||||
deploymentName: 'grok-3',
|
||||
},
|
||||
},
|
||||
groupMap: {
|
||||
'Azure AI Foundry': {
|
||||
apiKey: '${AZURE_API_KEY}',
|
||||
baseURL: 'https://test.services.ai.azure.com/models',
|
||||
version: '2024-05-01-preview',
|
||||
serverless: true,
|
||||
models: {
|
||||
'grok-3': {
|
||||
deploymentName: 'grok-3',
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
});
|
||||
};
|
||||
mockReq.body.endpoint = EModelEndpoint.azureOpenAI;
|
||||
mockReq.body.model = 'grok-3';
|
||||
|
||||
@@ -543,35 +503,31 @@ describe('AgentClient - titleConvo', () => {
|
||||
// Set up Azure endpoint
|
||||
mockAgent.endpoint = EModelEndpoint.azureOpenAI;
|
||||
mockAgent.provider = EModelEndpoint.azureOpenAI;
|
||||
getAppConfig.mockResolvedValue({
|
||||
endpoints: {
|
||||
[EModelEndpoint.azureOpenAI]: {
|
||||
titleConvo: true,
|
||||
titleModel: 'gpt-4o',
|
||||
titleMethod: 'structured',
|
||||
titlePrompt: 'Azure instance title prompt',
|
||||
streamRate: 35,
|
||||
modelGroupMap: {
|
||||
mockReq.app.locals[EModelEndpoint.azureOpenAI] = {
|
||||
titleConvo: true,
|
||||
titleModel: 'gpt-4o',
|
||||
titleMethod: 'structured',
|
||||
titlePrompt: 'Azure instance title prompt',
|
||||
streamRate: 35,
|
||||
modelGroupMap: {
|
||||
'gpt-4o': {
|
||||
group: 'eastus',
|
||||
deploymentName: 'gpt-4o',
|
||||
},
|
||||
},
|
||||
groupMap: {
|
||||
eastus: {
|
||||
apiKey: '${EASTUS_API_KEY}',
|
||||
instanceName: 'region-instance',
|
||||
version: '2024-02-15-preview',
|
||||
models: {
|
||||
'gpt-4o': {
|
||||
group: 'eastus',
|
||||
deploymentName: 'gpt-4o',
|
||||
},
|
||||
},
|
||||
groupMap: {
|
||||
eastus: {
|
||||
apiKey: '${EASTUS_API_KEY}',
|
||||
instanceName: 'region-instance',
|
||||
version: '2024-02-15-preview',
|
||||
models: {
|
||||
'gpt-4o': {
|
||||
deploymentName: 'gpt-4o',
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
});
|
||||
};
|
||||
mockReq.body.endpoint = EModelEndpoint.azureOpenAI;
|
||||
mockReq.body.model = 'gpt-4o';
|
||||
|
||||
@@ -595,36 +551,32 @@ describe('AgentClient - titleConvo', () => {
|
||||
mockAgent.endpoint = EModelEndpoint.azureOpenAI;
|
||||
mockAgent.provider = EModelEndpoint.azureOpenAI;
|
||||
mockAgent.model_parameters.model = 'gpt-4o-latest';
|
||||
getAppConfig.mockResolvedValue({
|
||||
endpoints: {
|
||||
[EModelEndpoint.azureOpenAI]: {
|
||||
titleConvo: true,
|
||||
titleModel: Constants.CURRENT_MODEL,
|
||||
titleMethod: 'functions',
|
||||
streamRate: 35,
|
||||
modelGroupMap: {
|
||||
mockReq.app.locals[EModelEndpoint.azureOpenAI] = {
|
||||
titleConvo: true,
|
||||
titleModel: Constants.CURRENT_MODEL,
|
||||
titleMethod: 'functions',
|
||||
streamRate: 35,
|
||||
modelGroupMap: {
|
||||
'gpt-4o-latest': {
|
||||
group: 'region-eastus',
|
||||
deploymentName: 'gpt-4o-mini',
|
||||
version: '2024-02-15-preview',
|
||||
},
|
||||
},
|
||||
groupMap: {
|
||||
'region-eastus': {
|
||||
apiKey: '${EASTUS2_API_KEY}',
|
||||
instanceName: 'test-instance',
|
||||
version: '2024-12-01-preview',
|
||||
models: {
|
||||
'gpt-4o-latest': {
|
||||
group: 'region-eastus',
|
||||
deploymentName: 'gpt-4o-mini',
|
||||
version: '2024-02-15-preview',
|
||||
},
|
||||
},
|
||||
groupMap: {
|
||||
'region-eastus': {
|
||||
apiKey: '${EASTUS2_API_KEY}',
|
||||
instanceName: 'test-instance',
|
||||
version: '2024-12-01-preview',
|
||||
models: {
|
||||
'gpt-4o-latest': {
|
||||
deploymentName: 'gpt-4o-mini',
|
||||
version: '2024-02-15-preview',
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
});
|
||||
};
|
||||
mockReq.body.endpoint = EModelEndpoint.azureOpenAI;
|
||||
mockReq.body.model = 'gpt-4o-latest';
|
||||
|
||||
@@ -646,63 +598,59 @@ describe('AgentClient - titleConvo', () => {
|
||||
// Set up Azure endpoint
|
||||
mockAgent.endpoint = EModelEndpoint.azureOpenAI;
|
||||
mockAgent.provider = EModelEndpoint.azureOpenAI;
|
||||
getAppConfig.mockResolvedValue({
|
||||
endpoints: {
|
||||
[EModelEndpoint.azureOpenAI]: {
|
||||
titleConvo: true,
|
||||
titleModel: 'o1-mini',
|
||||
titleMethod: 'completion',
|
||||
streamRate: 35,
|
||||
modelGroupMap: {
|
||||
mockReq.app.locals[EModelEndpoint.azureOpenAI] = {
|
||||
titleConvo: true,
|
||||
titleModel: 'o1-mini',
|
||||
titleMethod: 'completion',
|
||||
streamRate: 35,
|
||||
modelGroupMap: {
|
||||
'gpt-4o': {
|
||||
group: 'eastus',
|
||||
deploymentName: 'gpt-4o',
|
||||
},
|
||||
'o1-mini': {
|
||||
group: 'region-eastus',
|
||||
deploymentName: 'o1-mini',
|
||||
},
|
||||
'codex-mini': {
|
||||
group: 'codex-mini',
|
||||
deploymentName: 'codex-mini',
|
||||
},
|
||||
},
|
||||
groupMap: {
|
||||
eastus: {
|
||||
apiKey: '${EASTUS_API_KEY}',
|
||||
instanceName: 'region-eastus',
|
||||
version: '2024-02-15-preview',
|
||||
models: {
|
||||
'gpt-4o': {
|
||||
group: 'eastus',
|
||||
deploymentName: 'gpt-4o',
|
||||
},
|
||||
},
|
||||
},
|
||||
'region-eastus': {
|
||||
apiKey: '${EASTUS2_API_KEY}',
|
||||
instanceName: 'region-eastus2',
|
||||
version: '2024-12-01-preview',
|
||||
models: {
|
||||
'o1-mini': {
|
||||
group: 'region-eastus',
|
||||
deploymentName: 'o1-mini',
|
||||
},
|
||||
'codex-mini': {
|
||||
group: 'codex-mini',
|
||||
deploymentName: 'codex-mini',
|
||||
},
|
||||
},
|
||||
groupMap: {
|
||||
eastus: {
|
||||
apiKey: '${EASTUS_API_KEY}',
|
||||
instanceName: 'region-eastus',
|
||||
version: '2024-02-15-preview',
|
||||
models: {
|
||||
'gpt-4o': {
|
||||
deploymentName: 'gpt-4o',
|
||||
},
|
||||
},
|
||||
},
|
||||
'region-eastus': {
|
||||
apiKey: '${EASTUS2_API_KEY}',
|
||||
instanceName: 'region-eastus2',
|
||||
version: '2024-12-01-preview',
|
||||
models: {
|
||||
'o1-mini': {
|
||||
deploymentName: 'o1-mini',
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
'codex-mini': {
|
||||
apiKey: '${AZURE_API_KEY}',
|
||||
baseURL: 'https://example.cognitiveservices.azure.com/openai/',
|
||||
version: '2025-04-01-preview',
|
||||
serverless: true,
|
||||
models: {
|
||||
'codex-mini': {
|
||||
apiKey: '${AZURE_API_KEY}',
|
||||
baseURL: 'https://example.cognitiveservices.azure.com/openai/',
|
||||
version: '2025-04-01-preview',
|
||||
serverless: true,
|
||||
models: {
|
||||
'codex-mini': {
|
||||
deploymentName: 'codex-mini',
|
||||
},
|
||||
},
|
||||
deploymentName: 'codex-mini',
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
});
|
||||
};
|
||||
mockReq.body.endpoint = EModelEndpoint.azureOpenAI;
|
||||
mockReq.body.model = 'o1-mini';
|
||||
|
||||
@@ -731,37 +679,36 @@ describe('AgentClient - titleConvo', () => {
|
||||
mockReq.body.endpoint = EModelEndpoint.azureOpenAI;
|
||||
mockReq.body.model = 'gpt-4';
|
||||
|
||||
// Remove Azure-specific config
|
||||
delete mockReq.app.locals[EModelEndpoint.azureOpenAI];
|
||||
|
||||
// Set 'all' config as fallback with a serverless Azure config
|
||||
getAppConfig.mockResolvedValue({
|
||||
endpoints: {
|
||||
all: {
|
||||
titleConvo: true,
|
||||
titleModel: 'gpt-4',
|
||||
titleMethod: 'structured',
|
||||
titlePrompt: 'Fallback title prompt from all config',
|
||||
titlePromptTemplate: 'Template: {{content}}',
|
||||
modelGroupMap: {
|
||||
mockReq.app.locals.all = {
|
||||
titleConvo: true,
|
||||
titleModel: 'gpt-4',
|
||||
titleMethod: 'structured',
|
||||
titlePrompt: 'Fallback title prompt from all config',
|
||||
titlePromptTemplate: 'Template: {{content}}',
|
||||
modelGroupMap: {
|
||||
'gpt-4': {
|
||||
group: 'default-group',
|
||||
deploymentName: 'gpt-4',
|
||||
},
|
||||
},
|
||||
groupMap: {
|
||||
'default-group': {
|
||||
apiKey: '${AZURE_API_KEY}',
|
||||
baseURL: 'https://default.openai.azure.com/',
|
||||
version: '2024-02-15-preview',
|
||||
serverless: true,
|
||||
models: {
|
||||
'gpt-4': {
|
||||
group: 'default-group',
|
||||
deploymentName: 'gpt-4',
|
||||
},
|
||||
},
|
||||
groupMap: {
|
||||
'default-group': {
|
||||
apiKey: '${AZURE_API_KEY}',
|
||||
baseURL: 'https://default.openai.azure.com/',
|
||||
version: '2024-02-15-preview',
|
||||
serverless: true,
|
||||
models: {
|
||||
'gpt-4': {
|
||||
deploymentName: 'gpt-4',
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
});
|
||||
};
|
||||
|
||||
const text = 'Test Azure with all config fallback';
|
||||
const abortController = new AbortController();
|
||||
@@ -1035,6 +982,13 @@ describe('AgentClient - titleConvo', () => {
|
||||
};
|
||||
|
||||
mockReq = {
|
||||
app: {
|
||||
locals: {
|
||||
memory: {
|
||||
messageWindowSize: 3,
|
||||
},
|
||||
},
|
||||
},
|
||||
user: {
|
||||
id: 'user-123',
|
||||
personalization: {
|
||||
@@ -1043,13 +997,6 @@ describe('AgentClient - titleConvo', () => {
|
||||
},
|
||||
};
|
||||
|
||||
// Mock getAppConfig for memory tests
|
||||
getAppConfig.mockResolvedValue({
|
||||
memory: {
|
||||
messageWindowSize: 3,
|
||||
},
|
||||
});
|
||||
|
||||
mockRes = {};
|
||||
|
||||
mockOptions = {
|
||||
|
||||
@@ -9,6 +9,24 @@ const {
|
||||
const { disposeClient, clientRegistry, requestDataMap } = require('~/server/cleanup');
|
||||
const { saveMessage } = require('~/models');
|
||||
|
||||
function createCloseHandler(abortController) {
|
||||
return function (manual) {
|
||||
if (!manual) {
|
||||
logger.debug('[AgentController] Request closed');
|
||||
}
|
||||
if (!abortController) {
|
||||
return;
|
||||
} else if (abortController.signal.aborted) {
|
||||
return;
|
||||
} else if (abortController.requestCompleted) {
|
||||
return;
|
||||
}
|
||||
|
||||
abortController.abort();
|
||||
logger.debug('[AgentController] Request aborted on close');
|
||||
};
|
||||
}
|
||||
|
||||
const AgentController = async (req, res, next, initializeClient, addTitle) => {
|
||||
let {
|
||||
text,
|
||||
@@ -31,7 +49,6 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
|
||||
let userMessagePromise;
|
||||
let getAbortData;
|
||||
let client = null;
|
||||
// Initialize as an array
|
||||
let cleanupHandlers = [];
|
||||
|
||||
const newConvo = !conversationId;
|
||||
@@ -62,9 +79,7 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
|
||||
// Create a function to handle final cleanup
|
||||
const performCleanup = () => {
|
||||
logger.debug('[AgentController] Performing cleanup');
|
||||
// Make sure cleanupHandlers is an array before iterating
|
||||
if (Array.isArray(cleanupHandlers)) {
|
||||
// Execute all cleanup handlers
|
||||
for (const handler of cleanupHandlers) {
|
||||
try {
|
||||
if (typeof handler === 'function') {
|
||||
@@ -105,8 +120,33 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
|
||||
};
|
||||
|
||||
try {
|
||||
/** @type {{ client: TAgentClient }} */
|
||||
const result = await initializeClient({ req, res, endpointOption });
|
||||
let prelimAbortController = new AbortController();
|
||||
const prelimCloseHandler = createCloseHandler(prelimAbortController);
|
||||
res.on('close', prelimCloseHandler);
|
||||
const removePrelimHandler = (manual) => {
|
||||
try {
|
||||
prelimCloseHandler(manual);
|
||||
res.removeListener('close', prelimCloseHandler);
|
||||
} catch (e) {
|
||||
logger.error('[AgentController] Error removing close listener', e);
|
||||
}
|
||||
};
|
||||
cleanupHandlers.push(removePrelimHandler);
|
||||
/** @type {{ client: TAgentClient; userMCPAuthMap?: Record<string, Record<string, string>> }} */
|
||||
const result = await initializeClient({
|
||||
req,
|
||||
res,
|
||||
endpointOption,
|
||||
signal: prelimAbortController.signal,
|
||||
});
|
||||
if (prelimAbortController.signal?.aborted) {
|
||||
prelimAbortController = null;
|
||||
throw new Error('Request was aborted before initialization could complete');
|
||||
} else {
|
||||
prelimAbortController = null;
|
||||
removePrelimHandler(true);
|
||||
cleanupHandlers.pop();
|
||||
}
|
||||
client = result.client;
|
||||
|
||||
// Register client with finalization registry if available
|
||||
@@ -138,22 +178,7 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
|
||||
};
|
||||
|
||||
const { abortController, onStart } = createAbortController(req, res, getAbortData, getReqData);
|
||||
|
||||
// Simple handler to avoid capturing scope
|
||||
const closeHandler = () => {
|
||||
logger.debug('[AgentController] Request closed');
|
||||
if (!abortController) {
|
||||
return;
|
||||
} else if (abortController.signal.aborted) {
|
||||
return;
|
||||
} else if (abortController.requestCompleted) {
|
||||
return;
|
||||
}
|
||||
|
||||
abortController.abort();
|
||||
logger.debug('[AgentController] Request aborted on close');
|
||||
};
|
||||
|
||||
const closeHandler = createCloseHandler(abortController);
|
||||
res.on('close', closeHandler);
|
||||
cleanupHandlers.push(() => {
|
||||
try {
|
||||
@@ -175,6 +200,7 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
|
||||
abortController,
|
||||
overrideParentMessageId,
|
||||
isEdited: !!editedContent,
|
||||
userMCPAuthMap: result.userMCPAuthMap,
|
||||
responseMessageId: editedResponseMessageId,
|
||||
progressOptions: {
|
||||
res,
|
||||
|
||||
@@ -31,12 +31,12 @@ const {
|
||||
grantPermission,
|
||||
} = require('~/server/services/PermissionService');
|
||||
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
|
||||
const { getCachedTools, getAppConfig } = require('~/server/services/Config');
|
||||
const { resizeAvatar } = require('~/server/services/Files/images/avatar');
|
||||
const { getFileStrategy } = require('~/server/utils/getFileStrategy');
|
||||
const { refreshS3Url } = require('~/server/services/Files/S3/crud');
|
||||
const { filterFile } = require('~/server/services/Files/process');
|
||||
const { updateAction, getActions } = require('~/models/Action');
|
||||
const { getCachedTools } = require('~/server/services/Config');
|
||||
const { deleteFileByFilter } = require('~/models/File');
|
||||
const { getCategoriesWithCounts } = require('~/models');
|
||||
|
||||
@@ -487,7 +487,6 @@ const getListAgentsHandler = async (req, res) => {
|
||||
*/
|
||||
const uploadAgentAvatarHandler = async (req, res) => {
|
||||
try {
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
filterFile({ req, file: req.file, image: true, isAvatar: true });
|
||||
const { agent_id } = req.params;
|
||||
if (!agent_id) {
|
||||
@@ -511,7 +510,9 @@ const uploadAgentAvatarHandler = async (req, res) => {
|
||||
}
|
||||
|
||||
const buffer = await fs.readFile(req.file.path);
|
||||
const fileStrategy = getFileStrategy(appConfig, { isAvatar: true });
|
||||
|
||||
const fileStrategy = getFileStrategy(req.app.locals, { isAvatar: true });
|
||||
|
||||
const resizedBuffer = await resizeAvatar({
|
||||
userId: req.user.id,
|
||||
input: buffer,
|
||||
|
||||
@@ -29,7 +29,6 @@ const { createRun, StreamRunManager } = require('~/server/services/Runs');
|
||||
const { addTitle } = require('~/server/services/Endpoints/assistants');
|
||||
const { createRunBody } = require('~/server/services/createRunBody');
|
||||
const { sendResponse } = require('~/server/middleware/error');
|
||||
const { getAppConfig } = require('~/server/services/Config');
|
||||
const { getTransactions } = require('~/models/Transaction');
|
||||
const { checkBalance } = require('~/models/balanceMethods');
|
||||
const { getConvo } = require('~/models/Conversation');
|
||||
@@ -48,7 +47,6 @@ const { getOpenAIClient } = require('./helpers');
|
||||
* @returns {void}
|
||||
*/
|
||||
const chatV1 = async (req, res) => {
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
logger.debug('[/assistants/chat/] req.body', req.body);
|
||||
|
||||
const {
|
||||
@@ -253,7 +251,7 @@ const chatV1 = async (req, res) => {
|
||||
}
|
||||
|
||||
const checkBalanceBeforeRun = async () => {
|
||||
const balance = appConfig?.balance;
|
||||
const balance = req.app?.locals?.balance;
|
||||
if (!balance?.enabled) {
|
||||
return;
|
||||
}
|
||||
|
||||
@@ -26,7 +26,6 @@ const validateAuthor = require('~/server/middleware/assistants/validateAuthor');
|
||||
const { createRun, StreamRunManager } = require('~/server/services/Runs');
|
||||
const { addTitle } = require('~/server/services/Endpoints/assistants');
|
||||
const { createRunBody } = require('~/server/services/createRunBody');
|
||||
const { getAppConfig } = require('~/server/services/Config');
|
||||
const { getTransactions } = require('~/models/Transaction');
|
||||
const { checkBalance } = require('~/models/balanceMethods');
|
||||
const { getConvo } = require('~/models/Conversation');
|
||||
@@ -45,7 +44,6 @@ const { getOpenAIClient } = require('./helpers');
|
||||
*/
|
||||
const chatV2 = async (req, res) => {
|
||||
logger.debug('[/assistants/chat/] req.body', req.body);
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
|
||||
/** @type {{files: MongoFile[]}} */
|
||||
const {
|
||||
@@ -128,7 +126,7 @@ const chatV2 = async (req, res) => {
|
||||
}
|
||||
|
||||
const checkBalanceBeforeRun = async () => {
|
||||
const balance = appConfig?.balance;
|
||||
const balance = req.app?.locals?.balance;
|
||||
if (!balance?.enabled) {
|
||||
return;
|
||||
}
|
||||
@@ -376,9 +374,9 @@ const chatV2 = async (req, res) => {
|
||||
};
|
||||
|
||||
/** @type {undefined | TAssistantEndpoint} */
|
||||
const config = appConfig.endpoints?.[endpoint] ?? {};
|
||||
const config = req.app.locals[endpoint] ?? {};
|
||||
/** @type {undefined | TBaseEndpoint} */
|
||||
const allConfig = appConfig.endpoints?.all;
|
||||
const allConfig = req.app.locals.all;
|
||||
|
||||
const streamRunManager = new StreamRunManager({
|
||||
req,
|
||||
|
||||
@@ -7,8 +7,8 @@ const {
|
||||
const {
|
||||
initializeClient: initAzureClient,
|
||||
} = require('~/server/services/Endpoints/azureAssistants');
|
||||
const { getEndpointsConfig, getAppConfig } = require('~/server/services/Config');
|
||||
const { initializeClient } = require('~/server/services/Endpoints/assistants');
|
||||
const { getEndpointsConfig } = require('~/server/services/Config');
|
||||
|
||||
/**
|
||||
* @param {Express.Request} req
|
||||
@@ -210,7 +210,6 @@ async function getOpenAIClient({ req, res, endpointOption, initAppClient, overri
|
||||
* @returns {Promise<AssistantListResponse>} 200 - success response - application/json
|
||||
*/
|
||||
const fetchAssistants = async ({ req, res, overrideEndpoint }) => {
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
const {
|
||||
limit = 100,
|
||||
order = 'desc',
|
||||
@@ -231,20 +230,20 @@ const fetchAssistants = async ({ req, res, overrideEndpoint }) => {
|
||||
if (endpoint === EModelEndpoint.assistants) {
|
||||
({ body } = await listAllAssistants({ req, res, version, query }));
|
||||
} else if (endpoint === EModelEndpoint.azureAssistants) {
|
||||
const azureConfig = appConfig.endpoints?.[EModelEndpoint.azureOpenAI];
|
||||
const azureConfig = req.app.locals[EModelEndpoint.azureOpenAI];
|
||||
body = await listAssistantsForAzure({ req, res, version, azureConfig, query });
|
||||
}
|
||||
|
||||
if (req.user.role === SystemRoles.ADMIN) {
|
||||
return body;
|
||||
} else if (!appConfig.endpoints?.[endpoint]) {
|
||||
} else if (!req.app.locals[endpoint]) {
|
||||
return body;
|
||||
}
|
||||
|
||||
body.data = filterAssistants({
|
||||
userId: req.user.id,
|
||||
assistants: body.data,
|
||||
assistantsConfig: appConfig.endpoints?.[endpoint],
|
||||
assistantsConfig: req.app.locals[endpoint],
|
||||
});
|
||||
return body;
|
||||
};
|
||||
|
||||
@@ -5,9 +5,9 @@ const { uploadImageBuffer, filterFile } = require('~/server/services/Files/proce
|
||||
const validateAuthor = require('~/server/middleware/assistants/validateAuthor');
|
||||
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
|
||||
const { deleteAssistantActions } = require('~/server/services/ActionService');
|
||||
const { getCachedTools, getAppConfig } = require('~/server/services/Config');
|
||||
const { updateAssistantDoc, getAssistants } = require('~/models/Assistant');
|
||||
const { getOpenAIClient, fetchAssistants } = require('./helpers');
|
||||
const { getCachedTools } = require('~/server/services/Config');
|
||||
const { manifestToolMap } = require('~/app/clients/tools');
|
||||
const { deleteFileByFilter } = require('~/models/File');
|
||||
|
||||
@@ -258,9 +258,8 @@ function filterAssistantDocs({ documents, userId, assistantsConfig = {} }) {
|
||||
*/
|
||||
const getAssistantDocuments = async (req, res) => {
|
||||
try {
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
const endpoint = req.query;
|
||||
const assistantsConfig = appConfig.endpoints?.[endpoint];
|
||||
const assistantsConfig = req.app.locals[endpoint];
|
||||
const documents = await getAssistants(
|
||||
{},
|
||||
{
|
||||
@@ -297,7 +296,6 @@ const getAssistantDocuments = async (req, res) => {
|
||||
*/
|
||||
const uploadAssistantAvatar = async (req, res) => {
|
||||
try {
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
filterFile({ req, file: req.file, image: true, isAvatar: true });
|
||||
const { assistant_id } = req.params;
|
||||
if (!assistant_id) {
|
||||
@@ -339,7 +337,7 @@ const uploadAssistantAvatar = async (req, res) => {
|
||||
const metadata = {
|
||||
..._metadata,
|
||||
avatar: image.filepath,
|
||||
avatar_source: appConfig.fileStrategy,
|
||||
avatar_source: req.app.locals.fileStrategy,
|
||||
};
|
||||
|
||||
const promises = [];
|
||||
@@ -349,7 +347,7 @@ const uploadAssistantAvatar = async (req, res) => {
|
||||
{
|
||||
avatar: {
|
||||
filepath: image.filepath,
|
||||
source: appConfig.fileStrategy,
|
||||
source: req.app.locals.fileStrategy,
|
||||
},
|
||||
user: req.user.id,
|
||||
},
|
||||
|
||||
@@ -13,7 +13,6 @@ const { processFileURL, uploadImageBuffer } = require('~/server/services/Files/p
|
||||
const { processCodeOutput } = require('~/server/services/Files/Code/process');
|
||||
const { createToolCall, getToolCallsByConvo } = require('~/models/ToolCall');
|
||||
const { loadAuthValues } = require('~/server/services/Tools/credentials');
|
||||
const { getAppConfig } = require('~/server/services/Config');
|
||||
const { loadTools } = require('~/app/clients/tools/util');
|
||||
const { getRoleByName } = require('~/models/Role');
|
||||
const { getMessage } = require('~/models/Message');
|
||||
@@ -36,10 +35,9 @@ const toolAccessPermType = {
|
||||
*/
|
||||
const verifyWebSearchAuth = async (req, res) => {
|
||||
try {
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
const userId = req.user.id;
|
||||
/** @type {TCustomConfig['webSearch']} */
|
||||
const webSearchConfig = appConfig?.webSearch || {};
|
||||
const webSearchConfig = req.app.locals?.webSearch || {};
|
||||
const result = await loadWebSearchAuth({
|
||||
userId,
|
||||
loadAuthValues,
|
||||
@@ -112,7 +110,6 @@ const verifyToolAuth = async (req, res) => {
|
||||
*/
|
||||
const callTool = async (req, res) => {
|
||||
try {
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
const { toolId = '' } = req.params;
|
||||
if (!fieldsMap[toolId]) {
|
||||
logger.warn(`[${toolId}/call] User ${req.user.id} attempted call to invalid tool`);
|
||||
@@ -158,10 +155,8 @@ const callTool = async (req, res) => {
|
||||
returnMetadata: true,
|
||||
processFileURL,
|
||||
uploadImageBuffer,
|
||||
fileStrategy: req.app.locals.fileStrategy,
|
||||
},
|
||||
webSearch: appConfig.webSearch,
|
||||
fileStrategy: appConfig.fileStrategy,
|
||||
imageOutputType: appConfig.imageOutputType,
|
||||
});
|
||||
|
||||
const tool = loadedTools[0];
|
||||
|
||||
@@ -14,14 +14,12 @@ const { isEnabled, ErrorController } = require('@librechat/api');
|
||||
const { connectDb, indexSync } = require('~/db');
|
||||
const validateImageRequest = require('./middleware/validateImageRequest');
|
||||
const { jwtLogin, ldapLogin, passportLogin } = require('~/strategies');
|
||||
const { updateInterfacePermissions } = require('~/models/interface');
|
||||
const { checkMigrations } = require('./services/start/migration');
|
||||
const initializeMCPs = require('./services/initializeMCPs');
|
||||
const configureSocialLogins = require('./socialLogins');
|
||||
const { getAppConfig } = require('./services/Config');
|
||||
const AppService = require('./services/AppService');
|
||||
const staticCache = require('./utils/staticCache');
|
||||
const noIndex = require('./middleware/noIndex');
|
||||
const { seedDatabase } = require('~/models');
|
||||
const routes = require('./routes');
|
||||
|
||||
const { PORT, HOST, ALLOW_SOCIAL_LOGIN, DISABLE_COMPRESSION, TRUST_PROXY } = process.env ?? {};
|
||||
@@ -47,11 +45,9 @@ const startServer = async () => {
|
||||
app.disable('x-powered-by');
|
||||
app.set('trust proxy', trusted_proxy);
|
||||
|
||||
await seedDatabase();
|
||||
await AppService(app);
|
||||
|
||||
const appConfig = await getAppConfig();
|
||||
await updateInterfacePermissions(appConfig);
|
||||
const indexPath = path.join(appConfig.paths.dist, 'index.html');
|
||||
const indexPath = path.join(app.locals.paths.dist, 'index.html');
|
||||
const indexHTML = fs.readFileSync(indexPath, 'utf8');
|
||||
|
||||
app.get('/health', (_req, res) => res.status(200).send('OK'));
|
||||
@@ -70,9 +66,10 @@ const startServer = async () => {
|
||||
console.warn('Response compression has been disabled via DISABLE_COMPRESSION.');
|
||||
}
|
||||
|
||||
app.use(staticCache(appConfig.paths.dist));
|
||||
app.use(staticCache(appConfig.paths.fonts));
|
||||
app.use(staticCache(appConfig.paths.assets));
|
||||
// Serve static assets with aggressive caching
|
||||
app.use(staticCache(app.locals.paths.dist));
|
||||
app.use(staticCache(app.locals.paths.fonts));
|
||||
app.use(staticCache(app.locals.paths.assets));
|
||||
|
||||
if (!ALLOW_SOCIAL_LOGIN) {
|
||||
console.warn('Social logins are disabled. Set ALLOW_SOCIAL_LOGIN=true to enable them.');
|
||||
@@ -149,7 +146,7 @@ const startServer = async () => {
|
||||
logger.info(`Server listening at http://${host == '0.0.0.0' ? 'localhost' : host}:${port}`);
|
||||
}
|
||||
|
||||
initializeMCPs().then(() => checkMigrations());
|
||||
initializeMCPs(app).then(() => checkMigrations());
|
||||
});
|
||||
};
|
||||
|
||||
|
||||
@@ -3,28 +3,9 @@ const request = require('supertest');
|
||||
const { MongoMemoryServer } = require('mongodb-memory-server');
|
||||
const mongoose = require('mongoose');
|
||||
|
||||
jest.mock('~/server/services/Config', () => ({
|
||||
loadCustomConfig: jest.fn(() => Promise.resolve({})),
|
||||
setAppConfig: jest.fn(),
|
||||
getAppConfig: jest.fn().mockResolvedValue({
|
||||
paths: {
|
||||
uploads: '/tmp',
|
||||
dist: '/tmp/dist',
|
||||
fonts: '/tmp/fonts',
|
||||
assets: '/tmp/assets',
|
||||
},
|
||||
fileStrategy: 'local',
|
||||
imageOutputType: 'PNG',
|
||||
}),
|
||||
setCachedTools: jest.fn(),
|
||||
}));
|
||||
|
||||
jest.mock('~/app/clients/tools', () => ({
|
||||
createOpenAIImageTools: jest.fn(() => []),
|
||||
createYouTubeTools: jest.fn(() => []),
|
||||
manifestToolMap: {},
|
||||
toolkits: [],
|
||||
}));
|
||||
jest.mock('~/server/services/Config/loadCustomConfig', () => {
|
||||
return jest.fn(() => Promise.resolve({}));
|
||||
});
|
||||
|
||||
describe('Server Configuration', () => {
|
||||
// Increase the default timeout to allow for Mongo cleanup
|
||||
@@ -50,22 +31,6 @@ describe('Server Configuration', () => {
|
||||
});
|
||||
|
||||
beforeAll(async () => {
|
||||
// Create the required directories and files for the test
|
||||
const fs = require('fs');
|
||||
const path = require('path');
|
||||
|
||||
const dirs = ['/tmp/dist', '/tmp/fonts', '/tmp/assets'];
|
||||
dirs.forEach((dir) => {
|
||||
if (!fs.existsSync(dir)) {
|
||||
fs.mkdirSync(dir, { recursive: true });
|
||||
}
|
||||
});
|
||||
|
||||
fs.writeFileSync(
|
||||
path.join('/tmp/dist', 'index.html'),
|
||||
'<!DOCTYPE html><html><head><title>LibreChat</title></head><body><div id="root"></div></body></html>',
|
||||
);
|
||||
|
||||
mongoServer = await MongoMemoryServer.create();
|
||||
process.env.MONGO_URI = mongoServer.getUri();
|
||||
process.env.PORT = '0'; // Use a random available port
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
const { logger } = require('@librechat/data-schemas');
|
||||
const { countTokens, isEnabled, sendEvent } = require('@librechat/api');
|
||||
const { isAssistantsEndpoint, ErrorTypes } = require('librechat-data-provider');
|
||||
const { isAssistantsEndpoint, ErrorTypes, Constants } = require('librechat-data-provider');
|
||||
const { truncateText, smartTruncateText } = require('~/app/clients/prompts');
|
||||
const clearPendingReq = require('~/cache/clearPendingReq');
|
||||
const { sendError } = require('~/server/middleware/error');
|
||||
@@ -11,6 +11,10 @@ const { abortRun } = require('./abortRun');
|
||||
|
||||
const abortDataMap = new WeakMap();
|
||||
|
||||
/**
|
||||
* @param {string} abortKey
|
||||
* @returns {boolean}
|
||||
*/
|
||||
function cleanupAbortController(abortKey) {
|
||||
if (!abortControllers.has(abortKey)) {
|
||||
return false;
|
||||
@@ -71,6 +75,20 @@ function cleanupAbortController(abortKey) {
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param {string} abortKey
|
||||
* @returns {function(): void}
|
||||
*/
|
||||
function createCleanUpHandler(abortKey) {
|
||||
return function () {
|
||||
try {
|
||||
cleanupAbortController(abortKey);
|
||||
} catch {
|
||||
// Ignore cleanup errors
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
async function abortMessage(req, res) {
|
||||
let { abortKey, endpoint } = req.body;
|
||||
|
||||
@@ -172,11 +190,15 @@ const createAbortController = (req, res, getAbortData, getReqData) => {
|
||||
/**
|
||||
* @param {TMessage} userMessage
|
||||
* @param {string} responseMessageId
|
||||
* @param {boolean} [isNewConvo]
|
||||
*/
|
||||
const onStart = (userMessage, responseMessageId) => {
|
||||
const onStart = (userMessage, responseMessageId, isNewConvo) => {
|
||||
sendEvent(res, { message: userMessage, created: true });
|
||||
|
||||
const abortKey = userMessage?.conversationId ?? req.user.id;
|
||||
const prelimAbortKey = userMessage?.conversationId ?? req.user.id;
|
||||
const abortKey = isNewConvo
|
||||
? `${prelimAbortKey}${Constants.COMMON_DIVIDER}${Constants.NEW_CONVO}`
|
||||
: prelimAbortKey;
|
||||
getReqData({ abortKey });
|
||||
const prevRequest = abortControllers.get(abortKey);
|
||||
const { overrideUserMessageId } = req?.body ?? {};
|
||||
@@ -194,16 +216,7 @@ const createAbortController = (req, res, getAbortData, getReqData) => {
|
||||
};
|
||||
|
||||
abortControllers.set(addedAbortKey, { abortController, ...minimalOptions });
|
||||
|
||||
// Use a simple function for cleanup to avoid capturing context
|
||||
const cleanupHandler = () => {
|
||||
try {
|
||||
cleanupAbortController(addedAbortKey);
|
||||
} catch (e) {
|
||||
// Ignore cleanup errors
|
||||
}
|
||||
};
|
||||
|
||||
const cleanupHandler = createCleanUpHandler(addedAbortKey);
|
||||
res.on('finish', cleanupHandler);
|
||||
return;
|
||||
}
|
||||
@@ -216,16 +229,7 @@ const createAbortController = (req, res, getAbortData, getReqData) => {
|
||||
};
|
||||
|
||||
abortControllers.set(abortKey, { abortController, ...minimalOptions });
|
||||
|
||||
// Use a simple function for cleanup to avoid capturing context
|
||||
const cleanupHandler = () => {
|
||||
try {
|
||||
cleanupAbortController(abortKey);
|
||||
} catch (e) {
|
||||
// Ignore cleanup errors
|
||||
}
|
||||
};
|
||||
|
||||
const cleanupHandler = createCleanUpHandler(abortKey);
|
||||
res.on('finish', cleanupHandler);
|
||||
};
|
||||
|
||||
@@ -364,15 +368,7 @@ const handleAbortError = async (res, req, error, data) => {
|
||||
};
|
||||
}
|
||||
|
||||
// Create a simple callback without capturing parent scope
|
||||
const callback = async () => {
|
||||
try {
|
||||
cleanupAbortController(conversationId);
|
||||
} catch (e) {
|
||||
// Ignore cleanup errors
|
||||
}
|
||||
};
|
||||
|
||||
const callback = createCleanUpHandler(conversationId);
|
||||
await sendError(req, res, options, callback);
|
||||
};
|
||||
|
||||
|
||||
@@ -1,6 +1,5 @@
const { v4 } = require('uuid');
const { handleAbortError } = require('~/server/middleware/abortMiddleware');
const { getAppConfig } = require('~/server/services/Config/app');

/**
 * Checks if the assistant is supported or excluded
@@ -13,9 +12,8 @@ const { getAppConfig } = require('~/server/services/Config/app');
const validateAssistant = async (req, res, next) => {
  const { endpoint, conversationId, assistant_id, messageId } = req.body;

  const appConfig = await getAppConfig({ role: req.user?.role });
  /** @type {Partial<TAssistantEndpoint>} */
  const assistantsConfig = appConfig.endpoints?.[endpoint];
  const assistantsConfig = req.app.locals?.[endpoint];
  if (!assistantsConfig) {
    return next();
  }
@@ -1,5 +1,4 @@
const { SystemRoles } = require('librechat-data-provider');
const { getAppConfig } = require('~/server/services/Config/app');
const { getAssistant } = require('~/models/Assistant');

/**
@@ -21,9 +20,8 @@ const validateAuthor = async ({ req, openai, overrideEndpoint, overrideAssistant
  const assistant_id =
    overrideAssistantId ?? req.params.id ?? req.body.assistant_id ?? req.query.assistant_id;

  const appConfig = await getAppConfig({ role: req.user?.role });
  /** @type {Partial<TAssistantEndpoint>} */
  const assistantsConfig = appConfig.endpoints?.[endpoint];
  const assistantsConfig = req.app.locals?.[endpoint];
  if (!assistantsConfig) {
    return;
  }
@@ -10,7 +10,6 @@ const azureAssistants = require('~/server/services/Endpoints/azureAssistants');
const assistants = require('~/server/services/Endpoints/assistants');
const { processFiles } = require('~/server/services/Files/process');
const anthropic = require('~/server/services/Endpoints/anthropic');
const { getAppConfig } = require('~/server/services/Config/app');
const bedrock = require('~/server/services/Endpoints/bedrock');
const openAI = require('~/server/services/Endpoints/openAI');
const agents = require('~/server/services/Endpoints/agents');
@@ -41,10 +40,9 @@ async function buildEndpointOption(req, res, next) {
    return handleError(res, { text: 'Error parsing conversation' });
  }

  const appConfig = await getAppConfig({ role: req.user?.role });
  if (appConfig.modelSpecs?.list && appConfig.modelSpecs?.enforce) {
  if (req.app.locals.modelSpecs?.list && req.app.locals.modelSpecs?.enforce) {
    /** @type {{ list: TModelSpec[] }}*/
    const { list } = appConfig.modelSpecs;
    const { list } = req.app.locals.modelSpecs;
    const { spec } = parsedBody;

    if (!spec) {
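When `modelSpecs.enforce` is enabled, the middleware above only lets requests through that reference a spec from the configured list. A rough sketch of that kind of check (not the project's exact logic), using a folder-organized spec as sample data:

```js
// Hedged sketch of spec enforcement; the real handler also reconciles the
// parsed request body against the matched spec's preset before continuing.
function enforceModelSpec(modelSpecs, parsedBody) {
  if (!modelSpecs?.list || !modelSpecs?.enforce) {
    return { allowed: true };
  }
  const { spec } = parsedBody;
  if (!spec) {
    return { allowed: false, error: 'A model spec must be selected' };
  }
  const match = modelSpecs.list.find((item) => item.name === spec);
  if (!match) {
    return { allowed: false, error: `Unknown model spec: ${spec}` };
  }
  return { allowed: true, preset: match.preset };
}

// Example input shaped like a folder-organized spec entry:
const modelSpecs = {
  enforce: true,
  list: [
    {
      name: 'gpt4_turbo',
      folder: 'OpenAI Models',
      preset: { endpoint: 'openAI', model: 'gpt-4-turbo-preview' },
    },
  ],
};
console.log(enforceModelSpec(modelSpecs, { spec: 'gpt4_turbo' }));
```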
@@ -1,6 +1,5 @@
const { logger } = require('@librechat/data-schemas');
const { isEmailDomainAllowed } = require('~/server/services/domains');
const { getAppConfig } = require('~/server/services/Config');
const { logger } = require('~/config');

/**
 * Checks the domain's social login is allowed
@@ -15,10 +14,7 @@ const { getAppConfig } = require('~/server/services/Config');
 */
const checkDomainAllowed = async (req, res, next = () => {}) => {
  const email = req?.user?.email;
  const appConfig = getAppConfig({
    role: req?.user?.role,
  });
  if (email && !(await isEmailDomainAllowed(email, appConfig?.registration?.allowedDomains))) {
  if (email && !(await isEmailDomainAllowed(email))) {
    logger.error(`[Social Login] [Social Login not allowed] [Email: ${email}]`);
    return res.redirect('/login');
  } else {
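Both variants in this hunk delegate to `isEmailDomainAllowed`; one passes the allow-list explicitly, the other resolves it inside the service. A simplified sketch of what such a check typically does, assuming the allow-list is a plain array of domain strings:

```js
// Illustrative only: the project's isEmailDomainAllowed lives in
// ~/server/services/domains and may differ in details (async lookups, etc.).
function isEmailDomainAllowed(email, allowedDomains) {
  if (!allowedDomains || allowedDomains.length === 0) {
    return true; // no allow-list configured means every domain is accepted
  }
  const domain = email?.split('@')[1]?.toLowerCase();
  if (!domain) {
    return false;
  }
  return allowedDomains.some((allowed) => allowed.toLowerCase() === domain);
}

console.log(isEmailDomainAllowed('user@example.com', ['example.com'])); // true
console.log(isEmailDomainAllowed('user@other.org', ['example.com'])); // false
```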
@@ -1,18 +1,13 @@
|
||||
const jwt = require('jsonwebtoken');
|
||||
const validateImageRequest = require('~/server/middleware/validateImageRequest');
|
||||
|
||||
jest.mock('~/server/services/Config/app', () => ({
|
||||
getAppConfig: jest.fn(),
|
||||
}));
|
||||
|
||||
describe('validateImageRequest middleware', () => {
|
||||
let req, res, next;
|
||||
const validObjectId = '65cfb246f7ecadb8b1e8036b';
|
||||
const { getAppConfig } = require('~/server/services/Config/app');
|
||||
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
req = {
|
||||
app: { locals: { secureImageLinks: true } },
|
||||
headers: {},
|
||||
originalUrl: '',
|
||||
};
|
||||
@@ -22,86 +17,79 @@ describe('validateImageRequest middleware', () => {
|
||||
};
|
||||
next = jest.fn();
|
||||
process.env.JWT_REFRESH_SECRET = 'test-secret';
|
||||
|
||||
// Mock getAppConfig to return secureImageLinks: true by default
|
||||
getAppConfig.mockResolvedValue({
|
||||
secureImageLinks: true,
|
||||
});
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
jest.clearAllMocks();
|
||||
});
|
||||
|
||||
test('should call next() if secureImageLinks is false', async () => {
|
||||
getAppConfig.mockResolvedValue({
|
||||
secureImageLinks: false,
|
||||
});
|
||||
await validateImageRequest(req, res, next);
|
||||
test('should call next() if secureImageLinks is false', () => {
|
||||
req.app.locals.secureImageLinks = false;
|
||||
validateImageRequest(req, res, next);
|
||||
expect(next).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test('should return 401 if refresh token is not provided', async () => {
|
||||
await validateImageRequest(req, res, next);
|
||||
test('should return 401 if refresh token is not provided', () => {
|
||||
validateImageRequest(req, res, next);
|
||||
expect(res.status).toHaveBeenCalledWith(401);
|
||||
expect(res.send).toHaveBeenCalledWith('Unauthorized');
|
||||
});
|
||||
|
||||
test('should return 403 if refresh token is invalid', async () => {
|
||||
test('should return 403 if refresh token is invalid', () => {
|
||||
req.headers.cookie = 'refreshToken=invalid-token';
|
||||
await validateImageRequest(req, res, next);
|
||||
validateImageRequest(req, res, next);
|
||||
expect(res.status).toHaveBeenCalledWith(403);
|
||||
expect(res.send).toHaveBeenCalledWith('Access Denied');
|
||||
});
|
||||
|
||||
test('should return 403 if refresh token is expired', async () => {
|
||||
test('should return 403 if refresh token is expired', () => {
|
||||
const expiredToken = jwt.sign(
|
||||
{ id: validObjectId, exp: Math.floor(Date.now() / 1000) - 3600 },
|
||||
process.env.JWT_REFRESH_SECRET,
|
||||
);
|
||||
req.headers.cookie = `refreshToken=${expiredToken}`;
|
||||
await validateImageRequest(req, res, next);
|
||||
validateImageRequest(req, res, next);
|
||||
expect(res.status).toHaveBeenCalledWith(403);
|
||||
expect(res.send).toHaveBeenCalledWith('Access Denied');
|
||||
});
|
||||
|
||||
test('should call next() for valid image path', async () => {
|
||||
test('should call next() for valid image path', () => {
|
||||
const validToken = jwt.sign(
|
||||
{ id: validObjectId, exp: Math.floor(Date.now() / 1000) + 3600 },
|
||||
process.env.JWT_REFRESH_SECRET,
|
||||
);
|
||||
req.headers.cookie = `refreshToken=${validToken}`;
|
||||
req.originalUrl = `/images/${validObjectId}/example.jpg`;
|
||||
await validateImageRequest(req, res, next);
|
||||
validateImageRequest(req, res, next);
|
||||
expect(next).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test('should return 403 for invalid image path', async () => {
|
||||
test('should return 403 for invalid image path', () => {
|
||||
const validToken = jwt.sign(
|
||||
{ id: validObjectId, exp: Math.floor(Date.now() / 1000) + 3600 },
|
||||
process.env.JWT_REFRESH_SECRET,
|
||||
);
|
||||
req.headers.cookie = `refreshToken=${validToken}`;
|
||||
req.originalUrl = '/images/65cfb246f7ecadb8b1e8036c/example.jpg'; // Different ObjectId
|
||||
await validateImageRequest(req, res, next);
|
||||
validateImageRequest(req, res, next);
|
||||
expect(res.status).toHaveBeenCalledWith(403);
|
||||
expect(res.send).toHaveBeenCalledWith('Access Denied');
|
||||
});
|
||||
|
||||
test('should return 403 for invalid ObjectId format', async () => {
|
||||
test('should return 403 for invalid ObjectId format', () => {
|
||||
const validToken = jwt.sign(
|
||||
{ id: validObjectId, exp: Math.floor(Date.now() / 1000) + 3600 },
|
||||
process.env.JWT_REFRESH_SECRET,
|
||||
);
|
||||
req.headers.cookie = `refreshToken=${validToken}`;
|
||||
req.originalUrl = '/images/123/example.jpg'; // Invalid ObjectId
|
||||
await validateImageRequest(req, res, next);
|
||||
validateImageRequest(req, res, next);
|
||||
expect(res.status).toHaveBeenCalledWith(403);
|
||||
expect(res.send).toHaveBeenCalledWith('Access Denied');
|
||||
});
|
||||
|
||||
// File traversal tests
|
||||
test('should prevent file traversal attempts', async () => {
|
||||
test('should prevent file traversal attempts', () => {
|
||||
const validToken = jwt.sign(
|
||||
{ id: validObjectId, exp: Math.floor(Date.now() / 1000) + 3600 },
|
||||
process.env.JWT_REFRESH_SECRET,
|
||||
@@ -115,23 +103,23 @@ describe('validateImageRequest middleware', () => {
|
||||
`/images/${validObjectId}/%2e%2e%2f%2e%2e%2f%2e%2e%2fetc%2fpasswd`,
|
||||
];
|
||||
|
||||
for (const attempt of traversalAttempts) {
|
||||
traversalAttempts.forEach((attempt) => {
|
||||
req.originalUrl = attempt;
|
||||
await validateImageRequest(req, res, next);
|
||||
validateImageRequest(req, res, next);
|
||||
expect(res.status).toHaveBeenCalledWith(403);
|
||||
expect(res.send).toHaveBeenCalledWith('Access Denied');
|
||||
jest.clearAllMocks();
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
test('should handle URL encoded characters in valid paths', async () => {
|
||||
test('should handle URL encoded characters in valid paths', () => {
|
||||
const validToken = jwt.sign(
|
||||
{ id: validObjectId, exp: Math.floor(Date.now() / 1000) + 3600 },
|
||||
process.env.JWT_REFRESH_SECRET,
|
||||
);
|
||||
req.headers.cookie = `refreshToken=${validToken}`;
|
||||
req.originalUrl = `/images/${validObjectId}/image%20with%20spaces.jpg`;
|
||||
await validateImageRequest(req, res, next);
|
||||
validateImageRequest(req, res, next);
|
||||
expect(next).toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
|
||||
@@ -1,7 +1,6 @@
const cookies = require('cookie');
const jwt = require('jsonwebtoken');
const { logger } = require('@librechat/data-schemas');
const { getAppConfig } = require('~/server/services/Config/app');
const { logger } = require('~/config');

const OBJECT_ID_LENGTH = 24;
const OBJECT_ID_PATTERN = /^[0-9a-f]{24}$/i;
@@ -25,9 +24,8 @@ function isValidObjectId(id) {
 * Middleware to validate image request.
 * Must be set by `secureImageLinks` via custom config file.
 */
async function validateImageRequest(req, res, next) {
  const appConfig = await getAppConfig({ role: req.user?.role });
  if (!appConfig.secureImageLinks) {
function validateImageRequest(req, res, next) {
  if (!req.app.locals.secureImageLinks) {
    return next();
  }
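The middleware above only applies when `secureImageLinks` is enabled: it reads the refresh token cookie and serves `/images/<userId>/...` paths only when the path's ObjectId segment matches the token's user id. A condensed sketch of that idea; the real middleware also guards against path traversal and URL-encoded segments, which this version does not attempt:

```js
// Rough sketch of the validation idea; names and error handling are
// simplified relative to the real validateImageRequest middleware.
const jwt = require('jsonwebtoken');

const OBJECT_ID_PATTERN = /^[0-9a-f]{24}$/i;

function checkImagePath(originalUrl, refreshToken, secret) {
  let payload;
  try {
    payload = jwt.verify(refreshToken, secret);
  } catch {
    return { status: 403, ok: false }; // invalid or expired token
  }
  const match = originalUrl.match(/^\/images\/([^/]+)\//);
  const requestedId = match?.[1];
  if (!requestedId || !OBJECT_ID_PATTERN.test(requestedId) || requestedId !== payload.id) {
    return { status: 403, ok: false }; // path does not belong to this user
  }
  return { status: 200, ok: true };
}

module.exports = { checkImagePath };
```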
@@ -1,15 +1,17 @@
|
||||
const { MongoMemoryServer } = require('mongodb-memory-server');
|
||||
const express = require('express');
|
||||
const request = require('supertest');
|
||||
const mongoose = require('mongoose');
|
||||
const express = require('express');
|
||||
const { MongoMemoryServer } = require('mongodb-memory-server');
|
||||
|
||||
jest.mock('@librechat/api', () => ({
|
||||
...jest.requireActual('@librechat/api'),
|
||||
MCPOAuthHandler: {
|
||||
initiateOAuthFlow: jest.fn(),
|
||||
getFlowState: jest.fn(),
|
||||
completeOAuthFlow: jest.fn(),
|
||||
generateFlowId: jest.fn(),
|
||||
},
|
||||
getUserMCPAuthMap: jest.fn(),
|
||||
}));
|
||||
|
||||
jest.mock('@librechat/data-schemas', () => ({
|
||||
@@ -36,6 +38,7 @@ jest.mock('~/models', () => ({
|
||||
updateToken: jest.fn(),
|
||||
createToken: jest.fn(),
|
||||
deleteTokens: jest.fn(),
|
||||
findPluginAuthsByKeys: jest.fn(),
|
||||
}));
|
||||
|
||||
jest.mock('~/server/services/Config', () => ({
|
||||
@@ -44,6 +47,10 @@ jest.mock('~/server/services/Config', () => ({
|
||||
loadCustomConfig: jest.fn(),
|
||||
}));
|
||||
|
||||
jest.mock('~/server/services/Config/mcpToolsCache', () => ({
|
||||
updateMCPUserTools: jest.fn(),
|
||||
}));
|
||||
|
||||
jest.mock('~/server/services/MCP', () => ({
|
||||
getMCPSetupData: jest.fn(),
|
||||
getServerConnectionStatus: jest.fn(),
|
||||
@@ -66,6 +73,10 @@ jest.mock('~/server/middleware', () => ({
|
||||
requireJwtAuth: (req, res, next) => next(),
|
||||
}));
|
||||
|
||||
jest.mock('~/server/services/Tools/mcp', () => ({
|
||||
reinitMCPServer: jest.fn(),
|
||||
}));
|
||||
|
||||
describe('MCP Routes', () => {
|
||||
let app;
|
||||
let mongoServer;
|
||||
@@ -677,6 +688,13 @@ describe('MCP Routes', () => {
|
||||
require('~/config').getMCPManager.mockReturnValue(mockMcpManager);
|
||||
require('~/config').getFlowStateManager.mockReturnValue({});
|
||||
require('~/cache').getLogStores.mockReturnValue({});
|
||||
require('~/server/services/Tools/mcp').reinitMCPServer.mockResolvedValue({
|
||||
success: true,
|
||||
message: "MCP server 'oauth-server' ready for OAuth authentication",
|
||||
serverName: 'oauth-server',
|
||||
oauthRequired: true,
|
||||
oauthUrl: 'https://oauth.example.com/auth',
|
||||
});
|
||||
|
||||
const response = await request(app).post('/api/mcp/oauth-server/reinitialize');
|
||||
|
||||
@@ -701,6 +719,7 @@ describe('MCP Routes', () => {
|
||||
require('~/config').getMCPManager.mockReturnValue(mockMcpManager);
|
||||
require('~/config').getFlowStateManager.mockReturnValue({});
|
||||
require('~/cache').getLogStores.mockReturnValue({});
|
||||
require('~/server/services/Tools/mcp').reinitMCPServer.mockResolvedValue(null);
|
||||
|
||||
const response = await request(app).post('/api/mcp/error-server/reinitialize');
|
||||
|
||||
@@ -759,8 +778,18 @@ describe('MCP Routes', () => {
|
||||
require('~/cache').getLogStores.mockReturnValue({});
|
||||
|
||||
const { getCachedTools, setCachedTools } = require('~/server/services/Config');
|
||||
const { updateMCPUserTools } = require('~/server/services/Config/mcpToolsCache');
|
||||
getCachedTools.mockResolvedValue({});
|
||||
setCachedTools.mockResolvedValue();
|
||||
updateMCPUserTools.mockResolvedValue();
|
||||
|
||||
require('~/server/services/Tools/mcp').reinitMCPServer.mockResolvedValue({
|
||||
success: true,
|
||||
message: "MCP server 'test-server' reinitialized successfully",
|
||||
serverName: 'test-server',
|
||||
oauthRequired: false,
|
||||
oauthUrl: null,
|
||||
});
|
||||
|
||||
const response = await request(app).post('/api/mcp/test-server/reinitialize');
|
||||
|
||||
@@ -776,7 +805,6 @@ describe('MCP Routes', () => {
|
||||
'test-user-id',
|
||||
'test-server',
|
||||
);
|
||||
expect(setCachedTools).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('should handle server with custom user variables', async () => {
|
||||
@@ -798,21 +826,38 @@ describe('MCP Routes', () => {
|
||||
require('~/config').getMCPManager.mockReturnValue(mockMcpManager);
|
||||
require('~/config').getFlowStateManager.mockReturnValue({});
|
||||
require('~/cache').getLogStores.mockReturnValue({});
|
||||
require('~/server/services/PluginService').getUserPluginAuthValue.mockResolvedValue(
|
||||
'api-key-value',
|
||||
);
|
||||
require('@librechat/api').getUserMCPAuthMap.mockResolvedValue({
|
||||
'mcp:test-server': {
|
||||
API_KEY: 'api-key-value',
|
||||
},
|
||||
});
|
||||
require('~/models').findPluginAuthsByKeys.mockResolvedValue([
|
||||
{ key: 'API_KEY', value: 'api-key-value' },
|
||||
]);
|
||||
|
||||
const { getCachedTools, setCachedTools } = require('~/server/services/Config');
|
||||
const { updateMCPUserTools } = require('~/server/services/Config/mcpToolsCache');
|
||||
getCachedTools.mockResolvedValue({});
|
||||
setCachedTools.mockResolvedValue();
|
||||
updateMCPUserTools.mockResolvedValue();
|
||||
|
||||
require('~/server/services/Tools/mcp').reinitMCPServer.mockResolvedValue({
|
||||
success: true,
|
||||
message: "MCP server 'test-server' reinitialized successfully",
|
||||
serverName: 'test-server',
|
||||
oauthRequired: false,
|
||||
oauthUrl: null,
|
||||
});
|
||||
|
||||
const response = await request(app).post('/api/mcp/test-server/reinitialize');
|
||||
|
||||
expect(response.status).toBe(200);
|
||||
expect(response.body.success).toBe(true);
|
||||
expect(
|
||||
require('~/server/services/PluginService').getUserPluginAuthValue,
|
||||
).toHaveBeenCalledWith('test-user-id', 'API_KEY', false);
|
||||
expect(require('@librechat/api').getUserMCPAuthMap).toHaveBeenCalledWith({
|
||||
userId: 'test-user-id',
|
||||
servers: ['test-server'],
|
||||
findPluginAuthsByKeys: require('~/models').findPluginAuthsByKeys,
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
|
||||
@@ -16,7 +16,6 @@ const { getAgent, updateAgent, getListAgentsByAccess } = require('~/models/Agent
const { updateAction, getActions, deleteAction } = require('~/models/Action');
const { isActionDomainAllowed } = require('~/server/services/domains');
const { canAccessAgentResource } = require('~/server/middleware');
const { getAppConfig } = require('~/server/services/Config/app');
const { getRoleByName } = require('~/models/Role');

const router = express.Router();
@@ -84,11 +83,7 @@ router.post(
    }

    let metadata = await encryptMetadata(removeNullishValues(_metadata, true));
    const appConfig = await getAppConfig({ role: req.user.role });
    const isDomainAllowed = await isActionDomainAllowed(
      metadata.domain,
      appConfig?.actions?.allowedDomains,
    );
    const isDomainAllowed = await isActionDomainAllowed(metadata.domain);
    if (!isDomainAllowed) {
      return res.status(400).json({ message: 'Domain not allowed' });
    }
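The two variants of the action route differ only in where the domain allow-list comes from; the check itself rejects action metadata whose domain is not allowed, short-circuiting with HTTP 400. A hypothetical sketch of such a check — the real `isActionDomainAllowed` may normalize and match domains differently:

```js
// Assumption-labeled sketch, not the project's implementation.
function isActionDomainAllowed(domain, allowedDomains) {
  if (!domain || typeof domain !== 'string') {
    return false;
  }
  if (!allowedDomains || allowedDomains.length === 0) {
    return true; // nothing configured: allow all domains
  }
  const normalized = domain.replace(/^https?:\/\//, '').toLowerCase();
  return allowedDomains.some((allowed) => {
    const candidate = allowed.toLowerCase();
    return normalized === candidate || normalized.endsWith(`.${candidate}`);
  });
}

// In the route above, a failed check responds with:
//   res.status(400).json({ message: 'Domain not allowed' })
console.log(isActionDomainAllowed('https://api.example.com', ['example.com'])); // true
```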
@@ -6,7 +6,6 @@ const { getOpenAIClient } = require('~/server/controllers/assistants/helpers');
|
||||
const { updateAction, getActions, deleteAction } = require('~/models/Action');
|
||||
const { updateAssistantDoc, getAssistant } = require('~/models/Assistant');
|
||||
const { isActionDomainAllowed } = require('~/server/services/domains');
|
||||
const { getAppConfig } = require('~/server/services/Config');
|
||||
const { logger } = require('~/config');
|
||||
|
||||
const router = express.Router();
|
||||
@@ -22,7 +21,6 @@ const router = express.Router();
|
||||
*/
|
||||
router.post('/:assistant_id', async (req, res) => {
|
||||
try {
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
const { assistant_id } = req.params;
|
||||
|
||||
/** @type {{ functions: FunctionTool[], action_id: string, metadata: ActionMetadata }} */
|
||||
@@ -32,10 +30,7 @@ router.post('/:assistant_id', async (req, res) => {
|
||||
}
|
||||
|
||||
let metadata = await encryptMetadata(removeNullishValues(_metadata, true));
|
||||
const isDomainAllowed = await isActionDomainAllowed(
|
||||
metadata.domain,
|
||||
appConfig?.actions?.allowedDomains,
|
||||
);
|
||||
const isDomainAllowed = await isActionDomainAllowed(metadata.domain);
|
||||
if (!isDomainAllowed) {
|
||||
return res.status(400).json({ message: 'Domain not allowed' });
|
||||
}
|
||||
@@ -130,7 +125,7 @@ router.post('/:assistant_id', async (req, res) => {
|
||||
}
|
||||
|
||||
/* Map Azure OpenAI model to the assistant as defined by config */
|
||||
if (appConfig.endpoints?.[EModelEndpoint.azureOpenAI]?.assistants) {
|
||||
if (req.app.locals[EModelEndpoint.azureOpenAI]?.assistants) {
|
||||
updatedAssistant = {
|
||||
...updatedAssistant,
|
||||
model: req.body.model,
|
||||
|
||||
@@ -1,14 +1,9 @@
|
||||
const express = require('express');
|
||||
const { isEnabled } = require('@librechat/api');
|
||||
const { logger } = require('@librechat/data-schemas');
|
||||
const {
|
||||
Constants,
|
||||
CacheKeys,
|
||||
removeNullishValues,
|
||||
defaultSocialLogins,
|
||||
} = require('librechat-data-provider');
|
||||
const { CacheKeys, defaultSocialLogins, Constants } = require('librechat-data-provider');
|
||||
const { getCustomConfig } = require('~/server/services/Config/getCustomConfig');
|
||||
const { getLdapConfig } = require('~/server/services/Config/ldap');
|
||||
const { getAppConfig } = require('~/server/services/Config/app');
|
||||
const { getProjectByName } = require('~/models/Project');
|
||||
const { getMCPManager } = require('~/config');
|
||||
const { getLogStores } = require('~/cache');
|
||||
@@ -48,8 +43,6 @@ router.get('/', async function (req, res) {
|
||||
const ldap = getLdapConfig();
|
||||
|
||||
try {
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
|
||||
const isOpenIdEnabled =
|
||||
!!process.env.OPENID_CLIENT_ID &&
|
||||
!!process.env.OPENID_CLIENT_SECRET &&
|
||||
@@ -65,7 +58,7 @@ router.get('/', async function (req, res) {
|
||||
/** @type {TStartupConfig} */
|
||||
const payload = {
|
||||
appTitle: process.env.APP_TITLE || 'LibreChat',
|
||||
socialLogins: appConfig?.registration?.socialLogins ?? defaultSocialLogins,
|
||||
socialLogins: req.app.locals.socialLogins ?? defaultSocialLogins,
|
||||
discordLoginEnabled: !!process.env.DISCORD_CLIENT_ID && !!process.env.DISCORD_CLIENT_SECRET,
|
||||
facebookLoginEnabled:
|
||||
!!process.env.FACEBOOK_CLIENT_ID && !!process.env.FACEBOOK_CLIENT_SECRET,
|
||||
@@ -98,10 +91,10 @@ router.get('/', async function (req, res) {
|
||||
isEnabled(process.env.SHOW_BIRTHDAY_ICON) ||
|
||||
process.env.SHOW_BIRTHDAY_ICON === '',
|
||||
helpAndFaqURL: process.env.HELP_AND_FAQ_URL || 'https://librechat.ai',
|
||||
interface: appConfig?.interfaceConfig,
|
||||
turnstile: appConfig?.turnstileConfig,
|
||||
modelSpecs: appConfig?.modelSpecs,
|
||||
balance: appConfig?.balance,
|
||||
interface: req.app.locals.interfaceConfig,
|
||||
turnstile: req.app.locals.turnstileConfig,
|
||||
modelSpecs: req.app.locals.modelSpecs,
|
||||
balance: req.app.locals.balance,
|
||||
sharedLinksEnabled,
|
||||
publicSharedLinksEnabled,
|
||||
analyticsGtmId: process.env.ANALYTICS_GTM_ID,
|
||||
@@ -116,31 +109,27 @@ router.get('/', async function (req, res) {
    };

    payload.mcpServers = {};
    const getMCPServers = () => {
    const config = await getCustomConfig();
    if (config?.mcpServers != null) {
      try {
        const mcpManager = getMCPManager();
        if (!mcpManager) {
          return;
        }
        const mcpServers = mcpManager.getAllServers();
        if (!mcpServers) return;
        const oauthServers = mcpManager.getOAuthServers();
        for (const serverName in mcpServers) {
          const serverConfig = mcpServers[serverName];
          payload.mcpServers[serverName] = removeNullishValues({
        for (const serverName in config.mcpServers) {
          const serverConfig = config.mcpServers[serverName];
          payload.mcpServers[serverName] = {
            startup: serverConfig?.startup,
            chatMenu: serverConfig?.chatMenu,
            isOAuth: oauthServers?.has(serverName),
            customUserVars: serverConfig?.customUserVars,
          });
            customUserVars: serverConfig?.customUserVars || {},
          };
        }
      } catch (error) {
        logger.error('Error loading MCP servers', error);
      } catch (err) {
        logger.error('Error loading MCP servers', err);
      }
    };
    }

    getMCPServers();
    const webSearchConfig = appConfig?.webSearch;
    /** @type {TCustomConfig['webSearch']} */
    const webSearchConfig = req.app.locals.webSearch;
    if (
      webSearchConfig != null &&
      (webSearchConfig.searchProvider ||
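One variant of this block builds the public `mcpServers` payload from the MCP manager's registered servers and strips nullish fields; the other reads server entries straight from the custom config. A self-contained sketch of the payload shaping, with a stand-in for the `removeNullishValues` helper from `librechat-data-provider`:

```js
// Stand-in helper: drop null/undefined entries so the client only receives
// fields that are actually configured.
function removeNullishValues(obj) {
  return Object.fromEntries(Object.entries(obj).filter(([, value]) => value != null));
}

// Shape only the UI-relevant fields per server; oauthServers is assumed to be
// a Set of server names that require OAuth.
function buildMcpServersPayload(mcpServers, oauthServers) {
  const payload = {};
  for (const serverName in mcpServers) {
    const serverConfig = mcpServers[serverName];
    payload[serverName] = removeNullishValues({
      startup: serverConfig?.startup,
      chatMenu: serverConfig?.chatMenu,
      isOAuth: oauthServers?.has(serverName),
      customUserVars: serverConfig?.customUserVars,
    });
  }
  return payload;
}

const servers = { github: { startup: true, chatMenu: null, customUserVars: { TOKEN: { title: 'Token' } } } };
console.log(buildMcpServersPayload(servers, new Set(['github'])));
```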
@@ -2,16 +2,14 @@ const fs = require('fs').promises;
|
||||
const express = require('express');
|
||||
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
|
||||
const { resizeAvatar } = require('~/server/services/Files/images/avatar');
|
||||
const { getFileStrategy } = require('~/server/utils/getFileStrategy');
|
||||
const { filterFile } = require('~/server/services/Files/process');
|
||||
const { getAppConfig } = require('~/server/services/Config');
|
||||
const { getFileStrategy } = require('~/server/utils/getFileStrategy');
|
||||
const { logger } = require('~/config');
|
||||
|
||||
const router = express.Router();
|
||||
|
||||
router.post('/', async (req, res) => {
|
||||
try {
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
filterFile({ req, file: req.file, image: true, isAvatar: true });
|
||||
const userId = req.user.id;
|
||||
const { manual } = req.body;
|
||||
@@ -21,8 +19,8 @@ router.post('/', async (req, res) => {
|
||||
throw new Error('User ID is undefined');
|
||||
}
|
||||
|
||||
const fileStrategy = getFileStrategy(appConfig, { isAvatar: true });
|
||||
const desiredFormat = appConfig.imageOutputType;
|
||||
const fileStrategy = getFileStrategy(req.app.locals, { isAvatar: true });
|
||||
const desiredFormat = req.app.locals.imageOutputType;
|
||||
const resizedBuffer = await resizeAvatar({
|
||||
userId,
|
||||
input,
|
||||
@@ -41,7 +39,7 @@ router.post('/', async (req, res) => {
|
||||
try {
|
||||
await fs.unlink(req.file.path);
|
||||
logger.debug('[/files/images/avatar] Temp. image upload file deleted');
|
||||
} catch {
|
||||
} catch (error) {
|
||||
logger.debug('[/files/images/avatar] Temp. image upload file already deleted');
|
||||
}
|
||||
}
|
||||
|
||||
@@ -26,7 +26,6 @@ const { loadAuthValues } = require('~/server/services/Tools/credentials');
|
||||
const { refreshS3FileUrls } = require('~/server/services/Files/S3/crud');
|
||||
const { hasAccessToFilesViaAgent } = require('~/server/services/Files');
|
||||
const { getFiles, batchUpdateFiles } = require('~/models/File');
|
||||
const { getAppConfig } = require('~/server/services/Config');
|
||||
const { cleanFileName } = require('~/server/utils/files');
|
||||
const { getAssistant } = require('~/models/Assistant');
|
||||
const { getAgent } = require('~/models/Agent');
|
||||
@@ -37,9 +36,8 @@ const router = express.Router();
|
||||
|
||||
router.get('/', async (req, res) => {
|
||||
try {
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
const files = await getFiles({ user: req.user.id });
|
||||
if (appConfig.fileStrategy === FileSources.s3) {
|
||||
if (req.app.locals.fileStrategy === FileSources.s3) {
|
||||
try {
|
||||
const cache = getLogStores(CacheKeys.S3_EXPIRY_INTERVAL);
|
||||
const alreadyChecked = await cache.get(req.user.id);
|
||||
@@ -116,8 +114,7 @@ router.get('/agent/:agent_id', async (req, res) => {
|
||||
|
||||
router.get('/config', async (req, res) => {
|
||||
try {
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
res.status(200).json(appConfig.fileConfig);
|
||||
res.status(200).json(req.app.locals.fileConfig);
|
||||
} catch (error) {
|
||||
logger.error('[/files] Error getting fileConfig', error);
|
||||
res.status(400).json({ message: 'Error in request', error: error.message });
|
||||
|
||||
@@ -7,13 +7,11 @@ const {
|
||||
processImageFile,
|
||||
processAgentFileUpload,
|
||||
} = require('~/server/services/Files/process');
|
||||
const { getAppConfig } = require('~/server/services/Config');
|
||||
const { logger } = require('~/config');
|
||||
|
||||
const router = express.Router();
|
||||
|
||||
router.post('/', async (req, res) => {
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
const metadata = req.body;
|
||||
|
||||
try {
|
||||
@@ -32,7 +30,7 @@ router.post('/', async (req, res) => {
|
||||
logger.error('[/files/images] Error processing file:', error);
|
||||
try {
|
||||
const filepath = path.join(
|
||||
appConfig.paths.imageOutput,
|
||||
req.app.locals.paths.imageOutput,
|
||||
req.user.id,
|
||||
path.basename(req.file.filename),
|
||||
);
|
||||
@@ -45,7 +43,7 @@ router.post('/', async (req, res) => {
|
||||
try {
|
||||
await fs.unlink(req.file.path);
|
||||
logger.debug('[/files/images] Temp. image upload file deleted');
|
||||
} catch {
|
||||
} catch (error) {
|
||||
logger.debug('[/files/images] Temp. image upload file already deleted');
|
||||
}
|
||||
}
|
||||
|
||||
@@ -4,21 +4,15 @@ const crypto = require('crypto');
const multer = require('multer');
const { sanitizeFilename } = require('@librechat/api');
const { fileConfig: defaultFileConfig, mergeFileConfig } = require('librechat-data-provider');
const { getAppConfig } = require('~/server/services/Config');
const { getCustomConfig } = require('~/server/services/Config');

const storage = multer.diskStorage({
  destination: function (req, file, cb) {
    getAppConfig({ role: req.user?.role })
      .then((appConfig) => {
        const outputPath = path.join(appConfig.paths.uploads, 'temp', req.user.id);
        if (!fs.existsSync(outputPath)) {
          fs.mkdirSync(outputPath, { recursive: true });
        }
        cb(null, outputPath);
      })
      .catch((error) => {
        cb(error);
      });
    const outputPath = path.join(req.app.locals.paths.uploads, 'temp', req.user.id);
    if (!fs.existsSync(outputPath)) {
      fs.mkdirSync(outputPath, { recursive: true });
    }
    cb(null, outputPath);
  },
  filename: function (req, file, cb) {
    req.file_id = crypto.randomUUID();
@@ -74,8 +68,8 @@ const createFileFilter = (customFileConfig) => {
};

const createMulterInstance = async () => {
  const appConfig = await getAppConfig();
  const fileConfig = mergeFileConfig(appConfig?.fileConfig);
  const customConfig = await getCustomConfig();
  const fileConfig = mergeFileConfig(customConfig?.fileConfig);
  const fileFilter = createFileFilter(fileConfig);
  return multer({
    storage,
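Both versions of the multer storage above write uploads to a per-user temp directory under the configured uploads path; they differ only in how that path is resolved. A simplified, standalone version of the same `multer.diskStorage` pattern — the uploads root and the auth-populated `req.user` are assumptions for illustration:

```js
// Simplified sketch: files land in <uploads>/temp/<userId>, created on
// demand; errors are passed to the destination callback instead of thrown.
const fs = require('fs');
const path = require('path');
const crypto = require('crypto');
const multer = require('multer');

const storage = multer.diskStorage({
  destination: function (req, file, cb) {
    try {
      const uploadsRoot = '/tmp/librechat-uploads'; // placeholder for the configured uploads path
      const outputPath = path.join(uploadsRoot, 'temp', req.user.id);
      if (!fs.existsSync(outputPath)) {
        fs.mkdirSync(outputPath, { recursive: true });
      }
      cb(null, outputPath);
    } catch (error) {
      cb(error);
    }
  },
  filename: function (req, file, cb) {
    req.file_id = crypto.randomUUID();
    cb(null, `${req.file_id}-${file.originalname}`);
  },
});

module.exports = multer({ storage });
```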
@@ -8,7 +8,21 @@ const { createMulterInstance, storage, importFileFilter } = require('./multer');
|
||||
|
||||
// Mock only the config service that requires external dependencies
|
||||
jest.mock('~/server/services/Config', () => ({
|
||||
getAppConfig: jest.fn(),
|
||||
getCustomConfig: jest.fn(() =>
|
||||
Promise.resolve({
|
||||
fileConfig: {
|
||||
endpoints: {
|
||||
openAI: {
|
||||
supportedMimeTypes: ['image/jpeg', 'image/png', 'application/pdf'],
|
||||
},
|
||||
default: {
|
||||
supportedMimeTypes: ['image/jpeg', 'image/png', 'text/plain'],
|
||||
},
|
||||
},
|
||||
serverFileSizeLimit: 10000000, // 10MB
|
||||
},
|
||||
}),
|
||||
),
|
||||
}));
|
||||
|
||||
describe('Multer Configuration', () => {
|
||||
@@ -22,6 +36,13 @@ describe('Multer Configuration', () => {
|
||||
|
||||
mockReq = {
|
||||
user: { id: 'test-user-123' },
|
||||
app: {
|
||||
locals: {
|
||||
paths: {
|
||||
uploads: tempDir,
|
||||
},
|
||||
},
|
||||
},
|
||||
body: {},
|
||||
originalUrl: '/api/files/upload',
|
||||
};
|
||||
@@ -32,14 +53,6 @@ describe('Multer Configuration', () => {
|
||||
size: 1024,
|
||||
};
|
||||
|
||||
// Mock getAppConfig to return paths
|
||||
const { getAppConfig } = require('~/server/services/Config');
|
||||
getAppConfig.mockResolvedValue({
|
||||
paths: {
|
||||
uploads: tempDir,
|
||||
},
|
||||
});
|
||||
|
||||
// Clear mocks
|
||||
jest.clearAllMocks();
|
||||
});
|
||||
@@ -66,12 +79,7 @@ describe('Multer Configuration', () => {
|
||||
|
||||
it("should create directory recursively if it doesn't exist", (done) => {
|
||||
const deepPath = path.join(tempDir, 'deep', 'nested', 'path');
|
||||
const { getAppConfig } = require('~/server/services/Config');
|
||||
getAppConfig.mockResolvedValue({
|
||||
paths: {
|
||||
uploads: deepPath,
|
||||
},
|
||||
});
|
||||
mockReq.app.locals.paths.uploads = deepPath;
|
||||
|
||||
const cb = jest.fn((err, destination) => {
|
||||
expect(err).toBeNull();
|
||||
@@ -323,11 +331,11 @@ describe('Multer Configuration', () => {
|
||||
});
|
||||
|
||||
it('should use real config merging', async () => {
|
||||
const { getAppConfig } = require('~/server/services/Config');
|
||||
const { getCustomConfig } = require('~/server/services/Config');
|
||||
|
||||
const multerInstance = await createMulterInstance();
|
||||
|
||||
expect(getAppConfig).toHaveBeenCalled();
|
||||
expect(getCustomConfig).toHaveBeenCalled();
|
||||
expect(multerInstance).toBeDefined();
|
||||
});
|
||||
|
||||
@@ -457,24 +465,23 @@ describe('Multer Configuration', () => {
|
||||
it('should handle file system errors when directory creation fails', (done) => {
|
||||
// Test with a non-existent parent directory to simulate fs issues
|
||||
const invalidPath = '/nonexistent/path/that/should/not/exist';
|
||||
const { getAppConfig } = require('~/server/services/Config');
|
||||
getAppConfig.mockResolvedValue({
|
||||
paths: {
|
||||
uploads: invalidPath,
|
||||
},
|
||||
});
|
||||
mockReq.app.locals.paths.uploads = invalidPath;
|
||||
|
||||
// Call getDestination which should fail due to permission/path issues
|
||||
storage.getDestination(mockReq, mockFile, (err, destination) => {
|
||||
// Now we expect the error to be passed to the callback
|
||||
expect(err).toBeDefined();
|
||||
try {
|
||||
// Call getDestination which should fail due to permission/path issues
|
||||
storage.getDestination(mockReq, mockFile, (err, destination) => {
|
||||
// If callback is reached, we didn't get the expected error
|
||||
done(new Error('Expected mkdirSync to throw an error but callback was called'));
|
||||
});
|
||||
// If we get here without throwing, something unexpected happened
|
||||
done(new Error('Expected mkdirSync to throw an error but no error was thrown'));
|
||||
} catch (error) {
|
||||
// This is the expected behavior - mkdirSync throws synchronously for invalid paths
|
||||
// On Linux, this typically returns EACCES (permission denied)
|
||||
// On macOS/Darwin, this returns ENOENT (no such file or directory)
|
||||
expect(['EACCES', 'ENOENT']).toContain(err.code);
|
||||
expect(destination).toBeUndefined();
|
||||
expect(['EACCES', 'ENOENT']).toContain(error.code);
|
||||
done();
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
it('should handle malformed filenames with real sanitization', (done) => {
|
||||
@@ -531,10 +538,10 @@ describe('Multer Configuration', () => {
|
||||
|
||||
describe('Real Configuration Testing', () => {
|
||||
it('should handle missing custom config gracefully with real mergeFileConfig', async () => {
|
||||
const { getAppConfig } = require('~/server/services/Config');
|
||||
const { getCustomConfig } = require('~/server/services/Config');
|
||||
|
||||
// Mock getAppConfig to return undefined
|
||||
getAppConfig.mockResolvedValueOnce(undefined);
|
||||
// Mock getCustomConfig to return undefined
|
||||
getCustomConfig.mockResolvedValueOnce(undefined);
|
||||
|
||||
const multerInstance = await createMulterInstance();
|
||||
expect(multerInstance).toBeDefined();
|
||||
@@ -542,28 +549,25 @@ describe('Multer Configuration', () => {
|
||||
});
|
||||
|
||||
it('should properly integrate real fileConfig with custom endpoints', async () => {
|
||||
const { getAppConfig } = require('~/server/services/Config');
|
||||
const { getCustomConfig } = require('~/server/services/Config');
|
||||
|
||||
// Mock appConfig with fileConfig
|
||||
getAppConfig.mockResolvedValueOnce({
|
||||
paths: {
|
||||
uploads: tempDir,
|
||||
},
|
||||
// Mock a custom config with additional endpoints
|
||||
getCustomConfig.mockResolvedValueOnce({
|
||||
fileConfig: {
|
||||
endpoints: {
|
||||
anthropic: {
|
||||
supportedMimeTypes: ['text/plain', 'image/png'],
|
||||
},
|
||||
},
|
||||
serverFileSizeLimit: 20971520, // 20 MB in bytes (mergeFileConfig converts)
|
||||
serverFileSizeLimit: 20, // 20 MB
|
||||
},
|
||||
});
|
||||
|
||||
const multerInstance = await createMulterInstance();
|
||||
expect(multerInstance).toBeDefined();
|
||||
|
||||
// Verify that getAppConfig was called
|
||||
expect(getAppConfig).toHaveBeenCalled();
|
||||
// Verify that getCustomConfig was called (we can't spy on the actual merge function easily)
|
||||
expect(getCustomConfig).toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
@@ -1,13 +1,15 @@
|
||||
const { logger } = require('@librechat/data-schemas');
|
||||
const { MCPOAuthHandler } = require('@librechat/api');
|
||||
const { Router } = require('express');
|
||||
const { logger } = require('@librechat/data-schemas');
|
||||
const { MCPOAuthHandler, getUserMCPAuthMap } = require('@librechat/api');
|
||||
const { getMCPSetupData, getServerConnectionStatus } = require('~/server/services/MCP');
|
||||
const { findToken, updateToken, createToken, deleteTokens } = require('~/models');
|
||||
const { setCachedTools, getCachedTools } = require('~/server/services/Config');
|
||||
const { updateMCPUserTools } = require('~/server/services/Config/mcpToolsCache');
|
||||
const { getUserPluginAuthValue } = require('~/server/services/PluginService');
|
||||
const { CacheKeys, Constants } = require('librechat-data-provider');
|
||||
const { getMCPManager, getFlowStateManager } = require('~/config');
|
||||
const { reinitMCPServer } = require('~/server/services/Tools/mcp');
|
||||
const { requireJwtAuth } = require('~/server/middleware');
|
||||
const { findPluginAuthsByKeys } = require('~/models');
|
||||
const { getLogStores } = require('~/cache');
|
||||
|
||||
const router = Router();
|
||||
@@ -142,33 +144,12 @@ router.get('/:serverName/oauth/callback', async (req, res) => {
|
||||
`[MCP OAuth] Successfully reconnected ${serverName} for user ${flowState.userId}`,
|
||||
);
|
||||
|
||||
const userTools = (await getCachedTools({ userId: flowState.userId })) || {};
|
||||
|
||||
const mcpDelimiter = Constants.mcp_delimiter;
|
||||
for (const key of Object.keys(userTools)) {
|
||||
if (key.endsWith(`${mcpDelimiter}${serverName}`)) {
|
||||
delete userTools[key];
|
||||
}
|
||||
}
|
||||
|
||||
const tools = await userConnection.fetchTools();
|
||||
for (const tool of tools) {
|
||||
const name = `${tool.name}${Constants.mcp_delimiter}${serverName}`;
|
||||
userTools[name] = {
|
||||
type: 'function',
|
||||
['function']: {
|
||||
name,
|
||||
description: tool.description,
|
||||
parameters: tool.inputSchema,
|
||||
},
|
||||
};
|
||||
}
|
||||
|
||||
await setCachedTools(userTools, { userId: flowState.userId });
|
||||
|
||||
logger.debug(
|
||||
`[MCP OAuth] Cached ${tools.length} tools for ${serverName} user ${flowState.userId}`,
|
||||
);
|
||||
await updateMCPUserTools({
|
||||
userId: flowState.userId,
|
||||
serverName,
|
||||
tools,
|
||||
});
|
||||
} else {
|
||||
logger.debug(`[MCP OAuth] System-level OAuth completed for ${serverName}`);
|
||||
}
|
||||
@@ -323,124 +304,39 @@ router.post('/:serverName/reinitialize', requireJwtAuth, async (req, res) => {
|
||||
});
|
||||
}
|
||||
|
||||
const flowsCache = getLogStores(CacheKeys.FLOWS);
|
||||
const flowManager = getFlowStateManager(flowsCache);
|
||||
|
||||
await mcpManager.disconnectUserConnection(user.id, serverName);
|
||||
logger.info(
|
||||
`[MCP Reinitialize] Disconnected existing user connection for server: ${serverName}`,
|
||||
);
|
||||
|
||||
let customUserVars = {};
|
||||
/** @type {Record<string, Record<string, string>> | undefined} */
|
||||
let userMCPAuthMap;
|
||||
if (serverConfig.customUserVars && typeof serverConfig.customUserVars === 'object') {
|
||||
for (const varName of Object.keys(serverConfig.customUserVars)) {
|
||||
try {
|
||||
const value = await getUserPluginAuthValue(user.id, varName, false);
|
||||
customUserVars[varName] = value;
|
||||
} catch (err) {
|
||||
logger.error(`[MCP Reinitialize] Error fetching ${varName} for user ${user.id}:`, err);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
let userConnection = null;
|
||||
let oauthRequired = false;
|
||||
let oauthUrl = null;
|
||||
|
||||
try {
|
||||
userConnection = await mcpManager.getUserConnection({
|
||||
user,
|
||||
serverName,
|
||||
flowManager,
|
||||
customUserVars,
|
||||
tokenMethods: {
|
||||
findToken,
|
||||
updateToken,
|
||||
createToken,
|
||||
deleteTokens,
|
||||
},
|
||||
returnOnOAuth: true,
|
||||
oauthStart: async (authURL) => {
|
||||
logger.info(`[MCP Reinitialize] OAuth URL received: ${authURL}`);
|
||||
oauthUrl = authURL;
|
||||
oauthRequired = true;
|
||||
},
|
||||
userMCPAuthMap = await getUserMCPAuthMap({
|
||||
userId: user.id,
|
||||
servers: [serverName],
|
||||
findPluginAuthsByKeys,
|
||||
});
|
||||
|
||||
logger.info(`[MCP Reinitialize] Successfully established connection for ${serverName}`);
|
||||
} catch (err) {
|
||||
logger.info(`[MCP Reinitialize] getUserConnection threw error: ${err.message}`);
|
||||
logger.info(
|
||||
`[MCP Reinitialize] OAuth state - oauthRequired: ${oauthRequired}, oauthUrl: ${oauthUrl ? 'present' : 'null'}`,
|
||||
);
|
||||
|
||||
const isOAuthError =
|
||||
err.message?.includes('OAuth') ||
|
||||
err.message?.includes('authentication') ||
|
||||
err.message?.includes('401');
|
||||
|
||||
const isOAuthFlowInitiated = err.message === 'OAuth flow initiated - return early';
|
||||
|
||||
if (isOAuthError || oauthRequired || isOAuthFlowInitiated) {
|
||||
logger.info(
|
||||
`[MCP Reinitialize] OAuth required for ${serverName} (isOAuthError: ${isOAuthError}, oauthRequired: ${oauthRequired}, isOAuthFlowInitiated: ${isOAuthFlowInitiated})`,
|
||||
);
|
||||
oauthRequired = true;
|
||||
} else {
|
||||
logger.error(
|
||||
`[MCP Reinitialize] Error initializing MCP server ${serverName} for user:`,
|
||||
err,
|
||||
);
|
||||
return res.status(500).json({ error: 'Failed to reinitialize MCP server for user' });
|
||||
}
|
||||
}
|
||||
|
||||
if (userConnection && !oauthRequired) {
|
||||
const userTools = (await getCachedTools({ userId: user.id })) || {};
|
||||
const result = await reinitMCPServer({
|
||||
req,
|
||||
serverName,
|
||||
userMCPAuthMap,
|
||||
});
|
||||
|
||||
const mcpDelimiter = Constants.mcp_delimiter;
|
||||
for (const key of Object.keys(userTools)) {
|
||||
if (key.endsWith(`${mcpDelimiter}${serverName}`)) {
|
||||
delete userTools[key];
|
||||
}
|
||||
}
|
||||
|
||||
const tools = await userConnection.fetchTools();
|
||||
for (const tool of tools) {
|
||||
const name = `${tool.name}${Constants.mcp_delimiter}${serverName}`;
|
||||
userTools[name] = {
|
||||
type: 'function',
|
||||
['function']: {
|
||||
name,
|
||||
description: tool.description,
|
||||
parameters: tool.inputSchema,
|
||||
},
|
||||
};
|
||||
}
|
||||
|
||||
await setCachedTools(userTools, { userId: user.id });
|
||||
if (!result) {
|
||||
return res.status(500).json({ error: 'Failed to reinitialize MCP server for user' });
|
||||
}
|
||||
|
||||
logger.debug(
|
||||
`[MCP Reinitialize] Sending response for ${serverName} - oauthRequired: ${oauthRequired}, oauthUrl: ${oauthUrl ? 'present' : 'null'}`,
|
||||
);
|
||||
|
||||
const getResponseMessage = () => {
|
||||
if (oauthRequired) {
|
||||
return `MCP server '${serverName}' ready for OAuth authentication`;
|
||||
}
|
||||
if (userConnection) {
|
||||
return `MCP server '${serverName}' reinitialized successfully`;
|
||||
}
|
||||
return `Failed to reinitialize MCP server '${serverName}'`;
|
||||
};
|
||||
const { success, message, oauthRequired, oauthUrl } = result;
|
||||
|
||||
res.json({
|
||||
success: Boolean((userConnection && !oauthRequired) || (oauthRequired && oauthUrl)),
|
||||
message: getResponseMessage(),
|
||||
success,
|
||||
message,
|
||||
oauthUrl,
|
||||
serverName,
|
||||
oauthRequired,
|
||||
oauthUrl,
|
||||
});
|
||||
} catch (error) {
|
||||
logger.error('[MCP Reinitialize] Unexpected error', error);
|
||||
|
||||
@@ -8,7 +8,6 @@ const {
|
||||
deleteMemory,
|
||||
setMemory,
|
||||
} = require('~/models');
|
||||
const { getAppConfig } = require('~/server/services/Config');
|
||||
const { requireJwtAuth } = require('~/server/middleware');
|
||||
const { getRoleByName } = require('~/models/Role');
|
||||
|
||||
@@ -61,8 +60,7 @@ router.get('/', checkMemoryRead, async (req, res) => {
|
||||
return sum + (memory.tokenCount || 0);
|
||||
}, 0);
|
||||
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
const memoryConfig = appConfig?.memory;
|
||||
const memoryConfig = req.app.locals?.memory;
|
||||
const tokenLimit = memoryConfig?.tokenLimit;
|
||||
const charLimit = memoryConfig?.charLimit || 10000;
|
||||
|
||||
@@ -100,8 +98,7 @@ router.post('/', memoryPayloadLimit, checkMemoryCreate, async (req, res) => {
|
||||
return res.status(400).json({ error: 'Value is required and must be a non-empty string.' });
|
||||
}
|
||||
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
const memoryConfig = appConfig?.memory;
|
||||
const memoryConfig = req.app.locals?.memory;
|
||||
const charLimit = memoryConfig?.charLimit || 10000;
|
||||
|
||||
if (key.length > 1000) {
|
||||
@@ -120,9 +117,6 @@ router.post('/', memoryPayloadLimit, checkMemoryCreate, async (req, res) => {
|
||||
const tokenCount = Tokenizer.getTokenCount(value, 'o200k_base');
|
||||
|
||||
const memories = await getAllUserMemories(req.user.id);
|
||||
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
const memoryConfig = appConfig?.memory;
|
||||
const tokenLimit = memoryConfig?.tokenLimit;
|
||||
|
||||
if (tokenLimit) {
|
||||
@@ -206,8 +200,8 @@ router.patch('/:key', memoryPayloadLimit, checkMemoryUpdate, async (req, res) =>
|
||||
}
|
||||
|
||||
const newKey = bodyKey || urlKey;
|
||||
const appConfig = await getAppConfig({ role: req.user?.role });
|
||||
const memoryConfig = appConfig?.memory;
|
||||
|
||||
const memoryConfig = req.app.locals?.memory;
|
||||
const charLimit = memoryConfig?.charLimit || 10000;
|
||||
|
||||
if (newKey.length > 1000) {
|
||||
|
||||
@@ -14,6 +14,7 @@ const {
|
||||
// Mock modules before importing
|
||||
jest.mock('~/server/services/Config', () => ({
|
||||
getCachedTools: jest.fn().mockResolvedValue({}),
|
||||
getCustomConfig: jest.fn(),
|
||||
}));
|
||||
|
||||
jest.mock('~/models/Role', () => ({
|
||||
|
||||
@@ -1,7 +1,10 @@
|
||||
const { Constants, actionDomainSeparator } = require('librechat-data-provider');
|
||||
const { Constants, EModelEndpoint, actionDomainSeparator } = require('librechat-data-provider');
|
||||
const { domainParser } = require('./ActionService');
|
||||
|
||||
jest.mock('keyv');
|
||||
jest.mock('~/server/services/Config', () => ({
|
||||
getCustomConfig: jest.fn(),
|
||||
}));
|
||||
|
||||
const globalCache = {};
|
||||
jest.mock('~/cache/getLogStores', () => {
|
||||
@@ -50,6 +53,26 @@ jest.mock('~/cache/getLogStores', () => {
|
||||
});
|
||||
|
||||
describe('domainParser', () => {
|
||||
const req = {
|
||||
app: {
|
||||
locals: {
|
||||
[EModelEndpoint.azureOpenAI]: {
|
||||
assistants: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
};
|
||||
|
||||
const reqNoAzure = {
|
||||
app: {
|
||||
locals: {
|
||||
[EModelEndpoint.azureOpenAI]: {
|
||||
assistants: false,
|
||||
},
|
||||
},
|
||||
},
|
||||
};
|
||||
|
||||
const TLD = '.com';
|
||||
|
||||
// Non-azure request
|
||||
|
||||
@@ -1,4 +1,15 @@
|
||||
jest.mock('@librechat/data-schemas', () => ({
|
||||
jest.mock('~/models', () => ({
|
||||
initializeRoles: jest.fn(),
|
||||
seedDefaultRoles: jest.fn(),
|
||||
ensureDefaultCategories: jest.fn(),
|
||||
}));
|
||||
jest.mock('~/models/Role', () => ({
|
||||
updateAccessPermissions: jest.fn(),
|
||||
getRoleByName: jest.fn().mockResolvedValue(null),
|
||||
updateRoleByName: jest.fn(),
|
||||
}));
|
||||
|
||||
jest.mock('~/config', () => ({
|
||||
logger: {
|
||||
info: jest.fn(),
|
||||
warn: jest.fn(),
|
||||
@@ -6,11 +17,11 @@ jest.mock('@librechat/data-schemas', () => ({
|
||||
},
|
||||
}));
|
||||
|
||||
jest.mock('@librechat/api', () => ({
|
||||
...jest.requireActual('@librechat/api'),
|
||||
jest.mock('./Config/loadCustomConfig', () => jest.fn());
|
||||
jest.mock('./start/interface', () => ({
|
||||
loadDefaultInterface: jest.fn(),
|
||||
}));
|
||||
jest.mock('./start/tools', () => ({
|
||||
jest.mock('./ToolService', () => ({
|
||||
loadAndFormatTools: jest.fn().mockReturnValue({}),
|
||||
}));
|
||||
jest.mock('./start/checks', () => ({
|
||||
@@ -21,15 +32,15 @@ jest.mock('./start/checks', () => ({
|
||||
checkWebSearchConfig: jest.fn(),
|
||||
}));
|
||||
|
||||
jest.mock('./Config/loadCustomConfig', () => jest.fn());
|
||||
|
||||
const AppService = require('./AppService');
|
||||
const { loadDefaultInterface } = require('@librechat/api');
|
||||
const { loadDefaultInterface } = require('./start/interface');
|
||||
|
||||
describe('AppService interface configuration', () => {
|
||||
let app;
|
||||
let mockLoadCustomConfig;
|
||||
|
||||
beforeEach(() => {
|
||||
app = { locals: {} };
|
||||
jest.resetModules();
|
||||
jest.clearAllMocks();
|
||||
mockLoadCustomConfig = require('./Config/loadCustomConfig');
|
||||
@@ -39,16 +50,10 @@ describe('AppService interface configuration', () => {
|
||||
mockLoadCustomConfig.mockResolvedValue({});
|
||||
loadDefaultInterface.mockResolvedValue({ prompts: true, bookmarks: true });
|
||||
|
||||
const result = await AppService();
|
||||
await AppService(app);
|
||||
|
||||
expect(result).toEqual(
|
||||
expect.objectContaining({
|
||||
interfaceConfig: expect.objectContaining({
|
||||
prompts: true,
|
||||
bookmarks: true,
|
||||
}),
|
||||
}),
|
||||
);
|
||||
expect(app.locals.interfaceConfig.prompts).toBe(true);
|
||||
expect(app.locals.interfaceConfig.bookmarks).toBe(true);
|
||||
expect(loadDefaultInterface).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
@@ -56,16 +61,10 @@ describe('AppService interface configuration', () => {
|
||||
mockLoadCustomConfig.mockResolvedValue({ interface: { prompts: false, bookmarks: false } });
|
||||
loadDefaultInterface.mockResolvedValue({ prompts: false, bookmarks: false });
|
||||
|
||||
const result = await AppService();
|
||||
await AppService(app);
|
||||
|
||||
expect(result).toEqual(
|
||||
expect.objectContaining({
|
||||
interfaceConfig: expect.objectContaining({
|
||||
prompts: false,
|
||||
bookmarks: false,
|
||||
}),
|
||||
}),
|
||||
);
|
||||
expect(app.locals.interfaceConfig.prompts).toBe(false);
|
||||
expect(app.locals.interfaceConfig.bookmarks).toBe(false);
|
||||
expect(loadDefaultInterface).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
@@ -73,17 +72,10 @@ describe('AppService interface configuration', () => {
|
||||
mockLoadCustomConfig.mockResolvedValue({});
|
||||
loadDefaultInterface.mockResolvedValue({});
|
||||
|
||||
const result = await AppService();
|
||||
await AppService(app);
|
||||
|
||||
expect(result).toEqual(
|
||||
expect.objectContaining({
|
||||
interfaceConfig: expect.anything(),
|
||||
}),
|
||||
);
|
||||
|
||||
// Verify that prompts and bookmarks are undefined when not provided
|
||||
expect(result.interfaceConfig.prompts).toBeUndefined();
|
||||
expect(result.interfaceConfig.bookmarks).toBeUndefined();
|
||||
expect(app.locals.interfaceConfig.prompts).toBeUndefined();
|
||||
expect(app.locals.interfaceConfig.bookmarks).toBeUndefined();
|
||||
expect(loadDefaultInterface).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
@@ -91,16 +83,10 @@ describe('AppService interface configuration', () => {
|
||||
mockLoadCustomConfig.mockResolvedValue({ interface: { prompts: true, bookmarks: false } });
|
||||
loadDefaultInterface.mockResolvedValue({ prompts: true, bookmarks: false });
|
||||
|
||||
const result = await AppService();
|
||||
await AppService(app);
|
||||
|
||||
expect(result).toEqual(
|
||||
expect.objectContaining({
|
||||
interfaceConfig: expect.objectContaining({
|
||||
prompts: true,
|
||||
bookmarks: false,
|
||||
}),
|
||||
}),
|
||||
);
|
||||
expect(app.locals.interfaceConfig.prompts).toBe(true);
|
||||
expect(app.locals.interfaceConfig.bookmarks).toBe(false);
|
||||
expect(loadDefaultInterface).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
@@ -122,19 +108,14 @@ describe('AppService interface configuration', () => {
|
||||
},
|
||||
});
|
||||
|
||||
const result = await AppService();
|
||||
await AppService(app);
|
||||
|
||||
expect(result).toEqual(
|
||||
expect.objectContaining({
|
||||
interfaceConfig: expect.objectContaining({
|
||||
peoplePicker: expect.objectContaining({
|
||||
users: true,
|
||||
groups: true,
|
||||
roles: true,
|
||||
}),
|
||||
}),
|
||||
}),
|
||||
);
|
||||
expect(app.locals.interfaceConfig.peoplePicker).toBeDefined();
|
||||
expect(app.locals.interfaceConfig.peoplePicker).toMatchObject({
|
||||
users: true,
|
||||
groups: true,
|
||||
roles: true,
|
||||
});
|
||||
expect(loadDefaultInterface).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
@@ -156,19 +137,11 @@ describe('AppService interface configuration', () => {
|
||||
},
|
||||
});
|
||||
|
||||
const result = await AppService();
|
||||
await AppService(app);
|
||||
|
||||
expect(result).toEqual(
|
||||
expect.objectContaining({
|
||||
interfaceConfig: expect.objectContaining({
|
||||
peoplePicker: expect.objectContaining({
|
||||
users: true,
|
||||
groups: false,
|
||||
roles: true,
|
||||
}),
|
||||
}),
|
||||
}),
|
||||
);
|
||||
expect(app.locals.interfaceConfig.peoplePicker.users).toBe(true);
|
||||
expect(app.locals.interfaceConfig.peoplePicker.groups).toBe(false);
|
||||
expect(app.locals.interfaceConfig.peoplePicker.roles).toBe(true);
|
||||
});
|
||||
|
||||
it('should set default peoplePicker permissions when not provided', async () => {
|
||||
@@ -181,18 +154,11 @@ describe('AppService interface configuration', () => {
|
||||
},
|
||||
});
|
||||
|
||||
const result = await AppService();
|
||||
await AppService(app);
|
||||
|
||||
expect(result).toEqual(
|
||||
expect.objectContaining({
|
||||
interfaceConfig: expect.objectContaining({
|
||||
peoplePicker: expect.objectContaining({
|
||||
users: true,
|
||||
groups: true,
|
||||
roles: true,
|
||||
}),
|
||||
}),
|
||||
}),
|
||||
);
|
||||
expect(app.locals.interfaceConfig.peoplePicker).toBeDefined();
|
||||
expect(app.locals.interfaceConfig.peoplePicker.users).toBe(true);
|
||||
expect(app.locals.interfaceConfig.peoplePicker.groups).toBe(true);
|
||||
expect(app.locals.interfaceConfig.peoplePicker.roles).toBe(true);
|
||||
});
|
||||
});
|
||||
|
||||
@@ -3,7 +3,6 @@ const {
|
||||
loadMemoryConfig,
|
||||
agentsConfigSetup,
|
||||
loadWebSearchConfig,
|
||||
loadDefaultInterface,
|
||||
} = require('@librechat/api');
|
||||
const {
|
||||
FileSources,
|
||||
@@ -13,26 +12,35 @@ const {
|
||||
} = require('librechat-data-provider');
|
||||
const {
|
||||
checkWebSearchConfig,
|
||||
checkAzureVariables,
|
||||
checkVariables,
|
||||
checkHealth,
|
||||
checkConfig,
|
||||
} = require('./start/checks');
|
||||
const { ensureDefaultCategories, seedDefaultRoles, initializeRoles } = require('~/models');
|
||||
const { azureAssistantsDefaults, assistantsConfigSetup } = require('./start/assistants');
|
||||
const { initializeAzureBlobService } = require('./Files/Azure/initialize');
|
||||
const { initializeFirebase } = require('./Files/Firebase/initialize');
|
||||
const handleRateLimits = require('./Config/handleRateLimits');
|
||||
const loadCustomConfig = require('./Config/loadCustomConfig');
|
||||
const handleRateLimits = require('./Config/handleRateLimits');
|
||||
const { loadDefaultInterface } = require('./start/interface');
|
||||
const { loadTurnstileConfig } = require('./start/turnstile');
|
||||
const { azureConfigSetup } = require('./start/azureOpenAI');
|
||||
const { processModelSpecs } = require('./start/modelSpecs');
|
||||
const { initializeS3 } = require('./Files/S3/initialize');
|
||||
const { loadAndFormatTools } = require('./start/tools');
|
||||
const { loadEndpoints } = require('./start/endpoints');
|
||||
const { loadAndFormatTools } = require('./ToolService');
|
||||
const { setCachedTools } = require('./Config');
|
||||
const paths = require('~/config/paths');
|
||||
|
||||
/**
 * Loads custom config and initializes app-wide variables.
 * @function AppService
 * @param {Express.Application} app - The Express application object.
 */
const AppService = async () => {
const AppService = async (app) => {
  await initializeRoles();
  await seedDefaultRoles();
  await ensureDefaultCategories();
  /** @type {TCustomConfig} */
  const config = (await loadCustomConfig()) ?? {};
  const configDefaults = getConfigDefaults();
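The two `AppService` signatures above reflect two ways of exposing the assembled configuration: attaching it to `app.locals` so routes read `req.app.locals`, versus returning it for consumers that call `getAppConfig`. A toy sketch of the cached-loader side of that pattern (not the project's implementation):

```js
// Not the real getAppConfig: just a sketch of building the config once and
// handing the cached object to later callers instead of reading app.locals.
let cachedAppConfig = null;

async function buildAppConfig() {
  // Stand-in for loadCustomConfig() plus the setup steps done in AppService.
  return { interfaceConfig: { prompts: true, bookmarks: true }, modelSpecs: undefined };
}

async function getAppConfig(options = {}) {
  // The real loader may also filter or shape the config per role
  // (e.g. getAppConfig({ role: req.user?.role })); this sketch ignores it.
  void options;
  if (!cachedAppConfig) {
    cachedAppConfig = await buildAppConfig();
  }
  return cachedAppConfig;
}

getAppConfig().then((config) => console.log(config.interfaceConfig));
```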
@@ -71,57 +79,101 @@ const AppService = async () => {
|
||||
directory: paths.structuredTools,
|
||||
});
|
||||
|
||||
const mcpConfig = config.mcpServers || null;
|
||||
const registration = config.registration ?? configDefaults.registration;
|
||||
const interfaceConfig = await loadDefaultInterface({ config, configDefaults });
|
||||
const turnstileConfig = loadTurnstileConfig(config, configDefaults);
|
||||
const speech = config.speech;
|
||||
await setCachedTools(availableTools, { isGlobal: true });
|
||||
|
||||
const defaultConfig = {
|
||||
// Store MCP config for later initialization
|
||||
const mcpConfig = config.mcpServers || null;
|
||||
|
||||
const socialLogins =
|
||||
config?.registration?.socialLogins ?? configDefaults?.registration?.socialLogins;
|
||||
const interfaceConfig = await loadDefaultInterface(config, configDefaults);
|
||||
const turnstileConfig = loadTurnstileConfig(config, configDefaults);
|
||||
|
||||
const defaultLocals = {
|
||||
config,
|
||||
ocr,
|
||||
paths,
|
||||
config,
|
||||
memory,
|
||||
speech,
|
||||
balance,
|
||||
mcpConfig,
|
||||
webSearch,
|
||||
fileStrategy,
|
||||
registration,
|
||||
socialLogins,
|
||||
filteredTools,
|
||||
includedTools,
|
||||
availableTools,
|
||||
imageOutputType,
|
||||
interfaceConfig,
|
||||
turnstileConfig,
|
||||
fileStrategies: config.fileStrategies,
|
||||
balance,
|
||||
mcpConfig,
|
||||
};
|
||||
|
||||
const agentsDefaults = agentsConfigSetup(config);
|
||||
|
||||
if (!Object.keys(config).length) {
|
||||
const appConfig = {
|
||||
...defaultConfig,
|
||||
endpoints: {
|
||||
[EModelEndpoint.agents]: agentsDefaults,
|
||||
},
|
||||
app.locals = {
|
||||
...defaultLocals,
|
||||
[EModelEndpoint.agents]: agentsDefaults,
|
||||
};
|
||||
return appConfig;
|
||||
return;
|
||||
}
|
||||
|
||||
checkConfig(config);
|
||||
handleRateLimits(config?.rateLimits);
|
||||
const loadedEndpoints = loadEndpoints(config, agentsDefaults);
|
||||
|
||||
const appConfig = {
|
||||
...defaultConfig,
|
||||
const endpointLocals = {};
|
||||
const endpoints = config?.endpoints;
|
||||
|
||||
if (endpoints?.[EModelEndpoint.azureOpenAI]) {
|
||||
endpointLocals[EModelEndpoint.azureOpenAI] = azureConfigSetup(config);
|
||||
checkAzureVariables();
|
||||
}
|
||||
|
||||
if (endpoints?.[EModelEndpoint.azureOpenAI]?.assistants) {
|
||||
endpointLocals[EModelEndpoint.azureAssistants] = azureAssistantsDefaults();
|
||||
}
|
||||
|
||||
if (endpoints?.[EModelEndpoint.azureAssistants]) {
|
||||
endpointLocals[EModelEndpoint.azureAssistants] = assistantsConfigSetup(
|
||||
config,
|
||||
EModelEndpoint.azureAssistants,
|
||||
endpointLocals[EModelEndpoint.azureAssistants],
|
||||
);
|
||||
}
|
||||
|
||||
if (endpoints?.[EModelEndpoint.assistants]) {
|
||||
endpointLocals[EModelEndpoint.assistants] = assistantsConfigSetup(
|
||||
config,
|
||||
EModelEndpoint.assistants,
|
||||
endpointLocals[EModelEndpoint.assistants],
|
||||
);
|
||||
}
|
||||
|
||||
endpointLocals[EModelEndpoint.agents] = agentsConfigSetup(config, agentsDefaults);
|
||||
|
||||
const endpointKeys = [
|
||||
EModelEndpoint.openAI,
|
||||
EModelEndpoint.google,
|
||||
EModelEndpoint.bedrock,
|
||||
EModelEndpoint.anthropic,
|
||||
EModelEndpoint.gptPlugins,
|
||||
];
|
||||
|
||||
endpointKeys.forEach((key) => {
|
||||
if (endpoints?.[key]) {
|
||||
endpointLocals[key] = endpoints[key];
|
||||
}
|
||||
});
|
||||
|
||||
if (endpoints?.all) {
|
||||
endpointLocals.all = endpoints.all;
|
||||
}
|
||||
|
||||
app.locals = {
|
||||
...defaultLocals,
|
||||
fileConfig: config?.fileConfig,
|
||||
secureImageLinks: config?.secureImageLinks,
|
||||
modelSpecs: processModelSpecs(config?.endpoints, config.modelSpecs, interfaceConfig),
|
||||
endpoints: loadedEndpoints,
|
||||
modelSpecs: processModelSpecs(endpoints, config.modelSpecs, interfaceConfig),
|
||||
...endpointLocals,
|
||||
};
|
||||
|
||||
return appConfig;
|
||||
};
|
||||
|
||||
module.exports = AppService;
|
||||
|
||||
File diff suppressed because it is too large
@@ -16,7 +16,6 @@ const { retrieveAndProcessFile } = require('~/server/services/Files/process');
const { processRequiredActions } = require('~/server/services/ToolService');
const { RunManager, waitForRun } = require('~/server/services/Runs');
const { processMessages } = require('~/server/services/Threads');
const { getAppConfig } = require('~/server/services/Config');
const { createOnProgress } = require('~/server/utils');
const { TextStream } = require('~/app/clients');

@@ -351,7 +350,6 @@ async function runAssistant({
accumulatedMessages = [],
in_progress: inProgress,
}) {
const appConfig = await getAppConfig({ role: openai.req.user?.role });
let steps = accumulatedSteps;
let messages = accumulatedMessages;
const in_progress = inProgress ?? createInProgressHandler(openai, thread_id, messages);
@@ -398,8 +396,8 @@ async function runAssistant({
});

const { endpoint = EModelEndpoint.azureAssistants } = openai.req.body;
/** @type {AppConfig['endpoints']['assistants']} */
const assistantsEndpointConfig = appConfig.endpoints?.[endpoint] ?? {};
/** @type {TCustomConfig.endpoints.assistants} */
const assistantsEndpointConfig = openai.req.app.locals?.[endpoint] ?? {};
const { pollIntervalMs, timeoutMs } = assistantsEndpointConfig;

const run = await waitForRun({
@@ -1,8 +1,8 @@
const bcrypt = require('bcryptjs');
const jwt = require('jsonwebtoken');
const { webcrypto } = require('node:crypto');
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { isEnabled, checkEmailConfig } = require('@librechat/api');
const { SystemRoles, errorsToString } = require('librechat-data-provider');
const {
findUser,
@@ -21,9 +21,9 @@ const {
generateRefreshToken,
} = require('~/models');
const { isEmailDomainAllowed } = require('~/server/services/domains');
const { checkEmailConfig, sendEmail } = require('~/server/utils');
const { getBalanceConfig } = require('~/server/services/Config');
const { registerSchema } = require('~/strategies/validators');
const { getAppConfig } = require('~/server/services/Config');
const { sendEmail } = require('~/server/utils');

const domains = {
client: process.env.DOMAIN_CLIENT,
@@ -195,8 +195,7 @@ const registerUser = async (user, additionalData = {}) => {
return { status: 200, message: genericVerificationMessage };
}

const appConfig = await getAppConfig({ role: user.role });
if (!(await isEmailDomainAllowed(email, appConfig?.registration?.allowedDomains))) {
if (!(await isEmailDomainAllowed(email))) {
const errorMessage =
'The email address provided cannot be used. Please use a different email address.';
logger.error(`[registerUser] [Registration not allowed] [Email: ${user.email}]`);
@@ -220,8 +219,9 @@ const registerUser = async (user, additionalData = {}) => {

const emailEnabled = checkEmailConfig();
const disableTTL = isEnabled(process.env.ALLOW_UNVERIFIED_EMAIL_LOGIN);
const balanceConfig = await getBalanceConfig();

const newUser = await createUser(newUserData, appConfig.balance, disableTTL, true);
const newUser = await createUser(newUserData, balanceConfig, disableTTL, true);
newUserId = newUser._id;
if (emailEnabled && !newUser.emailVerified) {
await sendVerificationEmail({
@@ -1,68 +0,0 @@
const { logger } = require('@librechat/data-schemas');
const { CacheKeys } = require('librechat-data-provider');
const AppService = require('~/server/services/AppService');
const { setCachedTools } = require('./getCachedTools');
const getLogStores = require('~/cache/getLogStores');

/**
* Get the app configuration based on user context
* @param {Object} [options]
* @param {string} [options.role] - User role for role-based config
* @param {boolean} [options.refresh] - Force refresh the cache
* @returns {Promise<AppConfig>}
*/
async function getAppConfig(options = {}) {
const { role, refresh } = options;

const cache = getLogStores(CacheKeys.CONFIG_STORE);
const cacheKey = role ? `${CacheKeys.APP_CONFIG}:${role}` : CacheKeys.APP_CONFIG;

if (!refresh) {
const cached = await cache.get(cacheKey);
if (cached) {
return cached;
}
}

let baseConfig = await cache.get(CacheKeys.APP_CONFIG);
if (!baseConfig) {
logger.info('[getAppConfig] App configuration not initialized. Initializing AppService...');
baseConfig = await AppService();

if (!baseConfig) {
throw new Error('Failed to initialize app configuration through AppService.');
}

if (baseConfig.availableTools) {
await setCachedTools(baseConfig.availableTools, { isGlobal: true });
}

await cache.set(CacheKeys.APP_CONFIG, baseConfig);
}

// For now, return the base config
// In the future, this is where we'll apply role-based modifications
if (role) {
// TODO: Apply role-based config modifications
// const roleConfig = await applyRoleBasedConfig(baseConfig, role);
// await cache.set(cacheKey, roleConfig);
// return roleConfig;
}

return baseConfig;
}

/**
* Clear the app configuration cache
* @returns {Promise<boolean>}
*/
async function clearAppConfigCache() {
const cache = getLogStores(CacheKeys.CONFIG_STORE);
const cacheKey = CacheKeys.APP_CONFIG;
return await cache.delete(cacheKey);
}

module.exports = {
getAppConfig,
clearAppConfigCache,
};
@@ -26,7 +26,7 @@ const ToolCacheKeys = {
* @param {string[]} [options.roleIds] - Role IDs for role-based tools
* @param {string[]} [options.groupIds] - Group IDs for group-based tools
* @param {boolean} [options.includeGlobal=true] - Whether to include global tools
* @returns {Promise<Object|null>} The available tools object or null if not cached
* @returns {Promise<LCAvailableTools|null>} The available tools object or null if not cached
*/
async function getCachedTools(options = {}) {
const cache = getLogStores(CacheKeys.CONFIG_STORE);
@@ -41,13 +41,13 @@ async function getCachedTools(options = {}) {
// Future implementation will merge tools from multiple sources
// based on user permissions, roles, and groups
if (userId) {
// Check if we have pre-computed effective tools for this user
/** @type {LCAvailableTools | null} Check if we have pre-computed effective tools for this user */
const effectiveTools = await cache.get(ToolCacheKeys.EFFECTIVE(userId));
if (effectiveTools) {
return effectiveTools;
}

// Otherwise, compute from individual sources
/** @type {LCAvailableTools | null} Otherwise, compute from individual sources */
const toolSources = [];

if (includeGlobal) {
69 api/server/services/Config/getCustomConfig.js Normal file
@@ -0,0 +1,69 @@
const { isEnabled } = require('@librechat/api');
const { CacheKeys, EModelEndpoint } = require('librechat-data-provider');
const { normalizeEndpointName } = require('~/server/utils');
const loadCustomConfig = require('./loadCustomConfig');
const getLogStores = require('~/cache/getLogStores');

/**
* Retrieves the configuration object
* @function getCustomConfig
* @returns {Promise<TCustomConfig | null>}
* */
async function getCustomConfig() {
const cache = getLogStores(CacheKeys.STATIC_CONFIG);
return (await cache.get(CacheKeys.LIBRECHAT_YAML_CONFIG)) || (await loadCustomConfig());
}

/**
* Retrieves the configuration object
* @function getBalanceConfig
* @returns {Promise<TCustomConfig['balance'] | null>}
* */
async function getBalanceConfig() {
const isLegacyEnabled = isEnabled(process.env.CHECK_BALANCE);
const startBalance = process.env.START_BALANCE;
/** @type {TCustomConfig['balance']} */
const config = {
enabled: isLegacyEnabled,
startBalance: startBalance != null && startBalance ? parseInt(startBalance, 10) : undefined,
};
const customConfig = await getCustomConfig();
if (!customConfig) {
return config;
}
return { ...config, ...(customConfig?.['balance'] ?? {}) };
}

/**
*
* @param {string | EModelEndpoint} endpoint
* @returns {Promise<TEndpoint | undefined>}
*/
const getCustomEndpointConfig = async (endpoint) => {
const customConfig = await getCustomConfig();
if (!customConfig) {
throw new Error(`Config not found for the ${endpoint} custom endpoint.`);
}

const { endpoints = {} } = customConfig;
const customEndpoints = endpoints[EModelEndpoint.custom] ?? [];
return customEndpoints.find(
(endpointConfig) => normalizeEndpointName(endpointConfig.name) === endpoint,
);
};

/**
* @returns {Promise<boolean>}
*/
async function hasCustomUserVars() {
const customConfig = await getCustomConfig();
const mcpServers = customConfig?.mcpServers;
return Object.values(mcpServers ?? {}).some((server) => server.customUserVars);
}

module.exports = {
getCustomConfig,
getBalanceConfig,
hasCustomUserVars,
getCustomEndpointConfig,
};
@@ -1,4 +1,3 @@
const { loadCustomEndpointsConfig } = require('@librechat/api');
const {
CacheKeys,
EModelEndpoint,
@@ -7,8 +6,8 @@ const {
defaultAgentCapabilities,
} = require('librechat-data-provider');
const loadDefaultEndpointsConfig = require('./loadDefaultEConfig');
const loadConfigEndpoints = require('./loadConfigEndpoints');
const getLogStores = require('~/cache/getLogStores');
const { getAppConfig } = require('./app');

/**
*
@@ -22,36 +21,14 @@ async function getEndpointsConfig(req) {
return cachedEndpointsConfig;
}

const appConfig = await getAppConfig({ role: req.user?.role });
const defaultEndpointsConfig = await loadDefaultEndpointsConfig(appConfig);
const customEndpointsConfig = loadCustomEndpointsConfig(appConfig?.endpoints?.custom);
const defaultEndpointsConfig = await loadDefaultEndpointsConfig(req);
const customConfigEndpoints = await loadConfigEndpoints(req);

/** @type {TEndpointsConfig} */
const mergedConfig = {
...defaultEndpointsConfig,
...customEndpointsConfig,
};

if (appConfig.endpoints?.[EModelEndpoint.azureOpenAI]) {
/** @type {Omit<TConfig, 'order'>} */
mergedConfig[EModelEndpoint.azureOpenAI] = {
userProvide: false,
};
}

if (appConfig.endpoints?.[EModelEndpoint.azureOpenAI]?.assistants) {
/** @type {Omit<TConfig, 'order'>} */
mergedConfig[EModelEndpoint.azureAssistants] = {
userProvide: false,
};
}

if (
mergedConfig[EModelEndpoint.assistants] &&
appConfig?.endpoints?.[EModelEndpoint.assistants]
) {
const mergedConfig = { ...defaultEndpointsConfig, ...customConfigEndpoints };
if (mergedConfig[EModelEndpoint.assistants] && req.app.locals?.[EModelEndpoint.assistants]) {
const { disableBuilder, retrievalModels, capabilities, version, ..._rest } =
appConfig.endpoints[EModelEndpoint.assistants];
req.app.locals[EModelEndpoint.assistants];

mergedConfig[EModelEndpoint.assistants] = {
...mergedConfig[EModelEndpoint.assistants],
@@ -61,9 +38,9 @@ async function getEndpointsConfig(req) {
capabilities,
};
}
if (mergedConfig[EModelEndpoint.agents] && appConfig?.endpoints?.[EModelEndpoint.agents]) {
if (mergedConfig[EModelEndpoint.agents] && req.app.locals?.[EModelEndpoint.agents]) {
const { disableBuilder, capabilities, allowedProviders, ..._rest } =
appConfig.endpoints[EModelEndpoint.agents];
req.app.locals[EModelEndpoint.agents];

mergedConfig[EModelEndpoint.agents] = {
...mergedConfig[EModelEndpoint.agents],
@@ -75,10 +52,10 @@ async function getEndpointsConfig(req) {

if (
mergedConfig[EModelEndpoint.azureAssistants] &&
appConfig?.endpoints?.[EModelEndpoint.azureAssistants]
req.app.locals?.[EModelEndpoint.azureAssistants]
) {
const { disableBuilder, retrievalModels, capabilities, version, ..._rest } =
appConfig.endpoints[EModelEndpoint.azureAssistants];
req.app.locals[EModelEndpoint.azureAssistants];

mergedConfig[EModelEndpoint.azureAssistants] = {
...mergedConfig[EModelEndpoint.azureAssistants],
@@ -89,8 +66,8 @@ async function getEndpointsConfig(req) {
};
}

if (mergedConfig[EModelEndpoint.bedrock] && appConfig?.endpoints?.[EModelEndpoint.bedrock]) {
const { availableRegions } = appConfig.endpoints[EModelEndpoint.bedrock];
if (mergedConfig[EModelEndpoint.bedrock] && req.app.locals?.[EModelEndpoint.bedrock]) {
const { availableRegions } = req.app.locals[EModelEndpoint.bedrock];
mergedConfig[EModelEndpoint.bedrock] = {
...mergedConfig[EModelEndpoint.bedrock],
availableRegions,
@@ -1,6 +1,7 @@
const appConfig = require('./app');
const { config } = require('./EndpointService');
const getCachedTools = require('./getCachedTools');
const getCustomConfig = require('./getCustomConfig');
const mcpToolsCache = require('./mcpToolsCache');
const loadCustomConfig = require('./loadCustomConfig');
const loadConfigModels = require('./loadConfigModels');
const loadDefaultModels = require('./loadDefaultModels');
@@ -15,7 +16,8 @@ module.exports = {
loadDefaultModels,
loadOverrideConfig,
loadAsyncEndpoints,
...appConfig,
...getCachedTools,
...getCustomConfig,
...mcpToolsCache,
...getEndpointsConfig,
};
@@ -1,16 +1,16 @@
const path = require('path');
const { logger } = require('@librechat/data-schemas');
const { EModelEndpoint } = require('librechat-data-provider');
const { loadServiceKey, isUserProvided } = require('@librechat/api');
const { EModelEndpoint } = require('librechat-data-provider');
const { config } = require('./EndpointService');

const { openAIApiKey, azureOpenAIApiKey, useAzurePlugins, userProvidedOpenAI, googleKey } = config;

/**
* Load async endpoints and return a configuration object
* @param {AppConfig} [appConfig] - The app configuration object
* @param {Express.Request} req - The request object
*/
async function loadAsyncEndpoints(appConfig) {
async function loadAsyncEndpoints(req) {
let serviceKey, googleUserProvides;

/** Check if GOOGLE_KEY is provided at all(including 'user_provided') */
@@ -34,7 +34,7 @@ async function loadAsyncEndpoints(appConfig) {

const google = serviceKey || isGoogleKeyProvided ? { userProvide: googleUserProvides } : false;

const useAzure = !!appConfig?.endpoints?.[EModelEndpoint.azureOpenAI]?.plugins;
const useAzure = req.app.locals[EModelEndpoint.azureOpenAI]?.plugins;
const gptPlugins =
useAzure || openAIApiKey || azureOpenAIApiKey
? {
73 api/server/services/Config/loadConfigEndpoints.js Normal file
@@ -0,0 +1,73 @@
const { EModelEndpoint, extractEnvVariable } = require('librechat-data-provider');
const { isUserProvided, normalizeEndpointName } = require('~/server/utils');
const { getCustomConfig } = require('./getCustomConfig');

/**
* Load config endpoints from the cached configuration object
* @param {Express.Request} req - The request object
* @returns {Promise<TEndpointsConfig>} A promise that resolves to an object containing the endpoints configuration
*/
async function loadConfigEndpoints(req) {
const customConfig = await getCustomConfig();

if (!customConfig) {
return {};
}

const { endpoints = {} } = customConfig ?? {};
const endpointsConfig = {};

if (Array.isArray(endpoints[EModelEndpoint.custom])) {
const customEndpoints = endpoints[EModelEndpoint.custom].filter(
(endpoint) =>
endpoint.baseURL &&
endpoint.apiKey &&
endpoint.name &&
endpoint.models &&
(endpoint.models.fetch || endpoint.models.default),
);

for (let i = 0; i < customEndpoints.length; i++) {
const endpoint = customEndpoints[i];
const {
baseURL,
apiKey,
name: configName,
iconURL,
modelDisplayLabel,
customParams,
} = endpoint;
const name = normalizeEndpointName(configName);

const resolvedApiKey = extractEnvVariable(apiKey);
const resolvedBaseURL = extractEnvVariable(baseURL);

endpointsConfig[name] = {
type: EModelEndpoint.custom,
userProvide: isUserProvided(resolvedApiKey),
userProvideURL: isUserProvided(resolvedBaseURL),
modelDisplayLabel,
iconURL,
customParams,
};
}
}

if (req.app.locals[EModelEndpoint.azureOpenAI]) {
/** @type {Omit<TConfig, 'order'>} */
endpointsConfig[EModelEndpoint.azureOpenAI] = {
userProvide: false,
};
}

if (req.app.locals[EModelEndpoint.azureOpenAI]?.assistants) {
/** @type {Omit<TConfig, 'order'>} */
endpointsConfig[EModelEndpoint.azureAssistants] = {
userProvide: false,
};
}

return endpointsConfig;
}

module.exports = loadConfigEndpoints;
@@ -1,7 +1,7 @@
const { isUserProvided, normalizeEndpointName } = require('@librechat/api');
const { EModelEndpoint, extractEnvVariable } = require('librechat-data-provider');
const { isUserProvided, normalizeEndpointName } = require('~/server/utils');
const { fetchModels } = require('~/server/services/ModelService');
const { getAppConfig } = require('./app');
const { getCustomConfig } = require('./getCustomConfig');

/**
* Load config endpoints from the cached configuration object
@@ -9,31 +9,35 @@ const { getAppConfig } = require('./app');
* @param {Express.Request} req - The Express request object.
*/
async function loadConfigModels(req) {
const appConfig = await getAppConfig({ role: req.user?.role });
if (!appConfig) {
const customConfig = await getCustomConfig();

if (!customConfig) {
return {};
}

const { endpoints = {} } = customConfig ?? {};
const modelsConfig = {};
const azureConfig = appConfig.endpoints?.[EModelEndpoint.azureOpenAI];
const azureEndpoint = endpoints[EModelEndpoint.azureOpenAI];
const azureConfig = req.app.locals[EModelEndpoint.azureOpenAI];
const { modelNames } = azureConfig ?? {};

if (modelNames && azureConfig) {
if (modelNames && azureEndpoint) {
modelsConfig[EModelEndpoint.azureOpenAI] = modelNames;
}

if (modelNames && azureConfig && azureConfig.plugins) {
if (modelNames && azureEndpoint && azureEndpoint.plugins) {
modelsConfig[EModelEndpoint.gptPlugins] = modelNames;
}

if (azureConfig?.assistants && azureConfig.assistantModels) {
if (azureEndpoint?.assistants && azureConfig.assistantModels) {
modelsConfig[EModelEndpoint.azureAssistants] = azureConfig.assistantModels;
}

if (!Array.isArray(appConfig.endpoints?.[EModelEndpoint.custom])) {
if (!Array.isArray(endpoints[EModelEndpoint.custom])) {
return modelsConfig;
}

const customEndpoints = appConfig.endpoints[EModelEndpoint.custom].filter(
const customEndpoints = endpoints[EModelEndpoint.custom].filter(
(endpoint) =>
endpoint.baseURL &&
endpoint.apiKey &&
@@ -72,10 +76,11 @@ async function loadConfigModels(req) {
fetchPromisesMap[uniqueKey] =
fetchPromisesMap[uniqueKey] ||
fetchModels({
user: req.user.id,
baseURL: BASE_URL,
apiKey: API_KEY,
name,
apiKey: API_KEY,
baseURL: BASE_URL,
user: req.user.id,
direct: endpoint.directEndpoint,
userIdQuery: models.userIdQuery,
});
uniqueKeyToEndpointsMap[uniqueKey] = uniqueKeyToEndpointsMap[uniqueKey] || [];
@@ -1,9 +1,9 @@
const { fetchModels } = require('~/server/services/ModelService');
const { getCustomConfig } = require('./getCustomConfig');
const loadConfigModels = require('./loadConfigModels');
const { getAppConfig } = require('./app');

jest.mock('~/server/services/ModelService');
jest.mock('./app');
jest.mock('./getCustomConfig');

const exampleConfig = {
endpoints: {
@@ -60,7 +60,7 @@ const exampleConfig = {
};

describe('loadConfigModels', () => {
const mockRequest = { user: { id: 'testUserId' } };
const mockRequest = { app: { locals: {} }, user: { id: 'testUserId' } };

const originalEnv = process.env;

@@ -68,9 +68,6 @@ describe('loadConfigModels', () => {
jest.resetAllMocks();
jest.resetModules();
process.env = { ...originalEnv };

// Default mock for getAppConfig
getAppConfig.mockResolvedValue({});
});

afterEach(() => {
@@ -78,15 +75,18 @@ describe('loadConfigModels', () => {
});

it('should return an empty object if customConfig is null', async () => {
getAppConfig.mockResolvedValue(null);
getCustomConfig.mockResolvedValue(null);
const result = await loadConfigModels(mockRequest);
expect(result).toEqual({});
});

it('handles azure models and endpoint correctly', async () => {
getAppConfig.mockResolvedValue({
mockRequest.app.locals.azureOpenAI = { modelNames: ['model1', 'model2'] };
getCustomConfig.mockResolvedValue({
endpoints: {
azureOpenAI: { modelNames: ['model1', 'model2'] },
azureOpenAI: {
models: ['model1', 'model2'],
},
},
});

@@ -97,16 +97,18 @@ describe('loadConfigModels', () => {
it('fetches custom models based on the unique key', async () => {
process.env.BASE_URL = 'http://example.com';
process.env.API_KEY = 'some-api-key';
const customEndpoints = [
{
baseURL: '${BASE_URL}',
apiKey: '${API_KEY}',
name: 'CustomModel',
models: { fetch: true },
},
];
const customEndpoints = {
custom: [
{
baseURL: '${BASE_URL}',
apiKey: '${API_KEY}',
name: 'CustomModel',
models: { fetch: true },
},
],
};

getAppConfig.mockResolvedValue({ endpoints: { custom: customEndpoints } });
getCustomConfig.mockResolvedValue({ endpoints: customEndpoints });
fetchModels.mockResolvedValue(['customModel1', 'customModel2']);

const result = await loadConfigModels(mockRequest);
@@ -115,7 +117,7 @@ describe('loadConfigModels', () => {
});

it('correctly associates models to names using unique keys', async () => {
getAppConfig.mockResolvedValue({
getCustomConfig.mockResolvedValue({
endpoints: {
custom: [
{
@@ -144,7 +146,7 @@ describe('loadConfigModels', () => {

it('correctly handles multiple endpoints with the same baseURL but different apiKeys', async () => {
// Mock the custom configuration to simulate the user's scenario
getAppConfig.mockResolvedValue({
getCustomConfig.mockResolvedValue({
endpoints: {
custom: [
{
@@ -208,7 +210,7 @@ describe('loadConfigModels', () => {
process.env.MY_OPENROUTER_API_KEY = 'actual_openrouter_api_key';
// Setup custom configuration with specific API keys for Mistral and OpenRouter
// and "user_provided" for groq and Ollama, indicating no fetch for the latter two
getAppConfig.mockResolvedValue(exampleConfig);
getCustomConfig.mockResolvedValue(exampleConfig);

// Assuming fetchModels would be called only for Mistral and OpenRouter
fetchModels.mockImplementation(({ name }) => {
@@ -271,7 +273,7 @@ describe('loadConfigModels', () => {
});

it('falls back to default models if fetching returns an empty array', async () => {
getAppConfig.mockResolvedValue({
getCustomConfig.mockResolvedValue({
endpoints: {
custom: [
{
@@ -304,7 +306,7 @@ describe('loadConfigModels', () => {
});

it('falls back to default models if fetching returns a falsy value', async () => {
getAppConfig.mockResolvedValue({
getCustomConfig.mockResolvedValue({
endpoints: {
custom: [
{
@@ -365,7 +367,7 @@ describe('loadConfigModels', () => {
},
];

getAppConfig.mockResolvedValue({
getCustomConfig.mockResolvedValue({
endpoints: {
custom: testCases,
},
@@ -4,11 +4,11 @@ const { config } = require('./EndpointService');

/**
* Load async endpoints and return a configuration object
* @param {AppConfig} appConfig - The app configuration object
* @param {Express.Request} req - The request object
* @returns {Promise<Object.<string, EndpointWithOrder>>} An object whose keys are endpoint names and values are objects that contain the endpoint configuration and an order.
*/
async function loadDefaultEndpointsConfig(appConfig) {
const { google, gptPlugins } = await loadAsyncEndpoints(appConfig);
async function loadDefaultEndpointsConfig(req) {
const { google, gptPlugins } = await loadAsyncEndpoints(req);
const { assistants, azureAssistants, azureOpenAI, chatGPTBrowser } = config;

const enabledEndpoints = getEnabledEndpoints();
143 api/server/services/Config/mcpToolsCache.js Normal file
@@ -0,0 +1,143 @@
const { logger } = require('@librechat/data-schemas');
const { CacheKeys, Constants } = require('librechat-data-provider');
const { getCachedTools, setCachedTools } = require('./getCachedTools');
const { getLogStores } = require('~/cache');

/**
* Updates MCP tools in the cache for a specific server and user
* @param {Object} params - Parameters for updating MCP tools
* @param {string} params.userId - User ID
* @param {string} params.serverName - MCP server name
* @param {Array} params.tools - Array of tool objects from MCP server
* @returns {Promise<LCAvailableTools>}
*/
async function updateMCPUserTools({ userId, serverName, tools }) {
try {
const userTools = await getCachedTools({ userId });

const mcpDelimiter = Constants.mcp_delimiter;
for (const key of Object.keys(userTools)) {
if (key.endsWith(`${mcpDelimiter}${serverName}`)) {
delete userTools[key];
}
}

for (const tool of tools) {
const name = `${tool.name}${Constants.mcp_delimiter}${serverName}`;
userTools[name] = {
type: 'function',
['function']: {
name,
description: tool.description,
parameters: tool.inputSchema,
},
};
}

await setCachedTools(userTools, { userId });

const cache = getLogStores(CacheKeys.CONFIG_STORE);
await cache.delete(CacheKeys.TOOLS);
logger.debug(`[MCP Cache] Updated ${tools.length} tools for ${serverName} user ${userId}`);
return userTools;
} catch (error) {
logger.error(`[MCP Cache] Failed to update tools for ${serverName}:`, error);
throw error;
}
}

/**
* Merges app-level tools with global tools
* @param {import('@librechat/api').LCAvailableTools} appTools
* @returns {Promise<void>}
*/
async function mergeAppTools(appTools) {
try {
const count = Object.keys(appTools).length;
if (!count) {
return;
}
const cachedTools = await getCachedTools({ includeGlobal: true });
const mergedTools = { ...cachedTools, ...appTools };
await setCachedTools(mergedTools, { isGlobal: true });
const cache = getLogStores(CacheKeys.CONFIG_STORE);
await cache.delete(CacheKeys.TOOLS);
logger.debug(`Merged ${count} app-level tools`);
} catch (error) {
logger.error('Failed to merge app-level tools:', error);
throw error;
}
}

/**
* Merges user-level tools with global tools
* @param {object} params
* @param {string} params.userId
* @param {Record<string, FunctionTool>} params.cachedUserTools
* @param {import('@librechat/api').LCAvailableTools} params.userTools
* @returns {Promise<void>}
*/
async function mergeUserTools({ userId, cachedUserTools, userTools }) {
try {
if (!userId) {
return;
}
const count = Object.keys(userTools).length;
if (!count) {
return;
}
const cachedTools = cachedUserTools ?? (await getCachedTools({ userId }));
const mergedTools = { ...cachedTools, ...userTools };
await setCachedTools(mergedTools, { userId });
const cache = getLogStores(CacheKeys.CONFIG_STORE);
await cache.delete(CacheKeys.TOOLS);
logger.debug(`Merged ${count} user-level tools`);
} catch (error) {
logger.error('Failed to merge user-level tools:', error);
throw error;
}
}

/**
* Clears all MCP tools for a specific server
* @param {Object} params - Parameters for clearing MCP tools
* @param {string} [params.userId] - User ID (if clearing user-specific tools)
* @param {string} params.serverName - MCP server name
* @returns {Promise<void>}
*/
async function clearMCPServerTools({ userId, serverName }) {
try {
const tools = await getCachedTools({ userId, includeGlobal: !userId });

// Remove all tools for this server
const mcpDelimiter = Constants.mcp_delimiter;
let removedCount = 0;
for (const key of Object.keys(tools)) {
if (key.endsWith(`${mcpDelimiter}${serverName}`)) {
delete tools[key];
removedCount++;
}
}

if (removedCount > 0) {
await setCachedTools(tools, userId ? { userId } : { isGlobal: true });

const cache = getLogStores(CacheKeys.CONFIG_STORE);
await cache.delete(CacheKeys.TOOLS);

logger.debug(
`[MCP Cache] Removed ${removedCount} tools for ${serverName}${userId ? ` user ${userId}` : ' (global)'}`,
);
}
} catch (error) {
logger.error(`[MCP Cache] Failed to clear tools for ${serverName}:`, error);
throw error;
}
}

module.exports = {
mergeAppTools,
mergeUserTools,
updateMCPUserTools,
clearMCPServerTools,
};
@@ -16,7 +16,6 @@ const generateArtifactsPrompt = require('~/app/clients/prompts/artifacts');
const { getProviderConfig } = require('~/server/services/Endpoints');
const { processFiles } = require('~/server/services/Files/process');
const { getFiles, getToolFilesByIds } = require('~/models/File');
const { getAppConfig } = require('~/server/services/Config');
const { getConvoFiles } = require('~/models/Conversation');
const { getModelMaxTokens } = require('~/utils');

@@ -31,7 +30,13 @@ const { getModelMaxTokens } = require('~/utils');
* @param {TEndpointOption} [params.endpointOption]
* @param {Set<string>} [params.allowedProviders]
* @param {boolean} [params.isInitialAgent]
* @returns {Promise<Agent & { tools: StructuredTool[], attachments: Array<MongoFile>, toolContextMap: Record<string, unknown>, maxContextTokens: number }>}
* @returns {Promise<Agent & {
* tools: StructuredTool[],
* attachments: Array<MongoFile>,
* toolContextMap: Record<string, unknown>,
* maxContextTokens: number,
* userMCPAuthMap?: Record<string, Record<string, string>>
* }>}
*/
const initializeAgent = async ({
req,
@@ -44,7 +49,6 @@ const initializeAgent = async ({
allowedProviders,
isInitialAgent = false,
}) => {
const appConfig = await getAppConfig({ role: req.user?.role });
if (
isAgentsEndpoint(endpointOption?.endpoint) &&
allowedProviders.size > 0 &&
@@ -86,27 +90,29 @@ const initializeAgent = async ({
const { attachments, tool_resources } = await primeResources({
req,
getFiles,
appConfig,
agentId: agent.id,
attachments: currentFiles,
tool_resources: agent.tool_resources,
requestFileSet: new Set(requestFiles?.map((file) => file.file_id)),
agentId: agent.id,
});

const provider = agent.provider;
const { tools: structuredTools, toolContextMap } =
(await loadTools?.({
req,
res,
provider,
agentId: agent.id,
tools: agent.tools,
model: agent.model,
tool_resources,
})) ?? {};
const {
tools: structuredTools,
toolContextMap,
userMCPAuthMap,
} = (await loadTools?.({
req,
res,
provider,
agentId: agent.id,
tools: agent.tools,
model: agent.model,
tool_resources,
})) ?? {};

agent.endpoint = provider;
const { getOptions, overrideProvider } = getProviderConfig({ provider, appConfig });
const { getOptions, overrideProvider } = await getProviderConfig(provider);
if (overrideProvider !== agent.provider) {
agent.provider = overrideProvider;
}
@@ -192,6 +198,7 @@ const initializeAgent = async ({
tools,
attachments,
resendFiles,
userMCPAuthMap,
toolContextMap,
useLegacyContent: !!options.useLegacyContent,
maxContextTokens: Math.round((agentMaxContextTokens - maxTokens) * 0.9),
@@ -1,6 +1,6 @@
const { logger } = require('@librechat/data-schemas');
const { validateAgentModel } = require('@librechat/api');
const { createContentAggregator } = require('@librechat/agents');
const { validateAgentModel, getCustomEndpointConfig } = require('@librechat/api');
const {
Constants,
EModelEndpoint,
@@ -13,13 +13,16 @@ const {
} = require('~/server/controllers/agents/callbacks');
const { initializeAgent } = require('~/server/services/Endpoints/agents/agent');
const { getModelsConfig } = require('~/server/controllers/ModelController');
const { getCustomEndpointConfig } = require('~/server/services/Config');
const { loadAgentTools } = require('~/server/services/ToolService');
const AgentClient = require('~/server/controllers/agents/client');
const { getAppConfig } = require('~/server/services/Config');
const { getAgent } = require('~/models/Agent');
const { logViolation } = require('~/cache');

function createToolLoader() {
/**
* @param {AbortSignal} signal
*/
function createToolLoader(signal) {
/**
* @param {object} params
* @param {ServerRequest} params.req
@@ -29,7 +32,11 @@ function createToolLoader() {
* @param {string} params.provider
* @param {string} params.model
* @param {AgentToolResources} params.tool_resources
* @returns {Promise<{ tools: StructuredTool[], toolContextMap: Record<string, unknown> } | undefined>}
* @returns {Promise<{
* tools: StructuredTool[],
* toolContextMap: Record<string, unknown>,
* userMCPAuthMap?: Record<string, Record<string, string>>
* } | undefined>}
*/
return async function loadTools({ req, res, agentId, tools, provider, model, tool_resources }) {
const agent = { id: agentId, tools, provider, model };
@@ -38,6 +45,7 @@ function createToolLoader() {
req,
res,
agent,
signal,
tool_resources,
});
} catch (error) {
@@ -46,11 +54,10 @@ function createToolLoader() {
};
}

const initializeClient = async ({ req, res, endpointOption }) => {
const initializeClient = async ({ req, res, signal, endpointOption }) => {
if (!endpointOption) {
throw new Error('Endpoint option not provided');
}
const appConfig = await getAppConfig({ role: req.user?.role });

// TODO: use endpointOption to determine options/modelOptions
/** @type {Array<UsageMetadata>} */
@@ -90,9 +97,10 @@ const initializeClient = async ({ req, res, endpointOption }) => {
}

const agentConfigs = new Map();
const allowedProviders = new Set(appConfig?.endpoints?.[EModelEndpoint.agents]?.allowedProviders);
/** @type {Set<string>} */
const allowedProviders = new Set(req?.app?.locals?.[EModelEndpoint.agents]?.allowedProviders);

const loadTools = createToolLoader();
const loadTools = createToolLoader(signal);
/** @type {Array<MongoFile>} */
const requestFiles = req.body.files ?? [];
/** @type {string} */
@@ -111,6 +119,7 @@ const initializeClient = async ({ req, res, endpointOption }) => {
});

const agent_ids = primaryConfig.agent_ids;
let userMCPAuthMap = primaryConfig.userMCPAuthMap;
if (agent_ids?.length) {
for (const agentId of agent_ids) {
const agent = await getAgent({ id: agentId });
@@ -140,17 +149,15 @@ const initializeClient = async ({ req, res, endpointOption }) => {
endpointOption,
allowedProviders,
});
Object.assign(userMCPAuthMap, config.userMCPAuthMap ?? {});
agentConfigs.set(agentId, config);
}
}

let endpointConfig = appConfig.endpoints?.[primaryConfig.endpoint];
let endpointConfig = req.app.locals[primaryConfig.endpoint];
if (!isAgentsEndpoint(primaryConfig.endpoint) && !endpointConfig) {
try {
endpointConfig = getCustomEndpointConfig({
endpoint: primaryConfig.endpoint,
appConfig,
});
endpointConfig = await getCustomEndpointConfig(primaryConfig.endpoint);
} catch (err) {
logger.error(
'[api/server/controllers/agents/client.js #titleConvo] Error getting custom endpoint config',
@@ -191,7 +198,7 @@ const initializeClient = async ({ req, res, endpointOption }) => {
: EModelEndpoint.agents,
});

return { client };
return { client, userMCPAuthMap };
};

module.exports = { initializeClient };
@@ -1,8 +1,8 @@
const { isEnabled } = require('@librechat/api');
const { logger } = require('@librechat/data-schemas');
const { CacheKeys } = require('librechat-data-provider');
const getLogStores = require('~/cache/getLogStores');
const { isEnabled } = require('~/server/utils');
const { saveConvo } = require('~/models');
const { logger } = require('~/config');

/**
* Add title to conversation in a way that avoids memory retention
@@ -2,10 +2,8 @@ const { EModelEndpoint } = require('librechat-data-provider');
const { getUserKey, checkUserKeyExpiry } = require('~/server/services/UserService');
const { getLLMConfig } = require('~/server/services/Endpoints/anthropic/llm');
const AnthropicClient = require('~/app/clients/AnthropicClient');
const { getAppConfig } = require('~/server/services/Config');

const initializeClient = async ({ req, res, endpointOption, overrideModel, optionsOnly }) => {
const appConfig = await getAppConfig({ role: req.user?.role });
const { ANTHROPIC_API_KEY, ANTHROPIC_REVERSE_PROXY, PROXY } = process.env;
const expiresAt = req.body.key;
const isUserProvided = ANTHROPIC_API_KEY === 'user_provided';
@@ -25,14 +23,15 @@ const initializeClient = async ({ req, res, endpointOption, overrideModel, optio
let clientOptions = {};

/** @type {undefined | TBaseEndpoint} */
const anthropicConfig = appConfig.endpoints?.[EModelEndpoint.anthropic];
const anthropicConfig = req.app.locals[EModelEndpoint.anthropic];

if (anthropicConfig) {
clientOptions.streamRate = anthropicConfig.streamRate;
clientOptions.titleModel = anthropicConfig.titleModel;
}

const allConfig = appConfig.endpoints?.all;
/** @type {undefined | TBaseEndpoint} */
const allConfig = req.app.locals.all;
if (allConfig) {
clientOptions.streamRate = allConfig.streamRate;
}
@@ -40,8 +39,9 @@ const initializeClient = async ({ req, res, endpointOption, overrideModel, optio
if (optionsOnly) {
clientOptions = Object.assign(
{
reverseProxyUrl: ANTHROPIC_REVERSE_PROXY ?? null,
proxy: PROXY ?? null,
userId: req.user.id,
reverseProxyUrl: ANTHROPIC_REVERSE_PROXY ?? null,
modelOptions: endpointOption?.model_parameters ?? {},
},
clientOptions,
@@ -15,6 +15,7 @@ const { checkPromptCacheSupport, getClaudeHeaders, configureReasoning } = requir
* @param {number} [options.modelOptions.topK] - Controls the number of top tokens to consider.
* @param {string[]} [options.modelOptions.stop] - Sequences where the API will stop generating further tokens.
* @param {boolean} [options.modelOptions.stream] - Whether to stream the response.
* @param {string} options.userId - The user ID for tracking and personalization.
* @param {string} [options.proxy] - Proxy server URL.
* @param {string} [options.reverseProxyUrl] - URL for a reverse proxy, if used.
*
@@ -47,6 +48,11 @@ function getLLMConfig(apiKey, options = {}) {
maxTokens:
mergedOptions.maxOutputTokens || anthropicSettings.maxOutputTokens.reset(mergedOptions.model),
clientOptions: {},
invocationKwargs: {
metadata: {
user_id: options.userId,
},
},
};

requestOptions = configureReasoning(requestOptions, systemOptions);
@@ -7,7 +7,6 @@ const {
getUserKeyValues,
getUserKeyExpiry,
} = require('~/server/services/UserService');
const { getAppConfig } = require('~/server/services/Config');
const OAIClient = require('~/app/clients/OpenAIClient');

class Files {
@@ -49,7 +48,6 @@ class Files {
}

const initializeClient = async ({ req, res, version, endpointOption, initAppClient = false }) => {
const appConfig = await getAppConfig({ role: req.user?.role });
const { PROXY, OPENAI_ORGANIZATION, AZURE_ASSISTANTS_API_KEY, AZURE_ASSISTANTS_BASE_URL } =
process.env;

@@ -83,7 +81,7 @@ const initializeClient = async ({ req, res, version, endpointOption, initAppClie
};

/** @type {TAzureConfig | undefined} */
const azureConfig = appConfig.endpoints?.[EModelEndpoint.azureOpenAI];
const azureConfig = req.app.locals[EModelEndpoint.azureOpenAI];

/** @type {AzureOptions | undefined} */
let azureOptions;
@@ -12,15 +12,6 @@ jest.mock('~/server/services/UserService', () => ({
checkUserKeyExpiry: jest.requireActual('~/server/services/UserService').checkUserKeyExpiry,
}));

jest.mock('~/server/services/Config', () => ({
getAppConfig: jest.fn().mockResolvedValue({
azureAssistants: {
apiKey: 'test-key',
baseURL: 'https://test.url',
},
}),
}));

const today = new Date();
const tenDaysFromToday = new Date(today.setDate(today.getDate() + 10));
const isoString = tenDaysFromToday.toISOString();
@@ -9,10 +9,8 @@ const {
removeNullishValues,
} = require('librechat-data-provider');
const { getUserKey, checkUserKeyExpiry } = require('~/server/services/UserService');
const { getAppConfig } = require('~/server/services/Config');

const getOptions = async ({ req, overrideModel, endpointOption }) => {
const appConfig = await getAppConfig({ role: req.user?.role });
const {
BEDROCK_AWS_SECRET_ACCESS_KEY,
BEDROCK_AWS_ACCESS_KEY_ID,
@@ -52,13 +50,14 @@ const getOptions = async ({ req, overrideModel, endpointOption }) => {
let streamRate = Constants.DEFAULT_STREAM_RATE;

/** @type {undefined | TBaseEndpoint} */
const bedrockConfig = appConfig.endpoints?.[EModelEndpoint.bedrock];
const bedrockConfig = req.app.locals[EModelEndpoint.bedrock];

if (bedrockConfig && bedrockConfig.streamRate) {
streamRate = bedrockConfig.streamRate;
}

const allConfig = appConfig.endpoints?.all;
/** @type {undefined | TBaseEndpoint} */
const allConfig = req.app.locals.all;
if (allConfig && allConfig.streamRate) {
streamRate = allConfig.streamRate;
}
@@ -1,6 +1,3 @@
const { Providers } = require('@librechat/agents');
const { isUserProvided, getCustomEndpointConfig } = require('@librechat/api');
const { getOpenAIConfig, createHandleLLMNewToken, resolveHeaders } = require('@librechat/api');
const {
CacheKeys,
ErrorTypes,
@@ -8,23 +5,22 @@ const {
FetchTokenConfig,
extractEnvVariable,
} = require('librechat-data-provider');
const { Providers } = require('@librechat/agents');
const { getOpenAIConfig, createHandleLLMNewToken, resolveHeaders } = require('@librechat/api');
const { getUserKeyValues, checkUserKeyExpiry } = require('~/server/services/UserService');
const { getCustomEndpointConfig } = require('~/server/services/Config');
const { fetchModels } = require('~/server/services/ModelService');
const { getAppConfig } = require('~/server/services/Config');
const OpenAIClient = require('~/app/clients/OpenAIClient');
const { isUserProvided } = require('~/server/utils');
const getLogStores = require('~/cache/getLogStores');

const { PROXY } = process.env;

const initializeClient = async ({ req, res, endpointOption, optionsOnly, overrideEndpoint }) => {
const appConfig = await getAppConfig({ role: req.user?.role });
const { key: expiresAt } = req.body;
const endpoint = overrideEndpoint ?? req.body.endpoint;

const endpointConfig = getCustomEndpointConfig({
endpoint,
appConfig,
});
const endpointConfig = await getCustomEndpointConfig(endpoint);
if (!endpointConfig) {
throw new Error(`Config not found for the ${endpoint} custom endpoint.`);
}
@@ -121,7 +117,8 @@ const initializeClient = async ({ req, res, endpointOption, optionsOnly, overrid
endpointTokenConfig,
};

const allConfig = appConfig.endpoints?.all;
/** @type {undefined | TBaseEndpoint} */
const allConfig = req.app.locals.all;
if (allConfig) {
customOptions.streamRate = allConfig.streamRate;
}
@@ -1,16 +1,21 @@
const initializeClient = require('./initialize');

jest.mock('@librechat/api', () => ({
...jest.requireActual('@librechat/api'),
resolveHeaders: jest.fn(),
getOpenAIConfig: jest.fn(),
createHandleLLMNewToken: jest.fn(),
getCustomEndpointConfig: jest.fn().mockReturnValue({
apiKey: 'test-key',
baseURL: 'https://test.com',
headers: { 'x-user': '{{LIBRECHAT_USER_ID}}', 'x-email': '{{LIBRECHAT_USER_EMAIL}}' },
models: { default: ['test-model'] },
}),
}));

jest.mock('librechat-data-provider', () => ({
CacheKeys: { TOKEN_CONFIG: 'token_config' },
ErrorTypes: { NO_USER_KEY: 'NO_USER_KEY', NO_BASE_URL: 'NO_BASE_URL' },
envVarRegex: /\$\{([^}]+)\}/,
FetchTokenConfig: {},
extractEnvVariable: jest.fn((value) => value),
}));

jest.mock('@librechat/agents', () => ({
Providers: { OLLAMA: 'ollama' },
}));

jest.mock('~/server/services/UserService', () => ({
@@ -19,11 +24,11 @@ jest.mock('~/server/services/UserService', () => ({
}));

jest.mock('~/server/services/Config', () => ({
getAppConfig: jest.fn().mockResolvedValue({
'test-endpoint': {
apiKey: 'test-key',
baseURL: 'https://test.com',
},
getCustomEndpointConfig: jest.fn().mockResolvedValue({
apiKey: 'test-key',
baseURL: 'https://test.com',
headers: { 'x-user': '{{LIBRECHAT_USER_ID}}', 'x-email': '{{LIBRECHAT_USER_EMAIL}}' },
models: { default: ['test-model'] },
}),
}));

@@ -37,6 +42,10 @@ jest.mock('~/app/clients/OpenAIClient', () => {
}));
});

jest.mock('~/server/utils', () => ({
isUserProvided: jest.fn().mockReturnValue(false),
}));

jest.mock('~/cache/getLogStores', () =>
jest.fn().mockReturnValue({
get: jest.fn(),
@@ -46,25 +55,13 @@ jest.mock('~/cache/getLogStores', () =>
describe('custom/initializeClient', () => {
const mockRequest = {
body: { endpoint: 'test-endpoint' },
user: { id: 'user-123', email: 'test@example.com', role: 'user' },
user: { id: 'user-123', email: 'test@example.com' },
app: { locals: {} },
};
const mockResponse = {};

beforeEach(() => {
jest.clearAllMocks();
const { getCustomEndpointConfig, resolveHeaders, getOpenAIConfig } = require('@librechat/api');
getCustomEndpointConfig.mockReturnValue({
apiKey: 'test-key',
baseURL: 'https://test.com',
headers: { 'x-user': '{{LIBRECHAT_USER_ID}}', 'x-email': '{{LIBRECHAT_USER_EMAIL}}' },
models: { default: ['test-model'] },
});
resolveHeaders.mockReturnValue({ 'x-user': 'user-123', 'x-email': 'test@example.com' });
getOpenAIConfig.mockReturnValue({
useLegacyContent: true,
endpointTokenConfig: null,
});
});

it('calls resolveHeaders with headers, user, and body for body placeholder support', async () => {
@@ -72,14 +69,14 @@ describe('custom/initializeClient', () => {
await initializeClient({ req: mockRequest, res: mockResponse, optionsOnly: true });
expect(resolveHeaders).toHaveBeenCalledWith({
headers: { 'x-user': '{{LIBRECHAT_USER_ID}}', 'x-email': '{{LIBRECHAT_USER_EMAIL}}' },
user: { id: 'user-123', email: 'test@example.com', role: 'user' },
user: { id: 'user-123', email: 'test@example.com' },
body: { endpoint: 'test-endpoint' }, // body - supports {{LIBRECHAT_BODY_*}} placeholders
});
});

it('throws if endpoint config is missing', async () => {
const { getCustomEndpointConfig } = require('@librechat/api');
getCustomEndpointConfig.mockReturnValueOnce(null);
const { getCustomEndpointConfig } = require('~/server/services/Config');
getCustomEndpointConfig.mockResolvedValueOnce(null);
await expect(
initializeClient({ req: mockRequest, res: mockResponse, optionsOnly: true }),
).rejects.toThrow('Config not found for the test-endpoint custom endpoint.');
@@ -2,7 +2,6 @@ const path = require('path');
const { EModelEndpoint, AuthKeys } = require('librechat-data-provider');
const { getGoogleConfig, isEnabled, loadServiceKey } = require('@librechat/api');
const { getUserKey, checkUserKeyExpiry } = require('~/server/services/UserService');
const { getAppConfig } = require('~/server/services/Config');
const { GoogleClient } = require('~/app');

const initializeClient = async ({ req, res, endpointOption, overrideModel, optionsOnly }) => {
@@ -47,11 +46,10 @@ const initializeClient = async ({ req, res, endpointOption, overrideModel, optio

let clientOptions = {};

const appConfig = await getAppConfig({ role: req.user?.role });
/** @type {undefined | TBaseEndpoint} */
const allConfig = appConfig.endpoints?.all;
const allConfig = req.app.locals.all;
/** @type {undefined | TBaseEndpoint} */
const googleConfig = appConfig.endpoints?.[EModelEndpoint.google];
const googleConfig = req.app.locals[EModelEndpoint.google];

if (googleConfig) {
clientOptions.streamRate = googleConfig.streamRate;
@@ -8,14 +8,6 @@ jest.mock('~/server/services/UserService', () => ({
getUserKey: jest.fn().mockImplementation(() => ({})),
}));

jest.mock('~/server/services/Config', () => ({
getAppConfig: jest.fn().mockResolvedValue({
google: {
apiKey: 'test-key',
},
}),
}));

const app = { locals: {} };

describe('google/initializeClient', () => {
@@ -1,8 +1,7 @@
const { isEnabled } = require('@librechat/api');
const { EModelEndpoint, CacheKeys, Constants, googleSettings } = require('librechat-data-provider');
const { getAppConfig } = require('~/server/services/Config');
const getLogStores = require('~/cache/getLogStores');
const initializeClient = require('./initialize');
const { isEnabled } = require('~/server/utils');
const { saveConvo } = require('~/models');

const addTitle = async (req, { text, response, client }) => {
@@ -15,8 +14,7 @@ const addTitle = async (req, { text, response, client }) => {
return;
}
const { GOOGLE_TITLE_MODEL } = process.env ?? {};
const appConfig = await getAppConfig({ role: req.user?.role });
const providerConfig = appConfig.endpoints?.[EModelEndpoint.google];
const providerConfig = req.app.locals[EModelEndpoint.google];
let model =
providerConfig?.titleModel ??
GOOGLE_TITLE_MODEL ??
@@ -1,11 +1,11 @@
const { Providers } = require('@librechat/agents');
const { EModelEndpoint } = require('librechat-data-provider');
const { getCustomEndpointConfig } = require('@librechat/api');
const initAnthropic = require('~/server/services/Endpoints/anthropic/initialize');
const getBedrockOptions = require('~/server/services/Endpoints/bedrock/options');
const initOpenAI = require('~/server/services/Endpoints/openAI/initialize');
const initCustom = require('~/server/services/Endpoints/custom/initialize');
const initGoogle = require('~/server/services/Endpoints/google/initialize');
const { getCustomEndpointConfig } = require('~/server/services/Config');

/** Check if the provider is a known custom provider
* @param {string | undefined} [provider] - The provider string
@@ -31,16 +31,14 @@ const providerConfigMap = {

/**
* Get the provider configuration and override endpoint based on the provider string
* @param {Object} params
* @param {string} params.provider - The provider string
* @param {AppConfig} params.appConfig - The application configuration
* @returns {{
* getOptions: (typeof providerConfigMap)[keyof typeof providerConfigMap],
* @param {string} provider - The provider string
* @returns {Promise<{
* getOptions: Function,
* overrideProvider: string,
* customEndpointConfig?: TEndpoint
* }}
* }>}
*/
function getProviderConfig({ provider, appConfig }) {
async function getProviderConfig(provider) {
let getOptions = providerConfigMap[provider];
let overrideProvider = provider;
/** @type {TEndpoint | undefined} */
@@ -50,7 +48,7 @@ function getProviderConfig({ provider, appConfig }) {
overrideProvider = provider.toLowerCase();
getOptions = providerConfigMap[overrideProvider];
} else if (!getOptions) {
customEndpointConfig = getCustomEndpointConfig({ endpoint: provider, appConfig });
customEndpointConfig = await getCustomEndpointConfig(provider);
if (!customEndpointConfig) {
throw new Error(`Provider ${provider} not supported`);
}
@@ -59,7 +57,7 @@ function getProviderConfig({ provider, appConfig }) {
}

if (isKnownCustomProvider(overrideProvider) && !customEndpointConfig) {
customEndpointConfig = getCustomEndpointConfig({ endpoint: provider, appConfig });
customEndpointConfig = await getCustomEndpointConfig(provider);
if (!customEndpointConfig) {
throw new Error(`Provider ${provider} not supported`);
}
@@ -8,7 +8,6 @@ const {
createHandleLLMNewToken,
} = require('@librechat/api');
const { getUserKeyValues, checkUserKeyExpiry } = require('~/server/services/UserService');
const { getAppConfig } = require('~/server/services/Config');
const OpenAIClient = require('~/app/clients/OpenAIClient');

const initializeClient = async ({
@@ -19,7 +18,6 @@ const initializeClient = async ({
overrideEndpoint,
overrideModel,
}) => {
const appConfig = await getAppConfig({ role: req.user?.role });
const {
PROXY,
OPENAI_API_KEY,
@@ -66,7 +64,7 @@ const initializeClient = async ({

const isAzureOpenAI = endpoint === EModelEndpoint.azureOpenAI;
/** @type {false | TAzureConfig} */
const azureConfig = isAzureOpenAI && appConfig.endpoints?.[EModelEndpoint.azureOpenAI];
const azureConfig = isAzureOpenAI && req.app.locals[EModelEndpoint.azureOpenAI];
let serverless = false;
if (isAzureOpenAI && azureConfig) {
const { modelGroupMap, groupMap } = azureConfig;
@@ -115,14 +113,15 @@ const initializeClient = async ({
}

/** @type {undefined | TBaseEndpoint} */
const openAIConfig = appConfig.endpoints?.[EModelEndpoint.openAI];
const openAIConfig = req.app.locals[EModelEndpoint.openAI];

if (!isAzureOpenAI && openAIConfig) {
clientOptions.streamRate = openAIConfig.streamRate;
clientOptions.titleModel = openAIConfig.titleModel;
}

const allConfig = appConfig.endpoints?.all;
/** @type {undefined | TBaseEndpoint} */
const allConfig = req.app.locals.all;
if (allConfig) {
clientOptions.streamRate = allConfig.streamRate;
}
@@ -1,13 +1,4 @@
jest.mock('~/cache/getLogStores', () => ({
getLogStores: jest.fn().mockReturnValue({
get: jest.fn().mockResolvedValue({
openAI: { apiKey: 'test-key' },
}),
set: jest.fn(),
delete: jest.fn(),
}),
}));

jest.mock('~/cache/getLogStores');
const { EModelEndpoint, ErrorTypes, validateAzureGroups } = require('librechat-data-provider');
const { getUserKey, getUserKeyValues } = require('~/server/services/UserService');
const initializeClient = require('./initialize');
@@ -20,40 +11,6 @@ jest.mock('~/server/services/UserService', () => ({
checkUserKeyExpiry: jest.requireActual('~/server/services/UserService').checkUserKeyExpiry,
}));

jest.mock('~/server/services/Config', () => ({
getAppConfig: jest.fn().mockResolvedValue({
endpoints: {
openAI: {
apiKey: 'test-key',
},
azureOpenAI: {
apiKey: 'test-azure-key',
modelNames: ['gpt-4-vision-preview', 'gpt-3.5-turbo', 'gpt-4'],
modelGroupMap: {
'gpt-4-vision-preview': {
group: 'librechat-westus',
deploymentName: 'gpt-4-vision-preview',
version: '2024-02-15-preview',
},
},
groupMap: {
'librechat-westus': {
apiKey: 'WESTUS_API_KEY',
instanceName: 'librechat-westus',
version: '2023-12-01-preview',
models: {
'gpt-4-vision-preview': {
deploymentName: 'gpt-4-vision-preview',
version: '2024-02-15-preview',
},
},
},
},
},
},
}),
}));
describe('initializeClient', () => {
// Set up environment variables
const originalEnvironment = process.env;
@@ -122,7 +79,7 @@ describe('initializeClient', () => {
},
];

const { modelNames } = validateAzureGroups(validAzureConfigs);
const { modelNames, modelGroupMap, groupMap } = validateAzureGroups(validAzureConfigs);

beforeEach(() => {
jest.resetModules(); // Clears the cache
@@ -155,29 +112,25 @@ describe('initializeClient', () => {
test('should initialize client with Azure credentials when endpoint is azureOpenAI', async () => {
process.env.AZURE_API_KEY = 'test-azure-api-key';
(process.env.AZURE_OPENAI_API_INSTANCE_NAME = 'some-value'),
(process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME = 'some-value'),
(process.env.AZURE_OPENAI_API_VERSION = 'some-value'),
(process.env.AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME = 'some-value'),
(process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME = 'some-value'),
(process.env.OPENAI_API_KEY = 'test-openai-api-key');
(process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME = 'some-value'),
(process.env.AZURE_OPENAI_API_VERSION = 'some-value'),
(process.env.AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME = 'some-value'),
(process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME = 'some-value'),
(process.env.OPENAI_API_KEY = 'test-openai-api-key');
process.env.DEBUG_OPENAI = 'false';
process.env.OPENAI_SUMMARIZE = 'false';
const req = {
body: {
key: null,
endpoint: 'azureOpenAI',
model: 'gpt-4-vision-preview',
},
body: { key: null, endpoint: 'azureOpenAI' },
user: { id: '123' },
app,
};
const res = {};
const endpointOption = {};
const endpointOption = { modelOptions: { model: 'test-model' } };

const client = await initializeClient({ req, res, endpointOption });

expect(client.openAIApiKey).toBe('WESTUS_API_KEY');
expect(client.openAIApiKey).toBe('test-azure-api-key');
expect(client.client).toBeInstanceOf(OpenAIClient);
});

@@ -338,7 +291,7 @@ describe('initializeClient', () => {
let userValues = getUserKey();
try {
userValues = JSON.parse(userValues);
} catch {
} catch (e) {
throw new Error(
JSON.stringify({
type: ErrorTypes.INVALID_USER_KEY,
@@ -354,9 +307,6 @@ describe('initializeClient', () => {
});
test('should initialize client correctly for Azure OpenAI with valid configuration', async () => {
// Set up Azure environment variables
process.env.WESTUS_API_KEY = 'test-westus-key';

const req = {
body: {
key: null,
@@ -364,6 +314,15 @@ describe('initializeClient', () => {
model: modelNames[0],
},
user: { id: '123' },
app: {
locals: {
[EModelEndpoint.azureOpenAI]: {
modelNames,
modelGroupMap,
groupMap,
},
},
},
};
const res = {};
const endpointOption = {};
@@ -2,10 +2,10 @@ const axios = require('axios');
const fs = require('fs').promises;
const FormData = require('form-data');
const { Readable } = require('stream');
const { logger } = require('@librechat/data-schemas');
const { genAzureEndpoint } = require('@librechat/api');
const { extractEnvVariable, STTProviders } = require('librechat-data-provider');
const { getAppConfig } = require('~/server/services/Config');
const { getCustomConfig } = require('~/server/services/Config');
const { logger } = require('~/config');

/**
* Maps MIME types to their corresponding file extensions for audio files.
@@ -84,7 +84,12 @@ function getFileExtensionFromMime(mimeType) {
* @class
*/
class STTService {
constructor() {
/**
* Creates an instance of STTService.
* @param {Object} customConfig - The custom configuration object.
*/
constructor(customConfig) {
this.customConfig = customConfig;
this.providerStrategies = {
[STTProviders.OPENAI]: this.openAIProvider,
[STTProviders.AZURE_OPENAI]: this.azureOpenAIProvider,
@@ -99,20 +104,21 @@ class STTService {
* @throws {Error} If the custom config is not found.
*/
static async getInstance() {
return new STTService();
const customConfig = await getCustomConfig();
if (!customConfig) {
throw new Error('Custom config not found');
}
return new STTService(customConfig);
}
/**
* Retrieves the configured STT provider and its schema.
* @param {ServerRequest} req - The request object.
* @returns {Promise<[string, Object]>} A promise that resolves to an array containing the provider name and its schema.
* @throws {Error} If no STT schema is set, multiple providers are set, or no provider is set.
*/
async getProviderSchema(req) {
const appConfig = await getAppConfig({
role: req?.user?.role,
});
const sttSchema = appConfig?.speech?.stt;
async getProviderSchema() {
const sttSchema = this.customConfig.speech.stt;

if (!sttSchema) {
throw new Error(
'No STT schema is set. Did you configure STT in the custom config (librechat.yaml)?',
@@ -268,7 +274,7 @@ class STTService {
* @param {Object} res - The response object.
* @returns {Promise<void>}
*/
async processSpeechToText(req, res) {
async processTextToSpeech(req, res) {
if (!req.file) {
return res.status(400).json({ message: 'No audio file provided in the FormData' });
}
@@ -281,7 +287,7 @@ class STTService {
};

try {
const [provider, sttSchema] = await this.getProviderSchema(req);
const [provider, sttSchema] = await this.getProviderSchema();
const text = await this.sttRequest(provider, sttSchema, { audioBuffer, audioFile });
res.json({ text });
} catch (error) {
@@ -291,7 +297,7 @@ class STTService {
try {
await fs.unlink(req.file.path);
logger.debug('[/speech/stt] Temp. audio upload file deleted');
} catch {
} catch (error) {
logger.debug('[/speech/stt] Temp. audio upload file already deleted');
}
}
@@ -316,7 +322,7 @@ async function createSTTService() {
*/
async function speechToText(req, res) {
const sttService = await createSTTService();
await sttService.processSpeechToText(req, res);
await sttService.processTextToSpeech(req, res);
}

module.exports = { speechToText };
@@ -1,9 +1,9 @@
const axios = require('axios');
const { logger } = require('@librechat/data-schemas');
const { genAzureEndpoint } = require('@librechat/api');
const { extractEnvVariable, TTSProviders } = require('librechat-data-provider');
const { getRandomVoiceId, createChunkProcessor, splitTextIntoChunks } = require('./streamAudio');
const { getAppConfig } = require('~/server/services/Config');
const { getCustomConfig } = require('~/server/services/Config');
const { logger } = require('~/config');

/**
* Service class for handling Text-to-Speech (TTS) operations.
@@ -32,7 +32,11 @@ class TTSService {
* @throws {Error} If the custom config is not found.
*/
static async getInstance() {
return new TTSService();
const customConfig = await getCustomConfig();
if (!customConfig) {
throw new Error('Custom config not found');
}
return new TTSService(customConfig);
}

/**
@@ -289,13 +293,10 @@ class TTSService {
return res.status(400).send('Missing text in request body');
}

const appConfig = await getAppConfig({
role: req.user?.role,
});
try {
res.setHeader('Content-Type', 'audio/mpeg');
const provider = this.getProvider();
const ttsSchema = appConfig?.speech?.tts?.[provider];
const ttsSchema = this.customConfig.speech.tts[provider];
const voice = await this.getVoice(ttsSchema, requestVoice);

if (input.length < 4096) {
@@ -1,5 +1,5 @@
const { logger } = require('@librechat/data-schemas');
const { getAppConfig } = require('~/server/services/Config');
const { getCustomConfig } = require('~/server/services/Config');
const { logger } = require('~/config');

/**
* This function retrieves the speechTab settings from the custom configuration
@@ -15,28 +15,26 @@ const { getAppConfig } = require('~/server/services/Config');
*/
async function getCustomConfigSpeech(req, res) {
try {
const appConfig = await getAppConfig({
role: req.user?.role,
});
const customConfig = await getCustomConfig();

if (!appConfig) {
if (!customConfig) {
return res.status(200).send({
message: 'not_found',
});
}

const sttExternal = !!appConfig.speech?.stt;
const ttsExternal = !!appConfig.speech?.tts;
const sttExternal = !!customConfig.speech?.stt;
const ttsExternal = !!customConfig.speech?.tts;
let settings = {
sttExternal,
ttsExternal,
};

if (!appConfig.speech?.speechTab) {
if (!customConfig.speech?.speechTab) {
return res.status(200).send(settings);
}

const speechTab = appConfig.speech.speechTab;
const speechTab = customConfig.speech.speechTab;

if (speechTab.advancedMode !== undefined) {
settings.advancedMode = speechTab.advancedMode;
@@ -1,5 +1,5 @@
const { TTSProviders } = require('librechat-data-provider');
const { getAppConfig } = require('~/server/services/Config');
const { getCustomConfig } = require('~/server/services/Config');
const { getProvider } = require('./TTSService');

/**
@@ -14,15 +14,13 @@ const { getProvider } = require('./TTSService');
*/
async function getVoices(req, res) {
try {
const appConfig = await getAppConfig({
role: req.user?.role,
});
const customConfig = await getCustomConfig();

if (!appConfig || !appConfig?.speech?.tts) {
if (!customConfig || !customConfig?.speech?.tts) {
throw new Error('Configuration or TTS schema is missing');
}

const ttsSchema = appConfig?.speech?.tts;
const ttsSchema = customConfig?.speech?.tts;
const provider = await getProvider(ttsSchema);
let voices;
@@ -3,7 +3,6 @@ const path = require('path');
const sharp = require('sharp');
const { logger } = require('@librechat/data-schemas');
const { resizeImageBuffer } = require('../images/resize');
const { getAppConfig } = require('~/server/services/Config');
const { updateUser, updateFile } = require('~/models');
const { saveBufferToAzure } = require('./crud');

@@ -31,7 +30,6 @@ async function uploadImageToAzure({
containerName,
}) {
try {
const appConfig = await getAppConfig({ role: req.user?.role });
const inputFilePath = file.path;
const inputBuffer = await fs.promises.readFile(inputFilePath);
const {
@@ -43,12 +41,12 @@ async function uploadImageToAzure({
const userId = req.user.id;
let webPBuffer;
let fileName = `${file_id}__${path.basename(inputFilePath)}`;
const targetExtension = `.${appConfig.imageOutputType}`;
const targetExtension = `.${req.app.locals.imageOutputType}`;
if (extension.toLowerCase() === targetExtension) {
webPBuffer = resizedBuffer;
} else {
webPBuffer = await sharp(resizedBuffer).toFormat(appConfig.imageOutputType).toBuffer();
webPBuffer = await sharp(resizedBuffer).toFormat(req.app.locals.imageOutputType).toBuffer();
const extRegExp = new RegExp(path.extname(fileName) + '$');
fileName = fileName.replace(extRegExp, targetExtension);
if (!path.extname(fileName)) {
Some files were not shown because too many files have changed in this diff.