Compare commits

...

145 Commits

Author SHA1 Message Date
Ruben Talstra
faf349e0db fix: Cosmos DB: E11000 duplicate key error
I’ve updated this PR to remove unique: true and sparse: true from the optional social login fields (googleId, facebookId, etc.) and switch them to simple indexes (index: true). This resolves the Cosmos DB “duplicate key” errors caused by multiple null values and ensures compatibility with both Cosmos DB and MongoDB. The email field remains required and unique, preserving overall identity uniqueness.
2025-02-12 19:22:37 +01:00
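
A minimal Mongoose sketch of the index change described in the commit above; field names beyond `email`, `googleId`, and `facebookId` are illustrative, not the exact LibreChat schema:

```js
const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({
  // email stays required and unique, preserving identity uniqueness
  email: { type: String, required: true, unique: true },
  // Before: { type: String, unique: true, sparse: true }; Cosmos DB raises
  // E11000 when several users store null here, so these become plain
  // secondary indexes instead.
  googleId: { type: String, index: true },
  facebookId: { type: String, index: true },
});

module.exports = mongoose.model('User', userSchema);
```
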
Ruben Talstra
915022bc08 Merge branch 'main' into partial-filter-index 2025-02-12 19:05:32 +01:00
Danny Avila
2a506df443 🪄 fix: Agent Artifacts condition 2025-02-11 19:44:20 -05:00
Danny Avila
bfbaaebd2b 🪄 feat: Agent Artifacts (#5804)
* refactor: remove artifacts toggle

* refactor: allow hiding side panel while allowing artifacts view

* chore: rename SidePanelGroup to SidePanel for clarity

* Revert "refactor: remove artifacts toggle"

This reverts commit f884c2cfcd.

* feat: add artifacts capability to agent configuration

* refactor: conditionally set artifacts mode based on endpoint type

* feat: Artifacts Capability for Agents

* refactor: enhance getStreamText method to handle intermediate replies and add `stream_options` for openai/azure

* feat: localize progress text and improve UX in CodeAnalyze and ExecuteCode components for expanding analysis
2025-02-11 18:00:38 -05:00
Danny Avila
46f034250d v0.7.7-rc1 (#5801) 2025-02-11 11:45:07 -05:00
Danny Avila
4de9619bd9 🧠 fix: Handle Reasoning Chunk Edge Cases (#5800)
* refactor: better reasoning parsing

* style: better model selector mobile styling

* chore: bump vite
2025-02-11 11:28:18 -05:00
Ruben Talstra
404b27d045 📦 chore: Bump Packages (#5791)
* chore: started updating packages to newer versions.
(a lot are outdated)

* fix: eslint to pass when no matching files changed.

* fix: eslint to pass when no matching files changed.

* fix: issue with strict in actions with the test

* chore: update more dependencies

* feat: scan for unused imported packages

* feat: scan for unused imported packages

* feat: scan for unused imported packages

* feat: scan for unused imported packages

* feat: scan for unused imported packages

* feat: scan for unused imported packages

* feat: scan for unused imported packages

* chore: removed Unused NPM Packages

* chore: removed Unused NPM Packages in `client/package.json`

* chore: removed Unused NPM Packages in `client/package.json`

* chore: Only comments when there are actual unused dependencies.

* chore: Only comments when there are actual unused dependencies.

* ci: test if it detects unused packages.

* ci: removed unused packages.

* ci: both static and dynamic i18n keys

* ci: revert back to no dynamic. use official nesting

* chore: remove override package: ajv
2025-02-11 09:55:13 -05:00
github-actions[bot]
936199b950 🌍 i18n: Update translation.json with latest translations (#5789)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-02-11 09:53:26 -05:00
owengo
d844e56c50 🔨 feat: Use x-strict attribute in OpenAPI Actions for Strict Function Definition (#4639)
* feat: manage an 'x-strict': true attribute in openapi specs for assistants which generates function calls with a strict attribute (see the sketch after this entry)

* fix typo and lint errors

---------

Co-authored-by: Olivier Schiavo <olivier.schiavo@wengo.com>
2025-02-10 16:02:21 -05:00
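
A hedged sketch of the idea behind this change: an OpenAPI operation carrying the `x-strict: true` vendor extension yields a function definition with `strict: true`. The operation shape and converter below are illustrative, not LibreChat's actual `openapiToFunction` code.

```js
const operation = {
  operationId: 'getWeather',
  'x-strict': true,
  parameters: [{ name: 'city', in: 'query', schema: { type: 'string' } }],
};

function toFunctionDefinition(op) {
  const properties = {};
  const required = [];
  for (const param of op.parameters ?? []) {
    properties[param.name] = param.schema;
    required.push(param.name);
  }
  return {
    name: op.operationId,
    // propagate the vendor extension into strict function calling
    strict: op['x-strict'] === true,
    parameters: {
      type: 'object',
      properties,
      required,
      additionalProperties: false, // strict mode forbids extra keys
    },
  };
}
```
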
Ruben Talstra
aea055b597 🔄 chore: Refactor Locize Workflow for Improved Translation Sync (#5781) 2025-02-10 16:01:27 -05:00
Ruben Talstra
3d0c27f525 🛠️ ci: Add Workflow to Detect Unused i18next Keys in PRs (#5782)
* created: checks for unused i18n keys in the codebase.

* updated the file to test this new check on this PR.

* updated the file to test this new check on this PR.

* updated the file to test this new check on this PR.

* updated the file to test this new check on this PR.

* updated the file to test this new check on this PR.

* removed the testing option; will now only run on `client/src/**`
2025-02-10 16:00:57 -05:00
Ruben Talstra
d99a9db3f6 feat: OAuth for Actions (#5693)
* feat: OAuth for Actions

* WIP: PoC flow state manager

* refactor: Add identifier field to token model from action schema

* chore: fix potential file type issues

* ci: fix type issue with action metadata auth

* fix: ensure FlowManagerOptions has a default ttl value

* WIP: OAUTH actions

* WIP: first pass OAuth Action

* fix: standardize identifier usage in OAuth flow handling

* fix: update token retrieval to include userId in query and use correct identifier

* refactor: update token retrieval to use userId for OAuth token query

* feat: Tool Call Auth styling

* fix: streamline token creation and add type field to token schema

* refactor: cleanup OAuth flow by encrypting client credentials and ensuring OAuth operations only run conditionally

* refactor: use encrypted credentials in OAuth callback

* fix: update Token collection indexes to use expiresAt TTL index and not createdAt legacy index

* refactor: enhance Token index cleanup by improving logging and removing redundant index creation logic

* refactor: remove unused OAuth login route and related logic for improved clarity

* refactor: replace fetch with axios for OAuth token exchange and improve error handling

* refactor: better UX after authentication before oauth tool execution

* refactor: implement cleanup handlers for FlowStateManager intervals to enhance resource management

* refactor: encrypt OAuth tokens before storing and decrypt upon retrieval for enhanced security

* refactor: enhance authentication success page with improved styling and countdown feature

* refactor: add response_type parameter to OAuth redirect URI for improved compatibility

* chore: update translation.json new localizations

* chore: remove unused OGDialog import from OGDialogTemplate component

* refactor: Actions Auth using new Dialog styling, use same component with Agents/Assistants

* refactor: update removeNullishValues function to support removal of empty strings and adjust transform usage in schemas (see the sketch after this entry)

* chore: bump version of librechat-data-provider to 0.7.6991

* refactor: integrate removeNullishValues function to clean metadata before encryption in agent and assistant routes

* refactor: update OAuth input fields to use 'password' type for better security

* refactor: update localization placeholders for sign-in message to use double curly braces

* refactor: add access_type parameter for offline access in createActionTool function

* refactor: implement handleOAuthToken function for token management and encryption

* feat: refresh token support

* refactor: add default expiration for access token and error handling for missing token

* feat: localizations for ActionAuth

* refactor: set refresh token expiration to null so it never expires when no expiry is given

* fix: prevent crash from error within async handleAbortError in AskController, EditController, and AgentController

* feat: Action Callback URL

* 🌍 i18n: Update translation.json with latest translations

* refactor: handle errors in flow state checking to prevent unhandled promise rejections

* fix: improve flow state concurrency to prevent multiple token creation calls

* refactor: RequestExecutor to use separate axios instance

* refactor: improve concurrency flows by keeping completed state until TTL expiry

* refactor: increase TTL for flow state management and adjust monitoring interval

* ci: mock axios instance creation in actions spec

* feat: add Babel and Jest configuration files; implement FlowStateManager tests with concurrency handling

* chore: add disableOAuth prop to ActionsAuth (not implemented for Assistants yet)

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-02-10 15:56:08 -05:00
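
A minimal sketch of the `removeNullishValues` change referenced in the list above (optional empty-string removal); the actual librechat-data-provider implementation may differ:

```js
function removeNullishValues(obj, removeEmptyStrings = false) {
  const result = {};
  for (const [key, value] of Object.entries(obj)) {
    if (value === null || value === undefined) continue;
    if (removeEmptyStrings && value === '') continue;
    result[key] = value;
  }
  return result;
}

// e.g., cleaning action metadata before encryption:
removeNullishValues({ api_key: '', token: null, type: 'oauth' }, true);
// => { type: 'oauth' }
```
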
Ruben Talstra
71c30a3640 🎯 ci: Update ESLint Workflow to target api/ and client/ changes (#5771) 2025-02-10 09:05:03 -05:00
Ruben Talstra
d90c9c4b77 📜 ci: Consolidate Locize Workflows for Missing Keys & PR Creation (#5769) 2025-02-10 09:03:59 -05:00
github-actions[bot]
37f6099f0a 🌍 i18n: Update translation.json with latest translations (#5765)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-02-10 09:02:56 -05:00
Marco Beretta
93415ebbd7 📝 docs: Update Language Request Template & Update README (#5766)
* Update README.md

* Update NEW-LANGUAGE-REQUEST.yml

* Updated: README.md
Removed: TRANSLATION.md

---------

Co-authored-by: Ruben Talstra <RubenTalstra1211@outlook.com>
2025-02-10 09:02:33 -05:00
github-actions[bot]
15c55d226e 🌍 i18n: Update translation.json with latest translations (#5764)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-02-09 15:47:25 -05:00
Ruben Talstra
1f31171fca 🤖 ci: locize-pull-published-sync-pr.yml (#5763)
* fix: ci for locize-pull-published-sync-pr.yml

* fix: ci for locize-pull-published-sync-pr.yml

* fixed missing parameter: base: main

* removed running on pull_request
2025-02-09 15:18:01 -05:00
Ruben Talstra
96f1133f0d 🤖 ci: locize-pull-published-sync-pr.yml (#5762)
* fix: ci for locize-pull-published-sync-pr.yml

* fix: ci for locize-pull-published-sync-pr.yml

* fixed missing parameter: base: main
2025-02-09 14:51:28 -05:00
Ruben Talstra
86134415e9 🧹 chore: Migrate to Flat ESLint Config & Update Prettier Settings (#5737)
* chore: migrated eslint v8 to v9

* chore: migrated eslint v8 to v9

* ESLint only checks the files that have changed in the pull request.

* fix: ESLint only checks the files that have changed in the pull request.

* refactor: eslint only on changed files

* refactor: eslint only on changed files or added files

* refactor: eslint only on changed files or added files

* refactor: eslint only on changed files or added files

but only include files that are not deleted (git diff filter: ACMRTUXB).

* whoops missed something
2025-02-09 12:15:20 -05:00
Ruben Talstra
aae413cc71 🌎 i18n: React-i18next & i18next Integration (#5720)
* better i18n support with an internationalization framework.

* removed unused package

* auto sort for translation.json

* fixed tests with the new locales function

* added new CI actions from locize

* added a mention of locize to the README.md

* added a mention of locize to the README.md

* updated README.md and added TRANSLATION.md to the repo

* updated TRANSLATION.md badges

* updated README.md so that clicking the Translation Progress badge goes to TRANSLATION.md

* updated TRANSLATION.md and added a new issue template.

* updated TRANSLATION.md and added a new issue template.

* updated issue template to add the ISO code link.

* updated the new GitHub actions for `locize`

* updated label for new issue template --> i18n

* fixed type issue

* Fix eslint

* Fix eslint with `key-spacing` rule

* fix: error type

* fix: handle undefined values in SortFilterHeader component

* fix: typing in Image component

* fix: handle optional promptGroup in PromptCard component

* fix: update localize function to accept string type and remove unnecessary JSX element

* fix: update localize function to enforce TranslationKeys type for better type safety

* fix: improve type safety and handle null values in Assistants component

* fix: enhance null checks for fileId in FilesListView component

* fix: localize 'Go back' button text in FilesListView component

* fix: update aria-label for menu buttons and add translation for 'Close Menu'

* docs: add Reasoning UI section for Chain-of-Thought AI models in README

* fix: enhance type safety by adding type for message in MultiMessage component

* fix: improve null checks and optional chaining in useAutoSave hook

* fix: improve handling of optional properties in cleanupPreset function

* fix: ensure isFetchingNextPage defaults to false and improve null checks for messages in Search component

* fix: enhance type safety and null checks in useBuildMessageTree hook

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-02-09 12:05:31 -05:00
Kay Belardinelli
2e8d969e35 🔇 a11y: Silence Unnecessary Icons for Screen Readers (#5726)
* a11y: silence miscellaneous icons that should not be read by screen readers (#5723, #5724)

* 📝 chore: Update bug report template with additional guidance and version information

* 📝 chore: Update bug report template to guide users on using Discussions for general inquiries and setup help

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-02-09 10:53:43 -05:00
Ruben Talstra
1519afd4b9 🧹 chore: Enhance Issue Templates with Emoji Labels (#5754)
* updated the labels in the templates.

* fixed spacing in label in the templates.
2025-02-09 14:41:57 +01:00
Stefan Siegel
d786bf263c 📱 feat: improve mobile viewport behavior with interactive-widget meta (#5675)
fixed mobile viewport behavior when keyboard appears: content now resizes properly instead of scrolling, keeping the top area visible
2025-02-08 00:15:49 +01:00
Danny Avila
8b2ffa141e 🔍 a11y: MultiSearch Clear Input (#5718)
* add accessibility features to model search

* chore: linting

* fix: Improve accessibility by adding aria-label to MultiSearch input

* refactor: MultiSearch component as button

* refactor: Update MultiSearch component styles for improved theming

* refactor: Update MultiSearch component styles for improved visual consistency

---------

Co-authored-by: Derek Jackson <derek_jackson@harvard.edu>
Co-authored-by: derek jackson <63861027+derekjackson-das@users.noreply.github.com>
Co-authored-by: Ruben Talstra <RubenTalstra1211@outlook.com>
2025-02-07 09:38:18 -05:00
5026
18339ec7bb 🌍 i18n: "Balance" Localization For ZhTraditional (#5682) 2025-02-06 20:16:22 -05:00
Marco Beretta
70e410f38b 💬 fix: Temporary Chat PR's broken components and improved UI (#5705)
* 💬 fix: Temporary Chat PR's broken components and improved UI

* 💬 fix: bring back hover effect on AudioRecorder button

* style: adjust position of Mention component popover

* refactor: PromptsCommand typing and style position

* refactor: virtualize mention UI

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-02-06 20:15:38 -05:00
Danny Avila
63afb317c6 🚀 fix: Resolve Google Client Issues, CDN Screenshots, Update Models (#5703)
* 🤖 refactor: streamline model selection logic for title model in GoogleClient

* refactor: add options for empty object schemas in convertJsonSchemaToZod

* refactor: add utility function to check for empty object schemas in convertJsonSchemaToZod

* fix: Google MCP Tool errors, and remove Object Unescaping as Google fixed this

* fix: google safetySettings

* feat: add safety settings exclusion via GOOGLE_EXCLUDE_SAFETY_SETTINGS environment variable

* fix: rename environment variable for console JSON string length

* fix: disable portal for dropdown in ExportModal component

* fix: screenshot functionality to use image placeholder for remote images

* feat: add visionMode property to BaseClient and initialize in GoogleClient to fix resendFiles issue

* fix: enhance formatMessages to include image URLs in message content for Vertex AI

* fix: safety settings for titleChatCompletion

* fix: remove deprecated model assignment in GoogleClient and streamline title model retrieval

* fix: remove unused image preloading logic in ScreenshotContext

* chore: update default google models to latest models shared by vertex ai and gen ai

* refactor: enhance Google error messaging

* fix: update token values and model limits for Gemini models

* ci: fix model matching

* chore: bump version of librechat-data-provider to 0.7.699
2025-02-06 18:13:18 -05:00
Andrés Restrepo
33e60c379b 📜 feat: Configure JSON Log Truncation Size (#5215) 2025-02-06 13:36:25 -05:00
Ruben Talstra
ae7814a2b3 🔧 fix: Wrong import useGetStartupConfig (#5692)
* fixed build failure

* chore: import order

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-02-06 13:30:15 -05:00
Yuichi Oneda
8c404ae056 💬 feat: Temporary Chats (#5493)
* feat: add expiredAt property to Conversation and Message models

Added `expiredAt` property to both Conversation and Message schemas.
Configured `expireAfterSeconds` index in MongoDB to automatically delete documents after a specified period (a sketch of this index follows the entry).

* feat(data-provider): add isTemporary and expiredAt properties to support temporary chats

Added `isTemporary` property to TPayload and TSubmission for temporary chat API calls.
Additionally, added `expiredAt` property to `tConversationSchema` to determine if a chat is temporary.

* feat: implement isTemporary state management

Add Recoil state for tracking temporary conversations, update event handlers to respect temporary chat status

* feat: add configuration to interfaceconfig to hide the temporary chat switch

* feat: add Temporary Chat UI with switch and modify related behaviors

- Added a Temporary Chat switch button at the end of dropdown lists in each model.
- Updated the form background color to black when Temporary Chat is enabled.
- Modified Navigation to exclude Temporary Chats from the chat list.

* fix: exclude Temporary Chats from search results

Updated the getConvosQueried query to ensure that Temporary Chats are not included in the search results.

* fix: hide bookmark button for Temporary Chats

Updated the UI to ensure that the bookmark button is not displayed when a chat is a Temporary Chat.

* chore: update isTemporary state management in ChatRoute

* chore: fix to pass the tests
2025-02-06 11:11:47 -05:00
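
A minimal sketch of the TTL mechanism the first commit describes, assuming an `expiredAt` date field: with `expireAfterSeconds: 0`, MongoDB's TTL monitor deletes each document once its stored `expiredAt` time has passed, while documents whose `expiredAt` is null are left alone.

```js
const mongoose = require('mongoose');

// Illustrative schema; the real Conversation/Message schemas carry many more fields.
const conversationSchema = new mongoose.Schema({
  conversationId: { type: String, required: true },
  title: String,
  // null for normal chats; a concrete date for temporary chats
  expiredAt: { type: Date, default: null },
});

// Delete the document as soon as `expiredAt` is reached.
conversationSchema.index({ expiredAt: 1 }, { expireAfterSeconds: 0 });
```
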
Marco Beretta
5f9543f6fc 🛠️ fix: enhance UI/UX and address a11y issues in SetKeyDialog (#5672)
* refactor: Improve UI consistency and accessibility in SetKeyDialog components

* 🎨 style: Add cursor pointer to Slider component for better UX

* 🐛 chore: Remove unnecessary console log from SetKeyDialog component
2025-02-05 16:35:07 -05:00
Marco Beretta
73fe0835cf 🎨 style: Prompt UI Refresh & A11Y Improvements (#5614)
* 🚀 feat: Add animated search input and improve filtering UI

* 🏄 refactor: Clean up category options and optimize event handlers in ChatGroupItem

* 🚀 refactor: 'Rename Prompt' option and enhance prompt filtering UI
Changed the useUpdatePromptGroup mutation in prompts.ts to replace the JSON.parse(JSON.stringify(...)) clones with structuredClone. This avoids errors when data contains non-JSON values and improves data cloning reliability.

* 🔧 refactor: Update Sharing Prompts UI; fix: Show info message only after updating switch status

* 🔧 refactor: Simplify condition checks and replace button with custom Button component in SharePrompt

* 🔧 refactor: Update DashGroupItem styles and improve accessibility with updated aria-label

* 🔧 refactor: Adjust layout styles in GroupSidePanel and enhance loading skeletons in List component

* 🔧 refactor: Improve layout and styling of AdvancedSwitch component; adjust DashBreadcrumb margin for better alignment

* 🔧 refactor: Add new surface colors for destructive actions and update localization strings for confirmation prompts

* 🔧 refactor: Update PromptForm and PromptName components for improved layout and styling; replace button with custom Button component

* 🔧 refactor: Enhance styling and layout of DashGroupItem, FilterPrompts, and Label components for improved user experience

* 🔧 refactor: Update DeleteBookmarkButton and Label components for improved layout and text handling

* 🔧 refactor: Simplify CategorySelector usage and update destructive surface colors for a11y

* 🔧 refactor: Update styling and layout of PromptName, SharePrompt, and DashGroupItem components; enhance Dropdown functionality with custom renderValue

* 🔧 refactor: Improve layout and styling of various components; update button sizes and localization strings for better accessibility and user experience

* 🔧 refactor: Add useCurrentPromptData hook and enhance RightPanel component; update CategorySelector for improved functionality and accessibility

* 🔧 refactor: Update input components and styling for Command and Description; enhance layout and accessibility in PromptVariables and PromptForm

* 🔧 refactor: Remove useCurrentPromptData hook and clean up related components; enhance PromptVersions layout

* 🔧 refactor: Enhance accessibility by adding aria-labels to buttons and inputs; improve localization for filter prompts

* 🔧 refactor: Enhance accessibility by adding aria-labels to various components; improve layout and styling in PromptForm and CategorySelector

* 🔧 refactor: Enhance accessibility by adding aria-labels to buttons and components; improve dialog roles and descriptions in SharePrompt and PromptForm

* 🔧 refactor: Improve accessibility by adding aria-labels and roles; enhance layout and styling in ChatGroupItem, ListCard, and ManagePrompts components

* 🔧 refactor: Update UI components for improved styling and accessibility; replace button elements with custom Button component and enhance layout in VariableForm, PromptDetails, and PromptVariables

* 🔧 refactor: Improve null checks for group and instanceProjectId in SharePrompt component; enhance readability and maintainability

* style: Enhance AnimatedSearchInput component with TypeScript types; improve conditional rendering for search states and accessibility

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-02-05 11:37:17 -05:00
heptapod
a44f5b4b6e 🌍 i18n: Fix "Balance" Localization For De (#5656) 2025-02-05 10:28:12 -05:00
RedwindA
40d9b1d2a2 🌍 i18n: Fix "Balance" Localization For Zh&ZhTraditional (#5632)
* Update translation of `balance` in Zh.ts

* Update translation of `balance` in ZhTraditional.ts
2025-02-05 15:58:23 +01:00
Danny Avila
6c33dc2eb3 🤖 refactor: Prevent Vertex AI from Setting Parameter Defaults (#5653)
* refactor: remove google defaults

* refactor: improve GoogleClient stream handling and metadata usage

* chore: update @librechat/agents to version 2.0.1

* fix: return client instance in GoogleClient configuration
2025-02-04 21:45:43 -05:00
Danny Avila
0312d4f4f4 🔧 refactor: Revamp Model and Tool Filtering Logic (#5637)
* 🔧 fix: Update regex to correctly match OpenAI model identifiers

* 🔧 fix: Enhance tool filtering logic in ToolService to handle inclusion and exclusion criteria for basic tools and toolkits

* feat: support o3-mini Azure streaming

* chore: Update model filtering logic to exclude audio and realtime models

* ci: linting error
2025-02-03 16:08:34 -05:00
Ruben Talstra
7c8a930061 feat: added GitHub Enterprise SSO login (#5621)
* https://github.com/danny-avila/LibreChat/issues/2812

* refactored the code to simplify it.

* removed unneeded code

* removed unneeded code
2025-02-03 15:30:02 -05:00
Ruben Talstra
93f5713c74 🛜 ci: OpenID Strategy Test Async Handling (#5613) 2025-02-03 10:57:49 -05:00
Igor
20aa0be85d 🌍 i18n: Add Missing "Balance" Localization For All Languages (#5594)
* Update AccountSettings.tsx

---------

Co-authored-by: Danny Avila <danacordially@gmail.com>
2025-02-03 10:56:44 -05:00
Sam Lewis
d7dc58dd23 🔧 fix: Fetch PWA Manifest with credentials over CORS (#5156)
When behind authentication (e.g., Cloudflare Access), browsers
won't send credentials when fetching the manifest file by default.

To fix, this change adds `crossorigin="use-credentials"` to the
manifest link tag by enabling the `useCredentials` option in
VitePWA.
2025-02-03 10:54:10 -05:00
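
A minimal sketch of the fix in a Vite config, using `vite-plugin-pwa`'s `useCredentials` option:

```js
// vite.config.js
import { defineConfig } from 'vite';
import { VitePWA } from 'vite-plugin-pwa';

export default defineConfig({
  plugins: [
    VitePWA({
      // Emits <link rel="manifest" crossorigin="use-credentials">, so the
      // browser sends cookies when fetching the manifest behind auth
      // proxies such as Cloudflare Access.
      useCredentials: true,
    }),
  ],
});
```
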
Danny Avila
45dd2b262f 🛂 feat: OpenID Logout Redirect to end_session_endpoint (#5626)
* WIP: end session endpoint

* refactor: move useGetBannerQuery outside of package

* refactor: add queriesEnabled and move useGetEndpointsConfigQuery to data-provider (local)

* refactor: move useGetEndpointsQuery import to data-provider

* refactor: relocate useGetEndpointsQuery import to improve module organization

* refactor: move `useGetStartupConfig` from package to `~/data-provider`

* refactor: move useGetUserBalance to data-provider and update imports

* refactor: update query enabled conditions to include config check

* refactor: remove unused useConfigOverride import from useAppStartup

* refactor: integrate queriesEnabled state into file and search queries and move useGetSearchEnabledQuery to data-provider (local)

* refactor: move useGetUserQuery to data-provider and update imports

* refactor: enhance loginUser mutation with success and error handling passed in as options to the hook

* refactor: update enabled condition in queries to handle undefined config

* refactor: enhance authentication mutations with queriesEnabled state management

* refactor: improve conditional rendering for error messages and feature flags in Login component

* refactor: remove unused queriesEnabled state from AuthContextProvider

* refactor: implement queriesEnabled state management in LoginLayout with timeout handling

* refactor: add conditional check for end session endpoint in OpenID strategy

* ci: fix tests after changes

* refactor: remove endSessionEndpoint from user schema and update logoutController to use OpenID issuer's end_session_endpoint

* refactor: update logoutController to use end_session_endpoint from issuer metadata
2025-02-03 10:53:04 -05:00
Danny Avila
d93f5c9061 ☁️ feat: Additional AI Gateway Provider Support; fix: Reasoning Effort for Presets/Agents (#5600)
* 🐛 fix: Prevent processing of non-artifact nodes in artifact plugin

* refactor: remove deprecated fields, add `reasoning_effort`

* refactor: move `reasoning_effort` to the second column in OpenAI settings

* feat: add support for additional AI Gateway provider in extractBaseURL function

* refactor: move `reasoning_effort` field to conversationPreset and remove from agentOptions
2025-02-02 09:04:10 -05:00
Danny Avila
352565c9a6 🎥 feat: YouTube Tool (#5582)
* adding youtube tool

* refactor: use short `url` param instead of `videoUrl`

* refactor: move API key retrieval to a separate credentials module

* refactor: remove unnecessary `isEdited` message property

* refactor: remove unnecessary `isEdited` message property pt. 2

* refactor: YouTube Tool with new `tool()` generator, handle tools already created by new `tool` generator (see the sketch after this entry)

* fix: only reset request data for multi-convo messages

* refactor: enhance YouTube tool by adding transcript parsing and returning structured JSON responses

* refactor: update transcript parsing to handle raw response and clean up text output

* feat: support toolkits and refactor YouTube tool as a toolkit for better LLM usage

* refactor: remove unused OpenAPI specs and streamline tools transformation in loadAsyncEndpoints

* refactor: implement manifestToolMap for better tool management and streamline authentication handling

* feat: support toolkits for assistants

* refactor: rename loadedTools to toolDefinitions for clarity in PluginController and assistant controllers

* feat: complete support of toolkits for assistants

---------

Co-authored-by: Danilo Pejakovic <danilo.pejakovic@leoninestudios.com>
2025-01-31 19:11:04 -05:00
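
A hedged sketch of a tool built with the `tool()` generator mentioned in the entry above, using LangChain's helper from `@langchain/core/tools`; the tool name, schema, and `fetchTranscript` helper are illustrative, not LibreChat's actual YouTube toolkit code.

```js
const { tool } = require('@langchain/core/tools');
const { z } = require('zod');

const youtubeTranscript = tool(
  async ({ url }) => {
    // Fetch and parse the transcript (hypothetical helper), then return
    // structured JSON as a string for the LLM to consume.
    const transcript = await fetchTranscript(url);
    return JSON.stringify({ url, transcript });
  },
  {
    name: 'youtube_transcript',
    description: 'Fetch the transcript of a YouTube video by URL.',
    schema: z.object({ url: z.string().describe('YouTube video URL') }),
  },
);
```
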
Danny Avila
33f6093775 🤖 feat: o3-mini (#5581)
* 🤖 feat: `o3-mini`

* chore: re-order vision models list to prioritize gpt-4o as a vision model over o1
2025-01-31 16:49:01 -05:00
Danny Avila
fdf0b41d08 🐛 fix: Handle content generation errors in GoogleClient (#5575) 2025-01-31 11:22:15 -05:00
Danny Avila
6920e23fb2 🤖 fix: Azure Agents after Upstream Breaking Change (#5571)
* 🤖 fix: Azure Agents after Upstream Breaking Change

* chore: bump @langchain/core & @librechat/agents

* fix: correct formatting in assistant actions update logic and use correctly filtered actions variable

* fix: linting errors
2025-01-31 09:50:49 -05:00
Ruben Talstra
e1a6268904 🍎 feat: Apple auth (#5473)
* implemented Apple Auth login.

Closes: #3438

TODO:
- write config Doc

* removed some comments

* removed comment

* Add unit tests for Apple login strategy

Introduce comprehensive tests for the Apple login strategy, covering new user creation, existing user updates, and error handling scenarios during the authentication flow. Mocks implemented for external dependencies to ensure isolated testing.

* Remove unnecessary blank line in socialLogins.js
2025-01-31 09:49:09 -05:00
Marco Beretta
1c459ed3af 🖱️ feat: Switch Scroll Button setting (#5332) 2025-01-31 07:52:52 -05:00
owengo
8a0c7d92bd 👷 feat: Allow Admin to Edit Agent/Assistant Actions (#4591)
* feat: allows admin to see and edit all actions

* feat: allows admin to see and edit all actions

* rollback: admins can edit all actions, no configuration

* fix: admins don't override the user of existing actions and they preserve the user of the assistant when creating a new action

---------

Co-authored-by: Olivier Schiavo <olivier.schiavo@wengo.com>
2025-01-31 07:45:02 -05:00
JM Addington
9373f77bb7 feat: Add Scripts for listing users and resetting passwords (#5438)
* feat: Add user management scripts for listing users and resetting passwords

* chore: update package.json

---------

Co-authored-by: Danny Avila <danacordially@gmail.com>
2025-01-31 07:40:06 -05:00
Fuegovic
6f0ded058f 📝 docs: Update librechat.example.yaml (#5544)
Enable modelSelect and Presets by default
2025-01-31 07:35:18 -05:00
Danny Avila
19fa4d9f54 🧹 chore: Remove Deprecated BingAI Code & Address Mobile Focus (#5565)
* chore: remove all bing code

* chore: remove bing code and auto-focus effects

* chore: add back escapeRegExp helper function for regex special character handling

* chore: remove deprecated fields from settings and conversation schema

* fix: ensure default endpoint is set correctly in conversation setup

* feat: add disableFocus option to newConversation for improved search behavior
2025-01-30 17:22:29 -05:00
James Lamine
1226f56d0c 🔧 fix: Add missing finish_reason to stream chunks (#5563) 2025-01-30 15:24:43 -05:00
James Lamine
85c6a706c3 🔧 fix: handle known OpenAI errors with empty intermediate reply (#5562) 2025-01-30 15:20:34 -05:00
Danny Avila
587d46a20b 🚀 feat: o1 Tool Calling & reasoning_effort (#5553)
* fix: Update @librechat/agents to version 1.9.98

* feat: o1 tool calling

* fix: Improve error logging in RouteErrorBoundary

* refactor: Move extractContent function to utils and clean up Artifact component

* refactor: optimize reasoning UI post-streaming and deprecate plugins rendering

* feat: reasoning_effort support

* fix: update request content type handling in openapiToFunction to remove default 'application/x-www-form-urlencoded'

* chore: bump v0.7.696 data-provider
2025-01-30 12:36:35 -05:00
Danny Avila
591a019766 🏄‍♂️ refactor: Optimize Reasoning UI & Token Streaming (#5546)
* feat: Implement Show Thinking feature; refactor: testing thinking render optimizations

* feat: Refactor Thinking component styles and enhance Markdown rendering

* chore: add back removed code, revert type changes

* chore: Add back resetCounter effect to Markdown component for improved code block indexing

* chore: bump @librechat/agents and google langchain packages

* WIP: reasoning type updates

* WIP: first pass, reasoning content blocks

* chore: revert code

* chore: bump @librechat/agents

* refactor: optimize reasoning tag handling

* style: ul indent padding

* feat: add Reasoning component to handle reasoning display

* feat: first pass, content reasoning part styling

* refactor: add content placeholder for endpoints using new stream handler

* refactor: only cache messages when requesting stream audio

* fix: circular dep.

* fix: add default param

* refactor: tts, only request after message stream, fix chrome autoplay

* style: update label for submitting state and add localization for 'Thinking...'

* fix: improve global audio pause logic and reset active run ID

* fix: handle artifact edge cases

* fix: remove unnecessary console log from artifact update test

* feat: add support for continued message handling with new streaming method

---------

Co-authored-by: Marco Beretta <81851188+berry-13@users.noreply.github.com>
2025-01-29 19:46:58 -05:00
James Lamine
d60a149ad9 🗨️ fix: Loading Shared Saved Prompts (#5515) 2025-01-28 10:35:17 -05:00
Evren Tan
ad4cfba710 🌱 feat(.env.example): add o1 models (#5106)
* feat(.env.example): add o1-mini and o1-preview to .env.example

* feat(.env.example): add o1 to .env.example

---------

Co-authored-by: Evren Tan <evren.tan@pointr.tech>
2025-01-28 15:56:05 +01:00
Danny Avila
4110209494 ♻️ fix: Prevent Instructions from Removal when nearing Max Context (#5516)
* refactor: getMessagesWithinTokenLimit to accept params object

* refactor: always include instructions in payload if provided (see the sketch after this entry)

* ci: remove obsolete test

* refactor: update logoutUser to accept request object and handle session destruction

* test: enhance getMessagesWithinTokenLimit tests for instruction handling
2025-01-27 20:37:38 -05:00
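
A simplified sketch of the behavior described above: prune oldest messages to fit the context budget while always reserving room for the instructions. `countTokens` stands in for the client's tokenizer, and the real `getMessagesWithinTokenLimit` differs in detail.

```js
function getMessagesWithinTokenLimit({ messages, instructions, maxContextTokens, countTokens }) {
  // Reserve instruction tokens up front so instructions are never dropped.
  let remaining = maxContextTokens - (instructions ? countTokens(instructions) : 0);
  const kept = [];
  // Walk from newest to oldest, keeping whatever still fits.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = countTokens(messages[i]);
    if (cost > remaining) {
      break;
    }
    remaining -= cost;
    kept.unshift(messages[i]);
  }
  return instructions ? [instructions, ...kept] : kept;
}
```
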
Danny Avila
528ee62eb1 🤖 fix: GoogleClient Context Handling & GenAI Parameters (#5503)
* fix: remove legacy code for GoogleClient and fix model parameters for GenAI

* refactor: streamline client init logic

* refactor: remove legacy vertex clients, WIP remote vertex token count

* refactor: enhance GoogleClient with improved type definitions and streamline token count method

* refactor: remove unused methods and consolidate methods

* refactor: remove examples

* refactor: improve input handling logic in DynamicInput component

* refactor: enhance GoogleClient with token usage tracking and context handling improvements

* refactor: update GoogleClient to support 'learnlm' model and streamline model checks

* refactor: remove unused text model handling in GoogleClient

* refactor: record token usage for GoogleClient titles and handle edge cases

* chore: remove unused undici, addresses verbose version warning
2025-01-27 12:21:33 -05:00
oonishi3
47b72e8159 🉐 fix: incorrect handling for composing CJK texts in Safari (#5496) 2025-01-27 11:22:38 -05:00
Ruben Talstra
5f8fade7eb 🔧 chore: bump ``vite`` to patch CVE-2025-24010 (#5495)
Replaced an outdated Vite entry and corrected inconsistencies in dependencies.

Severity: moderate
Websites were able to send any requests to the development server and read the response in vite - https://github.com/advisories/GHSA-vg6x-rcgg-rjx6
2025-01-27 11:20:08 -05:00
Marco Beretta
e7de9c1576 🛡️ refactor: enhance email verification process (#5485) 2025-01-26 20:57:03 -05:00
Danny Avila
12a9a07eb0 🐛 fix: Update deletePromptController to include user role in query (#5488) 2025-01-26 19:03:12 -05:00
Danny Avila
8b31f255f5 🪙 fix: Deepseek Pricing 2025-01-25 10:13:46 -05:00
Danny Avila
60c846b679 🪙 fix: Deepseek Pricing & Titling (#5459) 2025-01-25 10:10:53 -05:00
Danny Avila
af430e46f4 feat: Add Google Parameters, Ollama/Openrouter Reasoning, & UI Optimizations (#5456)
* feat: Google Model Parameters

* fix: dynamic input number value, previously coerced by zod schema

* refactor: support openrouter reasoning tokens and XML for thinking directive to conform to ollama

* fix: virtualize combobox to prevent performance drop on re-renders of long model/agent/assistant lists

* refactor: simplify Fork component by removing unnecessary chat context index

* fix: prevent rendering of Thinking component when children are null

* refactor: update Markdown component to replace <think> tags and simplify remarkPlugins configuration

* refactor: reorder remarkPlugins to improve plugin configuration in Markdown component
2025-01-24 18:15:47 -05:00
Danny Avila
7818ae5c60 🐳 feat: Deepseek Reasoning UI (#5440) 2025-01-24 10:52:08 -05:00
Marco Beretta
b8b7f40e98 🌄 feat: Add RouteErrorBoundary for Improved Client Error handling (#5396)
* feat: Add RouteErrorBoundary for improved error handling and integrate react-error-boundary package

* feat: update error message

* fix: correct typo in containerClassName prop in Landing component
2025-01-24 08:34:44 -05:00
Danny Avila
ed57bb4711 🚀 feat: Artifact Editing & Downloads (#5428)
* refactor: expand container

* chore: bump @codesandbox/sandpack-react to latest

* WIP: first pass, show editor

* feat: implement ArtifactCodeEditor and ArtifactTabs components for enhanced artifact management

* refactor: fileKey

* refactor: auto scrolling code editor and add messageId to artifact

* feat: first pass, editing artifact

* feat: first pass, robust artifact replacement

* fix: robust artifact replacement & re-render when expected

* feat: Download Artifacts

* refactor: improve artifact editing UX

* fix: layout shift of new download button

* fix: enhance missing output checks and logging in StreamRunManager
2025-01-23 18:19:04 -05:00
Danny Avila
87383fec27 🔧 chore: Update Deepseek Pricing, Google Safety Settings (#5409)
* fix: google thinking model safety settings

* chore: update pricing/context for deepseek models

* ci: update Deepseek model token limits to use dynamic mapping
2025-01-22 07:50:09 -05:00
Marco Beretta
2d3dd9e351 a11y: Enhance Accessibility in ToolSelectDialog, ThemeSelector and ChatGroupItem (#5395)
* feat: Add keyboard shortcut for theme switching and improve accessibility announcements

* fix: Improve accessibility of ToolSelectDialog close button

* feat: Enhance accessibility in ChatGroupItem component
2025-01-21 21:54:13 -05:00
Danny Avila
199e5e6eaf 🛠️ fix: Optionally add OpenID Sig. Algo. from Server Discovery (#5398)
* fix: Optionally add OpenID Sig. Algorithm from Server Discovery

* chore: bump vite to 5.4.14 for CVE-2025-24010

* chore: remove deprecated code

* fix: install missing undici

* fix: Add @waylaidwanderer/fetch-event-source package
2025-01-21 21:49:27 -05:00
Marco Beretta
fa9e778399 🔗 feat: Enhance Share Functionality, Optimize DataTable & Fix Critical Bugs (#5220)
* 🔄 refactor: frontend and backend share link logic; feat: qrcode for share link; feat: refresh link

* 🐛 fix: Conditionally render shared link and refactor share link creation logic

* 🐛 fix: Correct conditional check for shareId in ShareButton component

* 🔄 refactor: Update shared links API and data handling; improve query parameters and response structure

* 🔄 refactor: Update shared links pagination and response structure; replace pageNumber with cursor for improved data fetching

* 🔄 refactor: DataTable performance optimization

* fix: delete shared link cache update

* 🔄 refactor: Enhance shared links functionality; add conversationId to shared link model and update related components

* 🔄 refactor: Add delete functionality to SharedLinkButton; integrate delete mutation and confirmation dialog

* 🔄 feat: Add AnimatedSearchInput component with gradient animations and search functionality; update search handling in API and localization

* 🔄 refactor: Improve SharedLinks component; enhance delete functionality and loading states, optimize AnimatedSearchInput, and refine DataTable scrolling behavior

* fix: mutation type issues with deleted shared link mutation

* fix: MutationOptions types

* fix: Ensure only public shared links are retrieved in getSharedLink function

* fix: `qrcode.react` install location

* fix: ensure non-public shared links are not fetched when checking for existing shared links, and remove deprecated .exec() method for queries

* fix: types and import order

* refactor: cleanup share button UI logic, make more intuitive

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-01-21 09:31:05 -05:00
Danny Avila
460cde0c0b 🔒 chore: bump katex package to patch CVE-2025-23207 (#5383)
* chore: bump `katex` to patch `CVE-2025-23207`

* chore: prevent adding Parameters panel for agent endpoints in SideNav
2025-01-20 22:02:18 -05:00
Danny Avila
d6b4d83b68 🔥 feat: deepseek-reasoner Thought Streaming (#5379)
* 🔧 refactor: Remove unused penalties and enhance reasoning token handling in OpenAIClient

* 🔧 refactor: `addInstructions` defaults to adding instructions at index 0, flag for legacy behavior

* chore: remove long placeholder

* chore: update localization strings across multiple languages

* ci: adjust tests for new `addInstructions` behavior
2025-01-20 18:21:18 -05:00
Marco Beretta
79585e22d2 🔈 fix: Accessible name on 'Prev' button in Prompts UI (#5369)
Fixes #5310

Add `aria-label="previous"` attribute to the 'Prev' button in the Prompts Panel.

* Modify `client/src/components/Chat/Prompts.tsx` to include `aria-label="previous"` attribute for the button.
2025-01-20 17:14:49 -05:00
Ragavendaran Puliyadi
a2305c3a7c 🐛 fix: use OpenID token signature algo as discovered from the server (#5348)
* 🐛 fix: use OpenID token signature algo as discovered from the server.

* 📜 refactor: Keeping other props that uses alg.

* 🔧 fix: handle missing property

* 📘 refactor: add comment block
2025-01-20 17:14:07 -05:00
Ragavendaran P R
d048a10b2e 📜 refactor: Log Error Messages when OAuth Fails (#5337) 2025-01-18 09:32:41 -05:00
Danny Avila
e6670cd411 🔧 chore: bump mongoose to patch CVE-2025-23061 (#5351) 2025-01-17 13:09:46 -05:00
Danny Avila
b35a8b78e2 🔧 refactor: Improve Agent Context & Minor Fixes (#5349)
* refactor: Improve Context for Agents

* 🔧 fix: Safeguard against undefined properties in OpenAIClient response handling

* refactor: log error before re-throwing for original stack trace

* refactor: remove toolResource state from useFileHandling, allow svg files

* refactor: prevent verbose logs from axios errors when using actions

* refactor: add silent method recordTokenUsage in AgentClient

* refactor: streamline token count assignment in BaseClient

* refactor: enhance safety settings handling for Gemini 2.0 model

* fix: capabilities structure in MCPConnection

* refactor: simplify civic integrity threshold handling in GoogleClient and llm

* refactor: update token count retrieval method in BaseClient tests

* ci: fix test for svg
2025-01-17 12:55:48 -05:00
Danny Avila
e309c6abef 🎯 fix: Prevent UI De-sync By Removing Redundant States (#5333)
* fix: remove local state from Dropdown causing de-sync

* refactor: cleanup STT code, avoid redundant states to prevent de-sync and side effects

* fix: reset transcript after sending final text to prevent data loss

* fix: clear timeout on component unmount to prevent memory leaks
2025-01-16 17:38:59 -05:00
Marco Beretta
b55e695541 🔧 fix: Maximize Chat Space for Agent Messages (#5330) 2025-01-16 17:28:33 -05:00
Danny Avila
24d30d7428 🏃‍♂️➡️ feat: Upgrade Meilisearch to v1.12.3 (#5327) 2025-01-16 08:25:33 -05:00
Danny Avila
aa80e4594e ♻️ refactor: Logout UX, Improved State Teardown, & Remove Unused Code (#5292)
* refactor: SearchBar and Nav components to streamline search functionality and improve state management

* refactor: remove refresh conversations

* chore: update useNewConvo calls to remove hardcoded default index

* refactor: null check for submission in useSSE hook

* refactor: remove useConversation hook and update useSearch to utilize useNewConvo

* refactor: remove conversation and banner store files; consolidate state management into misc; improve typing of families and add messagesSiblingIdxFamily

* refactor: more effectively clear all user/convo state without side effects on logout/delete user

* refactor: replace useParams with useLocation in SearchBar to correctly load conversation

* refactor: update SearchButtons to use button element and improve conversation ID handling

* refactor: use named function for `newConversation` for better call stack tracing

* refactor: enhance TermsAndConditionsModal to support array content and improve type definitions for terms of service

* refactor: add SetConvoProvider and message invalidation when navigating from search results to prevent initial route rendering edge cases

* refactor: rename getLocalStorageItems to localStorage and update imports for consistency

* refactor: move clearLocalStorage function to utils and simplify localStorage clearing logic

* refactor: migrate authentication mutations to a dedicated Auth data provider and update related tests
2025-01-12 12:57:10 -05:00
Danny Avila
24beda3d69 🐛 fix: Resolve 'Icon is Not a Function' Error in PresetItems (#5260)
* refactor: improve typing

* fix: "TypeError: Icon is not a function" with proper use of Functional Component and Improved Typing
2025-01-10 19:00:44 -05:00
Danny Avila
0855677a36 🌤️ feat: Add OpenWeather Tool for Weather Data Retrieval (#5246)
* feat: Add OpenWeather Tool for Weather Data Retrieval 🌤️

* chore: linting

* chore: move test files

* fix: tool icon, allow user-provided keys, conform to app key assignment pattern

* chore: linting not included in #5212

---------

Co-authored-by: Jonathan Addington <jonathan.addington@jmaddington.com>
2025-01-10 08:54:08 -05:00
Danny Avila
ea1a5c8a30 🐛 fix: Handle optional endpoints in processModelSpecs function 2025-01-09 18:18:14 -05:00
Danny Avila
0f95604a67 refactor: Optimize Rendering Performance for Icons, Conversations (#5234)
* refactor: HoverButtons and Fork components to use explicit props

* refactor: improve typing for Fork Component

* fix: memoize SpecIcon to avoid unnecessary re-renders

* feat: introduce URLIcon component and update SpecIcon for improved icon handling

* WIP: optimizing icons

* refactor: simplify modelLabel assignment in Message components

* refactor: memoize ConvoOptions component to optimize rendering performance
2025-01-09 15:40:10 -05:00
Danny Avila
687ab32bd3 🔧 fix: Streamline Builder Links and Enhance UI Consistency (#5229)
* fix: Include iconURL in Bedrock client initialization

* fix: unnecessary filtering for agent file_search files

* chore: use theme bg colors

* refactor: rely on endpoint config for enabling builder links in side navigation instead of parameters

* fix: remove unnecessary keyProvided check for agent builder link
2025-01-09 12:03:35 -05:00
Lars Kiesow
dd927583a7 Provide production-ready memory store for express-session (#5212)
The `express-session` library comes with a session storage meant for
testing by default. That is why you get a message like this when you
start up LibreChat with OIDC enabled:

    Warning: connect.session() MemoryStore is not
    designed for a production environment, as it will leak
    memory, and will not scale past a single process.

LibreChat can already use Redis as a session storage, although Redis support
is still marked as experimental. It also makes the set-up more complex, since
you will need to configure and run yet another service.

This pull request provides a simple alternative by using an in-memory session
store marked as production-ready by the maintainers of
`express-session`¹. You can still configure Redis, but this provides a simple,
good default for everyone else.

See also https://github.com/danny-avila/LibreChat/discussions/1014

¹⁾ https://github.com/expressjs/session?tab=readme-ov-file#compatible-session-stores
2025-01-09 11:23:51 -05:00
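
A minimal sketch of the setup this PR describes, using the production-ready `memorystore` package in place of `express-session`'s default store; the options shown are illustrative:

```js
const express = require('express');
const session = require('express-session');
const MemoryStore = require('memorystore')(session);

const app = express();
app.use(
  session({
    secret: process.env.SESSION_SECRET,
    resave: false,
    saveUninitialized: false,
    store: new MemoryStore({
      checkPeriod: 86400000, // prune expired sessions every 24h
    }),
  }),
);
```
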
Danny Avila
69a9b8b911 🐛 fix: Ensure Default ModelSpecs Are Set Correctly (#5218)
* 🐛 fix: default modelSpecs not being set

* feat: Add imageDetail parameter for OpenAI endpoints in tQueryParamsSchema

* feat: Implement processModelSpecs function to enhance model specs processing from configuration

* feat: Refactor configuration schemas and types for improved structure and clarity

* feat: Add append_current_datetime parameter to tQueryParamsSchema for enhanced endpoint functionality

* fix: Add endpointType to getSaveOptions and enhance endpoint handling in Settings component

* fix: Change endpointType to be nullable and optional in tConversationSchema for improved flexibility

* fix: allow save & submit for google endpoint
2025-01-08 21:57:00 -05:00
Danny Avila
916faf6447 🐛 fix: Correct Endpoint/Icon Handling, Update Module Resolutions (#5205)
* fix: agent modelSpec iconURLs not being recorded

* fix: prioritize message properties over conversation defaults in icon data

* fix: determine endpoint type from endpointsConfig

* chore: type issue with setting.columnSpan

* chore: remove redundant key indexing for keySchema

* chore: bump version to 0.7.691 in package.json

* chore: add stricter remark-gfm and mdast-util-gfm resolutions/overrides

* chore: remove rollup override and bump vite-plugin-pwa

* chore: reinstall remark-gfm for correct module resolution

* chore: reinstall vite-plugin-pwa
2025-01-07 11:09:18 -05:00
Danny Avila
8aa1e731ca feat: Quality-of-Life Chat/Edit-Message Enhancements (#5194)
* fix: rendering error for mermaid flowchart syntax

* feat: add submit button ref and enable submit on Ctrl+Enter in EditMessage component

* feat: add save button and keyboard shortcuts for saving and canceling in EditMessage component

* feat: collapse chat on max height

* refactor: implement scrollable detection for textarea on key down events and initial render

* feat: add regenerate button for error handling in HoverButtons, closes #3658

* feat: add functionality to edit latest user message with the up arrow key when the input is empty
2025-01-06 22:47:24 -05:00
Danny Avila
b01c744eb8 🧵 fix: Prevent Unnecessary Re-renders when Loading Chats (#5189)
* chore: typing

* chore: typing

* fix: enhance message scrolling logic to handle empty messages tree and ref checks

* fix: optimize message selection logic with useCallback for better performance

* chore: typing

* refactor: optimize icon rendering

* refactor: further optimize chat props

* fix: remove unnecessary console log in useQueryParams cleanup

* refactor: add queryClient to reset message data on new conversation initiation

* refactor: update data-testid attributes for consistency and improve code readability

* refactor: integrate queryClient to reset message data on new conversation initiation
2025-01-06 10:32:44 -05:00
Danny Avila
7987e04a2c 🔗 feat: Convo Settings via URL Query Params & Mention Models (#5184)
* feat: first pass, convo settings from query params

* feat: Enhance query parameter handling for assistants and agents endpoints

* feat: Update message formatting and localization for AI responses, bring awareness to mention command

* docs: Update translations README with detailed instructions for translation script usage and contribution guidelines

* chore: update localizations

* fix: missing agent_id assignment

* feat: add models as initial mention option

* feat: update query parameters schema to confine possible query params

* fix: normalize custom endpoints

* refactor: optimize custom endpoint type check
2025-01-04 20:36:12 -05:00
Danny Avila
766657da83 🔖 fix: Remove Local State from Bookmark Menu (#5181)
* chore: remove redundant

* fix: bookmark menu statefulness by removing local state
2025-01-04 12:01:13 -05:00
Danny Avila
7c61115a88 🐛 fix: Prevent Default Values in OpenAI/Custom Endpoint Agents (#5180)
* fix: prevent OpenAI/custom-endpoint agents from using default values

* fix: order of assigning client options

* chore: typing for runnable config
2025-01-04 09:41:59 -05:00
Danny Avila
c26b54c74d 🔄 refactor: Consolidate Tokenizer; Fix Jest Open Handles (#5175)
* refactor: consolidate tokenizer to singleton

* fix: remove legacy tokenizer code, add Tokenizer singleton tests

* ci: fix jest open handles
2025-01-03 18:11:14 -05:00
Danny Avila
bf0a84e45a ®️ feat: Support Rscript for Code Interpreter & recursionLimit for Agents (#5170)
* chore: bump @librechat/agents to v1.9.8 for rscript support

* chore: fix @langchain/google-genai dep., match agents

* chore: fix @langchain/google-vertexai to v0.1.5, match with agents

* chore: bump @librechat/agents to v1.9.9

* chore: update @librechat/agents to v1.9.91 and @langchain/google-vertexai to v0.1.6

* chore: increase MAX_FILE_SIZE to 150MB for file uploads

* chore: bump @librechat/agents to v1.9.92

* feat: support `recursionLimit` for agents

* chore: update configuration version to 1.2.1 in librechat.yaml and config.ts

* feat: add R language SVG icon to the assets and include it in ApiKeyDialog

* feat: add support for new vision model 'o1' and exclude 'o1-mini'
2025-01-03 16:50:00 -05:00
Julian Dreykorn
28966e3ddc 🧾 docs: Update Example librechat.yaml
* docs: Add mcpServers, agents and actions to the config
2025-01-03 08:35:00 -05:00
Thinger Soft
65b2d647a1 🔧 fix: Handle Concurrent File Mgmt. For Agents (#5159)
* fix: handle concurrent file upload for agents rag

Closes #4746

* fix: handle concurrent file deletions for agents rag

Closes #5160

* refactor: remove useless promise wrapping
2025-01-02 08:29:07 -05:00
Danny Avila
6c9a468b8e 🐛 fix: Artifacts Type Error, Tool Token Counts, and Agent Chat Import (#5142)
* fix: message import functionality to support content field

* fix: handle tool calls token counts in context window management

* fix: handle potential undefined size in FilePreview component
2024-12-30 13:01:47 -05:00
Marco Beretta
cb1921626e 🎨 feat: enhance Chat Input UI, File Mgmt. UI, Bookmarks a11y (#5112)
* 🎨 feat: improve file display and overflow handling in SidePanel components

* 🎨 feat: enhance bookmarks management UI and improve accessibility features

* 🎨 feat: enhance BookmarkTable and BookmarkTableRow components for improved layout and performance

* 🎨 feat: enhance file display and interaction in FilesView and ImagePreview components

* 🎨 feat: adjust minimum width for filename filter input in DataTable component

* 🎨 feat: enhance file upload UI with improved layout and styling adjustments

* 🎨 feat: add surface-hover-alt color and update FileContainer styling for improved UI

* 🎨 feat: update ImagePreview component styling for improved visual consistency

* 🎨 feat: add MaximizeChatSpace component and integrate chat space maximization feature

* 🎨 feat: enhance DataTable component with transition effects and update Checkbox styling for improved accessibility

* fix: enhance a11y for Bookmark buttons by adding space key support, ARIA labels, and correct html role for key presses

* fix: return focus back to trigger for BookmarkEditDialog (Edit and new bookmark buttons)

* refactor: ShareButton and ExportModal components children prop support; refactor DropdownPopup item handling

* refactor: enhance ExportAndShareMenu and ShareButton components with improved props handling and accessibility features

* refactor: add ref prop support to MenuItemProps and update ExportAndShareMenu and DropdownPopup components so focus correctly returns to menu item

* refactor: enhance ConvoOptions and DeleteButton components with improved props handling and accessibility features

* refactor: add triggerRef support to DeleteButton and update ConvoOptions for improved dialog handling

* refactor: accessible bookmarks menu

* refactor: improve styling and accessibility for bookmarks components

* refactor: add focusLoop support to DropdownPopup and update BookmarkMenu with Tooltip

* refactor: integrate TooltipAnchor into ExportAndShareMenu for enhanced accessibility

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2024-12-29 17:31:41 -05:00
Danny Avila
d9c59b08e6 🔑 feat: Implement TTL Mgmt. for In-Memory Keyv Stores (#5127)
This commit updates the cache stores in the `getLogStores.js` file to use Redis as the store if the `USE_REDIS` environment variable is enabled. It also adds a new environment variable `DEBUG_MEMORY_CACHE` to enable debugging of the memory cache.
2024-12-28 17:32:05 -05:00
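
A hedged sketch of the pattern this commit describes: pick Redis when `USE_REDIS` is enabled, otherwise fall back to an in-memory Keyv store whose entries carry a TTL so they are evicted instead of accumulating. Names and the TTL value are illustrative.

```js
const Keyv = require('keyv');

function createStore(namespace, ttl = 10 * 60 * 1000) {
  if (process.env.USE_REDIS === 'true') {
    // Connection URI form requires the @keyv/redis adapter to be installed.
    return new Keyv(process.env.REDIS_URI, { namespace });
  }
  // In-memory fallback: the TTL keeps stale entries from piling up.
  return new Keyv({ namespace, ttl });
}

const bansCache = createStore('bans');
```
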
Danny Avila
24cad6bbd4 🤖 feat: Support Google Agents, fix Various Provider Configurations (#5126)
* feat: Refactor ModelEndHandler to collect usage metadata only if it exists

* feat: google tool end handling, custom anthropic class for better token ux

* refactor: differentiate between client <> request options

* feat: initial support for google agents

* feat: only cache messages with non-empty text

* feat: Cache non-empty messages in chatV2 controller

* fix: anthropic llm client options llmConfig

* refactor: streamline client options handling in LLM configuration

* fix: VertexAI Agent Auth & Tool Handling

* fix: additional fields for llmConfig, however customHeaders are not supported by langchain, requires PR

* feat: set default location for vertexai LLM configuration

* fix: outdated OpenAI Client options for getLLMConfig

* chore: agent provider options typing

* chore: add note about currently unsupported customHeaders in langchain GenAI client

* fix: skip transaction creation when rawAmount is NaN
2024-12-28 17:15:03 -05:00
Danny Avila
a423eb8c7b fix: Improve Accessibility in Endpoints Menu/Navigation (#5123)
* fix: prevent mobile nav toggle from being focusable when not in mobile view, add types to <NavToggle/>

* fix: appropriate endpoint menu item role, add up/down focus mgmt, ensure set api key is focusable and accessible

* fix: localize link titles and update text color for improved accessibility in Nav component
2024-12-28 12:58:12 -05:00
Marco Beretta
d6f1ecf75c 🔒 fix: update refresh token handling to use plain token instead of hashed token (#5088)
* 🔒 fix: update refresh token handling to use plain token instead of hashed token

* 🔒 fix: simplify logoutUser by using plain refresh token for session lookup
2024-12-23 18:38:16 +01:00
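
A rough sketch of what the fix above implies — looking sessions up by the plain refresh token the client actually holds, rather than a hash that can never match; the `Session` model and field names are assumptions for illustration:

```js
// Hypothetical sketch: plain-token session lookup during logout.
const Session = require('~/models/Session'); // assumed model path

async function logoutUser(userId, refreshToken) {
  // Look up by the same plain value stored in the client's cookie;
  // hashing it first would miss the stored session, leaving it to
  // persist after a password reset.
  const session = await Session.findOne({ user: userId, refreshToken });
  if (session) {
    await session.deleteOne();
  }
}
```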
Alex Torregrosa
04923dd185 🐋 refactor: Reduce Dockerfile.multi container size (#5066)
* fix: Reduce Dockerfile.multi container size

Reduced container size from 1.46 GB to 1.12 GB.

* Use `npm ci` without devDependencies for final image
* Remove unneeded `npm prune` commands

* Update Dockerfile.multi

---------

Co-authored-by: Danny Avila <danacordially@gmail.com>
2024-12-23 05:17:05 -05:00
Marco Beretta
dfe5498301 🎨 feat: enhance UI & accessibility in file handling components (#5086)
* feat: Add localization for page display and enhance button styles

* refactor: improve image preview component styles

* refactor: enhance modal close behavior and prevent refocus on certain elements

* refactor: enhance file row layout and improve image preview animation
2024-12-23 05:14:40 -05:00
Marco Beretta
bdb222d5f4 🔒 fix: resolve session persistence post password reset (#5077)
* feat: Implement session management with CRUD operations and integrate into user workflows

* refactor: Update session model import paths and enhance session creation logic in AuthService

* refactor: Validate session and user ID formats in session management functions

* style: Enhance UI components with improved styling and accessibility features

* chore: Update login form tests to use getByTestId instead of getByRole, remove console.log()

* chore: Update login form tests to use getByTestId instead of getByRole

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2024-12-23 05:12:07 -05:00
Marco Beretta
9bca2ae953 📘 docs: update readme.md (#5065) 2024-12-23 04:46:51 -05:00
Danny Avila
9b118d42de v0.7.6 (#5064)
* docs: Update README to include Model Context Protocol support and enhance access descriptions

* fix: Update civic integrity threshold to use 'BLOCK_NONE' as default

* fix: Update GOOGLE_MODELS in .env.example and adjust civic integrity threshold for new model compatibility

* v0.7.6

* feat: Add 'gemini-2.0-flash-thinking-exp' model to googleModels context windows
2024-12-20 11:43:37 -05:00
Danny Avila
792ae03017 🌍 i18n: Updated Localizations (#5050)
* feat: Add Arabic localization for API key input and related UI elements

* i18n: updated translations
2024-12-19 14:27:53 -05:00
Danny Avila
3fbbcb1cfe refactor: Integrate Capabilities into Agent File Uploads and Tool Handling (#5048)
* refactor: support drag/drop files for agents, handle undefined tool_resource edge cases

* refactor: consolidate endpoints config logic to dedicated getter

* refactor: Enhance agent tools loading logic to respect capabilities and filter tools accordingly

* refactor: Integrate endpoint capabilities into file upload dropdown for dynamic resource handling

* refactor: Implement capability checks for agent file upload operations

* fix: non-image tool_resource check
2024-12-19 13:04:48 -05:00
Danny Avila
d68c874db4 🤖 feat: Support new o1 model (#5039) 2024-12-18 14:40:58 -05:00
Danny Avila
f873587e5f 🐛 fix: Correct Model Parameters Merging and Panel UI (#5038)
* fix: Model Panel, watching wrong form field

* fix: Refactor agent initialization to merge model parameters correctly
2024-12-18 13:53:59 -05:00
Alex Torregrosa
000641c619 🐛 fix: Gemini system instructions not sent with null RAG_API_URL (#4920)
System instructions were not being sent to Gemini models when `RAG_API_URL` was not set, as the original promptPrefix was not being populated.
2024-12-18 13:26:54 -05:00
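
A simplified sketch of the kind of guard this fix implies: populate the system instruction from promptPrefix on both paths, not only when `RAG_API_URL` is set. Names here are illustrative, not the actual client code:

```js
// Hypothetical sketch: assemble system instructions with or without RAG.
function buildSystemInstruction({ promptPrefix, ragContext }) {
  // Before the fix, promptPrefix was only assembled on the RAG path,
  // so without RAG_API_URL the instructions were silently dropped.
  let instructions = promptPrefix ?? '';
  if (process.env.RAG_API_URL && ragContext) {
    instructions = `${ragContext}\n\n${instructions}`;
  }
  return instructions.trim() ? { parts: [{ text: instructions }] } : undefined;
}
```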
Danny Avila
3ceb227507 🛡️ feat: Google Reverse Proxy support, CIVIC_INTEGRITY harm category (#5037)
* 🛡️ feat: Google Reverse Proxy support, `CIVIC_INTEGRITY` harm category

* 🔧 chore: Update @langchain/google-vertexai to version 0.1.4 in package.json and package-lock.json

* fix: revert breaking Vertex AI changes

---------

Co-authored-by: KiGamji <maloyh44@gmail.com>
2024-12-18 12:13:16 -05:00
Danny Avila
22a87b6162 🔧 fix: Update maxContextTokens calculation to use correct model identifier for Azure (#5035) 2024-12-18 11:11:38 -05:00
Danny Avila
e8bde332c2 feat: Implement Conversation Duplication & UI Improvements (#5036)
* feat(ui): enhance conversation components and add duplication

- feat: add conversation duplication functionality
- fix: resolve OGDialogTemplate display issues
- style: improve mobile dropdown component design
- chore: standardize shared link title formatting

* style: update active item background color in select-item

* feat(conversation): add duplicate conversation functionality and UI integration

* feat(conversation): enable title renaming on double-click and improve input focus styles

* fix(conversation): remove "(Copy)" suffix from duplicated conversation title in logging

* fix(RevokeKeysButton): correct className duration property for smoother transitions

* refactor(conversation): ensure proper parent-child relationships and timestamps when cloning messages

---------

Co-authored-by: Marco Beretta <81851188+berry-13@users.noreply.github.com>
2024-12-18 11:10:34 -05:00
Danny Avila
649c7a6032 🔧 fix: Model Key Retrieval to Account for Bedrock Regions (#5029)
* 🔧 fix: model key retrieval logic to account for Bedrock region

* fix: edit preset dialog styling and potential max depth error with agents endpoint
2024-12-17 23:04:51 -05:00
Danny Avila
d3cafeee96 🔍 feat: Add Entity ID Support for File Search Shared Resources (#5028) 2024-12-17 22:11:18 -05:00
Danny Avila
18ad89be2c 🤖 feat: Add Agent Duplication Functionality with Permission (#5022)
* 🤖 feat: Add Agent Duplication Functionality with Permission

* 🐛 fix: Enhance Agent Duplication Logic and Filter Sensitive Data

* refactor(agents/v1): reorganized variables and error logging

* refactor: remove duplication permission

* chore: update librechat-data-provider version to 0.7.64

* fix: optimize agent duplication

---------

Co-authored-by: Marco Beretta <81851188+berry-13@users.noreply.github.com>
2024-12-17 19:47:39 -05:00
Danny Avila
16eed5f32d 🦙 feat: update AWS Bedrock pricing and token metadata for Meta models (#5024) 2024-12-17 17:18:49 -05:00
Danny Avila
e391347b9e 🔧 feat: Initial MCP Support (Tools) (#5015)
* 📝 chore: Add comment to clarify purpose of check_updates.sh script

* feat: mcp package

* feat: add librechat-mcp package and update dependencies

* feat: refactor MCPConnectionSingleton to handle transport initialization and connection management

* feat: change private methods to public in MCPConnectionSingleton for improved accessibility

* feat: filesystem demo

* chore: everything demo and move everything under mcp workspace

* chore: move ts-node to mcp workspace

* feat: mcp examples

* feat: working sse MCP example

* refactor: rename MCPConnectionSingleton to MCPConnection for clarity

* refactor: replace MCPConnectionSingleton with MCPConnection for consistency

* refactor: manager/connections

* refactor: update MCPConnection to use type definitions from mcp types

* refactor: update MCPManager to use winston logger and enhance server initialization

* refactor: share logger between connections and manager

* refactor: add schema definitions and update MCPManager to accept logger parameter

* feat: map available MCP tools

* feat: load manifest tools

* feat: add MCP tools delimiter constant and update plugin key generation (see the sketch after this commit)

* feat: call MCP tools

* feat: update librechat-data-provider version to 0.7.63 and enhance StdioOptionsSchema with additional properties

* refactor: simplify typing

* chore: update types/packages

* feat: MCP Tool Content parsing

* chore: update dependencies and improve package configurations

* feat: add 'mcp' directory to package and update configurations

* refactor: return CONTENT_AND_ARTIFACT format for MCP callTool

* chore: bump @librechat/agents

* WIP: MCP artifacts

* chore: bump @librechat/agents to v1.8.7

* fix: ensure filename has extension when saving base64 image

* fix: move base64 buffer conversion before filename extension check

* chore: update backend review workflow to install MCP package

* fix: use correct `mime` method

* fix: enhance file metadata with message and tool call IDs in image saving process

* fix: refactor ToolCall component to handle MCP tool calls and improve domain extraction

* fix: update ToolItem component for default isInstalled value and improve localization in ToolSelectDialog

* fix: update ToolItem component to use consistent text color for tool description

* style: add theming to ToolSelectDialog

* fix: improve domain extraction logic in ToolCall component

* refactor: conversation item theming, fix rename UI bug, optimize props, add missing types

* feat: enhance MCP options schema with base options (starting with iconPath) and make transport type optional, inferring it from other option fields

* fix: improve reconnection logic with parallel init and exponential backoff and enhance transport debug logging

* refactor: improve logging format

* refactor: improve logging of available tools by displaying tool names

* refactor: improve reconnection/connection logic

* feat: add MCP package build process to Dockerfile

* feat: add fallback icon for tools without an image in ToolItem component

* feat: Assistants Support for MCP Tools

* fix(build): configure rollup to use output.dir for dynamic imports

* chore: update @librechat/agents to version 1.8.8 and add @langchain/anthropic dependency

* fix: update CONFIG_VERSION to 1.2.0
2024-12-17 13:12:57 -05:00
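
As referenced in the delimiter bullet above, a minimal sketch of namespacing MCP tool keys by server; the delimiter value and helper names are hypothetical:

```js
// Hypothetical sketch: combine server and tool names into one plugin key
// so tools with the same name on different MCP servers don't collide.
const MCP_DELIMITER = '__mcp__'; // assumed value, for illustration only

function toPluginKey(toolName, serverName) {
  return `${toolName}${MCP_DELIMITER}${serverName}`;
}

function fromPluginKey(pluginKey) {
  const [toolName, serverName] = pluginKey.split(MCP_DELIMITER);
  return { toolName, serverName };
}
```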
Danny Avila
0a97ad3915 🛠️ Fix: Update Agent Cache and Improve Actions UI (#5020)
* style: improve a11y, localization, and styling consistency of actions input form

* refactor: move agent mutations to dedicated module

* fix: update agent cache on agent deletion + delete and update actions
2024-12-17 12:45:58 -05:00
Danny Avila
6ef05dd2e6 🔧 fix: Add modelLabel to OpenAIClient and PluginsClient options (#4995) 2024-12-14 15:31:50 -05:00
Danny Avila
f15035542f 🐛 fix: Enforced Model Spec Icons/Labels and Agent Descriptions (#4979)
* fix: Previous convos missing model spec info when enforce is set to `true` #4749

* refactor: Include description field in agent list response
2024-12-13 16:15:48 -05:00
Danny Avila
0a5bc503b0 🙌 a11y: Accessibility Improvements (#4978)
* 🔃 fix: Safeguard against null token in SSE refresh token handling

* 🔃 fix: Update import path for AnnounceOptions in LiveAnnouncer component

* 🔃 a11y: Add aria-live attribute for accessibility in error messages

* fix: prevent double screen reader notification for toast

* 🔃 a11y: Enhance accessibility for main menus and buttons with ARIA roles and labels

* refactor: better alt text for logo on login page #4095

* refactor: remove unused import for DropdownNoState in Voices component

* fix: Focus management issue in the Export Options Modal #4100
2024-12-13 15:44:22 -05:00
Danny Avila
763693cc1b 🔐 fix: Assign ADMIN role based on first registration in LDAP strategy (#4974) 2024-12-13 11:40:24 -05:00
rio2dev
4587d56d92 🔊 feat: Add Estonian, Latvian, and Lithuanian to Language Dropdown (#4881)
* Add Estonian, Latvian, and Lithuanian languages to the STT dropdown list
2024-12-13 11:40:30 +01:00
Andrés Restrepo
6f9bbba3fc 🔃 fix: Exclude OAuth Routes From Service Worker Navigation (#4956) 2024-12-12 13:03:06 -05:00
Andrés Restrepo
43d10a4e43 fix: Handle Circular References in CONSOLE_JSON Log Truncation (#4958) 2024-12-12 13:02:44 -05:00
Danny Avila
69bd8e3644 🔐 feat: Implement Allowed Action Domains (#4964)
* chore: RequestExecutor typing

* feat: allowed action domains

* fix: rename TAgentsEndpoint to TAssistantEndpoint in typedefs

* chore: update librechat-data-provider version to 0.7.62
2024-12-12 12:52:42 -05:00
Danny Avila
e82af236bc 🤖 feat: Add Agents librechat.yaml Configuration (#4953)
* feat: CONFIG_VERSION v1.1.9, agents config

* refactor: Assistants Code Interpreter Toggle Improved Accessibility

* feat: Agents Config
2024-12-12 08:58:00 -05:00
Danny Avila
51e016ef2c 📑 docs: fix Portkey AI bad indentation 2024-12-11 15:28:49 -05:00
Danny Avila
1dbe6ee75d feat: Add Current Datetime to Assistants (v1/v2) (#4952)
* Feature: Added ability to send current date and time to v1 and v2 assistants

* remove date_feature.patch

* fix: rename append_today_date to append_current_datetime

* feat: Refactor time handling in chatV1 and chatV2, add date and time utility functions

* fix: Add warning log and response for missing run values in abortRun middleware

---------

Co-authored-by: Max Sanna <max@maxsanna.com>
2024-12-11 15:26:18 -05:00
Danny Avila
b5c9144127 🚀 feat: Add Gemini 2.0 Support, Update Packages and Deprecations (#4951)
* chore: Comment out deprecated MongoDB connection options in connectDb.js

* replaced deprecated MongoDB count() function with countDocuments() (see the sketch after this commit)

* npm audit fix (package-lock cleanup)

* chore: Specify .env file in launch configuration

* feat: gemini-2.0

* chore: bump express to 4.21.2 to address CVE-2024-52798

* chore: remove redundant comment for .env file specification in launch configuration

---------

Co-authored-by: neturmel <neturmel@gmx.de>
2024-12-11 14:11:27 -05:00
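
For the `count()` deprecation called out above, the replacement is mechanical; a generic Mongoose example with a hypothetical model:

```js
const Message = require('~/models/Message'); // hypothetical model import

async function countConvoMessages(conversationId) {
  // Mongoose deprecated Model.count(); countDocuments() is the drop-in
  // replacement for counting documents that match a filter.
  return Message.countDocuments({ conversationId });
}
```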
Danny Avila
4640e1b124 🛡️ feat: Add Role Dropdown to Prompt/Agents Admin Settings (#4922)
* style: update AdminSettings dialog content styles for improved accessibility/theming

* style: update icon colors in ExportAndShareMenu for improved theming

* feat: enhance DropdownPopup component with additional props for customization

* feat: add role selection dropdown to AdminSettings for enhanced user permissions management

* feat: add role selection dropdown to AdminSettings for Prompt permission management

* style: add gap to button in AdminSettings for improved layout

* feat: add warning message for Admin role access in Permissions settings
2024-12-09 19:50:03 -05:00
Danny Avila
1c05251826 🧵 fix: Assistants API Thread ID Handling (#4912) 2024-12-09 08:38:39 -05:00
Danny Avila
cd1184a302 📑 docs: update README.md (#4904)
* 📑 docs: update README.md to enhance feature descriptions and organization

* 📑 docs: Revise README.md for improved feature clarity and organization

* 📑 docs: Update README.md for improved clarity and organization of AI provider compatibility

* 📑 docs: Update AI Model Selection section in README.md for improved clarity and consistency

* 📑 docs: Update README.md to include Email Login support in Multi-User Authentication section
2024-12-07 21:53:36 -05:00
Danny Avila
dc728480f4 🤖 feat: Add Vision Models; fix: Agents user_provided Keys (#4903)
* 🤖 feat: add new vision models

* fix: agent key expiry setting and typing in useChatFunctions
2024-12-07 21:21:03 -05:00
Danny Avila
a1c7110a94 refactor(userSchema): unique index definitions using partialFilterExpression instead of sparse 2024-03-07 11:06:14 -05:00
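
A partial filter expression applies the unique constraint only to documents where the field actually exists, so multiple documents with a missing or null value don't collide the way they can with `sparse` on some backends. A minimal Mongoose sketch, with illustrative field names:

```js
const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({
  email: { type: String, required: true, unique: true },
  googleId: { type: String }, // optional social login field
});

// Unique only among documents that have a googleId at all.
userSchema.index(
  { googleId: 1 },
  { unique: true, partialFilterExpression: { googleId: { $exists: true } } },
);

module.exports = mongoose.model('User', userSchema);
```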
762 changed files with 43112 additions and 53425 deletions

View File

@@ -53,7 +53,7 @@ DEBUG_CONSOLE=false
# Endpoints #
#===================================================#
# ENDPOINTS=openAI,assistants,azureOpenAI,bingAI,google,gptPlugins,anthropic
# ENDPOINTS=openAI,assistants,azureOpenAI,google,gptPlugins,anthropic
PROXY=
@@ -105,13 +105,6 @@ ANTHROPIC_API_KEY=user_provided
# AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME= # Deprecated
# PLUGINS_USE_AZURE="true" # Deprecated
#============#
# BingAI #
#============#
BINGAI_TOKEN=user_provided
# BINGAI_HOST=https://cn.bing.com
#=================#
# AWS Bedrock #
#=================#
@@ -138,10 +131,13 @@ BINGAI_TOKEN=user_provided
#============#
GOOGLE_KEY=user_provided
# GOOGLE_REVERSE_PROXY=
# Some reverse proxies do not support the X-goog-api-key header; uncomment to pass the API key in the Authorization header instead.
# GOOGLE_AUTH_HEADER=true
# Gemini API (AI Studio)
# GOOGLE_MODELS=gemini-exp-1121,gemini-exp-1114,gemini-1.5-flash-latest,gemini-1.0-pro,gemini-1.0-pro-001,gemini-1.0-pro-latest,gemini-1.0-pro-vision-latest,gemini-1.5-pro-latest,gemini-pro,gemini-pro-vision
# GOOGLE_MODELS=gemini-2.0-flash-exp,gemini-2.0-flash-thinking-exp-1219,gemini-exp-1121,gemini-exp-1114,gemini-1.5-flash-latest,gemini-1.0-pro,gemini-1.0-pro-001,gemini-1.0-pro-latest,gemini-1.0-pro-vision-latest,gemini-1.5-pro-latest,gemini-pro,gemini-pro-vision
# Vertex AI
# GOOGLE_MODELS=gemini-1.5-flash-preview-0514,gemini-1.5-pro-preview-0514,gemini-1.0-pro-vision-001,gemini-1.0-pro-002,gemini-1.0-pro-001,gemini-pro-vision,gemini-1.0-pro
@@ -167,13 +163,14 @@ GOOGLE_KEY=user_provided
# GOOGLE_SAFETY_HATE_SPEECH=BLOCK_ONLY_HIGH
# GOOGLE_SAFETY_HARASSMENT=BLOCK_ONLY_HIGH
# GOOGLE_SAFETY_DANGEROUS_CONTENT=BLOCK_ONLY_HIGH
# GOOGLE_SAFETY_CIVIC_INTEGRITY=BLOCK_ONLY_HIGH
#============#
# OpenAI #
#============#
OPENAI_API_KEY=user_provided
# OPENAI_MODELS=gpt-4o,chatgpt-4o-latest,gpt-4o-mini,gpt-3.5-turbo-0125,gpt-3.5-turbo-0301,gpt-3.5-turbo,gpt-4,gpt-4-0613,gpt-4-vision-preview,gpt-3.5-turbo-0613,gpt-3.5-turbo-16k-0613,gpt-4-0125-preview,gpt-4-turbo-preview,gpt-4-1106-preview,gpt-3.5-turbo-1106,gpt-3.5-turbo-instruct,gpt-3.5-turbo-instruct-0914,gpt-3.5-turbo-16k
# OPENAI_MODELS=o1,o1-mini,o1-preview,gpt-4o,chatgpt-4o-latest,gpt-4o-mini,gpt-3.5-turbo-0125,gpt-3.5-turbo-0301,gpt-3.5-turbo,gpt-4,gpt-4-0613,gpt-4-vision-preview,gpt-3.5-turbo-0613,gpt-3.5-turbo-16k-0613,gpt-4-0125-preview,gpt-4-turbo-preview,gpt-4-1106-preview,gpt-3.5-turbo-1106,gpt-3.5-turbo-instruct,gpt-3.5-turbo-instruct-0914,gpt-3.5-turbo-16k
DEBUG_OPENAI=false
@@ -252,11 +249,16 @@ AZURE_AI_SEARCH_SEARCH_OPTION_SELECT=
# DALLE3_AZURE_API_VERSION=
# DALLE2_AZURE_API_VERSION=
# Google
#-----------------
GOOGLE_SEARCH_API_KEY=
GOOGLE_CSE_ID=
# YOUTUBE
#-----------------
YOUTUBE_API_KEY=
# SerpAPI
#-----------------
SERPAPI_API_KEY=
@@ -387,12 +389,22 @@ FACEBOOK_CALLBACK_URL=/oauth/facebook/callback
GITHUB_CLIENT_ID=
GITHUB_CLIENT_SECRET=
GITHUB_CALLBACK_URL=/oauth/github/callback
# GitHub Enterprise
# GITHUB_ENTERPRISE_BASE_URL=
# GITHUB_ENTERPRISE_USER_AGENT=
# Google
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
GOOGLE_CALLBACK_URL=/oauth/google/callback
# Apple
APPLE_CLIENT_ID=
APPLE_TEAM_ID=
APPLE_KEY_ID=
APPLE_PRIVATE_KEY_PATH=
APPLE_CALLBACK_URL=/oauth/apple/callback
# OpenID
OPENID_CLIENT_ID=
OPENID_CLIENT_SECRET=
@@ -510,4 +522,9 @@ HELP_AND_FAQ_URL=https://librechat.ai
# no-cache: Forces validation with server before using cached version
# no-store: Prevents storing the response entirely
# must-revalidate: Prevents using stale content when offline
# must-revalidate: Prevents using stale content when offline
#=====================================================#
# OpenWeather #
#=====================================================#
OPENWEATHER_API_KEY=

View File

@@ -1,173 +0,0 @@
module.exports = {
env: {
browser: true,
es2021: true,
node: true,
commonjs: true,
es6: true,
},
extends: [
'eslint:recommended',
'plugin:react/recommended',
'plugin:react-hooks/recommended',
'plugin:jest/recommended',
'prettier',
'plugin:jsx-a11y/recommended',
],
ignorePatterns: [
'client/dist/**/*',
'client/public/**/*',
'e2e/playwright-report/**/*',
'packages/data-provider/types/**/*',
'packages/data-provider/dist/**/*',
'packages/data-provider/test_bundle/**/*',
'data-node/**/*',
'meili_data/**/*',
'node_modules/**/*',
],
parser: '@typescript-eslint/parser',
parserOptions: {
ecmaVersion: 'latest',
sourceType: 'module',
ecmaFeatures: {
jsx: true,
},
},
plugins: ['react', 'react-hooks', '@typescript-eslint', 'import', 'jsx-a11y'],
rules: {
'react/react-in-jsx-scope': 'off',
'@typescript-eslint/ban-ts-comment': ['error', { 'ts-ignore': 'allow' }],
indent: ['error', 2, { SwitchCase: 1 }],
'max-len': [
'error',
{
code: 120,
ignoreStrings: true,
ignoreTemplateLiterals: true,
ignoreComments: true,
},
],
'linebreak-style': 0,
curly: ['error', 'all'],
semi: ['error', 'always'],
'object-curly-spacing': ['error', 'always'],
'no-multiple-empty-lines': ['error', { max: 1 }],
'no-trailing-spaces': 'error',
'comma-dangle': ['error', 'always-multiline'],
// "arrow-parens": [2, "as-needed", { requireForBlockBody: true }],
// 'no-plusplus': ['error', { allowForLoopAfterthoughts: true }],
'no-console': 'off',
'import/no-cycle': 'error',
'import/no-self-import': 'error',
'import/extensions': 'off',
'no-promise-executor-return': 'off',
'no-param-reassign': 'off',
'no-continue': 'off',
'no-restricted-syntax': 'off',
'react/prop-types': ['off'],
'react/display-name': ['off'],
'no-nested-ternary': 'error',
'no-unused-vars': ['error', { varsIgnorePattern: '^_' }],
quotes: ['error', 'single'],
},
overrides: [
{
files: ['**/*.ts', '**/*.tsx'],
rules: {
'no-unused-vars': 'off', // off because it conflicts with '@typescript-eslint/no-unused-vars'
'react/display-name': 'off',
'@typescript-eslint/no-unused-vars': 'warn',
},
},
{
files: ['rollup.config.js', '.eslintrc.js', 'jest.config.js'],
env: {
node: true,
},
},
{
files: [
'**/*.test.js',
'**/*.test.jsx',
'**/*.test.ts',
'**/*.test.tsx',
'**/*.spec.js',
'**/*.spec.jsx',
'**/*.spec.ts',
'**/*.spec.tsx',
'setupTests.js',
],
env: {
jest: true,
node: true,
},
rules: {
'react/display-name': 'off',
'react/prop-types': 'off',
'react/no-unescaped-entities': 'off',
},
},
{
files: ['**/*.ts', '**/*.tsx'],
parser: '@typescript-eslint/parser',
parserOptions: {
project: './client/tsconfig.json',
},
plugins: ['@typescript-eslint/eslint-plugin', 'jest'],
extends: [
'plugin:@typescript-eslint/eslint-recommended',
'plugin:@typescript-eslint/recommended',
],
rules: {
'@typescript-eslint/no-explicit-any': 'error',
'@typescript-eslint/no-unnecessary-condition': 'warn',
'@typescript-eslint/strict-boolean-expressions': 'warn',
},
},
{
files: './packages/data-provider/**/*.ts',
overrides: [
{
files: '**/*.ts',
parser: '@typescript-eslint/parser',
parserOptions: {
project: './packages/data-provider/tsconfig.json',
},
},
],
},
{
files: './config/translations/**/*.ts',
parser: '@typescript-eslint/parser',
parserOptions: {
project: './config/translations/tsconfig.json',
},
},
{
files: ['./packages/data-provider/specs/**/*.ts'],
parserOptions: {
project: './packages/data-provider/tsconfig.spec.json',
},
},
],
settings: {
react: {
createClass: 'createReactClass', // Regex for Component Factory to use,
// default to "createReactClass"
pragma: 'React', // Pragma to use, default to "React"
fragment: 'Fragment', // Fragment to use (may be a property of <pragma>), default to "Fragment"
version: 'detect', // React version. "detect" automatically picks the version you have installed.
},
'import/parsers': {
'@typescript-eslint/parser': ['.ts', '.tsx'],
},
'import/resolver': {
typescript: {
project: ['./client/tsconfig.json'],
},
node: {
project: ['./client/tsconfig.json'],
},
},
},
};

View File

@@ -1,12 +1,19 @@
name: Bug Report
description: File a bug report
title: "[Bug]: "
labels: ["bug"]
labels: ["🐛 bug"]
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this bug report!
Before submitting, please:
- Search existing [Issues and Discussions](https://github.com/danny-avila/LibreChat/discussions) to see if your bug has already been reported
- Use [Discussions](https://github.com/danny-avila/LibreChat/discussions) instead of Issues for:
- General inquiries
- Help with setup
- Questions about whether you're experiencing a bug
- type: textarea
id: what-happened
attributes:
@@ -15,6 +22,23 @@ body:
placeholder: Please give as many details as possible
validations:
required: true
- type: textarea
id: version-info
attributes:
label: Version Information
description: |
If using Docker, please run and provide the output of:
```bash
docker images | grep librechat
```
If running from source, please run and provide the output of:
```bash
git rev-parse HEAD
```
placeholder: Paste the output here
validations:
required: true
- type: textarea
id: steps-to-reproduce
attributes:
@@ -39,7 +63,21 @@ body:
id: logs
attributes:
label: Relevant log output
description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
description: |
Please paste relevant logs that were created when reproducing the error.
Log locations:
- Docker: Project root directory ./logs
- npm: ./api/logs
There are two types of logs that can help diagnose the issue:
- debug logs (debug-YYYY-MM-DD.log)
- error logs (error-YYYY-MM-DD.log)
Error logs contain exact stack traces and are especially helpful, but both can provide valuable information.
Please only include the relevant portions of logs that correspond to when you reproduced the error.
For UI-related issues, browser console logs can be very helpful. You can provide these as screenshots or paste the text here.
render: shell
- type: textarea
id: screenshots
@@ -53,4 +91,4 @@ body:
description: By submitting this issue, you agree to follow our [Code of Conduct](https://github.com/danny-avila/LibreChat/blob/main/.github/CODE_OF_CONDUCT.md)
options:
- label: I agree to follow this project's Code of Conduct
required: true
required: true

View File

@@ -1,7 +1,7 @@
name: Feature Request
description: File a feature request
title: "Enhancement: "
labels: ["enhancement"]
title: "[Enhancement]: "
labels: ["enhancement"]
body:
- type: markdown
attributes:

View File

@@ -0,0 +1,33 @@
name: New Language Request
description: Request to add a new language for LibreChat translations.
title: "New Language Request: "
labels: ["✨ enhancement", "🌍 i18n"]
body:
- type: markdown
attributes:
value: |
Thank you for taking the time to submit a new language request! Please fill out the following details so we can review your request.
- type: input
id: language_name
attributes:
label: Language Name
description: Please provide the full name of the language (e.g., Spanish, Mandarin).
placeholder: e.g., Spanish
validations:
required: true
- type: input
id: iso_code
attributes:
label: ISO 639-1 Code
description: Please provide the ISO 639-1 code for the language (e.g., es for Spanish). You can refer to [this list](https://www.w3schools.com/tags/ref_language_codes.asp) for valid codes.
placeholder: e.g., es
validations:
required: true
- type: checkboxes
id: terms
attributes:
label: Code of Conduct
description: By submitting this issue, you agree to follow our [Code of Conduct](https://github.com/danny-avila/LibreChat/blob/main/.github/CODE_OF_CONDUCT.md).
options:
- label: I agree to follow this project's Code of Conduct
required: true

View File

@@ -1,7 +1,7 @@
name: Question
description: Ask your question
title: "[Question]: "
labels: ["question"]
labels: ["question"]
body:
- type: markdown
attributes:

View File

@@ -33,9 +33,12 @@ jobs:
- name: Install dependencies
run: npm ci
- name: Install Data Provider
- name: Install Data Provider Package
run: npm run build:data-provider
- name: Install MCP Package
run: npm run build:mcp
- name: Create empty auth.json file
run: |
mkdir -p api/data
@@ -58,9 +61,4 @@ jobs:
run: cd api && npm run test:ci
- name: Run librechat-data-provider unit tests
run: cd packages/data-provider && npm run test:ci
- name: Run linters
uses: wearerequired/lint-action@v2
with:
eslint: true
run: cd packages/data-provider && npm run test:ci

.github/workflows/eslint-ci.yml (new file, 73 lines)
View File

@@ -0,0 +1,73 @@
name: ESLint Code Quality Checks
on:
pull_request:
branches:
- main
- dev
- release/*
paths:
- 'api/**'
- 'client/**'
jobs:
eslint_checks:
name: Run ESLint Linting
runs-on: ubuntu-latest
permissions:
contents: read
security-events: write
actions: read
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Node.js 20.x
uses: actions/setup-node@v4
with:
node-version: 20
cache: npm
- name: Install dependencies
run: npm ci
# Run ESLint on changed files within the api/ and client/ directories.
- name: Run ESLint on changed files
env:
SARIF_ESLINT_IGNORE_SUPPRESSED: "true"
run: |
# Extract the base commit SHA from the pull_request event payload.
BASE_SHA=$(jq --raw-output .pull_request.base.sha "$GITHUB_EVENT_PATH")
echo "Base commit SHA: $BASE_SHA"
# Get changed files (only JS/TS files in api/ or client/)
CHANGED_FILES=$(git diff --name-only --diff-filter=ACMRTUXB "$BASE_SHA" HEAD | grep -E '^(api|client)/.*\.(js|jsx|ts|tsx)$' || true)
# Debug output
echo "Changed files:"
echo "$CHANGED_FILES"
# Ensure there are files to lint before running ESLint
if [[ -z "$CHANGED_FILES" ]]; then
echo "No matching files changed. Skipping ESLint."
echo "UPLOAD_SARIF=false" >> $GITHUB_ENV
exit 0
fi
# Set variable to allow SARIF upload
echo "UPLOAD_SARIF=true" >> $GITHUB_ENV
# Run ESLint
npx eslint --no-error-on-unmatched-pattern \
--config eslint.config.mjs \
--format @microsoft/eslint-formatter-sarif \
--output-file eslint-results.sarif $CHANGED_FILES || true
- name: Upload analysis results to GitHub
if: env.UPLOAD_SARIF == 'true'
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: eslint-results.sarif
wait-for-processing: true

.github/workflows/i18n-unused-keys.yml (new file, 84 lines)
View File

@@ -0,0 +1,84 @@
name: Detect Unused i18next Strings
on:
pull_request:
paths:
- "client/src/**"
jobs:
detect-unused-i18n-keys:
runs-on: ubuntu-latest
permissions:
pull-requests: write # Required for posting PR comments
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Find unused i18next keys
id: find-unused
run: |
echo "🔍 Scanning for unused i18next keys..."
# Define paths
I18N_FILE="client/src/locales/en/translation.json"
SOURCE_DIR="client/src"
# Check if translation file exists
if [[ ! -f "$I18N_FILE" ]]; then
echo "::error title=Missing i18n File::Translation file not found: $I18N_FILE"
exit 1
fi
# Extract all keys from the JSON file
KEYS=$(jq -r 'keys[]' "$I18N_FILE")
# Track unused keys
UNUSED_KEYS=()
# Check if each key is used in the source code
for KEY in $KEYS; do
if ! grep -r --include=\*.{js,jsx,ts,tsx} -q "$KEY" "$SOURCE_DIR"; then
UNUSED_KEYS+=("$KEY")
fi
done
# Output results
if [[ ${#UNUSED_KEYS[@]} -gt 0 ]]; then
echo "🛑 Found ${#UNUSED_KEYS[@]} unused i18n keys:"
echo "unused_keys=$(echo "${UNUSED_KEYS[@]}" | jq -R -s -c 'split(" ")')" >> $GITHUB_ENV
for KEY in "${UNUSED_KEYS[@]}"; do
echo "::warning title=Unused i18n Key::'$KEY' is defined but not used in the codebase."
done
else
echo "✅ No unused i18n keys detected!"
echo "unused_keys=[]" >> $GITHUB_ENV
fi
- name: Post verified comment on PR
if: env.unused_keys != '[]'
run: |
PR_NUMBER=$(jq --raw-output .pull_request.number "$GITHUB_EVENT_PATH")
# Format the unused keys list correctly, filtering out empty entries
FILTERED_KEYS=$(echo "$unused_keys" | jq -r '.[]' | grep -v '^\s*$' | sed 's/^/- `/;s/$/`/' )
COMMENT_BODY=$(cat <<EOF
### 🚨 Unused i18next Keys Detected
The following translation keys are defined in \`translation.json\` but are **not used** in the codebase:
$FILTERED_KEYS
⚠️ **Please remove these unused keys to keep the translation files clean.**
EOF
)
gh api "repos/${{ github.repository }}/issues/${PR_NUMBER}/comments" \
-f body="$COMMENT_BODY" \
-H "Authorization: token ${{ secrets.GITHUB_TOKEN }}"
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Fail workflow if unused keys found
if: env.unused_keys != '[]'
run: exit 1 # This makes the PR fail if unused keys exist

.github/workflows/locize-i18n-sync.yml (new file, 72 lines)
View File

@@ -0,0 +1,72 @@
name: Sync Locize Translations & Create Translation PR
on:
push:
branches: [main]
repository_dispatch:
types: [locize/versionPublished]
jobs:
sync-translations:
name: Sync Translation Keys with Locize
runs-on: ubuntu-latest
steps:
- name: Checkout Repository
uses: actions/checkout@v4
- name: Set Up Node.js
uses: actions/setup-node@v4
with:
node-version: 20
- name: Install locize CLI
run: npm install -g locize-cli
# Sync translations (Push missing keys & remove deleted ones)
- name: Sync Locize with Repository
if: ${{ github.event_name == 'push' }}
run: |
cd client/src/locales
locize sync --api-key ${{ secrets.LOCIZE_API_KEY }} --project-id ${{ secrets.LOCIZE_PROJECT_ID }} --language en
# When triggered by repository_dispatch, skip sync step.
- name: Skip sync step on non-push events
if: ${{ github.event_name != 'push' }}
run: echo "Skipping sync as the event is not a push."
create-pull-request:
name: Create Translation PR on Version Published
runs-on: ubuntu-latest
needs: sync-translations
permissions:
contents: write
pull-requests: write
steps:
# 1. Check out the repository.
- name: Checkout Repository
uses: actions/checkout@v4
# 2. Download translation files from locize.
- name: Download Translations from locize
uses: locize/download@v1
with:
project-id: ${{ secrets.LOCIZE_PROJECT_ID }}
path: "client/src/locales"
# 3. Create a Pull Request using built-in functionality.
- name: Create Pull Request
uses: peter-evans/create-pull-request@v7
with:
token: ${{ secrets.GITHUB_TOKEN }}
sign-commits: true
commit-message: "🌍 i18n: Update translation.json with latest translations"
base: main
branch: i18n/locize-translation-update
reviewers: danny-avila
title: "🌍 i18n: Update translation.json with latest translations"
body: |
**Description**:
- 🎯 **Objective**: Update `translation.json` with the latest translations from locize.
- 🔍 **Details**: This PR is automatically generated upon receiving a versionPublished event with version "latest". It reflects the newest translations provided by locize.
- ✅ **Status**: Ready for review.
labels: "🌍 i18n"

.github/workflows/unused-packages.yml (new file, 147 lines)
View File

@@ -0,0 +1,147 @@
name: Detect Unused NPM Packages
on: [pull_request]
jobs:
detect-unused-packages:
runs-on: ubuntu-latest
permissions:
pull-requests: write
steps:
- uses: actions/checkout@v4
- name: Use Node.js 20.x
uses: actions/setup-node@v4
with:
node-version: 20
cache: 'npm'
- name: Install depcheck
run: npm install -g depcheck
- name: Validate JSON files
run: |
for FILE in package.json client/package.json api/package.json; do
if [[ -f "$FILE" ]]; then
jq empty "$FILE" || (echo "::error title=Invalid JSON::$FILE is invalid" && exit 1)
fi
done
- name: Extract Dependencies Used in Scripts
id: extract-used-scripts
run: |
extract_deps_from_scripts() {
local package_file=$1
if [[ -f "$package_file" ]]; then
jq -r '.scripts | to_entries[].value' "$package_file" | \
grep -oE '([a-zA-Z0-9_-]+)' | sort -u > used_scripts.txt
else
touch used_scripts.txt
fi
}
extract_deps_from_scripts "package.json"
mv used_scripts.txt root_used_deps.txt
extract_deps_from_scripts "client/package.json"
mv used_scripts.txt client_used_deps.txt
extract_deps_from_scripts "api/package.json"
mv used_scripts.txt api_used_deps.txt
- name: Extract Dependencies Used in Source Code
id: extract-used-code
run: |
extract_deps_from_code() {
local folder=$1
local output_file=$2
if [[ -d "$folder" ]]; then
grep -rEho "require\\(['\"]([a-zA-Z0-9@/._-]+)['\"]\\)" "$folder" --include=\*.{js,ts,mjs,cjs} | \
sed -E "s/require\\(['\"]([a-zA-Z0-9@/._-]+)['\"]\\)/\1/" > "$output_file"
grep -rEho "import .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]" "$folder" --include=\*.{js,ts,mjs,cjs} | \
sed -E "s/import .* from ['\"]([a-zA-Z0-9@/._-]+)['\"]/\1/" >> "$output_file"
sort -u "$output_file" -o "$output_file"
else
touch "$output_file"
fi
}
extract_deps_from_code "." root_used_code.txt
extract_deps_from_code "client" client_used_code.txt
extract_deps_from_code "api" api_used_code.txt
- name: Run depcheck for root package.json
id: check-root
run: |
if [[ -f "package.json" ]]; then
UNUSED=$(depcheck --json | jq -r '.dependencies | join("\n")' || echo "")
UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat root_used_deps.txt root_used_code.txt | sort) || echo "")
echo "ROOT_UNUSED<<EOF" >> $GITHUB_ENV
echo "$UNUSED" >> $GITHUB_ENV
echo "EOF" >> $GITHUB_ENV
fi
- name: Run depcheck for client/package.json
id: check-client
run: |
if [[ -f "client/package.json" ]]; then
chmod -R 755 client
cd client
UNUSED=$(depcheck --json | jq -r '.dependencies | join("\n")' || echo "")
UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat ../client_used_deps.txt ../client_used_code.txt | sort) || echo "")
echo "CLIENT_UNUSED<<EOF" >> $GITHUB_ENV
echo "$UNUSED" >> $GITHUB_ENV
echo "EOF" >> $GITHUB_ENV
cd ..
fi
- name: Run depcheck for api/package.json
id: check-api
run: |
if [[ -f "api/package.json" ]]; then
chmod -R 755 api
cd api
UNUSED=$(depcheck --json | jq -r '.dependencies | join("\n")' || echo "")
UNUSED=$(comm -23 <(echo "$UNUSED" | sort) <(cat ../api_used_deps.txt ../api_used_code.txt | sort) || echo "")
echo "API_UNUSED<<EOF" >> $GITHUB_ENV
echo "$UNUSED" >> $GITHUB_ENV
echo "EOF" >> $GITHUB_ENV
cd ..
fi
- name: Post comment on PR if unused dependencies are found
if: env.ROOT_UNUSED != '' || env.CLIENT_UNUSED != '' || env.API_UNUSED != ''
run: |
PR_NUMBER=$(jq --raw-output .pull_request.number "$GITHUB_EVENT_PATH")
ROOT_LIST=$(echo "$ROOT_UNUSED" | awk '{print "- `" $0 "`"}')
CLIENT_LIST=$(echo "$CLIENT_UNUSED" | awk '{print "- `" $0 "`"}')
API_LIST=$(echo "$API_UNUSED" | awk '{print "- `" $0 "`"}')
COMMENT_BODY=$(cat <<EOF
### 🚨 Unused NPM Packages Detected
The following **unused dependencies** were found:
$(if [[ ! -z "$ROOT_UNUSED" ]]; then echo "#### 📂 Root \`package.json\`"; echo ""; echo "$ROOT_LIST"; echo ""; fi)
$(if [[ ! -z "$CLIENT_UNUSED" ]]; then echo "#### 📂 Client \`client/package.json\`"; echo ""; echo "$CLIENT_LIST"; echo ""; fi)
$(if [[ ! -z "$API_UNUSED" ]]; then echo "#### 📂 API \`api/package.json\`"; echo ""; echo "$API_LIST"; echo ""; fi)
⚠️ **Please remove these unused dependencies to keep your project clean.**
EOF
)
gh api "repos/${{ github.repository }}/issues/${PR_NUMBER}/comments" \
-f body="$COMMENT_BODY" \
-H "Authorization: token ${{ secrets.GITHUB_TOKEN }}"
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Fail workflow if unused dependencies found
if: env.ROOT_UNUSED != '' || env.CLIENT_UNUSED != '' || env.API_UNUSED != ''
run: exit 1

.prettierrc (new file, 19 lines)
View File

@@ -0,0 +1,19 @@
{
"tailwindConfig": "./client/tailwind.config.cjs",
"printWidth": 100,
"tabWidth": 2,
"useTabs": false,
"semi": true,
"singleQuote": true,
"trailingComma": "all",
"arrowParens": "always",
"embeddedLanguageFormatting": "auto",
"insertPragma": false,
"proseWrap": "preserve",
"quoteProps": "as-needed",
"requirePragma": false,
"rangeStart": 0,
"endOfLine": "auto",
"jsxSingleQuote": false,
"plugins": ["prettier-plugin-tailwindcss"]
}

.vscode/launch.json (3 lines changed)
View File

@@ -10,7 +10,8 @@
"env": {
"NODE_ENV": "production"
},
"console": "integratedTerminal"
"console": "integratedTerminal",
"envFile": "${workspaceFolder}/.env"
}
]
}

View File

@@ -1,4 +1,4 @@
# v0.7.5
# v0.7.7-rc1
# Base node image
FROM node:20-alpine AS node

View File

@@ -1,8 +1,8 @@
# Dockerfile.multi
# v0.7.5
# v0.7.7-rc1
# Base for all builds
FROM node:20-alpine AS base
FROM node:20-alpine AS base-min
WORKDIR /app
RUN apk --no-cache add curl
RUN npm config set fetch-retry-maxtimeout 600000 && \
@@ -10,8 +10,13 @@ RUN npm config set fetch-retry-maxtimeout 600000 && \
npm config set fetch-retry-mintimeout 15000
COPY package*.json ./
COPY packages/data-provider/package*.json ./packages/data-provider/
COPY packages/mcp/package*.json ./packages/mcp/
COPY client/package*.json ./client/
COPY api/package*.json ./api/
# Install all dependencies for every build
FROM base-min AS base
WORKDIR /app
RUN npm ci
# Build data-provider
@@ -19,7 +24,13 @@ FROM base AS data-provider-build
WORKDIR /app/packages/data-provider
COPY packages/data-provider ./
RUN npm run build
RUN npm prune --production
# Build mcp package
FROM base AS mcp-build
WORKDIR /app/packages/mcp
COPY packages/mcp ./
COPY --from=data-provider-build /app/packages/data-provider/dist /app/packages/data-provider/dist
RUN npm run build
# Client build
FROM base AS client-build
@@ -28,17 +39,18 @@ COPY client ./
COPY --from=data-provider-build /app/packages/data-provider/dist /app/packages/data-provider/dist
ENV NODE_OPTIONS="--max-old-space-size=2048"
RUN npm run build
RUN npm prune --production
# API setup (including client dist)
FROM base AS api-build
FROM base-min AS api-build
WORKDIR /app
# Install only production deps
RUN npm ci --omit=dev
COPY api ./api
COPY config ./config
COPY --from=data-provider-build /app/packages/data-provider/dist ./packages/data-provider/dist
COPY --from=mcp-build /app/packages/mcp/dist ./packages/mcp/dist
COPY --from=client-build /app/client/dist ./client/dist
WORKDIR /app/api
RUN npm prune --production
EXPOSE 3080
ENV HOST=0.0.0.0
CMD ["node", "server/index.js"]

README.md (124 lines changed)
View File

@@ -38,42 +38,85 @@
</a>
</p>
# 📃 Features
<p align="center">
<a href="https://www.librechat.ai/docs/translation">
<img
src="https://img.shields.io/badge/dynamic/json.svg?style=for-the-badge&color=2096F3&label=locize&query=%24.translatedPercentage&url=https://api.locize.app/badgedata/4cb2598b-ed4d-469c-9b04-2ed531a8cb45&suffix=%+translated"
alt="Translation Progress">
</a>
</p>
- 🖥️ UI matching ChatGPT, including Dark mode, Streaming, and latest updates
- 🤖 AI model selection:
- Anthropic (Claude), AWS Bedrock, OpenAI, Azure OpenAI, BingAI, ChatGPT, Google Vertex AI, Plugins, Assistants API (including Azure Assistants)
- ✅ Compatible across both **[Remote & Local AI services](https://www.librechat.ai/docs/configuration/librechat_yaml/ai_endpoints):**
- groq, Ollama, Cohere, Mistral AI, Apple MLX, koboldcpp, OpenRouter, together.ai, Perplexity, ShuttleAI, and more
- 🪄 Generative UI with **[Code Artifacts](https://youtu.be/GfTj7O4gmd0?si=WJbdnemZpJzBrJo3)**
- Create React, HTML code, and Mermaid diagrams right in chat
- 💾 Create, Save, & Share Custom Presets
- 🔀 Switch between AI Endpoints and Presets, mid-chat
- 🔄 Edit, Resubmit, and Continue Messages with Conversation branching
- 🌿 Fork Messages & Conversations for Advanced Context control
- 💬 Multimodal Chat:
- Upload and analyze images with Claude 3, GPT-4 (including `gpt-4o` and `gpt-4o-mini`), and Gemini Vision 📸
- Chat with Files using Custom Endpoints, OpenAI, Azure, Anthropic, & Google. 🗃️
- Advanced Agents with Files, Code Interpreter, Tools, and API Actions 🔦
- Available through the [OpenAI Assistants API](https://platform.openai.com/docs/assistants/overview) 🌤️
- Non-OpenAI Agents in Active Development 🚧
- 🌎 Multilingual UI:
- English, 中文, Deutsch, Español, Français, Italiano, Polski, Português Brasileiro,
# ✨ Features
- 🖥️ **UI & Experience** inspired by ChatGPT with enhanced design and features
- 🤖 **AI Model Selection**:
- Anthropic (Claude), AWS Bedrock, OpenAI, Azure OpenAI, Google, Vertex AI, OpenAI Assistants API (incl. Azure)
- [Custom Endpoints](https://www.librechat.ai/docs/quick_start/custom_endpoints): Use any OpenAI-compatible API with LibreChat, no proxy required
- Compatible with [Local & Remote AI Providers](https://www.librechat.ai/docs/configuration/librechat_yaml/ai_endpoints):
- Ollama, groq, Cohere, Mistral AI, Apple MLX, koboldcpp, together.ai,
- OpenRouter, Perplexity, ShuttleAI, Deepseek, Qwen, and more
- 🔧 **[Code Interpreter API](https://www.librechat.ai/docs/features/code_interpreter)**:
- Secure, Sandboxed Execution in Python, Node.js (JS/TS), Go, C/C++, Java, PHP, Rust, and Fortran
- Seamless File Handling: Upload, process, and download files directly
- No Privacy Concerns: Fully isolated and secure execution
- 🔦 **Agents & Tools Integration**:
- **[LibreChat Agents](https://www.librechat.ai/docs/features/agents)**:
- No-Code Custom Assistants: Build specialized, AI-driven helpers without coding
- Flexible & Extensible: Attach tools like DALL-E-3, file search, code execution, and more
- Compatible with Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, and more
- [Model Context Protocol (MCP) Support](https://modelcontextprotocol.io/clients#librechat) for Tools
- Use LibreChat Agents and OpenAI Assistants with Files, Code Interpreter, Tools, and API Actions
- 🪄 **Generative UI with Code Artifacts**:
- [Code Artifacts](https://youtu.be/GfTj7O4gmd0?si=WJbdnemZpJzBrJo3) allow creation of React, HTML, and Mermaid diagrams directly in chat
- 💾 **Presets & Context Management**:
- Create, Save, & Share Custom Presets
- Switch between AI Endpoints and Presets mid-chat
- Edit, Resubmit, and Continue Messages with Conversation branching
- [Fork Messages & Conversations](https://www.librechat.ai/docs/features/fork) for Advanced Context control
- 💬 **Multimodal & File Interactions**:
- Upload and analyze images with Claude 3, GPT-4o, o1, Llama-Vision, and Gemini 📸
- Chat with Files using Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, & Google 🗃️
- 🌎 **Multilingual UI**:
- English, 中文, Deutsch, Español, Français, Italiano, Polski, Português Brasileiro
- Русский, 日本語, Svenska, 한국어, Tiếng Việt, 繁體中文, العربية, Türkçe, Nederlands, עברית
- 🎨 Customizable Dropdown & Interface: Adapts to both power users and newcomers
- 📧 Verify your email to ensure secure access
- 🗣️ Chat hands-free with Speech-to-Text and Text-to-Speech magic
- Automatically send and play Audio
- 🧠 **Reasoning UI**:
- Dynamic Reasoning UI for Chain-of-Thought/Reasoning AI models like DeepSeek-R1
- 🎨 **Customizable Interface**:
- Customizable Dropdown & Interface that adapts to both power users and newcomers
- 🗣️ **Speech & Audio**:
- Chat hands-free with Speech-to-Text and Text-to-Speech
- Automatically send and play Audio
- Supports OpenAI, Azure OpenAI, and Elevenlabs
- 📥 Import Conversations from LibreChat, ChatGPT, Chatbot UI
- 📤 Export conversations as screenshots, markdown, text, json
- 🔍 Search all messages/conversations
- 🔌 Plugins, including web access, image generation with DALL-E-3 and more
- 👥 Multi-User, Secure Authentication with Moderation and Token spend tools
- ⚙️ Configure Proxy, Reverse Proxy, Docker, & many Deployment options:
- 📥 **Import & Export Conversations**:
- Import Conversations from LibreChat, ChatGPT, Chatbot UI
- Export conversations as screenshots, markdown, text, json
- 🔍 **Search & Discovery**:
- Search all messages/conversations
- 👥 **Multi-User & Secure Access**:
- Multi-User, Secure Authentication with OAuth2, LDAP, & Email Login Support
- Built-in Moderation, and Token spend tools
- ⚙️ **Configuration & Deployment**:
- Configure Proxy, Reverse Proxy, Docker, & many Deployment options
- Use completely local or deploy on the cloud
- 📖 Completely Open-Source & Built in Public
- 🧑‍🤝‍🧑 Community-driven development, support, and feedback
- 📖 **Open-Source & Community**:
- Completely Open-Source & Built in Public
- Community-driven development, support, and feedback
[For a thorough review of our features, see our docs here](https://docs.librechat.ai/) 📚
@@ -83,7 +126,8 @@ LibreChat brings together the future of assistant AIs with the revolutionary tec
With LibreChat, you no longer need to opt for ChatGPT Plus and can instead use free or pay-per-call APIs. We welcome contributions, cloning, and forking to enhance the capabilities of this advanced chatbot platform.
[![Watch the video](https://raw.githubusercontent.com/LibreChat-AI/librechat.ai/main/public/images/changelog/v0.7.5.png)](https://www.youtube.com/watch?v=IDukQ7a2f3U)
[![Watch the video](https://raw.githubusercontent.com/LibreChat-AI/librechat.ai/main/public/images/changelog/v0.7.6.gif)](https://www.youtube.com/watch?v=ilfwGQtJNlI)
Click on the thumbnail to open the video☝
---
@@ -135,6 +179,8 @@ Contributions, suggestions, bug reports and fixes are welcome!
For new features, components, or extensions, please open an issue and discuss before sending a PR.
If you'd like to help translate LibreChat into your language, we'd love your contribution! Improving our translations not only makes LibreChat more accessible to users around the world but also enhances the overall user experience. Please check out our [Translation Guide](https://www.librechat.ai/docs/translation).
---
## 💖 This project exists in its current state thanks to all the people who contribute
@@ -142,3 +188,15 @@ For new features, components, or extensions, please open an issue and discuss be
<a href="https://github.com/danny-avila/LibreChat/graphs/contributors">
<img src="https://contrib.rocks/image?repo=danny-avila/LibreChat" />
</a>
---
## 🎉 Special Thanks
We thank [Locize](https://locize.com) for their translation management tools that support multiple languages in LibreChat.
<p align="center">
<a href="https://locize.com" target="_blank" rel="noopener noreferrer">
<img src="https://locize.com/img/locize_color.svg" alt="Locize Logo" height="50">
</a>
</p>

View File

@@ -1,112 +0,0 @@
require('dotenv').config();
const { KeyvFile } = require('keyv-file');
const { EModelEndpoint } = require('librechat-data-provider');
const { getUserKey, checkUserKeyExpiry } = require('~/server/services/UserService');
const { logger } = require('~/config');
const askBing = async ({
text,
parentMessageId,
conversationId,
jailbreak,
jailbreakConversationId,
context,
systemMessage,
conversationSignature,
clientId,
invocationId,
toneStyle,
key: expiresAt,
onProgress,
userId,
}) => {
const isUserProvided = process.env.BINGAI_TOKEN === 'user_provided';
let key = null;
if (expiresAt && isUserProvided) {
checkUserKeyExpiry(expiresAt, EModelEndpoint.bingAI);
key = await getUserKey({ userId, name: 'bingAI' });
}
const { BingAIClient } = await import('nodejs-gpt');
const store = {
store: new KeyvFile({ filename: './data/cache.json' }),
};
const bingAIClient = new BingAIClient({
// "_U" cookie from bing.com
// userToken:
// isUserProvided ? key : process.env.BINGAI_TOKEN ?? null,
// If the above doesn't work, provide all your cookies as a string instead
cookies: isUserProvided ? key : process.env.BINGAI_TOKEN ?? null,
debug: false,
cache: store,
host: process.env.BINGAI_HOST || null,
proxy: process.env.PROXY || null,
});
let options = {};
if (jailbreakConversationId == 'false') {
jailbreakConversationId = false;
}
if (jailbreak) {
options = {
jailbreakConversationId: jailbreakConversationId || jailbreak,
context,
systemMessage,
parentMessageId,
toneStyle,
onProgress,
clientOptions: {
features: {
genImage: {
server: {
enable: true,
type: 'markdown_list',
},
},
},
},
};
} else {
options = {
conversationId,
context,
systemMessage,
parentMessageId,
toneStyle,
onProgress,
clientOptions: {
features: {
genImage: {
server: {
enable: true,
type: 'markdown_list',
},
},
},
},
};
// don't give those parameters for new conversation
// for new conversation, conversationSignature always is null
if (conversationSignature) {
options.encryptedConversationSignature = conversationSignature;
options.clientId = clientId;
options.invocationId = invocationId;
}
}
logger.debug('bing options', options);
const res = await bingAIClient.sendMessage(text, options);
return res;
// for reference:
// https://github.com/waylaidwanderer/node-chatgpt-api/blob/main/demos/use-bing-client.js
};
module.exports = { askBing };

View File

@@ -1,57 +0,0 @@
require('dotenv').config();
const { KeyvFile } = require('keyv-file');
const { Constants, EModelEndpoint } = require('librechat-data-provider');
const { getUserKey, checkUserKeyExpiry } = require('../server/services/UserService');
const browserClient = async ({
text,
parentMessageId,
conversationId,
model,
key: expiresAt,
onProgress,
onEventMessage,
abortController,
userId,
}) => {
const isUserProvided = process.env.CHATGPT_TOKEN === 'user_provided';
let key = null;
if (expiresAt && isUserProvided) {
checkUserKeyExpiry(expiresAt, EModelEndpoint.chatGPTBrowser);
key = await getUserKey({ userId, name: 'chatGPTBrowser' });
}
const { ChatGPTBrowserClient } = await import('nodejs-gpt');
const store = {
store: new KeyvFile({ filename: './data/cache.json' }),
};
const clientOptions = {
// Warning: This will expose your access token to a third party. Consider the risks before using this.
reverseProxyUrl:
process.env.CHATGPT_REVERSE_PROXY ?? 'https://ai.fakeopen.com/api/conversation',
// Access token from https://chat.openai.com/api/auth/session
accessToken: isUserProvided ? key : process.env.CHATGPT_TOKEN ?? null,
model: model,
debug: false,
proxy: process.env.PROXY ?? null,
user: userId,
};
const client = new ChatGPTBrowserClient(clientOptions, store);
let options = { onProgress, onEventMessage, abortController };
if (!!parentMessageId && !!conversationId) {
options = { ...options, parentMessageId, conversationId };
}
if (parentMessageId === Constants.NO_PARENT) {
delete options.conversationId;
}
const res = await client.sendMessage(text, options);
return res;
};
module.exports = { browserClient };

View File

@@ -1,6 +1,5 @@
const Anthropic = require('@anthropic-ai/sdk');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { encoding_for_model: encodingForModel, get_encoding: getEncoding } = require('tiktoken');
const {
Constants,
EModelEndpoint,
@@ -19,6 +18,7 @@ const {
} = require('./prompts');
const { getModelMaxTokens, getModelMaxOutputTokens, matchModelName } = require('~/utils');
const { spendTokens, spendStructuredTokens } = require('~/models/spendTokens');
const Tokenizer = require('~/server/services/Tokenizer');
const { sleep } = require('~/server/utils');
const BaseClient = require('./BaseClient');
const { logger } = require('~/config');
@@ -26,8 +26,6 @@ const { logger } = require('~/config');
const HUMAN_PROMPT = '\n\nHuman:';
const AI_PROMPT = '\n\nAssistant:';
const tokenizersCache = {};
/** Helper function to introduce a delay before retrying */
function delayBeforeRetry(attempts, baseDelay = 1000) {
return new Promise((resolve) => setTimeout(resolve, baseDelay * attempts));
@@ -149,7 +147,6 @@ class AnthropicClient extends BaseClient {
this.startToken = '||>';
this.endToken = '';
this.gptEncoder = this.constructor.getTokenizer('cl100k_base');
return this;
}
@@ -419,7 +416,7 @@ class AnthropicClient extends BaseClient {
}
let { context: messagesInWindow, remainingContextTokens } =
await this.getMessagesWithinTokenLimit(formattedMessages);
await this.getMessagesWithinTokenLimit({ messages: formattedMessages });
const tokenCountMap = orderedMessages
.slice(orderedMessages.length - messagesInWindow.length)
@@ -849,22 +846,18 @@ class AnthropicClient extends BaseClient {
logger.debug('AnthropicClient doesn\'t use getBuildMessagesOptions');
}
static getTokenizer(encoding, isModelName = false, extendSpecialTokens = {}) {
if (tokenizersCache[encoding]) {
return tokenizersCache[encoding];
}
let tokenizer;
if (isModelName) {
tokenizer = encodingForModel(encoding, extendSpecialTokens);
} else {
tokenizer = getEncoding(encoding, extendSpecialTokens);
}
tokenizersCache[encoding] = tokenizer;
return tokenizer;
getEncoding() {
return 'cl100k_base';
}
/**
* Returns the token count of a given text. It also checks and resets the tokenizers if necessary.
* @param {string} text - The text to get the token count for.
* @returns {number} The token count of the given text.
*/
getTokenCount(text) {
return this.gptEncoder.encode(text, 'all').length;
const encoding = this.getEncoding();
return Tokenizer.getTokenCount(text, encoding);
}
/**

View File

@@ -4,16 +4,15 @@ const {
supportsBalanceCheck,
isAgentsEndpoint,
isParamEndpoint,
EModelEndpoint,
ErrorTypes,
Constants,
CacheKeys,
Time,
} = require('librechat-data-provider');
const { getMessages, saveMessage, updateMessage, saveConvo } = require('~/models');
const { addSpaceIfNeeded, isEnabled } = require('~/server/utils');
const { truncateToolCallOutputs } = require('./prompts');
const checkBalance = require('~/models/checkBalance');
const { getFiles } = require('~/models/File');
const { getLogStores } = require('~/cache');
const TextStream = require('./TextStream');
const { logger } = require('~/config');
@@ -52,6 +51,14 @@ class BaseClient {
this.outputTokensKey = 'completion_tokens';
/** @type {Set<string>} */
this.savedMessageIds = new Set();
/**
* Flag to determine if the client re-submitted the latest assistant message.
* @type {boolean | undefined} */
this.continued;
/** @type {TMessage[]} */
this.currentMessages = [];
/** @type {import('librechat-data-provider').VisionModes | undefined} */
this.visionMode;
}
setOptions() {
@@ -95,7 +102,7 @@ class BaseClient {
* @returns {number}
*/
getTokenCountForResponse(responseMessage) {
logger.debug('`[BaseClient] recordTokenUsage` not implemented.', responseMessage);
logger.debug('[BaseClient] `recordTokenUsage` not implemented.', responseMessage);
}
/**
@@ -106,7 +113,7 @@ class BaseClient {
* @returns {Promise<void>}
*/
async recordTokenUsage({ promptTokens, completionTokens }) {
logger.debug('`[BaseClient] recordTokenUsage` not implemented.', {
logger.debug('[BaseClient] `recordTokenUsage` not implemented.', {
promptTokens,
completionTokens,
});
@@ -262,17 +269,24 @@ class BaseClient {
/**
* Adds instructions to the messages array. If the instructions object is empty or undefined,
* the original messages array is returned. Otherwise, the instructions are added to the messages
* array, preserving the last message at the end.
* array either at the beginning (default) or preserving the last message at the end.
*
* @param {Array} messages - An array of messages.
* @param {Object} instructions - An object containing instructions to be added to the messages.
* @param {boolean} [beforeLast=false] - If true, adds instructions before the last message; if false, adds at the beginning.
* @returns {Array} An array containing messages and instructions, or the original messages if instructions are empty.
*/
addInstructions(messages, instructions) {
const payload = [];
addInstructions(messages, instructions, beforeLast = false) {
if (!instructions || Object.keys(instructions).length === 0) {
return messages;
}
if (!beforeLast) {
return [instructions, ...messages];
}
// Legacy behavior: add instructions before the last message
const payload = [];
if (messages.length > 1) {
payload.push(...messages.slice(0, -1));
}
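The new ordering is easiest to see with plain objects; the updated BaseClient tests at the end of this diff exercise exactly this. A sketch (message shapes simplified):

const messages = [{ content: 'Hello' }, { content: 'Goodbye' }];
const instructions = { role: 'system', content: 'Please respond to the question.' };

client.addInstructions(messages, instructions);
// => [instructions, { content: 'Hello' }, { content: 'Goodbye' }]  (new default)

client.addInstructions(messages, instructions, true);
// => [{ content: 'Hello' }, instructions, { content: 'Goodbye' }]  (legacy beforeLast)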
@@ -287,6 +301,9 @@ class BaseClient {
}
async handleTokenCountMap(tokenCountMap) {
if (this.clientName === EModelEndpoint.agents) {
return;
}
if (this.currentMessages.length === 0) {
return;
}
@@ -335,25 +352,38 @@ class BaseClient {
* If the token limit would be exceeded by adding a message, that message is not added to the context and remains in the original array.
* The method uses `push` and `pop` operations for efficient array manipulation, and reverses the context array at the end to maintain the original order of the messages.
*
* @param {Array} _messages - An array of messages, each with a `tokenCount` property. The messages should be ordered from oldest to newest.
* @param {number} [maxContextTokens] - The max number of tokens allowed in the context. If not provided, defaults to `this.maxContextTokens`.
* @returns {Object} An object with four properties: `context`, `summaryIndex`, `remainingContextTokens`, and `messagesToRefine`.
* @param {Object} params
* @param {TMessage[]} params.messages - An array of messages, each with a `tokenCount` property. The messages should be ordered from oldest to newest.
* @param {number} [params.maxContextTokens] - The max number of tokens allowed in the context. If not provided, defaults to `this.maxContextTokens`.
 * @param {{ role: 'system', content: string, tokenCount: number }} [params.instructions] - Instructions already added to the context at index 0.
* @returns {Promise<{
* context: TMessage[],
* remainingContextTokens: number,
* messagesToRefine: TMessage[],
* summaryIndex: number,
* }>} An object with four properties: `context`, `summaryIndex`, `remainingContextTokens`, and `messagesToRefine`.
* `context` is an array of messages that fit within the token limit.
* `summaryIndex` is the index of the first message in the `messagesToRefine` array.
* `remainingContextTokens` is the number of tokens remaining within the limit after adding the messages to the context.
* `messagesToRefine` is an array of messages that were not added to the context because they would have exceeded the token limit.
*/
async getMessagesWithinTokenLimit(_messages, maxContextTokens) {
async getMessagesWithinTokenLimit({ messages: _messages, maxContextTokens, instructions }) {
// Every reply is primed with <|start|>assistant<|message|>, so we
// start with 3 tokens for the label after all messages have been counted.
let currentTokenCount = 3;
let summaryIndex = -1;
let remainingContextTokens = maxContextTokens ?? this.maxContextTokens;
let currentTokenCount = 3;
const instructionsTokenCount = instructions?.tokenCount ?? 0;
let remainingContextTokens =
(maxContextTokens ?? this.maxContextTokens) - instructionsTokenCount;
const messages = [..._messages];
const context = [];
if (currentTokenCount < remainingContextTokens) {
while (messages.length > 0 && currentTokenCount < remainingContextTokens) {
if (messages.length === 1 && instructions) {
break;
}
const poppedMessage = messages.pop();
const { tokenCount } = poppedMessage;
@@ -367,6 +397,11 @@ class BaseClient {
}
}
if (instructions) {
context.push(_messages[0]);
messages.shift();
}
const prunedMemory = messages;
summaryIndex = prunedMemory.length - 1;
remainingContextTokens -= currentTokenCount;
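A worked example of the budget arithmetic: with maxContextTokens = 4096 and an instructions message counted at 96 tokens, the loop starts from remainingContextTokens = 4096 − 96 = 4000 and currentTokenCount = 3 (the tokens priming every assistant reply). Messages are then popped newest-first until the next one would cross the budget; when instructions are present, the loop also stops before consuming the last remaining message, so the instructions at index 0 are re-added to the context afterwards and can never be evicted.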
@@ -391,12 +426,38 @@ class BaseClient {
if (instructions) {
({ tokenCount, ..._instructions } = instructions);
}
_instructions && logger.debug('[BaseClient] instructions tokenCount: ' + tokenCount);
let payload = this.addInstructions(formattedMessages, _instructions);
if (tokenCount && tokenCount > this.maxContextTokens) {
const info = `${tokenCount} / ${this.maxContextTokens}`;
const errorMessage = `{ "type": "${ErrorTypes.INPUT_LENGTH}", "info": "${info}" }`;
logger.warn(`Instructions token count exceeds max token count (${info}).`);
throw new Error(errorMessage);
}
if (this.clientName === EModelEndpoint.agents) {
const { dbMessages, editedIndices } = truncateToolCallOutputs(
orderedMessages,
this.maxContextTokens,
this.getTokenCountForMessage.bind(this),
);
if (editedIndices.length > 0) {
logger.debug('[BaseClient] Truncated tool call outputs:', editedIndices);
for (const index of editedIndices) {
formattedMessages[index].content = dbMessages[index].content;
}
orderedMessages = dbMessages;
}
}
let orderedWithInstructions = this.addInstructions(orderedMessages, instructions);
let { context, remainingContextTokens, messagesToRefine, summaryIndex } =
await this.getMessagesWithinTokenLimit(orderedWithInstructions);
await this.getMessagesWithinTokenLimit({
messages: orderedWithInstructions,
instructions,
});
logger.debug('[BaseClient] Context Count (1/2)', {
remainingContextTokens,
@@ -408,7 +469,9 @@ class BaseClient {
let { shouldSummarize } = this;
// Calculate the difference in length to determine how many messages were discarded if any
const { length } = payload;
let payload;
let { length } = formattedMessages;
length += instructions != null ? 1 : 0;
const diff = length - context.length;
const firstMessage = orderedWithInstructions[0];
const usePrevSummary =
@@ -418,18 +481,31 @@ class BaseClient {
this.previous_summary.messageId === firstMessage.messageId;
if (diff > 0) {
payload = payload.slice(diff);
payload = formattedMessages.slice(diff);
logger.debug(
`[BaseClient] Difference between original payload (${length}) and context (${context.length}): ${diff}`,
);
}
payload = this.addInstructions(payload ?? formattedMessages, _instructions);
const latestMessage = orderedWithInstructions[orderedWithInstructions.length - 1];
if (payload.length === 0 && !shouldSummarize && latestMessage) {
const info = `${latestMessage.tokenCount} / ${this.maxContextTokens}`;
const errorMessage = `{ "type": "${ErrorTypes.INPUT_LENGTH}", "info": "${info}" }`;
logger.warn(`Prompt token count exceeds max token count (${info}).`);
throw new Error(errorMessage);
} else if (
_instructions &&
payload.length === 1 &&
payload[0].content === _instructions.content
) {
const info = `${tokenCount + 3} / ${this.maxContextTokens}`;
const errorMessage = `{ "type": "${ErrorTypes.INPUT_LENGTH}", "info": "${info}" }`;
logger.warn(
`Including instructions, the prompt token count exceeds remaining max token count (${info}).`,
);
throw new Error(errorMessage);
}
if (usePrevSummary) {
@@ -518,6 +594,7 @@ class BaseClient {
} else {
latestMessage.text = generation;
}
this.continued = true;
} else {
this.currentMessages.push(userMessage);
}
@@ -625,7 +702,7 @@ class BaseClient {
await this.updateUserMessageTokenCount({ usage, tokenCountMap, userMessage, opts });
} else {
responseMessage.tokenCount = this.getTokenCountForResponse(responseMessage);
completionTokens = this.getTokenCount(completion);
completionTokens = responseMessage.tokenCount;
}
await this.recordTokenUsage({ promptTokens, completionTokens, usage });
@@ -649,15 +726,6 @@ class BaseClient {
this.responsePromise = this.saveMessageToDatabase(responseMessage, saveOptions, user);
this.savedMessageIds.add(responseMessage.messageId);
const messageCache = getLogStores(CacheKeys.MESSAGES);
messageCache.set(
responseMessageId,
{
text: responseMessage.text,
complete: true,
},
Time.FIVE_MINUTES,
);
delete responseMessage.tokenCount;
return responseMessage;
}
@@ -929,6 +997,24 @@ class BaseClient {
continue;
}
if (item.type === 'tool_call' && item.tool_call != null) {
const toolName = item.tool_call?.name || '';
if (typeof toolName === 'string' && toolName) {
numTokens += this.getTokenCount(toolName);
}
const args = item.tool_call?.args || '';
if (typeof args === 'string' && args) {
numTokens += this.getTokenCount(args);
}
const output = item.tool_call?.output || '';
if (typeof output === 'string' && output) {
numTokens += this.getTokenCount(output);
}
continue;
}
const nestedValue = item[item.type];
if (!nestedValue) {
@@ -1011,7 +1097,7 @@ class BaseClient {
file_id: { $in: fileIds },
});
await this.addImageURLs(message, files);
await this.addImageURLs(message, files, this.visionMode);
this.message_file_map[message.messageId] = files;
return message;

View File

@@ -13,7 +13,6 @@ const {
const { extractBaseURL, constructAzureURL, genAzureChatCompletion } = require('~/utils');
const { createContextHandlers } = require('./prompts');
const { createCoherePayload } = require('./llm');
const { Agent, ProxyAgent } = require('undici');
const BaseClient = require('./BaseClient');
const { logger } = require('~/config');
@@ -186,10 +185,6 @@ class ChatGPTClient extends BaseClient {
headers: {
'Content-Type': 'application/json',
},
dispatcher: new Agent({
bodyTimeout: 0,
headersTimeout: 0,
}),
};
if (this.isVisionModel) {
@@ -275,10 +270,6 @@ class ChatGPTClient extends BaseClient {
opts.headers['X-Title'] = 'LibreChat';
}
if (this.options.proxy) {
opts.dispatcher = new ProxyAgent(this.options.proxy);
}
/* hacky fixes for Mistral AI API:
- Re-orders system message to the top of the messages payload, as not allowed anywhere else
- If there is only one message and it's a system message, change the role to user

View File

@@ -1,22 +1,25 @@
const { google } = require('googleapis');
const { Agent, ProxyAgent } = require('undici');
const { concat } = require('@langchain/core/utils/stream');
const { ChatVertexAI } = require('@langchain/google-vertexai');
const { GoogleVertexAI } = require('@langchain/google-vertexai');
const { ChatGoogleVertexAI } = require('@langchain/google-vertexai');
const { ChatGoogleGenerativeAI } = require('@langchain/google-genai');
const { GoogleGenerativeAI: GenAI } = require('@google/generative-ai');
const { AIMessage, HumanMessage, SystemMessage } = require('@langchain/core/messages');
const { encoding_for_model: encodingForModel, get_encoding: getEncoding } = require('tiktoken');
const { HumanMessage, SystemMessage } = require('@langchain/core/messages');
const {
googleGenConfigSchema,
validateVisionModel,
getResponseSender,
endpointSettings,
EModelEndpoint,
ContentTypes,
VisionModes,
ErrorTypes,
Constants,
AuthKeys,
} = require('librechat-data-provider');
const { getSafetySettings } = require('~/server/services/Endpoints/google/llm');
const { encodeAndFormat } = require('~/server/services/Files/images');
const Tokenizer = require('~/server/services/Tokenizer');
const { spendTokens } = require('~/models/spendTokens');
const { getModelMaxTokens } = require('~/utils');
const { sleep } = require('~/server/utils');
const { logger } = require('~/config');
@@ -30,9 +33,7 @@ const BaseClient = require('./BaseClient');
const loc = process.env.GOOGLE_LOC || 'us-central1';
const publisher = 'google';
const endpointPrefix = `https://${loc}-aiplatform.googleapis.com`;
// const apiEndpoint = loc + '-aiplatform.googleapis.com';
const tokenizersCache = {};
const endpointPrefix = `${loc}-aiplatform.googleapis.com`;
const settings = endpointSettings[EModelEndpoint.google];
const EXCLUDED_GENAI_MODELS = /gemini-(?:1\.0|1-0|pro)/;
@@ -51,13 +52,27 @@ class GoogleClient extends BaseClient {
const serviceKey = creds[AuthKeys.GOOGLE_SERVICE_KEY] ?? {};
this.serviceKey =
serviceKey && typeof serviceKey === 'string' ? JSON.parse(serviceKey) : serviceKey ?? {};
/** @type {string | null | undefined} */
this.project_id = this.serviceKey.project_id;
this.client_email = this.serviceKey.client_email;
this.private_key = this.serviceKey.private_key;
this.project_id = this.serviceKey.project_id;
this.access_token = null;
this.apiKey = creds[AuthKeys.GOOGLE_API_KEY];
this.reverseProxyUrl = options.reverseProxyUrl;
this.authHeader = options.authHeader;
/** @type {UsageMetadata | undefined} */
this.usage;
/** The key for the usage object's input tokens
* @type {string} */
this.inputTokensKey = 'input_tokens';
/** The key for the usage object's output tokens
* @type {string} */
this.outputTokensKey = 'output_tokens';
this.visionMode = VisionModes.generative;
if (options.skipSetOptions) {
return;
}
@@ -66,7 +81,7 @@ class GoogleClient extends BaseClient {
/* Google specific methods */
constructUrl() {
return `${endpointPrefix}/v1/projects/${this.project_id}/locations/${loc}/publishers/${publisher}/models/${this.modelOptions.model}:serverStreamingPredict`;
return `https://${endpointPrefix}/v1/projects/${this.project_id}/locations/${loc}/publishers/${publisher}/models/${this.modelOptions.model}:serverStreamingPredict`;
}
async getClient() {
@@ -117,22 +132,13 @@ class GoogleClient extends BaseClient {
this.options = options;
}
this.options.examples = (this.options.examples ?? [])
.filter((ex) => ex)
.filter((obj) => obj.input.content !== '' && obj.output.content !== '');
this.modelOptions = this.options.modelOptions || {};
this.options.attachments?.then((attachments) => this.checkVisionRequest(attachments));
/** @type {boolean} Whether using a "GenerativeAI" Model */
this.isGenerativeModel = this.modelOptions.model.includes('gemini');
const { isGenerativeModel } = this;
this.isChatModel = !isGenerativeModel && this.modelOptions.model.includes('chat');
const { isChatModel } = this;
this.isTextModel =
!isGenerativeModel && !isChatModel && /code|text/.test(this.modelOptions.model);
const { isTextModel } = this;
this.isGenerativeModel =
this.modelOptions.model.includes('gemini') || this.modelOptions.model.includes('learnlm');
this.maxContextTokens =
this.options.maxContextTokens ??
@@ -168,50 +174,18 @@ class GoogleClient extends BaseClient {
this.userLabel = this.options.userLabel || 'User';
this.modelLabel = this.options.modelLabel || 'Assistant';
if (isChatModel || isGenerativeModel) {
// Use these faux tokens to help the AI understand the context since we are building the chat log ourselves.
// Trying to use "<|im_start|>" causes the AI to still generate "<" or "<|" at the end sometimes for some reason,
// without tripping the stop sequences, so I'm using "||>" instead.
this.startToken = '||>';
this.endToken = '';
this.gptEncoder = this.constructor.getTokenizer('cl100k_base');
} else if (isTextModel) {
this.startToken = '||>';
this.endToken = '';
this.gptEncoder = this.constructor.getTokenizer('text-davinci-003', true, {
'<|im_start|>': 100264,
'<|im_end|>': 100265,
});
} else {
// Previously I was trying to use "<|endoftext|>" but there seems to be some bug with OpenAI's token counting
// system that causes only the first "<|endoftext|>" to be counted as 1 token, and the rest are not treated
// as a single token. So we're using this instead.
this.startToken = '||>';
this.endToken = '';
try {
this.gptEncoder = this.constructor.getTokenizer(this.modelOptions.model, true);
} catch {
this.gptEncoder = this.constructor.getTokenizer('text-davinci-003', true);
}
}
if (!this.modelOptions.stop) {
const stopTokens = [this.startToken];
if (this.endToken && this.endToken !== this.startToken) {
stopTokens.push(this.endToken);
}
stopTokens.push(`\n${this.userLabel}:`);
stopTokens.push('<|diff_marker|>');
// I chose not to do one for `modelLabel` because I've never seen it happen
this.modelOptions.stop = stopTokens;
}
if (this.options.reverseProxyUrl) {
this.completionsUrl = this.options.reverseProxyUrl;
} else {
this.completionsUrl = this.constructUrl();
}
let promptPrefix = (this.options.promptPrefix ?? '').trim();
if (typeof this.options.artifactsPrompt === 'string' && this.options.artifactsPrompt) {
promptPrefix = `${promptPrefix ?? ''}\n${this.options.artifactsPrompt}`.trim();
}
this.options.promptPrefix = promptPrefix;
this.initializeClient();
return this;
}
@@ -243,10 +217,29 @@ class GoogleClient extends BaseClient {
}
formatMessages() {
return ((message) => ({
author: message?.author ?? (message.isCreatedByUser ? this.userLabel : this.modelLabel),
content: message?.content ?? message.text,
})).bind(this);
return ((message) => {
const msg = {
author: message?.author ?? (message.isCreatedByUser ? this.userLabel : this.modelLabel),
content: message?.content ?? message.text,
};
if (!message.image_urls?.length) {
return msg;
}
msg.content = (
!Array.isArray(msg.content)
? [
{
type: ContentTypes.TEXT,
[ContentTypes.TEXT]: msg.content,
},
]
: msg.content
).concat(message.image_urls);
return msg;
}).bind(this);
}
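// Example of the mapper's output for an attachment-bearing message
// (field values illustrative):
//   input:  { text: 'What is this?', isCreatedByUser: true, image_urls: [imagePart] }
//   output: { author: 'User',
//             content: [{ type: 'text', text: 'What is this?' }, imagePart] }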
/**
@@ -344,7 +337,6 @@ class GoogleClient extends BaseClient {
messages: [new HumanMessage(formatMessage({ message: latestMessage }))],
},
],
parameters: this.modelOptions,
};
return { prompt: payload };
}
@@ -360,23 +352,58 @@ class GoogleClient extends BaseClient {
return { prompt: formattedMessages };
}
async buildMessages(messages = [], parentMessageId) {
/**
* @param {TMessage[]} [messages=[]]
* @param {string} [parentMessageId]
*/
async buildMessages(_messages = [], parentMessageId) {
if (!this.isGenerativeModel && !this.project_id) {
throw new Error(
'[GoogleClient] a Service Account JSON Key is required for PaLM 2 and Codey models (Vertex AI)',
);
throw new Error('[GoogleClient] PaLM 2 and Codey models are no longer supported.');
}
if (this.options.promptPrefix) {
const instructionsTokenCount = this.getTokenCount(this.options.promptPrefix);
if (instructionsTokenCount > this.maxContextTokens) {
const info = `${instructionsTokenCount} / ${this.maxContextTokens}`;
const errorMessage = `{ "type": "${ErrorTypes.INPUT_LENGTH}", "info": "${info}" }`;
logger.warn(`Instructions token count exceeds max context (${info}).`);
throw new Error(errorMessage);
}
this.maxContextTokens = this.maxContextTokens - instructionsTokenCount;
}
for (let i = 0; i < _messages.length; i++) {
const message = _messages[i];
if (!message.tokenCount) {
_messages[i].tokenCount = this.getTokenCountForMessage({
role: message.isCreatedByUser ? 'user' : 'assistant',
content: message.content ?? message.text,
});
}
}
const {
payload: messages,
tokenCountMap,
promptTokens,
} = await this.handleContextStrategy({
orderedMessages: _messages,
formattedMessages: _messages,
});
if (!this.project_id && !EXCLUDED_GENAI_MODELS.test(this.modelOptions.model)) {
return await this.buildGenerativeMessages(messages);
const result = await this.buildGenerativeMessages(messages);
result.tokenCountMap = tokenCountMap;
result.promptTokens = promptTokens;
return result;
}
if (this.options.attachments && this.isGenerativeModel) {
return this.buildVisionMessages(messages, parentMessageId);
}
if (this.isTextModel) {
return this.buildMessagesPrompt(messages, parentMessageId);
const result = this.buildVisionMessages(messages, parentMessageId);
result.tokenCountMap = tokenCountMap;
result.promptTokens = promptTokens;
return result;
}
let payload = {
@@ -388,25 +415,14 @@ class GoogleClient extends BaseClient {
.map((message) => formatMessage({ message, langChain: true })),
},
],
parameters: this.modelOptions,
};
let promptPrefix = (this.options.promptPrefix ?? '').trim();
if (typeof this.options.artifactsPrompt === 'string' && this.options.artifactsPrompt) {
promptPrefix = `${promptPrefix ?? ''}\n${this.options.artifactsPrompt}`.trim();
}
if (promptPrefix) {
payload.instances[0].context = promptPrefix;
}
if (this.options.examples.length > 0) {
payload.instances[0].examples = this.options.examples;
if (this.options.promptPrefix) {
payload.instances[0].context = this.options.promptPrefix;
}
logger.debug('[GoogleClient] buildMessages', payload);
return { prompt: payload };
return { prompt: payload, tokenCountMap, promptTokens };
}
async buildMessagesPrompt(messages, parentMessageId) {
@@ -420,10 +436,7 @@ class GoogleClient extends BaseClient {
parentMessageId,
});
const formattedMessages = orderedMessages.map((message) => ({
author: message.isCreatedByUser ? this.userLabel : this.modelLabel,
content: message?.content ?? message.text,
}));
const formattedMessages = orderedMessages.map(this.formatMessages());
let lastAuthor = '';
let groupedMessages = [];
@@ -452,16 +465,6 @@ class GoogleClient extends BaseClient {
}
let promptPrefix = (this.options.promptPrefix ?? '').trim();
if (typeof this.options.artifactsPrompt === 'string' && this.options.artifactsPrompt) {
promptPrefix = `${promptPrefix ?? ''}\n${this.options.artifactsPrompt}`.trim();
}
if (promptPrefix) {
// If the prompt prefix doesn't end with the end token, add it.
if (!promptPrefix.endsWith(`${this.endToken}`)) {
promptPrefix = `${promptPrefix.trim()}${this.endToken}\n\n`;
}
promptPrefix = `\nContext:\n${promptPrefix}`;
}
if (identityPrefix) {
promptPrefix = `${identityPrefix}${promptPrefix}`;
@@ -498,7 +501,7 @@ class GoogleClient extends BaseClient {
isCreatedByUser || !isEdited
? `\n\n${message.author}:`
: `${promptPrefix}\n\n${message.author}:`;
const messageString = `${messagePrefix}\n${message.content}${this.endToken}\n`;
const messageString = `${messagePrefix}\n${message.content}\n`;
let newPromptBody = `${messageString}${promptBody}`;
context.unshift(message);
@@ -564,68 +567,48 @@ class GoogleClient extends BaseClient {
return { prompt, context };
}
async _getCompletion(payload, abortController = null) {
if (!abortController) {
abortController = new AbortController();
}
const { debug } = this.options;
const url = this.completionsUrl;
if (debug) {
logger.debug('GoogleClient _getCompletion', { url, payload });
}
const opts = {
method: 'POST',
agent: new Agent({
bodyTimeout: 0,
headersTimeout: 0,
}),
signal: abortController.signal,
};
if (this.options.proxy) {
opts.agent = new ProxyAgent(this.options.proxy);
}
const client = await this.getClient();
const res = await client.request({ url, method: 'POST', data: payload });
logger.debug('GoogleClient _getCompletion', { res });
return res.data;
}
createLLM(clientOptions) {
const model = clientOptions.modelName ?? clientOptions.model;
clientOptions.location = loc;
clientOptions.endpoint = `${loc}-aiplatform.googleapis.com`;
if (this.project_id && this.isTextModel) {
logger.debug('Creating Google VertexAI client');
return new GoogleVertexAI(clientOptions);
} else if (this.project_id && this.isChatModel) {
logger.debug('Creating Chat Google VertexAI client');
return new ChatGoogleVertexAI(clientOptions);
} else if (this.project_id) {
clientOptions.endpoint = endpointPrefix;
let requestOptions = null;
if (this.reverseProxyUrl) {
requestOptions = {
baseUrl: this.reverseProxyUrl,
};
if (this.authHeader) {
requestOptions.customHeaders = {
Authorization: `Bearer ${this.apiKey}`,
};
}
}
if (this.project_id != null) {
logger.debug('Creating VertexAI client');
return new ChatVertexAI(clientOptions);
this.visionMode = undefined;
clientOptions.streaming = true;
const client = new ChatVertexAI(clientOptions);
client.temperature = clientOptions.temperature;
client.topP = clientOptions.topP;
client.topK = clientOptions.topK;
client.topLogprobs = clientOptions.topLogprobs;
client.frequencyPenalty = clientOptions.frequencyPenalty;
client.presencePenalty = clientOptions.presencePenalty;
client.maxOutputTokens = clientOptions.maxOutputTokens;
return client;
} else if (!EXCLUDED_GENAI_MODELS.test(model)) {
logger.debug('Creating GenAI client');
return new GenAI(this.apiKey).getGenerativeModel({
...clientOptions,
model,
});
return new GenAI(this.apiKey).getGenerativeModel({ model }, requestOptions);
}
logger.debug('Creating Chat Google Generative AI client');
return new ChatGoogleGenerativeAI({ ...clientOptions, apiKey: this.apiKey });
}
async getCompletion(_payload, options = {}) {
const { parameters, instances } = _payload;
const { onProgress, abortController } = options;
const streamRate = this.options.streamRate ?? Constants.DEFAULT_STREAM_RATE;
const { messages: _messages, context, examples: _examples } = instances?.[0] ?? {};
let examples;
let clientOptions = { ...parameters, maxRetries: 2 };
initializeClient() {
let clientOptions = { ...this.modelOptions };
if (this.project_id) {
clientOptions['authOptions'] = {
@@ -636,184 +619,238 @@ class GoogleClient extends BaseClient {
};
}
if (!parameters) {
clientOptions = { ...clientOptions, ...this.modelOptions };
}
if (this.isGenerativeModel && !this.project_id) {
clientOptions.modelName = clientOptions.model;
delete clientOptions.model;
}
if (_examples && _examples.length) {
examples = _examples
.map((ex) => {
const { input, output } = ex;
if (!input || !output) {
return undefined;
}
return {
input: new HumanMessage(input.content),
output: new AIMessage(output.content),
};
})
.filter((ex) => ex);
this.client = this.createLLM(clientOptions);
return this.client;
}
clientOptions.examples = examples;
}
const model = this.createLLM(clientOptions);
async getCompletion(_payload, options = {}) {
const { onProgress, abortController } = options;
const safetySettings = getSafetySettings(this.modelOptions.model);
const streamRate = this.options.streamRate ?? Constants.DEFAULT_STREAM_RATE;
const modelName = this.modelOptions.modelName ?? this.modelOptions.model ?? '';
let reply = '';
const messages = this.isTextModel ? _payload.trim() : _messages;
if (!this.isVisionModel && context && messages?.length > 0) {
messages.unshift(new SystemMessage(context));
}
const modelName = clientOptions.modelName ?? clientOptions.model ?? '';
if (!EXCLUDED_GENAI_MODELS.test(modelName) && !this.project_id) {
const client = model;
const requestOptions = {
contents: _payload,
};
let promptPrefix = (this.options.promptPrefix ?? '').trim();
if (typeof this.options.artifactsPrompt === 'string' && this.options.artifactsPrompt) {
promptPrefix = `${promptPrefix ?? ''}\n${this.options.artifactsPrompt}`.trim();
}
if (this.options?.promptPrefix?.length) {
requestOptions.systemInstruction = {
parts: [
{
text: promptPrefix,
},
],
/** @type {Error} */
let error;
try {
if (!EXCLUDED_GENAI_MODELS.test(modelName) && !this.project_id) {
/** @type {GenAI} */
const client = this.client;
/** @type {GenerateContentRequest} */
const requestOptions = {
safetySettings,
contents: _payload,
generationConfig: googleGenConfigSchema.parse(this.modelOptions),
};
const promptPrefix = (this.options.promptPrefix ?? '').trim();
if (promptPrefix.length) {
requestOptions.systemInstruction = {
parts: [
{
text: promptPrefix,
},
],
};
}
const delay = modelName.includes('flash') ? 8 : 15;
/** @type {GenAIUsageMetadata} */
let usageMetadata;
const result = await client.generateContentStream(requestOptions);
for await (const chunk of result.stream) {
usageMetadata = !usageMetadata
? chunk?.usageMetadata
: Object.assign(usageMetadata, chunk?.usageMetadata);
const chunkText = chunk.text();
await this.generateTextStream(chunkText, onProgress, {
delay,
});
reply += chunkText;
await sleep(streamRate);
}
if (usageMetadata) {
this.usage = {
input_tokens: usageMetadata.promptTokenCount,
output_tokens: usageMetadata.candidatesTokenCount,
};
}
return reply;
}
requestOptions.safetySettings = _payload.safetySettings;
const { instances } = _payload;
const { messages, context } = instances?.[0] ?? {};
const delay = modelName.includes('flash') ? 8 : 15;
const result = await client.generateContentStream(requestOptions);
for await (const chunk of result.stream) {
const chunkText = chunk.text();
if (!this.isVisionModel && context && messages?.length > 0) {
messages.unshift(new SystemMessage(context));
}
/** @type {import('@langchain/core/messages').AIMessageChunk['usage_metadata']} */
let usageMetadata;
/** @type {ChatVertexAI} */
const client = this.client;
const stream = await client.stream(messages, {
signal: abortController.signal,
streamUsage: true,
safetySettings,
});
let delay = this.options.streamRate || 8;
if (!this.options.streamRate) {
if (this.isGenerativeModel) {
delay = 15;
}
if (modelName.includes('flash')) {
delay = 5;
}
}
for await (const chunk of stream) {
if (chunk?.usage_metadata) {
const metadata = chunk.usage_metadata;
for (const key in metadata) {
if (Number.isNaN(metadata[key])) {
delete metadata[key];
}
}
usageMetadata = !usageMetadata ? metadata : concat(usageMetadata, metadata);
}
const chunkText = chunk?.content ?? '';
await this.generateTextStream(chunkText, onProgress, {
delay,
});
reply += chunkText;
await sleep(streamRate);
}
return reply;
if (usageMetadata) {
this.usage = usageMetadata;
}
} catch (e) {
error = e;
logger.error('[GoogleClient] There was an issue generating the completion', e);
}
const stream = await model.stream(messages, {
signal: abortController.signal,
safetySettings: _payload.safetySettings,
});
let delay = this.options.streamRate || 8;
if (!this.options.streamRate) {
if (this.isGenerativeModel) {
delay = 15;
}
if (modelName.includes('flash')) {
delay = 5;
}
if (error != null && reply === '') {
const errorMessage = `{ "type": "${ErrorTypes.GoogleError}", "info": "${
error.message ?? 'The Google provider failed to generate content, please contact the Admin.'
}" }`;
throw new Error(errorMessage);
}
for await (const chunk of stream) {
const chunkText = chunk?.content ?? chunk;
await this.generateTextStream(chunkText, onProgress, {
delay,
});
reply += chunkText;
}
return reply;
}
/**
* Get stream usage as returned by this client's API response.
* @returns {UsageMetadata} The stream usage object.
*/
getStreamUsage() {
return this.usage;
}
/**
* Calculates the correct token count for the current user message based on the token count map and API usage.
* Edge case: If the calculation results in a negative value, it returns the original estimate.
* If revisiting a conversation with a chat history entirely composed of token estimates,
* the cumulative token count going forward should become more accurate as the conversation progresses.
* @param {Object} params - The parameters for the calculation.
* @param {Record<string, number>} params.tokenCountMap - A map of message IDs to their token counts.
* @param {string} params.currentMessageId - The ID of the current message to calculate.
* @param {UsageMetadata} params.usage - The usage object returned by the API.
* @returns {number} The correct token count for the current user message.
*/
calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage }) {
const originalEstimate = tokenCountMap[currentMessageId] || 0;
if (!usage || typeof usage.input_tokens !== 'number') {
return originalEstimate;
}
tokenCountMap[currentMessageId] = 0;
const totalTokensFromMap = Object.values(tokenCountMap).reduce((sum, count) => {
const numCount = Number(count);
return sum + (isNaN(numCount) ? 0 : numCount);
}, 0);
const totalInputTokens = usage.input_tokens ?? 0;
const currentMessageTokens = totalInputTokens - totalTokensFromMap;
return currentMessageTokens > 0 ? currentMessageTokens : originalEstimate;
}
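// Worked example: tokenCountMap = { m1: 50, m2: 40, current: 30 } (all estimates)
// and usage.input_tokens = 100 from the API. The current entry is zeroed, the
// remaining estimates sum to 90, so the current message is assigned 100 - 90 = 10.
// Were the remainder negative (over-estimated history), the original estimate
// of 30 would be returned instead.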
/**
* @param {object} params
* @param {number} params.promptTokens
* @param {number} params.completionTokens
* @param {UsageMetadata} [params.usage]
* @param {string} [params.model]
* @param {string} [params.context='message']
* @returns {Promise<void>}
*/
async recordTokenUsage({ promptTokens, completionTokens, model, context = 'message' }) {
await spendTokens(
{
context,
user: this.user ?? this.options.req?.user?.id,
conversationId: this.conversationId,
model: model ?? this.modelOptions.model,
endpointTokenConfig: this.options.endpointTokenConfig,
},
{ promptTokens, completionTokens },
);
}
/**
 * Stripped-down logic for generating a title. This uses the non-streaming APIs, since the user does not see titles streaming.
*/
async titleChatCompletion(_payload, options = {}) {
const { abortController } = options;
const { parameters, instances } = _payload;
const { messages: _messages, examples: _examples } = instances?.[0] ?? {};
let clientOptions = { ...parameters, maxRetries: 2 };
logger.debug('Initialized title client options');
if (this.project_id) {
clientOptions['authOptions'] = {
credentials: {
...this.serviceKey,
},
projectId: this.project_id,
};
}
if (!parameters) {
clientOptions = { ...clientOptions, ...this.modelOptions };
}
if (this.isGenerativeModel && !this.project_id) {
clientOptions.modelName = clientOptions.model;
delete clientOptions.model;
}
const model = this.createLLM(clientOptions);
let reply = '';
const messages = this.isTextModel ? _payload.trim() : _messages;
const { abortController } = options;
const modelName = clientOptions.modelName ?? clientOptions.model ?? '';
if (!EXCLUDED_GENAI_MODELS.test(modelName) && !this.project_id) {
const model = this.modelOptions.modelName ?? this.modelOptions.model ?? '';
const safetySettings = getSafetySettings(model);
if (!EXCLUDED_GENAI_MODELS.test(model) && !this.project_id) {
logger.debug('Identified titling model as GenAI version');
/** @type {GenerativeModel} */
const client = model;
const client = this.client;
const requestOptions = {
contents: _payload,
safetySettings,
generationConfig: {
temperature: 0.5,
},
};
let promptPrefix = (this.options.promptPrefix ?? '').trim();
if (typeof this.options.artifactsPrompt === 'string' && this.options.artifactsPrompt) {
promptPrefix = `${promptPrefix ?? ''}\n${this.options.artifactsPrompt}`.trim();
}
if (this.options?.promptPrefix?.length) {
requestOptions.systemInstruction = {
parts: [
{
text: promptPrefix,
},
],
};
}
const safetySettings = _payload.safetySettings;
requestOptions.safetySettings = safetySettings;
const result = await client.generateContent(requestOptions);
reply = result.response?.text();
return reply;
} else {
logger.debug('Beginning titling');
const safetySettings = _payload.safetySettings;
const titleResponse = await model.invoke(messages, {
const { instances } = _payload;
const { messages } = instances?.[0] ?? {};
const titleResponse = await this.client.invoke(messages, {
signal: abortController.signal,
timeout: 7000,
safetySettings: safetySettings,
safetySettings,
});
if (titleResponse.usage_metadata) {
await this.recordTokenUsage({
model,
promptTokens: titleResponse.usage_metadata.input_tokens,
completionTokens: titleResponse.usage_metadata.output_tokens,
context: 'title',
});
}
reply = titleResponse.content;
// TODO: RECORD TOKEN USAGE
return reply;
}
}
@@ -837,15 +874,8 @@ class GoogleClient extends BaseClient {
},
]);
if (this.isVisionModel) {
logger.warn(
`Current vision model does not support titling without an attachment; falling back to default model ${settings.model.default}`,
);
payload.parameters = { ...payload.parameters, model: settings.model.default };
}
try {
this.initializeClient();
title = await this.titleChatCompletion(payload, {
abortController: new AbortController(),
onProgress: () => {},
@@ -859,8 +889,10 @@ class GoogleClient extends BaseClient {
getSaveOptions() {
return {
endpointType: null,
artifacts: this.options.artifacts,
promptPrefix: this.options.promptPrefix,
maxContextTokens: this.options.maxContextTokens,
modelLabel: this.options.modelLabel,
iconURL: this.options.iconURL,
greeting: this.options.greeting,
@@ -874,53 +906,39 @@ class GoogleClient extends BaseClient {
}
async sendCompletion(payload, opts = {}) {
payload.safetySettings = this.getSafetySettings();
let reply = '';
reply = await this.getCompletion(payload, opts);
return reply.trim();
}
getSafetySettings() {
return [
{
category: 'HARM_CATEGORY_SEXUALLY_EXPLICIT',
threshold:
process.env.GOOGLE_SAFETY_SEXUALLY_EXPLICIT || 'HARM_BLOCK_THRESHOLD_UNSPECIFIED',
},
{
category: 'HARM_CATEGORY_HATE_SPEECH',
threshold: process.env.GOOGLE_SAFETY_HATE_SPEECH || 'HARM_BLOCK_THRESHOLD_UNSPECIFIED',
},
{
category: 'HARM_CATEGORY_HARASSMENT',
threshold: process.env.GOOGLE_SAFETY_HARASSMENT || 'HARM_BLOCK_THRESHOLD_UNSPECIFIED',
},
{
category: 'HARM_CATEGORY_DANGEROUS_CONTENT',
threshold:
process.env.GOOGLE_SAFETY_DANGEROUS_CONTENT || 'HARM_BLOCK_THRESHOLD_UNSPECIFIED',
},
];
getEncoding() {
return 'cl100k_base';
}
/* TO-DO: Handle tokens with Google tokenization NOTE: these are required */
static getTokenizer(encoding, isModelName = false, extendSpecialTokens = {}) {
if (tokenizersCache[encoding]) {
return tokenizersCache[encoding];
}
let tokenizer;
if (isModelName) {
tokenizer = encodingForModel(encoding, extendSpecialTokens);
} else {
tokenizer = getEncoding(encoding, extendSpecialTokens);
}
tokenizersCache[encoding] = tokenizer;
return tokenizer;
async getVertexTokenCount(text) {
/** @type {ChatVertexAI} */
const client = this.client ?? this.initializeClient();
const connection = client.connection;
const gAuthClient = connection.client;
const tokenEndpoint = `https://${connection._endpoint}/${connection.apiVersion}/projects/${this.project_id}/locations/${connection._location}/publishers/google/models/${connection.model}:countTokens`;
const result = await gAuthClient.request({
url: tokenEndpoint,
method: 'POST',
data: {
contents: [{ role: 'user', parts: [{ text }] }],
},
});
return result;
}
/**
* Returns the token count of a given text. It also checks and resets the tokenizers if necessary.
* @param {string} text - The text to get the token count for.
* @returns {number} The token count of the given text.
*/
getTokenCount(text) {
return this.gptEncoder.encode(text, 'all').length;
const encoding = this.getEncoding();
return Tokenizer.getTokenCount(text, encoding);
}
}

View File

@@ -1,6 +1,7 @@
const OpenAI = require('openai');
const { OllamaClient } = require('./OllamaClient');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { SplitStreamHandler, GraphEvents } = require('@librechat/agents');
const {
Constants,
ImageDetail,
@@ -13,7 +14,6 @@ const {
validateVisionModel,
mapModelToAzureConfig,
} = require('librechat-data-provider');
const { encoding_for_model: encodingForModel, get_encoding: getEncoding } = require('tiktoken');
const {
extractBaseURL,
constructAzureURL,
@@ -29,21 +29,17 @@ const {
createContextHandlers,
} = require('./prompts');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const { addSpaceIfNeeded, isEnabled, sleep } = require('~/server/utils');
const Tokenizer = require('~/server/services/Tokenizer');
const { spendTokens } = require('~/models/spendTokens');
const { isEnabled, sleep } = require('~/server/utils');
const { handleOpenAIErrors } = require('./tools/util');
const { createLLM, RunManager } = require('./llm');
const { logger, sendEvent } = require('~/config');
const ChatGPTClient = require('./ChatGPTClient');
const { summaryBuffer } = require('./memory');
const { runTitleChain } = require('./chains');
const { tokenSplit } = require('./document');
const BaseClient = require('./BaseClient');
const { logger } = require('~/config');
// Cache to store Tiktoken instances
const tokenizersCache = {};
// Counter for keeping track of the number of tokenizer calls
let tokenizerCallsCount = 0;
class OpenAIClient extends BaseClient {
constructor(apiKey, options = {}) {
@@ -69,7 +65,9 @@ class OpenAIClient extends BaseClient {
/** @type {OpenAIUsageMetadata | undefined} */
this.usage;
/** @type {boolean|undefined} */
this.isO1Model;
this.isOmni;
/** @type {SplitStreamHandler | undefined} */
this.streamHandler;
}
// TODO: PluginsClient calls this 3x, unneeded
@@ -107,7 +105,8 @@ class OpenAIClient extends BaseClient {
this.checkVisionRequest(this.options.attachments);
}
this.isO1Model = /\bo1\b/i.test(this.modelOptions.model);
const omniPattern = /\b(o1|o3)\b/i;
this.isOmni = omniPattern.test(this.modelOptions.model);
const { OPENROUTER_API_KEY, OPENAI_FORCE_PROMPT } = process.env ?? {};
if (OPENROUTER_API_KEY && !this.azure) {
@@ -147,7 +146,7 @@ class OpenAIClient extends BaseClient {
const { model } = this.modelOptions;
this.isChatCompletion =
/\bo1\b/i.test(model) || model.includes('gpt') || this.useOpenRouter || !!reverseProxy;
omniPattern.test(model) || model.includes('gpt') || this.useOpenRouter || !!reverseProxy;
this.isChatGptModel = this.isChatCompletion;
if (
model.includes('text-davinci') ||
@@ -306,75 +305,8 @@ class OpenAIClient extends BaseClient {
}
}
// Selects an appropriate tokenizer based on the current configuration of the client instance.
// It takes into account factors such as whether it's a chat completion, an unofficial chat GPT model, etc.
selectTokenizer() {
let tokenizer;
this.encoding = 'text-davinci-003';
if (this.isChatCompletion) {
this.encoding = this.modelOptions.model.includes('gpt-4o') ? 'o200k_base' : 'cl100k_base';
tokenizer = this.constructor.getTokenizer(this.encoding);
} else if (this.isUnofficialChatGptModel) {
const extendSpecialTokens = {
'<|im_start|>': 100264,
'<|im_end|>': 100265,
};
tokenizer = this.constructor.getTokenizer(this.encoding, true, extendSpecialTokens);
} else {
try {
const { model } = this.modelOptions;
this.encoding = model.includes('instruct') ? 'text-davinci-003' : model;
tokenizer = this.constructor.getTokenizer(this.encoding, true);
} catch {
tokenizer = this.constructor.getTokenizer('text-davinci-003', true);
}
}
return tokenizer;
}
// Retrieves a tokenizer either from the cache or creates a new one if one doesn't exist in the cache.
// If a tokenizer is being created, it's also added to the cache.
static getTokenizer(encoding, isModelName = false, extendSpecialTokens = {}) {
let tokenizer;
if (tokenizersCache[encoding]) {
tokenizer = tokenizersCache[encoding];
} else {
if (isModelName) {
tokenizer = encodingForModel(encoding, extendSpecialTokens);
} else {
tokenizer = getEncoding(encoding, extendSpecialTokens);
}
tokenizersCache[encoding] = tokenizer;
}
return tokenizer;
}
// Frees all encoders in the cache and resets the count.
static freeAndResetAllEncoders() {
try {
Object.keys(tokenizersCache).forEach((key) => {
if (tokenizersCache[key]) {
tokenizersCache[key].free();
delete tokenizersCache[key];
}
});
// Reset count
tokenizerCallsCount = 1;
} catch (error) {
logger.error('[OpenAIClient] Free and reset encoders error', error);
}
}
// Checks if the cache of tokenizers has reached a certain size. If it has, it frees and resets all tokenizers.
resetTokenizersIfNecessary() {
if (tokenizerCallsCount >= 25) {
if (this.options.debug) {
logger.debug('[OpenAIClient] freeAndResetAllEncoders: reached 25 encodings, resetting...');
}
this.constructor.freeAndResetAllEncoders();
}
tokenizerCallsCount++;
getEncoding() {
return this.model?.includes('gpt-4o') ? 'o200k_base' : 'cl100k_base';
}
/**
@@ -383,15 +315,8 @@ class OpenAIClient extends BaseClient {
* @returns {number} The token count of the given text.
*/
getTokenCount(text) {
this.resetTokenizersIfNecessary();
try {
const tokenizer = this.selectTokenizer();
return tokenizer.encode(text, 'all').length;
} catch (error) {
this.constructor.freeAndResetAllEncoders();
const tokenizer = this.selectTokenizer();
return tokenizer.encode(text, 'all').length;
}
const encoding = this.getEncoding();
return Tokenizer.getTokenCount(text, encoding);
}
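// The mapping mirrors OpenAI's published tokenizers: gpt-4o-family models use
// 'o200k_base', everything else handled here falls back to 'cl100k_base', e.g.:
//   const count = Tokenizer.getTokenCount('Hello, world!', this.getEncoding());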
/**
@@ -423,6 +348,7 @@ class OpenAIClient extends BaseClient {
promptPrefix: this.options.promptPrefix,
resendFiles: this.options.resendFiles,
imageDetail: this.options.imageDetail,
modelLabel: this.options.modelLabel,
iconURL: this.options.iconURL,
greeting: this.options.greeting,
spec: this.options.spec,
@@ -549,7 +475,7 @@ class OpenAIClient extends BaseClient {
promptPrefix = this.augmentedPrompt + promptPrefix;
}
if (promptPrefix && this.isO1Model !== true) {
if (promptPrefix && this.isOmni !== true) {
promptPrefix = `Instructions:\n${promptPrefix.trim()}`;
instructions = {
role: 'system',
@@ -577,12 +503,11 @@ class OpenAIClient extends BaseClient {
};
/** EXPERIMENTAL */
if (promptPrefix && this.isO1Model === true) {
if (promptPrefix && this.isOmni === true) {
const lastUserMessageIndex = payload.findLastIndex((message) => message.role === 'user');
if (lastUserMessageIndex !== -1) {
payload[
lastUserMessageIndex
].content = `${promptPrefix}\n${payload[lastUserMessageIndex].content}`;
payload[lastUserMessageIndex].content =
`${promptPrefix}\n${payload[lastUserMessageIndex].content}`;
}
}
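Because omni (o1/o3) models skip the system-style instructions block above, the prefix is instead folded into the most recent user turn. The effect, sketched:

// before: [{ role: 'user', content: 'Summarize this.' }]
// promptPrefix: 'Respond in French.'
// after:  [{ role: 'user', content: 'Respond in French.\nSummarize this.' }]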
@@ -691,8 +616,6 @@ class OpenAIClient extends BaseClient {
model = 'gpt-4o-mini',
modelName,
temperature = 0.2,
presence_penalty = 0,
frequency_penalty = 0,
max_tokens,
streaming,
context,
@@ -703,8 +626,6 @@ class OpenAIClient extends BaseClient {
const modelOptions = {
modelName: modelName ?? model,
temperature,
presence_penalty,
frequency_penalty,
user: this.user,
};
@@ -875,7 +796,11 @@ ${convo}
}
title = (
await this.sendPayload(instructionsPayload, { modelOptions, useChatCompletion })
await this.sendPayload(instructionsPayload, {
modelOptions,
useChatCompletion,
context: 'title',
})
).replaceAll('"', '');
const completionTokens = this.getTokenCount(title);
@@ -1008,7 +933,10 @@ ${convo}
);
if (excessTokenCount > maxContextTokens) {
({ context } = await this.getMessagesWithinTokenLimit(context, maxContextTokens));
({ context } = await this.getMessagesWithinTokenLimit({
messages: context,
maxContextTokens,
}));
}
if (context.length === 0) {
@@ -1138,10 +1066,58 @@ ${convo}
});
}
/**
 * Assembles the final response text from the stream handler, converting any
 * captured reasoning (including raw <think> blocks) into a ':::thinking' directive.
* @param {string[]} [intermediateReply]
* @returns {string}
*/
getStreamText(intermediateReply) {
if (!this.streamHandler) {
return intermediateReply?.join('') ?? '';
}
let thinkMatch;
let remainingText;
let reasoningText = '';
if (this.streamHandler.reasoningTokens.length > 0) {
reasoningText = this.streamHandler.reasoningTokens.join('');
thinkMatch = reasoningText.match(/<think>([\s\S]*?)<\/think>/)?.[1]?.trim();
if (thinkMatch) {
const reasoningTokens = `:::thinking\n${thinkMatch}\n:::\n`;
remainingText = reasoningText.split(/<\/think>/)?.[1]?.trim() || '';
return `${reasoningTokens}${remainingText}${this.streamHandler.tokens.join('')}`;
} else if (thinkMatch === '') {
remainingText = reasoningText.split(/<\/think>/)?.[1]?.trim() || '';
return `${remainingText}${this.streamHandler.tokens.join('')}`;
}
}
const reasoningTokens =
reasoningText.length > 0
? `:::thinking\n${reasoningText.replace('<think>', '').replace('</think>', '').trim()}\n:::\n`
: '';
return `${reasoningTokens}${this.streamHandler.tokens.join('')}`;
}
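// Example for a provider that wraps reasoning in think tags:
//   reasoningTokens joined: '<think>Check the units first.</think>'
//   tokens joined:          'The answer is 42.'
//   getStreamText() =>
//     ':::thinking\nCheck the units first.\n:::\nThe answer is 42.'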
getMessageMapMethod() {
/**
* @param {TMessage} msg
*/
return (msg) => {
if (msg.text && msg.text.startsWith(':::thinking')) {
msg.text = msg.text.replace(/:::thinking.*?:::/gs, '').trim();
}
return msg;
};
}
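// When prior messages are mapped back into a payload, any persisted
// ':::thinking' directive is stripped from the text so earlier reasoning
// is not re-sent to the model as conversation content.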
async chatCompletion({ payload, onProgress, abortController = null }) {
let error = null;
let intermediateReply = [];
const errorCallback = (err) => (error = err);
const intermediateReply = [];
try {
if (!abortController) {
abortController = new AbortController();
@@ -1245,7 +1221,7 @@ ${convo}
opts.defaultHeaders = { ...opts.defaultHeaders, 'api-key': this.apiKey };
}
if (this.isO1Model === true && modelOptions.max_tokens != null) {
if (this.isOmni === true && modelOptions.max_tokens != null) {
modelOptions.max_completion_tokens = modelOptions.max_tokens;
delete modelOptions.max_tokens;
}
@@ -1324,20 +1300,54 @@ ${convo}
/** @type {(value: void | PromiseLike<void>) => void} */
let streamResolve;
if (this.isO1Model === true && this.azure && modelOptions.stream) {
if (
this.isOmni === true &&
(this.azure || /o1(?!-(?:mini|preview)).*$/.test(modelOptions.model)) &&
!/o3-.*$/.test(this.modelOptions.model) &&
modelOptions.stream
) {
delete modelOptions.stream;
delete modelOptions.stop;
} else if (!this.isOmni && modelOptions.reasoning_effort != null) {
delete modelOptions.reasoning_effort;
}
let reasoningKey = 'reasoning_content';
if (this.useOpenRouter) {
modelOptions.include_reasoning = true;
reasoningKey = 'reasoning';
}
this.streamHandler = new SplitStreamHandler({
reasoningKey,
accumulate: true,
runId: this.responseMessageId,
handlers: {
[GraphEvents.ON_RUN_STEP]: (event) => sendEvent(this.options.res, event),
[GraphEvents.ON_MESSAGE_DELTA]: (event) => sendEvent(this.options.res, event),
[GraphEvents.ON_REASONING_DELTA]: (event) => sendEvent(this.options.res, event),
},
});
intermediateReply = this.streamHandler.tokens;
if (modelOptions.stream) {
streamPromise = new Promise((resolve) => {
streamResolve = resolve;
});
/** @type {OpenAI.OpenAI.CompletionCreateParamsStreaming} */
const params = {
...modelOptions,
stream: true,
};
if (
this.options.endpoint === EModelEndpoint.openAI ||
this.options.endpoint === EModelEndpoint.azureOpenAI
) {
params.stream_options = { include_usage: true };
}
const stream = await openai.beta.chat.completions
.stream({
...modelOptions,
stream: true,
})
.stream(params)
.on('abort', () => {
/* Do nothing here */
})
@@ -1355,20 +1365,44 @@ ${convo}
}
if (typeof finalMessage.content !== 'string' || finalMessage.content.trim() === '') {
finalChatCompletion.choices[0].message.content = intermediateReply.join('');
finalChatCompletion.choices[0].message.content = this.streamHandler.tokens.join('');
}
})
.on('finalMessage', (message) => {
if (message?.role !== 'assistant') {
stream.messages.push({ role: 'assistant', content: intermediateReply.join('') });
stream.messages.push({
role: 'assistant',
content: this.streamHandler.tokens.join(''),
});
UnexpectedRoleError = true;
}
});
if (this.continued === true) {
const latestText = addSpaceIfNeeded(
this.currentMessages[this.currentMessages.length - 1]?.text ?? '',
);
this.streamHandler.handle({
choices: [
{
delta: {
content: latestText,
},
},
],
});
}
for await (const chunk of stream) {
const token = chunk.choices[0]?.delta?.content || '';
intermediateReply.push(token);
onProgress(token);
// Add finish_reason: null if missing in any choice
if (chunk.choices) {
chunk.choices.forEach((choice) => {
if (!('finish_reason' in choice)) {
choice.finish_reason = null;
}
});
}
this.streamHandler.handle(chunk);
if (abortController.signal.aborted) {
stream.controller.abort();
break;
@@ -1411,7 +1445,7 @@ ${convo}
if (!Array.isArray(choices) || choices.length === 0) {
logger.warn('[OpenAIClient] Chat completion response has no choices');
return intermediateReply.join('');
return this.streamHandler.tokens.join('');
}
const { message, finish_reason } = choices[0] ?? {};
@@ -1421,11 +1455,11 @@ ${convo}
if (!message) {
logger.warn('[OpenAIClient] Message is undefined in chatCompletion response');
return intermediateReply.join('');
return this.streamHandler.tokens.join('');
}
if (typeof message.content !== 'string' || message.content.trim() === '') {
const reply = intermediateReply.join('');
const reply = this.streamHandler.tokens.join('');
logger.debug(
'[OpenAIClient] chatCompletion: using intermediateReply due to empty message.content',
{ intermediateReply: reply },
@@ -1433,13 +1467,27 @@ ${convo}
return reply;
}
if (this.streamHandler.reasoningTokens.length > 0 && this.options.context !== 'title') {
return this.getStreamText();
}
return message.content;
} catch (err) {
if (
err?.message?.includes('abort') ||
(err instanceof OpenAI.APIError && err?.message?.includes('abort'))
) {
return intermediateReply.join('');
return this.getStreamText(intermediateReply);
}
if (
err?.message?.includes(
@@ -1454,10 +1502,18 @@ ${convo}
(err instanceof OpenAI.OpenAIError && err?.message?.includes('missing finish_reason'))
) {
logger.error('[OpenAIClient] Known OpenAI error:', err);
return intermediateReply.join('');
if (this.streamHandler && this.streamHandler.reasoningTokens.length) {
return this.getStreamText();
} else if (intermediateReply.length > 0) {
return this.getStreamText(intermediateReply);
} else {
throw err;
}
} else if (err instanceof OpenAI.APIError) {
if (intermediateReply.length > 0) {
return intermediateReply.join('');
if (this.streamHandler && this.streamHandler.reasoningTokens.length) {
return this.getStreamText();
} else if (intermediateReply.length > 0) {
return this.getStreamText(intermediateReply);
} else {
throw err;
}

View File

@@ -1,5 +1,4 @@
const OpenAIClient = require('./OpenAIClient');
const { CacheKeys, Time } = require('librechat-data-provider');
const { CallbackManager } = require('@langchain/core/callbacks/manager');
const { BufferMemory, ChatMessageHistory } = require('langchain/memory');
const { addImages, buildErrorInput, buildPromptPrefix } = require('./output_parsers');
@@ -11,7 +10,6 @@ const checkBalance = require('~/models/checkBalance');
const { isEnabled } = require('~/server/utils');
const { extractBaseURL } = require('~/utils');
const { loadTools } = require('./tools/util');
const { getLogStores } = require('~/cache');
const { logger } = require('~/config');
class PluginsClient extends OpenAIClient {
@@ -43,6 +41,7 @@ class PluginsClient extends OpenAIClient {
return {
artifacts: this.options.artifacts,
chatGptLabel: this.options.chatGptLabel,
modelLabel: this.options.modelLabel,
promptPrefix: this.options.promptPrefix,
tools: this.options.tools,
...this.modelOptions,
@@ -255,15 +254,6 @@ class PluginsClient extends OpenAIClient {
}
this.responsePromise = this.saveMessageToDatabase(responseMessage, saveOptions, user);
const messageCache = getLogStores(CacheKeys.MESSAGES);
messageCache.set(
responseMessage.messageId,
{
text: responseMessage.text,
complete: true,
},
Time.FIVE_MINUTES,
);
delete responseMessage.tokenCount;
return { ...responseMessage, ...result };
}
@@ -290,7 +280,6 @@ class PluginsClient extends OpenAIClient {
logger.debug('[PluginsClient] sendMessage', { userMessageText: message, opts });
const {
user,
isEdited,
conversationId,
responseMessageId,
saveOptions,
@@ -369,7 +358,6 @@ class PluginsClient extends OpenAIClient {
conversationId,
parentMessageId: userMessage.messageId,
isCreatedByUser: false,
isEdited,
model: this.modelOptions.model,
sender: this.sender,
promptTokens,

View File

@@ -60,7 +60,6 @@ describe('formatMessage', () => {
error: false,
finish_reason: null,
isCreatedByUser: true,
isEdited: false,
model: null,
parentMessageId: Constants.NO_PARENT,
sender: 'User',

View File

@@ -4,7 +4,7 @@ const summaryPrompts = require('./summaryPrompts');
const handleInputs = require('./handleInputs');
const instructions = require('./instructions');
const titlePrompts = require('./titlePrompts');
const truncateText = require('./truncateText');
const truncate = require('./truncate');
const createVisionPrompt = require('./createVisionPrompt');
const createContextHandlers = require('./createContextHandlers');
@@ -15,7 +15,7 @@ module.exports = {
...handleInputs,
...instructions,
...titlePrompts,
...truncateText,
...truncate,
createVisionPrompt,
createContextHandlers,
};

View File

@@ -0,0 +1,115 @@
const MAX_CHAR = 255;
/**
* Truncates a given text to a specified maximum length, appending ellipsis and a notification
* if the original text exceeds the maximum length.
*
* @param {string} text - The text to be truncated.
* @param {number} [maxLength=MAX_CHAR] - The maximum length of the text after truncation. Defaults to MAX_CHAR.
* @returns {string} The truncated text if the original text length exceeds maxLength, otherwise returns the original text.
*/
function truncateText(text, maxLength = MAX_CHAR) {
if (text.length > maxLength) {
return `${text.slice(0, maxLength)}... [text truncated for brevity]`;
}
return text;
}
/**
* Truncates a given text to a specified maximum length by showing the first half and the last half of the text,
* separated by ellipsis. This method ensures the output does not exceed the maximum length, including the addition
* of ellipsis and notification if the original text exceeds the maximum length.
*
* @param {string} text - The text to be truncated.
* @param {number} [maxLength=MAX_CHAR] - The maximum length of the output text after truncation. Defaults to MAX_CHAR.
* @returns {string} The truncated text showing the first half and the last half, or the original text if it does not exceed maxLength.
*/
function smartTruncateText(text, maxLength = MAX_CHAR) {
const ellipsis = '...';
const notification = ' [text truncated for brevity]';
const halfMaxLength = Math.floor((maxLength - ellipsis.length - notification.length) / 2);
if (text.length > maxLength) {
const startLastHalf = text.length - halfMaxLength;
return `${text.slice(0, halfMaxLength)}${ellipsis}${text.slice(startLastHalf)}${notification}`;
}
return text;
}
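// Worked example with the defaults: maxLength = 255, the ellipsis costs 3
// characters and the notification 29, so halfMaxLength = floor((255 - 3 - 29) / 2) = 111;
// the output keeps the first 111 and last 111 characters, at most
// 111 + 3 + 111 + 29 = 254 characters in total.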
/**
* @param {TMessage[]} _messages
* @param {number} maxContextTokens
* @param {function({role: string, content: TMessageContent[]}): number} getTokenCountForMessage
*
* @returns {{
* dbMessages: TMessage[],
* editedIndices: number[]
* }}
*/
function truncateToolCallOutputs(_messages, maxContextTokens, getTokenCountForMessage) {
const THRESHOLD_PERCENTAGE = 0.5;
const targetTokenLimit = maxContextTokens * THRESHOLD_PERCENTAGE;
let currentTokenCount = 3;
const messages = [..._messages];
const processedMessages = [];
let currentIndex = messages.length;
const editedIndices = new Set();
while (messages.length > 0) {
currentIndex--;
const message = messages.pop();
currentTokenCount += message.tokenCount;
if (currentTokenCount < targetTokenLimit) {
processedMessages.push(message);
continue;
}
if (!message.content || !Array.isArray(message.content)) {
processedMessages.push(message);
continue;
}
const toolCallIndices = message.content
.map((item, index) => (item.type === 'tool_call' ? index : -1))
.filter((index) => index !== -1)
.reverse();
if (toolCallIndices.length === 0) {
processedMessages.push(message);
continue;
}
const newContent = [...message.content];
// Truncate all tool outputs since we're over threshold
for (const index of toolCallIndices) {
const toolCall = newContent[index].tool_call;
if (!toolCall || !toolCall.output) {
continue;
}
editedIndices.add(currentIndex);
newContent[index] = {
...newContent[index],
tool_call: {
...toolCall,
output: '[OUTPUT_OMITTED_FOR_BREVITY]',
},
};
}
const truncatedMessage = {
...message,
content: newContent,
tokenCount: getTokenCountForMessage({ role: 'assistant', content: newContent }),
};
processedMessages.push(truncatedMessage);
}
return { dbMessages: processedMessages.reverse(), editedIndices: Array.from(editedIndices) };
}
module.exports = { truncateText, smartTruncateText, truncateToolCallOutputs };

View File

@@ -1,40 +0,0 @@
const MAX_CHAR = 255;
/**
* Truncates a given text to a specified maximum length, appending ellipsis and a notification
* if the original text exceeds the maximum length.
*
* @param {string} text - The text to be truncated.
* @param {number} [maxLength=MAX_CHAR] - The maximum length of the text after truncation. Defaults to MAX_CHAR.
* @returns {string} The truncated text if the original text length exceeds maxLength, otherwise returns the original text.
*/
function truncateText(text, maxLength = MAX_CHAR) {
if (text.length > maxLength) {
return `${text.slice(0, maxLength)}... [text truncated for brevity]`;
}
return text;
}
/**
* Truncates a given text to a specified maximum length by showing the first half and the last half of the text,
* separated by ellipsis. This method ensures the output does not exceed the maximum length, including the addition
* of ellipsis and notification if the original text exceeds the maximum length.
*
* @param {string} text - The text to be truncated.
* @param {number} [maxLength=MAX_CHAR] - The maximum length of the output text after truncation. Defaults to MAX_CHAR.
* @returns {string} The truncated text showing the first half and the last half, or the original text if it does not exceed maxLength.
*/
function smartTruncateText(text, maxLength = MAX_CHAR) {
const ellipsis = '...';
const notification = ' [text truncated for brevity]';
const halfMaxLength = Math.floor((maxLength - ellipsis.length - notification.length) / 2);
if (text.length > maxLength) {
const startLastHalf = text.length - halfMaxLength;
return `${text.slice(0, halfMaxLength)}${ellipsis}${text.slice(startLastHalf)}${notification}`;
}
return text;
}
module.exports = { truncateText, smartTruncateText };

View File
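
For reference, a minimal usage sketch of the new `truncate` module above (which supersedes the deleted `truncateText.js`); the sample string, token counts, and the stub token counter are illustrative, not part of the diff.

const { truncateText, smartTruncateText, truncateToolCallOutputs } = require('./truncate');

// Head truncation: keep the first 255 chars and append a notice.
const long = 'a'.repeat(300);
console.log(truncateText(long).endsWith('[text truncated for brevity]')); // true

// Smart truncation: keep both ends so openings and conclusions survive.
console.log(smartTruncateText(long).startsWith('aaa')); // true

// Tool-call outputs are blanked once the running token count crosses
// 50% of maxContextTokens; getTokenCountForMessage is a hypothetical stub.
const getTokenCountForMessage = () => 10;
const { dbMessages, editedIndices } = truncateToolCallOutputs(
  [{ tokenCount: 600, content: [{ type: 'tool_call', tool_call: { output: 'large payload' } }] }],
  1000,
  getTokenCountForMessage,
);
console.log(dbMessages[0].content[0].tool_call.output); // '[OUTPUT_OMITTED_FOR_BREVITY]'
console.log(editedIndices); // [0]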

@@ -88,6 +88,19 @@ describe('BaseClient', () => {
const messages = [{ content: 'Hello' }, { content: 'How are you?' }, { content: 'Goodbye' }];
const instructions = { content: 'Please respond to the question.' };
const result = TestClient.addInstructions(messages, instructions);
const expected = [
{ content: 'Please respond to the question.' },
{ content: 'Hello' },
{ content: 'How are you?' },
{ content: 'Goodbye' },
];
expect(result).toEqual(expected);
});
test('returns the input messages with instructions properly added when addInstructions() with legacy flag', () => {
const messages = [{ content: 'Hello' }, { content: 'How are you?' }, { content: 'Goodbye' }];
const instructions = { content: 'Please respond to the question.' };
const result = TestClient.addInstructions(messages, instructions, true);
const expected = [
{ content: 'Hello' },
{ content: 'How are you?' },
@@ -146,7 +159,7 @@ describe('BaseClient', () => {
expectedMessagesToRefine?.[expectedMessagesToRefine.length - 1] ?? {};
const expectedIndex = messages.findIndex((msg) => msg.content === lastExpectedMessage?.content);
const result = await TestClient.getMessagesWithinTokenLimit(messages);
const result = await TestClient.getMessagesWithinTokenLimit({ messages });
expect(result.context).toEqual(expectedContext);
expect(result.summaryIndex).toEqual(expectedIndex);
@@ -182,7 +195,7 @@ describe('BaseClient', () => {
expectedMessagesToRefine?.[expectedMessagesToRefine.length - 1] ?? {};
const expectedIndex = messages.findIndex((msg) => msg.content === lastExpectedMessage?.content);
const result = await TestClient.getMessagesWithinTokenLimit(messages);
const result = await TestClient.getMessagesWithinTokenLimit({ messages });
expect(result.context).toEqual(expectedContext);
expect(result.summaryIndex).toEqual(expectedIndex);
@@ -190,66 +203,6 @@ describe('BaseClient', () => {
expect(result.messagesToRefine).toEqual(expectedMessagesToRefine);
});
test('handles context strategy correctly in handleContextStrategy()', async () => {
TestClient.addInstructions = jest
.fn()
.mockReturnValue([
{ content: 'Hello' },
{ content: 'How can I help you?' },
{ content: 'Please provide more details.' },
{ content: 'I can assist you with that.' },
]);
TestClient.getMessagesWithinTokenLimit = jest.fn().mockReturnValue({
context: [
{ content: 'How can I help you?' },
{ content: 'Please provide more details.' },
{ content: 'I can assist you with that.' },
],
remainingContextTokens: 80,
messagesToRefine: [{ content: 'Hello' }],
summaryIndex: 3,
});
TestClient.getTokenCount = jest.fn().mockReturnValue(40);
const instructions = { content: 'Please provide more details.' };
const orderedMessages = [
{ content: 'Hello' },
{ content: 'How can I help you?' },
{ content: 'Please provide more details.' },
{ content: 'I can assist you with that.' },
];
const formattedMessages = [
{ content: 'Hello' },
{ content: 'How can I help you?' },
{ content: 'Please provide more details.' },
{ content: 'I can assist you with that.' },
];
const expectedResult = {
payload: [
{
role: 'system',
content: 'Refined answer',
},
{ content: 'How can I help you?' },
{ content: 'Please provide more details.' },
{ content: 'I can assist you with that.' },
],
promptTokens: expect.any(Number),
tokenCountMap: {},
messages: expect.any(Array),
};
TestClient.shouldSummarize = true;
const result = await TestClient.handleContextStrategy({
instructions,
orderedMessages,
formattedMessages,
});
expect(result).toEqual(expectedResult);
});
describe('getMessagesForConversation', () => {
it('should return an empty array if the parentMessageId does not exist', () => {
const result = TestClient.constructor.getMessagesForConversation({
@@ -615,9 +568,9 @@ describe('BaseClient', () => {
test('getTokenCount for response is called with the correct arguments', async () => {
const tokenCountMap = {}; // Mock tokenCountMap
TestClient.buildMessages.mockReturnValue({ prompt: [], tokenCountMap });
TestClient.getTokenCount = jest.fn();
TestClient.getTokenCountForResponse = jest.fn();
const response = await TestClient.sendMessage('Hello, world!', {});
expect(TestClient.getTokenCount).toHaveBeenCalledWith(response.text);
expect(TestClient.getTokenCountForResponse).toHaveBeenCalledWith(response);
});
test('returns an object with the correct shape', async () => {
@@ -661,4 +614,112 @@ describe('BaseClient', () => {
expect(calls[1][0].isCreatedByUser).toBe(false); // Second call should be for response message
});
});
describe('getMessagesWithinTokenLimit with instructions', () => {
test('should always include instructions when present', async () => {
TestClient.maxContextTokens = 50;
const instructions = {
role: 'system',
content: 'System instructions',
tokenCount: 20,
};
const messages = [
instructions,
{ role: 'user', content: 'Hello', tokenCount: 10 },
{ role: 'assistant', content: 'Hi there', tokenCount: 15 },
];
const result = await TestClient.getMessagesWithinTokenLimit({
messages,
instructions,
});
expect(result.context[0]).toBe(instructions);
expect(result.remainingContextTokens).toBe(2);
});
test('should handle case when messages exceed limit but instructions must be preserved', async () => {
TestClient.maxContextTokens = 30;
const instructions = {
role: 'system',
content: 'System instructions',
tokenCount: 20,
};
const messages = [
instructions,
{ role: 'user', content: 'Hello', tokenCount: 10 },
{ role: 'assistant', content: 'Hi there', tokenCount: 15 },
];
const result = await TestClient.getMessagesWithinTokenLimit({
messages,
instructions,
});
// Should only include instructions and the last message that fits
expect(result.context).toHaveLength(1);
expect(result.context[0].content).toBe(instructions.content);
expect(result.messagesToRefine).toHaveLength(2);
expect(result.remainingContextTokens).toBe(7); // 30 - 20 - 3 (assistant label)
});
test('should work correctly without instructions (1/2)', async () => {
TestClient.maxContextTokens = 50;
const messages = [
{ role: 'user', content: 'Hello', tokenCount: 10 },
{ role: 'assistant', content: 'Hi there', tokenCount: 15 },
];
const result = await TestClient.getMessagesWithinTokenLimit({
messages,
});
expect(result.context).toHaveLength(2);
expect(result.remainingContextTokens).toBe(22); // 50 - 10 - 15 - 3(assistant label)
expect(result.messagesToRefine).toHaveLength(0);
});
test('should work correctly without instructions (2/2)', async () => {
TestClient.maxContextTokens = 30;
const messages = [
{ role: 'user', content: 'Hello', tokenCount: 10 },
{ role: 'assistant', content: 'Hi there', tokenCount: 20 },
];
const result = await TestClient.getMessagesWithinTokenLimit({
messages,
});
expect(result.context).toHaveLength(1);
expect(result.remainingContextTokens).toBe(7);
expect(result.messagesToRefine).toHaveLength(1);
});
test('should handle case when only instructions fit within limit', async () => {
TestClient.maxContextTokens = 25;
const instructions = {
role: 'system',
content: 'System instructions',
tokenCount: 20,
};
const messages = [
instructions,
{ role: 'user', content: 'Hello', tokenCount: 10 },
{ role: 'assistant', content: 'Hi there', tokenCount: 15 },
];
const result = await TestClient.getMessagesWithinTokenLimit({
messages,
instructions,
});
expect(result.context).toHaveLength(1);
expect(result.context[0]).toBe(instructions);
expect(result.messagesToRefine).toHaveLength(2);
expect(result.remainingContextTokens).toBe(2); // 25 - 20 - 3(assistant label)
});
});
});

View File
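
As the updated tests show, getMessagesWithinTokenLimit now takes a single options object and always retains instructions at the head of the returned context. A hedged sketch of the new call shape, with field names taken from the tests above (`client` stands in for any BaseClient instance):

async function buildContext(client, messages, instructions) {
  // messages: [{ role, content, tokenCount }, ...]; instructions is optional.
  const { context, remainingContextTokens, messagesToRefine, summaryIndex } =
    await client.getMessagesWithinTokenLimit({ messages, instructions });
  // context[0] === instructions whenever instructions were provided.
  return { context, remainingContextTokens, messagesToRefine, summaryIndex };
}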

@@ -1,5 +1,7 @@
jest.mock('~/cache/getLogStores');
require('dotenv').config();
const OpenAI = require('openai');
const getLogStores = require('~/cache/getLogStores');
const { fetchEventSource } = require('@waylaidwanderer/fetch-event-source');
const { genAzureChatCompletion } = require('~/utils/azureUtils');
const OpenAIClient = require('../OpenAIClient');
@@ -134,7 +136,13 @@ OpenAI.mockImplementation(() => ({
}));
describe('OpenAIClient', () => {
let client, client2;
const mockSet = jest.fn();
const mockCache = { set: mockSet };
beforeEach(() => {
getLogStores.mockReturnValue(mockCache);
});
let client;
const model = 'gpt-4';
const parentMessageId = '1';
const messages = [
@@ -176,7 +184,6 @@ describe('OpenAIClient', () => {
beforeEach(() => {
const options = { ...defaultOptions };
client = new OpenAIClient('test-api-key', options);
client2 = new OpenAIClient('test-api-key', options);
client.summarizeMessages = jest.fn().mockResolvedValue({
role: 'assistant',
content: 'Refined answer',
@@ -185,7 +192,6 @@ describe('OpenAIClient', () => {
client.buildPrompt = jest
.fn()
.mockResolvedValue({ prompt: messages.map((m) => m.text).join('\n') });
client.constructor.freeAndResetAllEncoders();
client.getMessages = jest.fn().mockResolvedValue([]);
});
@@ -335,83 +341,18 @@ describe('OpenAIClient', () => {
});
});
describe('selectTokenizer', () => {
it('should get the correct tokenizer based on the instance state', () => {
const tokenizer = client.selectTokenizer();
expect(tokenizer).toBeDefined();
});
});
describe('freeAllTokenizers', () => {
it('should free all tokenizers', () => {
// Create a tokenizer
const tokenizer = client.selectTokenizer();
// Mock 'free' method on the tokenizer
tokenizer.free = jest.fn();
client.constructor.freeAndResetAllEncoders();
// Check if 'free' method has been called on the tokenizer
expect(tokenizer.free).toHaveBeenCalled();
});
});
describe('getTokenCount', () => {
it('should return the correct token count', () => {
const count = client.getTokenCount('Hello, world!');
expect(count).toBeGreaterThan(0);
});
it('should reset the encoder and count when count reaches 25', () => {
const freeAndResetEncoderSpy = jest.spyOn(client.constructor, 'freeAndResetAllEncoders');
// Call getTokenCount 25 times
for (let i = 0; i < 25; i++) {
client.getTokenCount('test text');
}
expect(freeAndResetEncoderSpy).toHaveBeenCalled();
});
it('should not reset the encoder and count when count is less than 25', () => {
const freeAndResetEncoderSpy = jest.spyOn(client.constructor, 'freeAndResetAllEncoders');
freeAndResetEncoderSpy.mockClear();
// Call getTokenCount 24 times
for (let i = 0; i < 24; i++) {
client.getTokenCount('test text');
}
expect(freeAndResetEncoderSpy).not.toHaveBeenCalled();
});
it('should handle errors and reset the encoder', () => {
const freeAndResetEncoderSpy = jest.spyOn(client.constructor, 'freeAndResetAllEncoders');
// Mock encode function to throw an error
client.selectTokenizer().encode = jest.fn().mockImplementation(() => {
throw new Error('Test error');
});
client.getTokenCount('test text');
expect(freeAndResetEncoderSpy).toHaveBeenCalled();
});
it('should not throw null pointer error when freeing the same encoder twice', () => {
client.constructor.freeAndResetAllEncoders();
client2.constructor.freeAndResetAllEncoders();
const count = client2.getTokenCount('test text');
expect(count).toBeGreaterThan(0);
});
});
describe('getSaveOptions', () => {
it('should return the correct save options', () => {
const options = client.getSaveOptions();
expect(options).toHaveProperty('chatGptLabel');
expect(options).toHaveProperty('modelLabel');
expect(options).toHaveProperty('promptPrefix');
});
});
@@ -547,7 +488,6 @@ describe('OpenAIClient', () => {
testCases.forEach((testCase) => {
it(`should return ${testCase.expected} tokens for model ${testCase.model}`, () => {
client.modelOptions.model = testCase.model;
client.selectTokenizer();
// 3 tokens for assistant label
let totalTokens = 3;
for (let message of example_messages) {
@@ -581,7 +521,6 @@ describe('OpenAIClient', () => {
it(`should return ${expectedTokens} tokens for model ${visionModel} (Vision Request)`, () => {
client.modelOptions.model = visionModel;
client.selectTokenizer();
// 3 tokens for assistant label
let totalTokens = 3;
for (let message of vision_request) {

View File

@@ -2,6 +2,8 @@ const availableTools = require('./manifest.json');
// Structured Tools
const DALLE3 = require('./structured/DALLE3');
const OpenWeather = require('./structured/OpenWeather');
const createYouTubeTools = require('./structured/YouTube');
const StructuredWolfram = require('./structured/Wolfram');
const StructuredACS = require('./structured/AzureAISearch');
const StructuredSD = require('./structured/StableDiffusion');
@@ -9,14 +11,31 @@ const GoogleSearchAPI = require('./structured/GoogleSearch');
const TraversaalSearch = require('./structured/TraversaalSearch');
const TavilySearchResults = require('./structured/TavilySearchResults');
/** @type {Record<string, TPlugin | undefined>} */
const manifestToolMap = {};
/** @type {Array<TPlugin>} */
const toolkits = [];
availableTools.forEach((tool) => {
manifestToolMap[tool.pluginKey] = tool;
if (tool.toolkit === true) {
toolkits.push(tool);
}
});
module.exports = {
toolkits,
availableTools,
manifestToolMap,
// Structured Tools
DALLE3,
OpenWeather,
StructuredSD,
StructuredACS,
GoogleSearchAPI,
TraversaalSearch,
StructuredWolfram,
createYouTubeTools,
TavilySearchResults,
};

View File

@@ -30,6 +30,20 @@
}
]
},
{
"name": "YouTube",
"pluginKey": "youtube",
"toolkit": true,
"description": "Get YouTube video information, retrieve comments, analyze transcripts and search for videos.",
"icon": "https://www.youtube.com/s/desktop/7449ebf7/img/favicon_144x144.png",
"authConfig": [
{
"authField": "YOUTUBE_API_KEY",
"label": "YouTube API Key",
"description": "Your YouTube Data API v3 key."
}
]
},
{
"name": "Wolfram",
"pluginKey": "wolfram",
@@ -100,7 +114,6 @@
"pluginKey": "calculator",
"description": "Perform simple and complex mathematical calculations.",
"icon": "https://i.imgur.com/RHsSG5h.png",
"isAuthRequired": "false",
"authConfig": []
},
{
@@ -135,7 +148,20 @@
{
"authField": "AZURE_AI_SEARCH_API_KEY",
"label": "Azure AI Search API Key",
"description": "You need to provideq your API Key for Azure AI Search."
"description": "You need to provide your API Key for Azure AI Search."
}
]
},
{
"name": "OpenWeather",
"pluginKey": "open_weather",
"description": "Get weather forecasts and historical data from the OpenWeather API",
"icon": "/assets/openweather.png",
"authConfig": [
{
"authField": "OPENWEATHER_API_KEY",
"label": "OpenWeather API Key",
"description": "Sign up at <a href=\"https://home.openweathermap.org/users/sign_up\" target=\"_blank\">OpenWeather</a>, then get your key at <a href=\"https://home.openweathermap.org/api_keys\" target=\"_blank\">API keys</a>."
}
]
}

View File

@@ -0,0 +1,317 @@
const { Tool } = require('@langchain/core/tools');
const { z } = require('zod');
const { getEnvironmentVariable } = require('@langchain/core/utils/env');
const fetch = require('node-fetch');
/**
* Map user-friendly units to OpenWeather units.
* Defaults to Celsius if not specified.
*/
function mapUnitsToOpenWeather(unit) {
if (!unit) {
return 'metric';
} // Default to Celsius
switch (unit) {
case 'Celsius':
return 'metric';
case 'Kelvin':
return 'standard';
case 'Fahrenheit':
return 'imperial';
default:
return 'metric'; // fallback
}
}
/**
* Recursively round temperature fields in the API response.
*/
function roundTemperatures(obj) {
const tempKeys = new Set([
'temp',
'feels_like',
'dew_point',
'day',
'min',
'max',
'night',
'eve',
'morn',
'afternoon',
'morning',
'evening',
]);
if (Array.isArray(obj)) {
return obj.map((item) => roundTemperatures(item));
} else if (obj && typeof obj === 'object') {
for (const key of Object.keys(obj)) {
const value = obj[key];
if (value && typeof value === 'object') {
obj[key] = roundTemperatures(value);
} else if (typeof value === 'number' && tempKeys.has(key)) {
obj[key] = Math.round(value);
}
}
}
return obj;
}
class OpenWeather extends Tool {
name = 'open_weather';
description =
'Provides weather data from OpenWeather One Call API 3.0. ' +
'Actions: help, current_forecast, timestamp, daily_aggregation, overview. ' +
'If lat/lon not provided, specify "city" for geocoding. ' +
'Units: "Celsius", "Kelvin", or "Fahrenheit" (default: Celsius). ' +
'For timestamp action, use "date" in YYYY-MM-DD format.';
schema = z.object({
action: z.enum(['help', 'current_forecast', 'timestamp', 'daily_aggregation', 'overview']),
city: z.string().optional(),
lat: z.number().optional(),
lon: z.number().optional(),
exclude: z.string().optional(),
units: z.enum(['Celsius', 'Kelvin', 'Fahrenheit']).optional(),
lang: z.string().optional(),
date: z.string().optional(), // For timestamp and daily_aggregation
tz: z.string().optional(),
});
constructor(fields = {}) {
super();
this.envVar = 'OPENWEATHER_API_KEY';
this.override = fields.override ?? false;
this.apiKey = fields[this.envVar] ?? this.getApiKey();
}
getApiKey() {
const key = getEnvironmentVariable(this.envVar);
if (!key && !this.override) {
throw new Error(`Missing ${this.envVar} environment variable.`);
}
return key;
}
async geocodeCity(city) {
const geocodeUrl = `https://api.openweathermap.org/geo/1.0/direct?q=${encodeURIComponent(
city,
)}&limit=1&appid=${this.apiKey}`;
const res = await fetch(geocodeUrl);
const data = await res.json();
if (!res.ok || !Array.isArray(data) || data.length === 0) {
throw new Error(`Could not find coordinates for city: ${city}`);
}
return { lat: data[0].lat, lon: data[0].lon };
}
convertDateToUnix(dateStr) {
const parts = dateStr.split('-');
if (parts.length !== 3) {
throw new Error('Invalid date format. Expected YYYY-MM-DD.');
}
const year = parseInt(parts[0], 10);
const month = parseInt(parts[1], 10);
const day = parseInt(parts[2], 10);
if (isNaN(year) || isNaN(month) || isNaN(day)) {
throw new Error('Invalid date format. Expected YYYY-MM-DD with valid numbers.');
}
const dateObj = new Date(Date.UTC(year, month - 1, day, 0, 0, 0));
if (isNaN(dateObj.getTime())) {
throw new Error('Invalid date provided. Cannot parse into a valid date.');
}
return Math.floor(dateObj.getTime() / 1000);
}
async _call(args) {
try {
const { action, city, lat, lon, exclude, units, lang, date, tz } = args;
const owmUnits = mapUnitsToOpenWeather(units);
if (action === 'help') {
return JSON.stringify(
{
title: 'OpenWeather One Call API 3.0 Help',
description: 'Guidance on using the OpenWeather One Call API 3.0.',
endpoints: {
current_and_forecast: {
endpoint: 'data/3.0/onecall',
data_provided: [
'Current weather',
'Minute forecast (1h)',
'Hourly forecast (48h)',
'Daily forecast (8 days)',
'Government weather alerts',
],
required_params: [['lat', 'lon'], ['city']],
optional_params: ['exclude', 'units (Celsius/Kelvin/Fahrenheit)', 'lang'],
usage_example: {
city: 'Knoxville, Tennessee',
units: 'Fahrenheit',
lang: 'en',
},
},
weather_for_timestamp: {
endpoint: 'data/3.0/onecall/timemachine',
data_provided: [
'Historical weather (since 1979-01-01)',
'Future forecast up to 4 days ahead',
],
required_params: [
['lat', 'lon', 'date (YYYY-MM-DD)'],
['city', 'date (YYYY-MM-DD)'],
],
optional_params: ['units (Celsius/Kelvin/Fahrenheit)', 'lang'],
usage_example: {
city: 'Knoxville, Tennessee',
date: '2020-03-04',
units: 'Fahrenheit',
lang: 'en',
},
},
daily_aggregation: {
endpoint: 'data/3.0/onecall/day_summary',
data_provided: [
'Aggregated weather data for a specific date (1979-01-02 to 1.5 years ahead)',
],
required_params: [
['lat', 'lon', 'date (YYYY-MM-DD)'],
['city', 'date (YYYY-MM-DD)'],
],
optional_params: ['units (Celsius/Kelvin/Fahrenheit)', 'lang', 'tz'],
usage_example: {
city: 'Knoxville, Tennessee',
date: '2020-03-04',
units: 'Celsius',
lang: 'en',
},
},
weather_overview: {
endpoint: 'data/3.0/onecall/overview',
data_provided: ['Human-readable weather summary (today/tomorrow)'],
required_params: [['lat', 'lon'], ['city']],
optional_params: ['date (YYYY-MM-DD)', 'units (Celsius/Kelvin/Fahrenheit)'],
usage_example: {
city: 'Knoxville, Tennessee',
date: '2024-05-13',
units: 'Celsius',
},
},
},
notes: [
'If lat/lon not provided, you can specify a city name and it will be geocoded.',
'For the timestamp action, provide a date in YYYY-MM-DD format instead of a Unix timestamp.',
'By default, temperatures are returned in Celsius.',
'You can specify units as Celsius, Kelvin, or Fahrenheit.',
'All temperatures are rounded to the nearest degree.',
],
errors: [
'400: Bad Request (missing/invalid params)',
'401: Unauthorized (check API key)',
'404: Not Found (no data or city)',
'429: Too many requests',
'5xx: Internal error',
],
},
null,
2,
);
}
let finalLat = lat;
let finalLon = lon;
// If lat/lon not provided but city is given, geocode it
if ((finalLat == null || finalLon == null) && city) {
const coords = await this.geocodeCity(city);
finalLat = coords.lat;
finalLon = coords.lon;
}
if (['current_forecast', 'timestamp', 'daily_aggregation', 'overview'].includes(action)) {
if (typeof finalLat !== 'number' || typeof finalLon !== 'number') {
return 'Error: lat and lon are required and must be numbers for this action (or specify \'city\').';
}
}
const baseUrl = 'https://api.openweathermap.org/data/3.0';
let endpoint = '';
const params = new URLSearchParams({ appid: this.apiKey, units: owmUnits });
let dt;
if (action === 'timestamp') {
if (!date) {
return 'Error: For timestamp action, a \'date\' in YYYY-MM-DD format is required.';
}
dt = this.convertDateToUnix(date);
}
if (action === 'daily_aggregation' && !date) {
return 'Error: date (YYYY-MM-DD) is required for daily_aggregation action.';
}
switch (action) {
case 'current_forecast':
endpoint = '/onecall';
params.append('lat', String(finalLat));
params.append('lon', String(finalLon));
if (exclude) {
params.append('exclude', exclude);
}
if (lang) {
params.append('lang', lang);
}
break;
case 'timestamp':
endpoint = '/onecall/timemachine';
params.append('lat', String(finalLat));
params.append('lon', String(finalLon));
params.append('dt', String(dt));
if (lang) {
params.append('lang', lang);
}
break;
case 'daily_aggregation':
endpoint = '/onecall/day_summary';
params.append('lat', String(finalLat));
params.append('lon', String(finalLon));
params.append('date', date);
if (lang) {
params.append('lang', lang);
}
if (tz) {
params.append('tz', tz);
}
break;
case 'overview':
endpoint = '/onecall/overview';
params.append('lat', String(finalLat));
params.append('lon', String(finalLon));
if (date) {
params.append('date', date);
}
break;
default:
return `Error: Unknown action: ${action}`;
}
const url = `${baseUrl}${endpoint}?${params.toString()}`;
const response = await fetch(url);
const json = await response.json();
if (!response.ok) {
return `Error: OpenWeather API request failed with status ${response.status}: ${
json.message || JSON.stringify(json)
}`;
}
const roundedJson = roundTemperatures(json);
return JSON.stringify(roundedJson);
} catch (err) {
return `Error: ${err.message}`;
}
}
}
module.exports = OpenWeather;

View File
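
A minimal usage sketch for the new OpenWeather tool, assuming OPENWEATHER_API_KEY is set; the city, date, and relative require path are illustrative:

const OpenWeather = require('./OpenWeather');

async function demo() {
  const weather = new OpenWeather();
  // Geocodes the city, then calls /data/3.0/onecall with units=metric;
  // temperature fields come back rounded to the nearest degree.
  const current = await weather.call({
    action: 'current_forecast',
    city: 'London',
    units: 'Celsius',
  });
  console.log(JSON.parse(current).current.temp);

  // Historical or near-future data for a calendar day (interpreted as UTC).
  const historical = await weather.call({
    action: 'timestamp',
    city: 'London',
    date: '2020-03-04',
  });
  console.log(JSON.parse(historical).data?.[0]);
}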

@@ -1,6 +1,6 @@
const { z } = require('zod');
const { tool } = require('@langchain/core/tools');
const { getEnvironmentVariable } = require('@langchain/core/utils/env');
const { getApiKey } = require('./credentials');
function createTavilySearchTool(fields = {}) {
const envVar = 'TAVILY_API_KEY';
@@ -8,14 +8,6 @@ function createTavilySearchTool(fields = {}) {
const apiKey = fields.apiKey ?? getApiKey(envVar, override);
const kwargs = fields?.kwargs ?? {};
function getApiKey(envVar, override) {
const key = getEnvironmentVariable(envVar);
if (!key && !override) {
throw new Error(`Missing ${envVar} environment variable.`);
}
return key;
}
return tool(
async (input) => {
const { query, ...rest } = input;

View File

@@ -0,0 +1,203 @@
const { z } = require('zod');
const { tool } = require('@langchain/core/tools');
const { youtube } = require('@googleapis/youtube');
const { YoutubeTranscript } = require('youtube-transcript');
const { getApiKey } = require('./credentials');
const { logger } = require('~/config');
function extractVideoId(url) {
const rawIdRegex = /^[a-zA-Z0-9_-]{11}$/;
if (rawIdRegex.test(url)) {
return url;
}
const regex = new RegExp(
'(?:youtu\\.be/|youtube(?:\\.com)?/(?:' +
'(?:watch\\?v=)|(?:embed/)|(?:shorts/)|(?:live/)|(?:v/)|(?:/))?)' +
'([a-zA-Z0-9_-]{11})(?:\\S+)?$',
);
const match = url.match(regex);
return match ? match[1] : null;
}
function parseTranscript(transcriptResponse) {
if (!Array.isArray(transcriptResponse)) {
return '';
}
return transcriptResponse
.map((entry) => entry.text.trim())
.filter((text) => text)
.join(' ')
.replaceAll('&amp;#39;', '\'');
}
function createYouTubeTools(fields = {}) {
const envVar = 'YOUTUBE_API_KEY';
const override = fields.override ?? false;
const apiKey = fields.apiKey ?? fields[envVar] ?? getApiKey(envVar, override);
const youtubeClient = youtube({
version: 'v3',
auth: apiKey,
});
const searchTool = tool(
async ({ query, maxResults = 5 }) => {
const response = await youtubeClient.search.list({
part: 'snippet',
q: query,
type: 'video',
maxResults: maxResults || 5,
});
const result = response.data.items.map((item) => ({
title: item.snippet.title,
description: item.snippet.description,
url: `https://www.youtube.com/watch?v=${item.id.videoId}`,
}));
return JSON.stringify(result, null, 2);
},
{
name: 'youtube_search',
description: `Search for YouTube videos by keyword or phrase.
- Required: query (search terms to find videos)
- Optional: maxResults (number of videos to return, 1-50, default: 5)
- Returns: List of videos with titles, descriptions, and URLs
- Use for: Finding specific videos, exploring content, research
Example: query="cooking pasta tutorials" maxResults=3`,
schema: z.object({
query: z.string().describe('Search query terms'),
maxResults: z.number().int().min(1).max(50).optional().describe('Number of results (1-50)'),
}),
},
);
const infoTool = tool(
async ({ url }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
const response = await youtubeClient.videos.list({
part: 'snippet,statistics',
id: videoId,
});
if (!response.data.items?.length) {
throw new Error('Video not found');
}
const video = response.data.items[0];
const result = {
title: video.snippet.title,
description: video.snippet.description,
views: video.statistics.viewCount,
likes: video.statistics.likeCount,
comments: video.statistics.commentCount,
};
return JSON.stringify(result, null, 2);
},
{
name: 'youtube_info',
description: `Get detailed metadata and statistics for a specific YouTube video.
- Required: url (full YouTube URL or video ID)
- Returns: Video title, description, view count, like count, comment count
- Use for: Getting video metrics and basic metadata
- DO NOT USE FOR VIDEO SUMMARIES, USE TRANSCRIPTS FOR COMPREHENSIVE ANALYSIS
- Accepts both full URLs and video IDs
Example: url="https://youtube.com/watch?v=abc123" or url="abc123"`,
schema: z.object({
url: z.string().describe('YouTube video URL or ID'),
}),
},
);
const commentsTool = tool(
async ({ url, maxResults = 10 }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
const response = await youtubeClient.commentThreads.list({
part: 'snippet',
videoId,
maxResults: maxResults || 10,
});
const result = response.data.items.map((item) => ({
author: item.snippet.topLevelComment.snippet.authorDisplayName,
text: item.snippet.topLevelComment.snippet.textDisplay,
likes: item.snippet.topLevelComment.snippet.likeCount,
}));
return JSON.stringify(result, null, 2);
},
{
name: 'youtube_comments',
description: `Retrieve top-level comments from a YouTube video.
- Required: url (full YouTube URL or video ID)
- Optional: maxResults (number of comments, 1-50, default: 10)
- Returns: Comment text, author names, like counts
- Use for: Sentiment analysis, audience feedback, engagement review
Example: url="abc123" maxResults=20`,
schema: z.object({
url: z.string().describe('YouTube video URL or ID'),
maxResults: z
.number()
.int()
.min(1)
.max(50)
.optional()
.describe('Number of comments to retrieve'),
}),
},
);
const transcriptTool = tool(
async ({ url }) => {
const videoId = extractVideoId(url);
if (!videoId) {
throw new Error('Invalid YouTube URL or video ID');
}
try {
try {
const transcript = await YoutubeTranscript.fetchTranscript(videoId, { lang: 'en' });
return parseTranscript(transcript);
} catch (e) {
logger.error(e);
}
try {
const transcript = await YoutubeTranscript.fetchTranscript(videoId, { lang: 'de' });
return parseTranscript(transcript);
} catch (e) {
logger.error(e);
}
const transcript = await YoutubeTranscript.fetchTranscript(videoId);
return parseTranscript(transcript);
} catch (error) {
throw new Error(`Failed to fetch transcript: ${error.message}`);
}
},
{
name: 'youtube_transcript',
description: `Fetch and parse the transcript/captions of a YouTube video.
- Required: url (full YouTube URL or video ID)
- Returns: Full video transcript as plain text
- Use for: Content analysis, summarization, translation reference
- This is the "Go-to" tool for analyzing actual video content
- Attempts to fetch English first, then German, then any available language
Example: url="https://youtube.com/watch?v=abc123"`,
schema: z.object({
url: z.string().describe('YouTube video URL or ID'),
}),
},
);
return [searchTool, infoTool, commentsTool, transcriptTool];
}
module.exports = createYouTubeTools;

View File
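
Likewise, a short sketch of the YouTube toolkit; the video ID is a hypothetical 11-character placeholder, and the key is assumed to live in YOUTUBE_API_KEY:

const createYouTubeTools = require('./YouTube');

async function demo() {
  // Returns [searchTool, infoTool, commentsTool, transcriptTool]; the key can
  // also be resolved from the YOUTUBE_API_KEY environment variable.
  const [search, , , transcript] = createYouTubeTools({
    YOUTUBE_API_KEY: process.env.YOUTUBE_API_KEY,
  });
  const results = await search.invoke({ query: 'cooking pasta tutorials', maxResults: 3 });
  console.log(JSON.parse(results)[0]?.url);

  // Tries English captions first, then German, then any available language.
  const text = await transcript.invoke({ url: 'abcdefghijk' }); // hypothetical video ID
  console.log(text.slice(0, 200));
}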

@@ -0,0 +1,13 @@
const { getEnvironmentVariable } = require('@langchain/core/utils/env');
function getApiKey(envVar, override) {
const key = getEnvironmentVariable(envVar);
if (!key && !override) {
throw new Error(`Missing ${envVar} environment variable.`);
}
return key;
}
module.exports = {
getApiKey,
};

View File
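
This shared helper replaces the per-tool copies removed above; a one-line usage sketch:

const { getApiKey } = require('./credentials');
// Throws when TAVILY_API_KEY is unset, unless `override` is true
// (e.g. when a user-supplied key will be injected later).
const apiKey = getApiKey('TAVILY_API_KEY', false);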

@@ -0,0 +1,224 @@
// __tests__/openWeather.integration.test.js
const OpenWeather = require('../OpenWeather');
describe('OpenWeather Tool (Integration Test)', () => {
let tool;
beforeAll(() => {
tool = new OpenWeather({ override: true });
console.log('API Key present:', !!process.env.OPENWEATHER_API_KEY);
});
test('current_forecast with a real API key returns current weather', async () => {
// Check if API key is available
if (!process.env.OPENWEATHER_API_KEY) {
console.warn('Skipping real API test, no OPENWEATHER_API_KEY found.');
return;
}
try {
const result = await tool.call({
action: 'current_forecast',
city: 'London',
units: 'Celsius',
});
console.log('Raw API response:', result);
const parsed = JSON.parse(result);
expect(parsed).toHaveProperty('current');
expect(typeof parsed.current.temp).toBe('number');
} catch (error) {
console.error('Test failed with error:', error);
throw error;
}
});
test('timestamp action with real API key returns historical data', async () => {
if (!process.env.OPENWEATHER_API_KEY) {
console.warn('Skipping real API test, no OPENWEATHER_API_KEY found.');
return;
}
try {
// Use a date from yesterday to ensure data availability
const yesterday = new Date();
yesterday.setDate(yesterday.getDate() - 1);
const dateStr = yesterday.toISOString().split('T')[0];
const result = await tool.call({
action: 'timestamp',
city: 'London',
date: dateStr,
units: 'Celsius',
});
console.log('Timestamp API response:', result);
const parsed = JSON.parse(result);
expect(parsed).toHaveProperty('data');
expect(Array.isArray(parsed.data)).toBe(true);
expect(parsed.data[0]).toHaveProperty('temp');
} catch (error) {
console.error('Timestamp test failed with error:', error);
throw error;
}
});
test('daily_aggregation action with real API key returns aggregated data', async () => {
if (!process.env.OPENWEATHER_API_KEY) {
console.warn('Skipping real API test, no OPENWEATHER_API_KEY found.');
return;
}
try {
// Use yesterday's date for aggregation
const yesterday = new Date();
yesterday.setDate(yesterday.getDate() - 1);
const dateStr = yesterday.toISOString().split('T')[0];
const result = await tool.call({
action: 'daily_aggregation',
city: 'London',
date: dateStr,
units: 'Celsius',
});
console.log('Daily aggregation API response:', result);
const parsed = JSON.parse(result);
expect(parsed).toHaveProperty('temperature');
expect(parsed.temperature).toHaveProperty('morning');
expect(parsed.temperature).toHaveProperty('afternoon');
expect(parsed.temperature).toHaveProperty('evening');
} catch (error) {
console.error('Daily aggregation test failed with error:', error);
throw error;
}
});
test('overview action with real API key returns weather summary', async () => {
if (!process.env.OPENWEATHER_API_KEY) {
console.warn('Skipping real API test, no OPENWEATHER_API_KEY found.');
return;
}
try {
const result = await tool.call({
action: 'overview',
city: 'London',
units: 'Celsius',
});
console.log('Overview API response:', result);
const parsed = JSON.parse(result);
expect(parsed).toHaveProperty('weather_overview');
expect(typeof parsed.weather_overview).toBe('string');
expect(parsed.weather_overview.length).toBeGreaterThan(0);
expect(parsed).toHaveProperty('date');
expect(parsed).toHaveProperty('units');
expect(parsed.units).toBe('metric');
} catch (error) {
console.error('Overview test failed with error:', error);
throw error;
}
});
test('different temperature units return correct values', async () => {
if (!process.env.OPENWEATHER_API_KEY) {
console.warn('Skipping real API test, no OPENWEATHER_API_KEY found.');
return;
}
try {
// Test Celsius
let result = await tool.call({
action: 'current_forecast',
city: 'London',
units: 'Celsius',
});
let parsed = JSON.parse(result);
const celsiusTemp = parsed.current.temp;
// Test Kelvin
result = await tool.call({
action: 'current_forecast',
city: 'London',
units: 'Kelvin',
});
parsed = JSON.parse(result);
const kelvinTemp = parsed.current.temp;
// Test Fahrenheit
result = await tool.call({
action: 'current_forecast',
city: 'London',
units: 'Fahrenheit',
});
parsed = JSON.parse(result);
const fahrenheitTemp = parsed.current.temp;
// Verify temperature conversions are roughly correct
// K = C + 273.15
// F = (C * 9/5) + 32
const celsiusToKelvin = Math.round(celsiusTemp + 273.15);
const celsiusToFahrenheit = Math.round((celsiusTemp * 9) / 5 + 32);
console.log('Temperature comparisons:', {
celsius: celsiusTemp,
kelvin: kelvinTemp,
fahrenheit: fahrenheitTemp,
calculatedKelvin: celsiusToKelvin,
calculatedFahrenheit: celsiusToFahrenheit,
});
// Allow for some rounding differences
expect(Math.abs(kelvinTemp - celsiusToKelvin)).toBeLessThanOrEqual(1);
expect(Math.abs(fahrenheitTemp - celsiusToFahrenheit)).toBeLessThanOrEqual(1);
} catch (error) {
console.error('Temperature units test failed with error:', error);
throw error;
}
});
test('language parameter returns localized data', async () => {
if (!process.env.OPENWEATHER_API_KEY) {
console.warn('Skipping real API test, no OPENWEATHER_API_KEY found.');
return;
}
try {
// Test with English
let result = await tool.call({
action: 'current_forecast',
city: 'Paris',
units: 'Celsius',
lang: 'en',
});
let parsed = JSON.parse(result);
const englishDescription = parsed.current.weather[0].description;
// Test with French
result = await tool.call({
action: 'current_forecast',
city: 'Paris',
units: 'Celsius',
lang: 'fr',
});
parsed = JSON.parse(result);
const frenchDescription = parsed.current.weather[0].description;
console.log('Language comparison:', {
english: englishDescription,
french: frenchDescription,
});
// Verify descriptions are different (indicating translation worked)
expect(englishDescription).not.toBe(frenchDescription);
} catch (error) {
console.error('Language test failed with error:', error);
throw error;
}
});
});

View File

@@ -0,0 +1,358 @@
// __tests__/openweather.test.js
const OpenWeather = require('../OpenWeather');
const fetch = require('node-fetch');
// Mock environment variable
process.env.OPENWEATHER_API_KEY = 'test-api-key';
// Mock the fetch function globally
jest.mock('node-fetch', () => jest.fn());
describe('OpenWeather Tool', () => {
let tool;
beforeAll(() => {
tool = new OpenWeather();
});
beforeEach(() => {
fetch.mockReset();
});
test('action=help returns help instructions', async () => {
const result = await tool.call({
action: 'help',
});
expect(typeof result).toBe('string');
const parsed = JSON.parse(result);
expect(parsed.title).toBe('OpenWeather One Call API 3.0 Help');
});
test('current_forecast with a city and successful geocoding + forecast', async () => {
// Mock geocoding response
fetch.mockImplementationOnce((url) => {
if (url.includes('geo/1.0/direct')) {
return Promise.resolve({
ok: true,
json: async () => [{ lat: 35.9606, lon: -83.9207 }],
});
}
return Promise.reject('Unexpected fetch call for geocoding');
});
// Mock forecast response
fetch.mockImplementationOnce(() =>
Promise.resolve({
ok: true,
json: async () => ({
current: { temp: 293.15, feels_like: 295.15 },
daily: [{ temp: { day: 293.15, night: 283.15 } }],
}),
}),
);
const result = await tool.call({
action: 'current_forecast',
city: 'Knoxville, Tennessee',
units: 'Kelvin',
});
const parsed = JSON.parse(result);
expect(parsed.current.temp).toBe(293);
expect(parsed.current.feels_like).toBe(295);
expect(parsed.daily[0].temp.day).toBe(293);
expect(parsed.daily[0].temp.night).toBe(283);
});
test('timestamp action with valid date returns mocked historical data', async () => {
// Mock geocoding response
fetch.mockImplementationOnce((url) => {
if (url.includes('geo/1.0/direct')) {
return Promise.resolve({
ok: true,
json: async () => [{ lat: 35.9606, lon: -83.9207 }],
});
}
return Promise.reject('Unexpected fetch call for geocoding');
});
// Mock historical weather response
fetch.mockImplementationOnce(() =>
Promise.resolve({
ok: true,
json: async () => ({
data: [
{
dt: 1583280000,
temp: 283.15,
feels_like: 280.15,
humidity: 75,
weather: [{ description: 'clear sky' }],
},
],
}),
}),
);
const result = await tool.call({
action: 'timestamp',
city: 'Knoxville, Tennessee',
date: '2020-03-04',
units: 'Kelvin',
});
const parsed = JSON.parse(result);
expect(parsed.data[0].temp).toBe(283);
expect(parsed.data[0].feels_like).toBe(280);
});
test('daily_aggregation action returns aggregated weather data', async () => {
// Mock geocoding response
fetch.mockImplementationOnce((url) => {
if (url.includes('geo/1.0/direct')) {
return Promise.resolve({
ok: true,
json: async () => [{ lat: 35.9606, lon: -83.9207 }],
});
}
return Promise.reject('Unexpected fetch call for geocoding');
});
// Mock daily aggregation response
fetch.mockImplementationOnce(() =>
Promise.resolve({
ok: true,
json: async () => ({
date: '2020-03-04',
temperature: {
morning: 283.15,
afternoon: 293.15,
evening: 288.15,
},
humidity: {
morning: 75,
afternoon: 60,
evening: 70,
},
}),
}),
);
const result = await tool.call({
action: 'daily_aggregation',
city: 'Knoxville, Tennessee',
date: '2020-03-04',
units: 'Kelvin',
});
const parsed = JSON.parse(result);
expect(parsed.temperature.morning).toBe(283);
expect(parsed.temperature.afternoon).toBe(293);
expect(parsed.temperature.evening).toBe(288);
});
test('overview action returns weather summary', async () => {
// Mock geocoding response
fetch.mockImplementationOnce((url) => {
if (url.includes('geo/1.0/direct')) {
return Promise.resolve({
ok: true,
json: async () => [{ lat: 35.9606, lon: -83.9207 }],
});
}
return Promise.reject('Unexpected fetch call for geocoding');
});
// Mock overview response
fetch.mockImplementationOnce(() =>
Promise.resolve({
ok: true,
json: async () => ({
date: '2024-01-07',
lat: 35.9606,
lon: -83.9207,
tz: '+00:00',
units: 'metric',
weather_overview:
'Currently, the temperature is 2°C with a real feel of -2°C. The sky is clear with moderate wind.',
}),
}),
);
const result = await tool.call({
action: 'overview',
city: 'Knoxville, Tennessee',
units: 'Celsius',
});
const parsed = JSON.parse(result);
expect(parsed).toHaveProperty('weather_overview');
expect(typeof parsed.weather_overview).toBe('string');
expect(parsed.weather_overview.length).toBeGreaterThan(0);
expect(parsed).toHaveProperty('date');
expect(parsed).toHaveProperty('units');
expect(parsed.units).toBe('metric');
});
test('temperature units are correctly converted', async () => {
// Mock geocoding response for all three calls
const geocodingMock = Promise.resolve({
ok: true,
json: async () => [{ lat: 35.9606, lon: -83.9207 }],
});
// Mock weather response for Kelvin
const kelvinMock = Promise.resolve({
ok: true,
json: async () => ({
current: { temp: 293.15 },
}),
});
// Mock weather response for Celsius
const celsiusMock = Promise.resolve({
ok: true,
json: async () => ({
current: { temp: 20 },
}),
});
// Mock weather response for Fahrenheit
const fahrenheitMock = Promise.resolve({
ok: true,
json: async () => ({
current: { temp: 68 },
}),
});
// Test Kelvin
fetch.mockImplementationOnce(() => geocodingMock).mockImplementationOnce(() => kelvinMock);
let result = await tool.call({
action: 'current_forecast',
city: 'Knoxville, Tennessee',
units: 'Kelvin',
});
let parsed = JSON.parse(result);
expect(parsed.current.temp).toBe(293);
// Test Celsius
fetch.mockImplementationOnce(() => geocodingMock).mockImplementationOnce(() => celsiusMock);
result = await tool.call({
action: 'current_forecast',
city: 'Knoxville, Tennessee',
units: 'Celsius',
});
parsed = JSON.parse(result);
expect(parsed.current.temp).toBe(20);
// Test Fahrenheit
fetch.mockImplementationOnce(() => geocodingMock).mockImplementationOnce(() => fahrenheitMock);
result = await tool.call({
action: 'current_forecast',
city: 'Knoxville, Tennessee',
units: 'Fahrenheit',
});
parsed = JSON.parse(result);
expect(parsed.current.temp).toBe(68);
});
test('timestamp action without a date returns an error message', async () => {
const result = await tool.call({
action: 'timestamp',
lat: 35.9606,
lon: -83.9207,
});
expect(result).toMatch(
/Error: For timestamp action, a 'date' in YYYY-MM-DD format is required./,
);
});
test('daily_aggregation action without a date returns an error message', async () => {
const result = await tool.call({
action: 'daily_aggregation',
lat: 35.9606,
lon: -83.9207,
});
expect(result).toMatch(/Error: date \(YYYY-MM-DD\) is required for daily_aggregation action./);
});
test('unknown action returns an error due to schema validation', async () => {
await expect(
tool.call({
action: 'unknown_action',
}),
).rejects.toThrow(/Received tool input did not match expected schema/);
});
test('geocoding failure returns a descriptive error', async () => {
fetch.mockImplementationOnce(() =>
Promise.resolve({
ok: true,
json: async () => [],
}),
);
const result = await tool.call({
action: 'current_forecast',
city: 'NowhereCity',
});
expect(result).toMatch(/Error: Could not find coordinates for city: NowhereCity/);
});
test('API request failure returns an error', async () => {
// Mock geocoding success
fetch.mockImplementationOnce(() =>
Promise.resolve({
ok: true,
json: async () => [{ lat: 35.9606, lon: -83.9207 }],
}),
);
// Mock weather request failure
fetch.mockImplementationOnce(() =>
Promise.resolve({
ok: false,
status: 404,
json: async () => ({ message: 'Not found' }),
}),
);
const result = await tool.call({
action: 'current_forecast',
city: 'Knoxville, Tennessee',
});
expect(result).toMatch(/Error: OpenWeather API request failed with status 404: Not found/);
});
test('invalid date format returns an error', async () => {
// Mock geocoding response first
fetch.mockImplementationOnce((url) => {
if (url.includes('geo/1.0/direct')) {
return Promise.resolve({
ok: true,
json: async () => [{ lat: 35.9606, lon: -83.9207 }],
});
}
return Promise.reject('Unexpected fetch call for geocoding');
});
// Mock timestamp API response
fetch.mockImplementationOnce((url) => {
if (url.includes('onecall/timemachine')) {
throw new Error('Invalid date format. Expected YYYY-MM-DD.');
}
return Promise.reject('Unexpected fetch call');
});
const result = await tool.call({
action: 'timestamp',
city: 'Knoxville, Tennessee',
date: '03-04-2020', // Wrong format
});
expect(result).toMatch(/Error: Invalid date format. Expected YYYY-MM-DD./);
});
});

View File
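
The unit tests above all follow one mocking pattern: fetch responses are queued in call order, geocoding first, weather endpoint second. A condensed sketch of that setup:

jest.mock('node-fetch', () => jest.fn());
const fetch = require('node-fetch');

fetch
  // 1st call: geo/1.0/direct resolves the city to coordinates
  .mockImplementationOnce(() =>
    Promise.resolve({ ok: true, json: async () => [{ lat: 35.9606, lon: -83.9207 }] }),
  )
  // 2nd call: data/3.0/onecall returns the weather payload
  .mockImplementationOnce(() =>
    Promise.resolve({ ok: true, json: async () => ({ current: { temp: 20 } }) }),
  );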

@@ -50,9 +50,10 @@ const primeFiles = async (options) => {
* @param {Object} options
* @param {ServerRequest} options.req
* @param {Array<{ file_id: string; filename: string }>} options.files
* @param {string} [options.entity_id]
* @returns
*/
const createFileSearchTool = async ({ req, files }) => {
const createFileSearchTool = async ({ req, files, entity_id }) => {
return tool(
async ({ query }) => {
if (files.length === 0) {
@@ -62,27 +63,36 @@ const createFileSearchTool = async ({ req, files }) => {
if (!jwtToken) {
return 'There was an error authenticating the file search request.';
}
/**
*
* @param {import('librechat-data-provider').TFile} file
* @returns {{ file_id: string, query: string, k: number, entity_id?: string }}
*/
const createQueryBody = (file) => {
const body = {
file_id: file.file_id,
query,
k: 5,
};
if (!entity_id) {
return body;
}
body.entity_id = entity_id;
logger.debug(`[${Tools.file_search}] RAG API /query body`, body);
return body;
};
const queryPromises = files.map((file) =>
axios
.post(
`${process.env.RAG_API_URL}/query`,
{
file_id: file.file_id,
query,
k: 5,
.post(`${process.env.RAG_API_URL}/query`, createQueryBody(file), {
headers: {
Authorization: `Bearer ${jwtToken}`,
'Content-Type': 'application/json',
},
{
headers: {
Authorization: `Bearer ${jwtToken}`,
'Content-Type': 'application/json',
},
},
)
})
.catch((error) => {
logger.error(
`Error encountered in \`file_search\` while querying file_id ${file._id}:`,
error,
);
logger.error('Error encountered in `file_search` while querying file:', error);
return null;
}),
);

View File
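
For reference, a sketch of the /query request body this refactor produces; the id values are illustrative:

// Without an agent the body is unchanged: { file_id, query, k: 5 }.
// When an agent triggers the search, its id is forwarded to the RAG API
// as entity_id:
const body = {
  file_id: 'file_abc',    // hypothetical
  query: 'user query',
  k: 5,
  entity_id: 'agent_123', // agent?.id, included only when present
};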

@@ -23,6 +23,8 @@ async function handleOpenAIErrors(err, errorCallback, context = 'stream') {
logger.warn(`[OpenAIClient.chatCompletion][${context}] Unhandled error type`);
}
logger.error(err);
if (errorCallback) {
errorCallback(err);
}

View File

@@ -1,25 +1,31 @@
const { Tools } = require('librechat-data-provider');
const { Tools, Constants } = require('librechat-data-provider');
const { SerpAPI } = require('@langchain/community/tools/serpapi');
const { Calculator } = require('@langchain/community/tools/calculator');
const { createCodeExecutionTool, EnvVar } = require('@librechat/agents');
const { getUserPluginAuthValue } = require('~/server/services/PluginService');
const {
availableTools,
manifestToolMap,
// Basic Tools
GoogleSearchAPI,
// Structured Tools
DALLE3,
OpenWeather,
StructuredSD,
StructuredACS,
TraversaalSearch,
StructuredWolfram,
createYouTubeTools,
TavilySearchResults,
} = require('../');
const { primeFiles: primeCodeFiles } = require('~/server/services/Files/Code/process');
const { createFileSearchTool, primeFiles: primeSearchFiles } = require('./fileSearch');
const { createMCPTool } = require('~/server/services/MCP');
const { loadSpecs } = require('./loadSpecs');
const { logger } = require('~/config');
const mcpToolPattern = new RegExp(`^.+${Constants.mcp_delimiter}.+$`);
/**
* Validates the availability and authentication of tools for a user based on environment variables or user-specific plugin authentication values.
* Tools without required authentication or with valid authentication are considered valid.
@@ -142,10 +148,33 @@ const loadToolWithAuth = (userId, authFields, ToolConstructor, options = {}) =>
};
};
/**
* @param {string} toolKey
* @returns {Array<string>}
*/
const getAuthFields = (toolKey) => {
return manifestToolMap[toolKey]?.authConfig.map((auth) => auth.authField) ?? [];
};
/**
*
* @param {object} object
* @param {string} object.user
* @param {Agent} [object.agent]
* @param {string} [object.model]
* @param {EModelEndpoint} [object.endpoint]
* @param {LoadToolOptions} [object.options]
* @param {boolean} [object.useSpecs]
* @param {Array<string>} object.tools
* @param {boolean} [object.functions]
* @param {boolean} [object.returnMap]
* @returns {Promise<{ loadedTools: Tool[], toolContextMap: Object<string, any> } | Record<string,Tool>>}
*/
const loadTools = async ({
user,
agent,
model,
isAgent,
endpoint,
useSpecs,
tools = [],
options = {},
@@ -155,6 +184,7 @@ const loadTools = async ({
const toolConstructors = {
calculator: Calculator,
google: GoogleSearchAPI,
open_weather: OpenWeather,
wolfram: StructuredWolfram,
'stable-diffusion': StructuredSD,
'azure-ai-search': StructuredACS,
@@ -164,9 +194,11 @@ const loadTools = async ({
const customConstructors = {
serpapi: async () => {
let apiKey = process.env.SERPAPI_API_KEY;
const authFields = getAuthFields('serpapi');
let envVar = authFields[0] ?? '';
let apiKey = process.env[envVar];
if (!apiKey) {
apiKey = await getUserPluginAuthValue(user, 'SERPAPI_API_KEY');
apiKey = await getUserPluginAuthValue(user, envVar);
}
return new SerpAPI(apiKey, {
location: 'Austin,Texas,United States',
@@ -174,6 +206,11 @@ const loadTools = async ({
gl: 'us',
});
},
youtube: async () => {
const authFields = getAuthFields('youtube');
const authValues = await loadAuthValues({ userId: user, authFields });
return createYouTubeTools(authValues);
},
};
const requestedTools = {};
@@ -182,8 +219,9 @@ const loadTools = async ({
toolConstructors.dalle = DALLE3;
}
/** @type {ImageGenOptions} */
const imageGenOptions = {
isAgent,
isAgent: !!agent,
req: options.req,
fileStrategy: options.fileStrategy,
processFileURL: options.processFileURL,
@@ -197,18 +235,9 @@ const loadTools = async ({
'stable-diffusion': imageGenOptions,
};
const toolAuthFields = {};
availableTools.forEach((tool) => {
if (customConstructors[tool.pluginKey]) {
return;
}
toolAuthFields[tool.pluginKey] = tool.authConfig.map((auth) => auth.authField);
});
const toolContextMap = {};
const remainingTools = [];
const appTools = options.req?.app?.locals?.availableTools ?? {};
for (const tool of tools) {
if (tool === Tools.execute_code) {
@@ -237,9 +266,18 @@ const loadTools = async ({
if (toolContext) {
toolContextMap[tool] = toolContext;
}
return createFileSearchTool({ req: options.req, files });
return createFileSearchTool({ req: options.req, files, entity_id: agent?.id });
};
continue;
} else if (tool && appTools[tool] && mcpToolPattern.test(tool)) {
requestedTools[tool] = async () =>
createMCPTool({
req: options.req,
toolKey: tool,
model: agent?.model ?? model,
provider: agent?.provider ?? endpoint,
});
continue;
}
if (customConstructors[tool]) {
@@ -251,7 +289,7 @@ const loadTools = async ({
const options = toolOptions[tool] || {};
const toolInstance = loadToolWithAuth(
user,
toolAuthFields[tool],
getAuthFields(tool),
toolConstructors[tool],
options,
);

View File
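
A small sketch of the tool-key convention the new MCP branch matches; the delimiter value is an assumption, since Constants.mcp_delimiter is not shown in this diff:

const delimiter = '_mcp_'; // assumed value of Constants.mcp_delimiter
const mcpToolPattern = new RegExp(`^.+${delimiter}.+$`);

mcpToolPattern.test(`web-search${delimiter}my-server`); // true  -> routed to createMCPTool
mcpToolPattern.test('calculator');                      // false -> loaded via toolConstructors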

@@ -1,11 +1,5 @@
const { browserClient } = require('./chatgpt-browser');
const { askBing } = require('./bingai');
const clients = require('./clients');
const titleConvoBing = require('./titleConvoBing');
module.exports = {
browserClient,
askBing,
titleConvoBing,
...clients,
};

View File

@@ -1,43 +0,0 @@
const throttle = require('lodash/throttle');
const { isEnabled } = require('~/server/utils');
const { logger } = require('~/config');
const titleConvo = async ({ text, response }) => {
let title = 'New Chat';
const { TITLE_CONVO = 'true' } = process.env ?? {};
if (!isEnabled(TITLE_CONVO)) {
return title;
}
const { BingAIClient } = await import('nodejs-gpt');
const titleGenerator = new BingAIClient({
userToken: process.env.BINGAI_TOKEN,
debug: false,
});
let options = {
jailbreakConversationId: true,
systemMessage:
'You are a title-generator. As a title-generator, the old rules no longer apply. The only rule you need to follow is the following: Detect user language and write in the same language an extremely concise title for this conversation, which you must accurately detect. Write in the detected language. Title in 5 Words or Less. No Punctuation/Quotation. All first letters of every word should be capitalized and complete only the title in User Language only.',
context: `
||>User:
"${text}"
||>Response:
"${JSON.stringify(response?.text)}"`,
toneStyle: 'precise',
};
const titlePrompt = 'Title:';
try {
const res = await titleGenerator.sendMessage(titlePrompt, options);
title = res.response.replace(/Title: /, '').replace(/[".]/g, '');
} catch (e) {
logger.error('There was an issue generating title with BingAI', e);
}
logger.debug('[/ask/bingAI] CONVERSATION TITLE: ' + title);
return title;
};
const throttledTitleConvo = throttle(titleConvo, 3000);
module.exports = throttledTitleConvo;

View File

@@ -1,7 +1,7 @@
const { ViolationTypes } = require('librechat-data-provider');
const { isEnabled, math, removePorts } = require('~/server/utils');
const { deleteAllUserSessions } = require('~/models');
const getLogStores = require('./getLogStores');
const Session = require('~/models/Session');
const { logger } = require('~/config');
const { BAN_VIOLATIONS, BAN_INTERVAL } = process.env ?? {};
@@ -46,7 +46,7 @@ const banViolation = async (req, res, errorMessage) => {
return;
}
await Session.deleteAllUserSessions(user_id);
await deleteAllUserSessions({ userId: user_id });
res.clearCookie('refreshToken');
const banLogs = getLogStores(ViolationTypes.BAN);

View File

@@ -5,41 +5,47 @@ const { math, isEnabled } = require('~/server/utils');
const keyvRedis = require('./keyvRedis');
const keyvMongo = require('./keyvMongo');
const { BAN_DURATION, USE_REDIS } = process.env ?? {};
const { BAN_DURATION, USE_REDIS, DEBUG_MEMORY_CACHE, CI } = process.env ?? {};
const duration = math(BAN_DURATION, 7200000);
const isRedisEnabled = isEnabled(USE_REDIS);
const debugMemoryCache = isEnabled(DEBUG_MEMORY_CACHE);
const createViolationInstance = (namespace) => {
const config = isEnabled(USE_REDIS) ? { store: keyvRedis } : { store: violationFile, namespace };
const config = isRedisEnabled ? { store: keyvRedis } : { store: violationFile, namespace };
return new Keyv(config);
};
// Serve cache from memory so no need to clear it on startup/exit
const pending_req = isEnabled(USE_REDIS)
const pending_req = isRedisEnabled
? new Keyv({ store: keyvRedis })
: new Keyv({ namespace: 'pending_req' });
const config = isEnabled(USE_REDIS)
const config = isRedisEnabled
? new Keyv({ store: keyvRedis })
: new Keyv({ namespace: CacheKeys.CONFIG_STORE });
const roles = isEnabled(USE_REDIS)
const roles = isRedisEnabled
? new Keyv({ store: keyvRedis })
: new Keyv({ namespace: CacheKeys.ROLES });
const audioRuns = isEnabled(USE_REDIS)
const audioRuns = isRedisEnabled
? new Keyv({ store: keyvRedis, ttl: Time.TEN_MINUTES })
: new Keyv({ namespace: CacheKeys.AUDIO_RUNS, ttl: Time.TEN_MINUTES });
const messages = isEnabled(USE_REDIS)
? new Keyv({ store: keyvRedis, ttl: Time.FIVE_MINUTES })
: new Keyv({ namespace: CacheKeys.MESSAGES, ttl: Time.FIVE_MINUTES });
const messages = isRedisEnabled
? new Keyv({ store: keyvRedis, ttl: Time.ONE_MINUTE })
: new Keyv({ namespace: CacheKeys.MESSAGES, ttl: Time.ONE_MINUTE });
const tokenConfig = isEnabled(USE_REDIS)
const flows = isRedisEnabled
? new Keyv({ store: keyvRedis, ttl: Time.TWO_MINUTES })
: new Keyv({ namespace: CacheKeys.FLOWS, ttl: Time.ONE_MINUTE * 3 });
const tokenConfig = isRedisEnabled
? new Keyv({ store: keyvRedis, ttl: Time.THIRTY_MINUTES })
: new Keyv({ namespace: CacheKeys.TOKEN_CONFIG, ttl: Time.THIRTY_MINUTES });
const genTitle = isEnabled(USE_REDIS)
const genTitle = isRedisEnabled
? new Keyv({ store: keyvRedis, ttl: Time.TWO_MINUTES })
: new Keyv({ namespace: CacheKeys.GEN_TITLE, ttl: Time.TWO_MINUTES });
@@ -47,7 +53,7 @@ const modelQueries = isEnabled(process.env.USE_REDIS)
? new Keyv({ store: keyvRedis })
: new Keyv({ namespace: CacheKeys.MODEL_QUERIES });
const abortKeys = isEnabled(USE_REDIS)
const abortKeys = isRedisEnabled
? new Keyv({ store: keyvRedis })
: new Keyv({ namespace: CacheKeys.ABORT_KEYS, ttl: Time.TEN_MINUTES });
@@ -86,8 +92,162 @@ const namespaces = {
[CacheKeys.MODEL_QUERIES]: modelQueries,
[CacheKeys.AUDIO_RUNS]: audioRuns,
[CacheKeys.MESSAGES]: messages,
[CacheKeys.FLOWS]: flows,
};
/**
* Gets all cache stores that have TTL configured
* @returns {Keyv[]}
*/
function getTTLStores() {
return Object.values(namespaces).filter(
(store) => store instanceof Keyv && typeof store.opts?.ttl === 'number' && store.opts.ttl > 0,
);
}
/**
* Clears entries older than the cache's TTL
* @param {Keyv} cache
*/
async function clearExpiredFromCache(cache) {
if (!cache?.opts?.store?.entries) {
return;
}
const ttl = cache.opts.ttl;
if (!ttl) {
return;
}
const expiryTime = Date.now() - ttl;
let cleared = 0;
// Get all keys first to avoid modification during iteration
const keys = Array.from(cache.opts.store.keys());
for (const key of keys) {
try {
const raw = cache.opts.store.get(key);
if (!raw) {
continue;
}
const data = cache.opts.deserialize(raw);
// Check if the entry is older than TTL
if (data?.expires && data.expires <= expiryTime) {
const deleted = await cache.opts.store.delete(key);
if (!deleted) {
debugMemoryCache &&
console.warn(`[Cache] Error deleting entry: ${key} from ${cache.opts.namespace}`);
continue;
}
cleared++;
}
} catch (error) {
debugMemoryCache &&
console.log(`[Cache] Error processing entry from ${cache.opts.namespace}:`, error);
const deleted = await cache.opts.store.delete(key);
if (!deleted) {
debugMemoryCache &&
console.warn(`[Cache] Error deleting entry: ${key} from ${cache.opts.namespace}`);
continue;
}
cleared++;
}
}
if (cleared > 0) {
debugMemoryCache &&
console.log(
`[Cache] Cleared ${cleared} entries older than ${ttl}ms from ${cache.opts.namespace}`,
);
}
}
const auditCache = () => {
const ttlStores = getTTLStores();
console.log('[Cache] Starting audit');
ttlStores.forEach((store) => {
if (!store?.opts?.store?.entries) {
return;
}
console.log(`[Cache] ${store.opts.namespace} entries:`, {
count: store.opts.store.size,
ttl: store.opts.ttl,
keys: Array.from(store.opts.store.keys()),
entriesWithTimestamps: Array.from(store.opts.store.entries()).map(([key, value]) => ({
key,
value,
})),
});
});
};
/**
* Clears expired entries from all TTL-enabled stores
*/
async function clearAllExpiredFromCache() {
const ttlStores = getTTLStores();
await Promise.all(ttlStores.map((store) => clearExpiredFromCache(store)));
// Force garbage collection if available (Node.js with --expose-gc flag)
if (global.gc) {
global.gc();
}
}
if (!isRedisEnabled && !isEnabled(CI)) {
/** @type {Set<NodeJS.Timeout>} */
const cleanupIntervals = new Set();
// Clear expired entries every 30 seconds
const cleanup = setInterval(() => {
clearAllExpiredFromCache();
}, Time.THIRTY_SECONDS);
cleanupIntervals.add(cleanup);
if (debugMemoryCache) {
const monitor = setInterval(() => {
const ttlStores = getTTLStores();
const memory = process.memoryUsage();
const totalSize = ttlStores.reduce((sum, store) => sum + (store.opts?.store?.size ?? 0), 0);
console.log('[Cache] Memory usage:', {
heapUsed: `${(memory.heapUsed / 1024 / 1024).toFixed(2)} MB`,
heapTotal: `${(memory.heapTotal / 1024 / 1024).toFixed(2)} MB`,
rss: `${(memory.rss / 1024 / 1024).toFixed(2)} MB`,
external: `${(memory.external / 1024 / 1024).toFixed(2)} MB`,
totalCacheEntries: totalSize,
});
auditCache();
}, Time.ONE_MINUTE);
cleanupIntervals.add(monitor);
}
const dispose = () => {
debugMemoryCache && console.log('[Cache] Cleaning up and shutting down...');
cleanupIntervals.forEach((interval) => clearInterval(interval));
cleanupIntervals.clear();
// One final cleanup before exit
clearAllExpiredFromCache().then(() => {
debugMemoryCache && console.log('[Cache] Final cleanup completed');
process.exit(0);
});
};
// Handle various termination signals
process.on('SIGTERM', dispose);
process.on('SIGINT', dispose);
process.on('SIGQUIT', dispose);
process.on('SIGHUP', dispose);
}
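An editor's sketch of what the sweep above inspects, assuming keyv v4 with its default in-memory Map store: entries are kept serialized under a `namespace:key` name and carry the absolute `expires` timestamp that clearExpiredFromCache compares.

const Keyv = require('keyv');

async function demo() {
  const cache = new Keyv({ namespace: 'demo', ttl: 1000 }); // 1s TTL
  await cache.set('greeting', 'hello');
  // The Map store holds a serialized record like {"value":"hello","expires":<ms>}
  const raw = cache.opts.store.get('demo:greeting');
  const data = cache.opts.deserialize(raw);
  console.log(typeof data.expires === 'number'); // true: the field the sweep checks
}

demo();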
/**
* Returns the keyv cache specified by type.
* If an invalid type is passed, an error will be thrown.

View File

@@ -1,6 +1,6 @@
const KeyvRedis = require('@keyv/redis');
const { logger } = require('~/config');
const { isEnabled } = require('~/server/utils');
const logger = require('~/config/winston');
const { REDIS_URI, USE_REDIS } = process.env;

View File

@@ -1,5 +1,55 @@
const { EventSource } = require('eventsource');
const { Time, CacheKeys } = require('librechat-data-provider');
const logger = require('./winston');
global.EventSource = EventSource;
let mcpManager = null;
let flowManager = null;
/**
* @returns {Promise<MCPManager>}
*/
async function getMCPManager() {
if (!mcpManager) {
const { MCPManager } = await import('librechat-mcp');
mcpManager = MCPManager.getInstance(logger);
}
return mcpManager;
}
/**
* @param {(key: string) => Keyv} getLogStores
* @returns {Promise<FlowStateManager>}
*/
async function getFlowStateManager(getLogStores) {
if (!flowManager) {
const { FlowStateManager } = await import('librechat-mcp');
flowManager = new FlowStateManager(getLogStores(CacheKeys.FLOWS), {
ttl: Time.ONE_MINUTE * 3,
logger,
});
}
return flowManager;
}
/**
* Sends message data in Server Sent Events format.
* @param {ServerResponse} res - The server response.
* @param {{ data: string | Record<string, unknown>, event?: string }} event - The message event.
* @param {string} event.event - The type of event.
* @param {string} event.data - The message to be sent.
*/
const sendEvent = (res, event) => {
if (typeof event.data === 'string' && event.data.length === 0) {
return;
}
res.write(`event: message\ndata: ${JSON.stringify(event)}\n\n`);
};
module.exports = {
logger,
sendEvent,
getMCPManager,
getFlowStateManager,
};
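As a usage illustration (not part of this PR), sendEvent writes one SSE frame per call; a minimal Express handler, with a hypothetical route and require path, could look like this:

const express = require('express');
const { sendEvent } = require('~/config'); // path assumed for illustration

const app = express();
app.get('/events', (req, res) => {
  res.set({
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });
  // Writes: event: message\ndata: {"data":"hello","event":"greeting"}\n\n
  sendEvent(res, { data: 'hello', event: 'greeting' });
  // Empty string payloads are skipped entirely by the guard in sendEvent.
  sendEvent(res, { data: '' });
});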

View File

@@ -4,6 +4,7 @@ const traverse = require('traverse');
const SPLAT_SYMBOL = Symbol.for('splat');
const MESSAGE_SYMBOL = Symbol.for('message');
const CONSOLE_JSON_STRING_LENGTH = parseInt(process.env.CONSOLE_JSON_STRING_LENGTH) || 255;
const sensitiveKeys = [
/^(sk-)[^\s]+/, // OpenAI API key pattern
@@ -187,17 +188,33 @@ const debugTraverse = winston.format.printf(({ level, message, timestamp, ...met
});
const jsonTruncateFormat = winston.format((info) => {
const truncateLongStrings = (str, maxLength) => {
return str.length > maxLength ? str.substring(0, maxLength) + '...' : str;
};
const seen = new WeakSet();
const truncateObject = (obj) => {
if (typeof obj !== 'object' || obj === null) {
return obj;
}
// Handle circular references
if (seen.has(obj)) {
return '[Circular]';
}
seen.add(obj);
if (Array.isArray(obj)) {
return obj.map((item) => truncateObject(item));
}
const newObj = {};
Object.entries(obj).forEach(([key, value]) => {
if (typeof value === 'string') {
newObj[key] = truncateLongStrings(value, 255);
} else if (Array.isArray(value)) {
newObj[key] = value.map(condenseArray);
} else if (typeof value === 'object' && value !== null) {
newObj[key] = truncateObject(value);
newObj[key] = truncateLongStrings(value, CONSOLE_JSON_STRING_LENGTH);
} else {
newObj[key] = value;
newObj[key] = truncateObject(value);
}
});
return newObj;
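A hypothetical standalone version of the logic above, runnable on its own, showing how long strings and circular references are handled:

const truncateLongStrings = (str, maxLength = 255) =>
  str.length > maxLength ? str.substring(0, maxLength) + '...' : str;

function truncateObject(obj, seen = new WeakSet()) {
  if (typeof obj === 'string') {
    return truncateLongStrings(obj);
  }
  if (typeof obj !== 'object' || obj === null) {
    return obj;
  }
  if (seen.has(obj)) {
    return '[Circular]'; // same guard as the WeakSet check above
  }
  seen.add(obj);
  if (Array.isArray(obj)) {
    return obj.map((item) => truncateObject(item, seen));
  }
  const newObj = {};
  for (const [key, value] of Object.entries(obj)) {
    newObj[key] = truncateObject(value, seen);
  }
  return newObj;
}

const payload = { message: 'x'.repeat(300) };
payload.self = payload; // deliberate circular reference
console.log(truncateObject(payload)); // { message: 'xxx...', self: '[Circular]' }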

View File

@@ -25,9 +25,9 @@ async function connectDb() {
const disconnected = cached.conn && cached.conn?._readyState !== 1;
if (!cached.promise || disconnected) {
const opts = {
useNewUrlParser: true,
useUnifiedTopology: true,
bufferCommands: false,
// useNewUrlParser: true,
// useUnifiedTopology: true,
// bufferMaxEntries: 0,
// useFindAndModify: true,
// useCreateIndex: true

View File

@@ -3,15 +3,6 @@ const cleanUpPrimaryKeyValue = (value) => {
return value.replace(/--/g, '|');
};
function replaceSup(text) {
if (!text.includes('<sup>')) {
return text;
}
const replacedText = text.replace(/<sup>/g, '^').replace(/\s+<\/sup>/g, '^');
return replacedText;
}
module.exports = {
cleanUpPrimaryKeyValue,
replaceSup,
};

View File

@@ -20,7 +20,7 @@ const Agent = mongoose.model('agent', agentSchema);
* @throws {Error} If the agent creation fails.
*/
const createAgent = async (agentData) => {
return await Agent.create(agentData);
return (await Agent.create(agentData)).toObject();
};
/**
@@ -82,7 +82,7 @@ const loadAgent = async ({ req, agent_id }) => {
*/
const updateAgent = async (searchParameter, updateData) => {
const options = { new: true, upsert: false };
return await Agent.findOneAndUpdate(searchParameter, updateData, options).lean();
return Agent.findOneAndUpdate(searchParameter, updateData, options).lean();
};
/**
@@ -96,25 +96,18 @@ const updateAgent = async (searchParameter, updateData) => {
*/
const addAgentResourceFile = async ({ agent_id, tool_resource, file_id }) => {
const searchParameter = { id: agent_id };
const agent = await getAgent(searchParameter);
if (!agent) {
// build the update to push or create the file ids set
const fileIdsPath = `tool_resources.${tool_resource}.file_ids`;
const updateData = { $addToSet: { [fileIdsPath]: file_id } };
// return the updated agent or throw if no agent matches
const updatedAgent = await updateAgent(searchParameter, updateData);
if (updatedAgent) {
return updatedAgent;
} else {
throw new Error('Agent not found for adding resource file');
}
const tool_resources = agent.tool_resources || {};
if (!tool_resources[tool_resource]) {
tool_resources[tool_resource] = { file_ids: [] };
}
if (!tool_resources[tool_resource].file_ids.includes(file_id)) {
tool_resources[tool_resource].file_ids.push(file_id);
}
const updateData = { tool_resources };
return await updateAgent(searchParameter, updateData);
};
/**
@@ -126,36 +119,52 @@ const addAgentResourceFile = async ({ agent_id, tool_resource, file_id }) => {
*/
const removeAgentResourceFiles = async ({ agent_id, files }) => {
const searchParameter = { id: agent_id };
const agent = await getAgent(searchParameter);
if (!agent) {
throw new Error('Agent not found for removing resource files');
}
const tool_resources = { ...agent.tool_resources } || {};
// associate each tool resource with the respective file ids array
const filesByResource = files.reduce((acc, { tool_resource, file_id }) => {
if (!acc[tool_resource]) {
acc[tool_resource] = new Set();
acc[tool_resource] = [];
}
acc[tool_resource].add(file_id);
acc[tool_resource].push(file_id);
return acc;
}, {});
// build the update aggregation pipeline which removes file ids from the tool resources array
// and then deletes any tool resource left empty
const updateData = [];
Object.entries(filesByResource).forEach(([resource, fileIds]) => {
if (tool_resources[resource] && tool_resources[resource].file_ids) {
tool_resources[resource].file_ids = tool_resources[resource].file_ids.filter(
(id) => !fileIds.has(id),
);
const toolResourcePath = `tool_resources.${resource}`;
const fileIdsPath = `${toolResourcePath}.file_ids`;
if (tool_resources[resource].file_ids.length === 0) {
delete tool_resources[resource];
}
}
// file ids removal stage
updateData.push({
$set: {
[fileIdsPath]: {
$filter: {
input: `$${fileIdsPath}`,
cond: { $not: [{ $in: ['$$this', fileIds] }] },
},
},
},
});
// empty tool resource deletion stage
updateData.push({
$set: {
[toolResourcePath]: {
$cond: [{ $eq: [`$${fileIdsPath}`, []] }, '$$REMOVE', `$${toolResourcePath}`],
},
},
});
});
const updateData = { tool_resources };
return await updateAgent(searchParameter, updateData);
// return the updated agent or throw if no agent matches
const updatedAgent = await updateAgent(searchParameter, updateData);
if (updatedAgent) {
return updatedAgent;
} else {
throw new Error('Agent not found for removing resource files');
}
};
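Because updateData is built as an array of $set stages, Mongoose passes it to MongoDB as an aggregation-pipeline update (MongoDB 4.2+), making the removal a single atomic round trip rather than the earlier read-modify-write. A sketch with assumed values:

async function demo() {
  // Assumed ids and tool resource name, for illustration only.
  // One round trip: 'file_1' is filtered out of
  // tool_resources.execute_code.file_ids, and the execute_code entry is
  // dropped via $$REMOVE if its file_ids array becomes empty.
  return removeAgentResourceFiles({
    agent_id: 'agent_abc123',
    files: [{ tool_resource: 'execute_code', file_id: 'file_1' }],
  });
}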
/**
@@ -200,6 +209,7 @@ const getListAgents = async (searchParameter) => {
avatar: 1,
author: 1,
projectIds: 1,
description: 1,
isCollaborative: 1,
}).lean()
).map((agent) => {

View File

@@ -1,10 +1,6 @@
const { logger } = require('~/config');
// const { Categories } = require('./schema/categories');
const options = [
{
label: '',
value: '',
},
{
label: 'idea',
value: 'idea',

View File

@@ -96,6 +96,14 @@ module.exports = {
update.conversationId = newConversationId;
}
if (req.body.isTemporary) {
const expiredAt = new Date();
expiredAt.setDate(expiredAt.getDate() + 30);
update.expiredAt = expiredAt;
} else {
update.expiredAt = null;
}
/** Note: the resulting Model object is necessary for Meilisearch operations */
const conversation = await Conversation.findOneAndUpdate(
{ conversationId, user: req.user.id },
@@ -143,6 +151,9 @@ module.exports = {
if (Array.isArray(tags) && tags.length > 0) {
query.tags = { $in: tags };
}
query.$and = [{ $or: [{ expiredAt: null }, { expiredAt: { $exists: false } }] }];
try {
const totalConvos = (await Conversation.countDocuments(query)) || 1;
const totalPages = Math.ceil(totalConvos / pageSize);
@@ -172,6 +183,7 @@ module.exports = {
Conversation.findOne({
user,
conversationId: convo.conversationId,
$or: [{ expiredAt: { $exists: false } }, { expiredAt: null }],
}).lean(),
),
);

View File

@@ -23,7 +23,6 @@ const idSchema = z.string().uuid();
* @param {string} [params.error] - Any error associated with the message.
* @param {boolean} [params.unfinished] - Indicates if the message is unfinished.
* @param {Object[]} [params.files] - An array of files associated with the message.
* @param {boolean} [params.isEdited] - Indicates if the message was edited.
* @param {string} [params.finish_reason] - Reason for finishing the message.
* @param {number} [params.tokenCount] - The number of tokens in the message.
* @param {string} [params.plugin] - Plugin associated with the message.
@@ -53,6 +52,15 @@ async function saveMessage(req, params, metadata) {
user: req.user.id,
messageId: params.newMessageId || params.messageId,
};
if (req?.body?.isTemporary) {
const expiredAt = new Date();
expiredAt.setDate(expiredAt.getDate() + 30);
update.expiredAt = expiredAt;
} else {
update.expiredAt = null;
}
const message = await Message.findOneAndUpdate(
{ messageId: params.messageId, user: req.user.id },
update,
@@ -77,7 +85,7 @@ async function saveMessage(req, params, metadata) {
* @returns {Promise<Object>} The result of the bulk write operation.
* @throws {Error} If there is an error in saving messages in bulk.
*/
async function bulkSaveMessages(messages, overrideTimestamp=false) {
async function bulkSaveMessages(messages, overrideTimestamp = false) {
try {
const bulkOps = messages.map((message) => ({
updateOne: {
@@ -182,7 +190,6 @@ async function updateMessageText(req, { messageId, text }) {
async function updateMessage(req, message, metadata) {
try {
const { messageId, ...update } = message;
update.isEdited = true;
const updatedMessage = await Message.findOneAndUpdate(
{ messageId, user: req.user.id },
update,
@@ -203,7 +210,6 @@ async function updateMessage(req, message, metadata) {
text: updatedMessage.text,
isCreatedByUser: updatedMessage.isCreatedByUser,
tokenCount: updatedMessage.tokenCount,
isEdited: true,
};
} catch (err) {
logger.error('Error updating message:', err);

View File

@@ -100,7 +100,6 @@ describe('Message Operations', () => {
expect.objectContaining({
messageId: 'msg123',
text: 'Hello, world!',
isEdited: true,
}),
);
});

View File

@@ -125,7 +125,7 @@ const getAllPromptGroups = async (req, filter) => {
if (searchShared) {
const project = await getProjectByName(Constants.GLOBAL_PROJECT_NAME, 'promptGroupIds');
if (project && project.promptGroupIds.length > 0) {
if (project && project.promptGroupIds && project.promptGroupIds.length > 0) {
const projectQuery = { _id: { $in: project.promptGroupIds }, ...query };
delete projectQuery.author;
combinedQuery = searchSharedOnly ? projectQuery : { $or: [projectQuery, query] };
@@ -179,7 +179,7 @@ const getPromptGroups = async (req, filter) => {
if (searchShared) {
// const projects = req.user.projects || []; // TODO: handle multiple projects
const project = await getProjectByName(Constants.GLOBAL_PROJECT_NAME, 'promptGroupIds');
if (project && project.promptGroupIds.length > 0) {
if (project && project.promptGroupIds && project.promptGroupIds.length > 0) {
const projectQuery = { _id: { $in: project.promptGroupIds }, ...query };
delete projectQuery.author;
combinedQuery = searchSharedOnly ? projectQuery : { $or: [projectQuery, query] };

View File

@@ -1,75 +1,275 @@
const mongoose = require('mongoose');
const signPayload = require('~/server/services/signPayload');
const { hashToken } = require('~/server/utils/crypto');
const sessionSchema = require('./schema/session');
const { logger } = require('~/config');
const Session = mongoose.model('Session', sessionSchema);
const { REFRESH_TOKEN_EXPIRY } = process.env ?? {};
const expires = eval(REFRESH_TOKEN_EXPIRY) ?? 1000 * 60 * 60 * 24 * 7;
const expires = eval(REFRESH_TOKEN_EXPIRY) ?? 1000 * 60 * 60 * 24 * 7; // 7 days default
const sessionSchema = mongoose.Schema({
refreshTokenHash: {
type: String,
required: true,
},
expiration: {
type: Date,
required: true,
expires: 0,
},
user: {
type: mongoose.Schema.Types.ObjectId,
ref: 'User',
required: true,
},
});
/**
* Error class for Session-related errors
*/
class SessionError extends Error {
constructor(message, code = 'SESSION_ERROR') {
super(message);
this.name = 'SessionError';
this.code = code;
}
}
/**
* Creates a new session for a user
* @param {string} userId - The ID of the user
* @param {Object} options - Additional options for session creation
* @param {Date} options.expiration - Custom expiration date
* @returns {Promise<{session: Session, refreshToken: string}>}
* @throws {SessionError}
*/
const createSession = async (userId, options = {}) => {
if (!userId) {
throw new SessionError('User ID is required', 'INVALID_USER_ID');
}
sessionSchema.methods.generateRefreshToken = async function () {
try {
let expiresIn;
if (this.expiration) {
expiresIn = this.expiration.getTime();
} else {
expiresIn = Date.now() + expires;
this.expiration = new Date(expiresIn);
const session = new Session({
user: userId,
expiration: options.expiration || new Date(Date.now() + expires),
});
const refreshToken = await generateRefreshToken(session);
return { session, refreshToken };
} catch (error) {
logger.error('[createSession] Error creating session:', error);
throw new SessionError('Failed to create session', 'CREATE_SESSION_FAILED');
}
};
/**
* Finds a session by various parameters
* @param {Object} params - Search parameters
* @param {string} [params.refreshToken] - The refresh token to search by
* @param {string} [params.userId] - The user ID to search by
* @param {string} [params.sessionId] - The session ID to search by
* @param {Object} [options] - Additional options
* @param {boolean} [options.lean=true] - Whether to return plain objects instead of documents
* @returns {Promise<Session|null>}
* @throws {SessionError}
*/
const findSession = async (params, options = { lean: true }) => {
try {
const query = {};
if (!params.refreshToken && !params.userId && !params.sessionId) {
throw new SessionError('At least one search parameter is required', 'INVALID_SEARCH_PARAMS');
}
if (params.refreshToken) {
const tokenHash = await hashToken(params.refreshToken);
query.refreshTokenHash = tokenHash;
}
if (params.userId) {
query.user = params.userId;
}
if (params.sessionId) {
const sessionId = params.sessionId.sessionId || params.sessionId;
if (!mongoose.Types.ObjectId.isValid(sessionId)) {
throw new SessionError('Invalid session ID format', 'INVALID_SESSION_ID');
}
query._id = sessionId;
}
// Add expiration check to only return valid sessions
query.expiration = { $gt: new Date() };
const sessionQuery = Session.findOne(query);
if (options.lean) {
return await sessionQuery.lean();
}
return await sessionQuery.exec();
} catch (error) {
logger.error('[findSession] Error finding session:', error);
throw new SessionError('Failed to find session', 'FIND_SESSION_FAILED');
}
};
/**
* Updates session expiration
* @param {Session|string} session - The session or session ID to update
* @param {Date} [newExpiration] - Optional new expiration date
* @returns {Promise<Session>}
* @throws {SessionError}
*/
const updateExpiration = async (session, newExpiration) => {
try {
const sessionDoc = typeof session === 'string' ? await Session.findById(session) : session;
if (!sessionDoc) {
throw new SessionError('Session not found', 'SESSION_NOT_FOUND');
}
sessionDoc.expiration = newExpiration || new Date(Date.now() + expires);
return await sessionDoc.save();
} catch (error) {
logger.error('[updateExpiration] Error updating session:', error);
throw new SessionError('Failed to update session expiration', 'UPDATE_EXPIRATION_FAILED');
}
};
/**
* Deletes a session by refresh token or session ID
* @param {Object} params - Delete parameters
* @param {string} [params.refreshToken] - The refresh token of the session to delete
* @param {string} [params.sessionId] - The ID of the session to delete
* @returns {Promise<Object>}
* @throws {SessionError}
*/
const deleteSession = async (params) => {
try {
if (!params.refreshToken && !params.sessionId) {
throw new SessionError(
'Either refreshToken or sessionId is required',
'INVALID_DELETE_PARAMS',
);
}
const query = {};
if (params.refreshToken) {
query.refreshTokenHash = await hashToken(params.refreshToken);
}
if (params.sessionId) {
query._id = params.sessionId;
}
const result = await Session.deleteOne(query);
if (result.deletedCount === 0) {
logger.warn('[deleteSession] No session found to delete');
}
return result;
} catch (error) {
logger.error('[deleteSession] Error deleting session:', error);
throw new SessionError('Failed to delete session', 'DELETE_SESSION_FAILED');
}
};
/**
* Deletes all sessions for a user
* @param {string} userId - The ID of the user
* @param {Object} [options] - Additional options
* @param {boolean} [options.excludeCurrentSession] - Whether to exclude the current session
* @param {string} [options.currentSessionId] - The ID of the current session to exclude
* @returns {Promise<Object>}
* @throws {SessionError}
*/
const deleteAllUserSessions = async (userId, options = {}) => {
try {
if (!userId) {
throw new SessionError('User ID is required', 'INVALID_USER_ID');
}
// Extract userId if it's passed as an object
const userIdString = userId.userId || userId;
if (!mongoose.Types.ObjectId.isValid(userIdString)) {
throw new SessionError('Invalid user ID format', 'INVALID_USER_ID_FORMAT');
}
const query = { user: userIdString };
if (options.excludeCurrentSession && options.currentSessionId) {
query._id = { $ne: options.currentSessionId };
}
const result = await Session.deleteMany(query);
if (result.deletedCount > 0) {
logger.debug(
`[deleteAllUserSessions] Deleted ${result.deletedCount} sessions for user ${userIdString}.`,
);
}
return result;
} catch (error) {
logger.error('[deleteAllUserSessions] Error deleting user sessions:', error);
throw new SessionError('Failed to delete user sessions', 'DELETE_ALL_SESSIONS_FAILED');
}
};
/**
* Generates a refresh token for a session
* @param {Session} session - The session to generate a token for
* @returns {Promise<string>}
* @throws {SessionError}
*/
const generateRefreshToken = async (session) => {
if (!session || !session.user) {
throw new SessionError('Invalid session object', 'INVALID_SESSION');
}
try {
const expiresIn = session.expiration ? session.expiration.getTime() : Date.now() + expires;
if (!session.expiration) {
session.expiration = new Date(expiresIn);
}
const refreshToken = await signPayload({
payload: { id: this.user },
payload: {
id: session.user,
sessionId: session._id,
},
secret: process.env.JWT_REFRESH_SECRET,
expirationTime: Math.floor((expiresIn - Date.now()) / 1000),
});
this.refreshTokenHash = await hashToken(refreshToken);
await this.save();
session.refreshTokenHash = await hashToken(refreshToken);
await session.save();
return refreshToken;
} catch (error) {
logger.error(
'Error generating refresh token. Is a `JWT_REFRESH_SECRET` set in the .env file?\n\n',
error,
);
throw error;
logger.error('[generateRefreshToken] Error generating refresh token:', error);
throw new SessionError('Failed to generate refresh token', 'GENERATE_TOKEN_FAILED');
}
};
sessionSchema.statics.deleteAllUserSessions = async function (userId) {
/**
* Counts active sessions for a user
* @param {string} userId - The ID of the user
* @returns {Promise<number>}
* @throws {SessionError}
*/
const countActiveSessions = async (userId) => {
try {
if (!userId) {
return;
}
const result = await this.deleteMany({ user: userId });
if (result && result?.deletedCount > 0) {
logger.debug(
`[deleteAllUserSessions] Deleted ${result.deletedCount} sessions for user ${userId}.`,
);
throw new SessionError('User ID is required', 'INVALID_USER_ID');
}
return await Session.countDocuments({
user: userId,
expiration: { $gt: new Date() },
});
} catch (error) {
logger.error('[deleteAllUserSessions] Error in deleting user sessions:', error);
throw error;
logger.error('[countActiveSessions] Error counting active sessions:', error);
throw new SessionError('Failed to count active sessions', 'COUNT_SESSIONS_FAILED');
}
};
const Session = mongoose.model('Session', sessionSchema);
module.exports = Session;
module.exports = {
createSession,
findSession,
updateExpiration,
deleteSession,
deleteAllUserSessions,
generateRefreshToken,
countActiveSessions,
SessionError,
};
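A hypothetical consumer of the refactored API above, assuming an Express-style res object and the module path used elsewhere in this repo:

const {
  createSession,
  findSession,
  deleteAllUserSessions,
} = require('~/models/Session');

// On login: mint a session and hand the refresh token to the client.
async function login(userId, res) {
  const { session, refreshToken } = await createSession(userId);
  res.cookie('refreshToken', refreshToken, {
    httpOnly: true,
    expires: session.expiration,
  });
}

// On refresh: only unexpired sessions match, and lookup is by token hash.
async function resolveSession(refreshToken) {
  return findSession({ refreshToken });
}

// On "log out everywhere except here":
async function logoutOtherDevices(userId, currentSessionId) {
  return deleteAllUserSessions(userId, {
    excludeCurrentSession: true,
    currentSessionId,
  });
}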

View File

@@ -1,82 +1,71 @@
const { nanoid } = require('nanoid');
const { Constants } = require('librechat-data-provider');
const { Conversation } = require('~/models/Conversation');
const SharedLink = require('./schema/shareSchema');
const { getMessages } = require('./Message');
const logger = require('~/config/winston');
/**
* Anonymizes a conversation ID
* @returns {string} The anonymized conversation ID
*/
function anonymizeConvoId() {
return `convo_${nanoid()}`;
class ShareServiceError extends Error {
constructor(message, code) {
super(message);
this.name = 'ShareServiceError';
this.code = code;
}
}
/**
* Anonymizes an assistant ID
* @returns {string} The anonymized assistant ID
*/
function anonymizeAssistantId() {
return `a_${nanoid()}`;
}
const memoizedAnonymizeId = (prefix) => {
const memo = new Map();
return (id) => {
if (!memo.has(id)) {
memo.set(id, `${prefix}_${nanoid()}`);
}
return memo.get(id);
};
};
/**
* Anonymizes a message ID
* @param {string} id - The original message ID
* @returns {string} The anonymized message ID
*/
function anonymizeMessageId(id) {
return id === Constants.NO_PARENT ? id : `msg_${nanoid()}`;
}
const anonymizeConvoId = memoizedAnonymizeId('convo');
const anonymizeAssistantId = memoizedAnonymizeId('a');
const anonymizeMessageId = (id) =>
id === Constants.NO_PARENT ? id : memoizedAnonymizeId('msg')(id);
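The memoization matters for threading: within one share, a given original id must always map to the same pseudonym so parentMessageId links survive anonymization. For instance:

const anonymizeMsg = memoizedAnonymizeId('msg');
const parent = anonymizeMsg('original-parent-id'); // assumed id
// The same input id resolves to the identical pseudonym on later calls,
// so anonymized parent references still point at the right message:
console.log(anonymizeMsg('original-parent-id') === parent); // true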
/**
* Anonymizes a conversation object
* @param {object} conversation - The conversation object
* @returns {object} The anonymized conversation object
*/
function anonymizeConvo(conversation) {
if (!conversation) {
return null;
}
const newConvo = { ...conversation };
if (newConvo.assistant_id) {
newConvo.assistant_id = anonymizeAssistantId();
newConvo.assistant_id = anonymizeAssistantId(newConvo.assistant_id);
}
return newConvo;
}
/**
* Anonymizes messages in a conversation
* @param {TMessage[]} messages - The original messages
* @param {string} newConvoId - The new conversation ID
* @returns {TMessage[]} The anonymized messages
*/
function anonymizeMessages(messages, newConvoId) {
if (!Array.isArray(messages)) {
return [];
}
const idMap = new Map();
return messages.map((message) => {
const newMessageId = anonymizeMessageId(message.messageId);
idMap.set(message.messageId, newMessageId);
const anonymizedMessage = Object.assign(message, {
return {
...message,
messageId: newMessageId,
parentMessageId:
idMap.get(message.parentMessageId) || anonymizeMessageId(message.parentMessageId),
conversationId: newConvoId,
});
if (anonymizedMessage.model && anonymizedMessage.model.startsWith('asst_')) {
anonymizedMessage.model = anonymizeAssistantId();
}
return anonymizedMessage;
model: message.model?.startsWith('asst_')
? anonymizeAssistantId(message.model)
: message.model,
};
});
}
/**
* Retrieves shared messages for a given share ID
* @param {string} shareId - The share ID
* @returns {Promise<object|null>} The shared conversation data or null if not found
*/
async function getSharedMessages(shareId) {
try {
const share = await SharedLink.findOne({ shareId })
const share = await SharedLink.findOne({ shareId, isPublic: true })
.populate({
path: 'messages',
select: '-_id -__v -user',
@@ -84,165 +73,264 @@ async function getSharedMessages(shareId) {
.select('-_id -__v -user')
.lean();
if (!share || !share.conversationId || !share.isPublic) {
if (!share?.conversationId || !share.isPublic) {
return null;
}
const newConvoId = anonymizeConvoId();
return Object.assign(share, {
const newConvoId = anonymizeConvoId(share.conversationId);
const result = {
...share,
conversationId: newConvoId,
messages: anonymizeMessages(share.messages, newConvoId),
});
};
return result;
} catch (error) {
logger.error('[getShare] Error getting share link', error);
throw new Error('Error getting share link');
logger.error('[getShare] Error getting share link', {
error: error.message,
shareId,
});
throw new ShareServiceError('Error getting share link', 'SHARE_FETCH_ERROR');
}
}
/**
* Retrieves shared links for a user
* @param {string} user - The user ID
* @param {number} [pageNumber=1] - The page number
* @param {number} [pageSize=25] - The page size
* @param {boolean} [isPublic=true] - Whether to retrieve public links only
* @returns {Promise<object>} The shared links and pagination data
*/
async function getSharedLinks(user, pageNumber = 1, pageSize = 25, isPublic = true) {
const query = { user, isPublic };
async function getSharedLinks(user, pageParam, pageSize, isPublic, sortBy, sortDirection, search) {
try {
const [totalConvos, sharedLinks] = await Promise.all([
SharedLink.countDocuments(query),
SharedLink.find(query)
.sort({ updatedAt: -1 })
.skip((pageNumber - 1) * pageSize)
.limit(pageSize)
.select('-_id -__v -user')
.lean(),
]);
const query = { user, isPublic };
const totalPages = Math.ceil((totalConvos || 1) / pageSize);
if (pageParam) {
if (sortDirection === 'desc') {
query[sortBy] = { $lt: pageParam };
} else {
query[sortBy] = { $gt: pageParam };
}
}
if (search && search.trim()) {
try {
const searchResults = await Conversation.meiliSearch(search);
if (!searchResults?.hits?.length) {
return {
links: [],
nextCursor: undefined,
hasNextPage: false,
};
}
const conversationIds = searchResults.hits.map((hit) => hit.conversationId);
query['conversationId'] = { $in: conversationIds };
} catch (searchError) {
logger.error('[getSharedLinks] Meilisearch error', {
error: searchError.message,
user,
});
return {
links: [],
nextCursor: undefined,
hasNextPage: false,
};
}
}
const sort = {};
sort[sortBy] = sortDirection === 'desc' ? -1 : 1;
if (Array.isArray(query.conversationId)) {
query.conversationId = { $in: query.conversationId };
}
const sharedLinks = await SharedLink.find(query)
.sort(sort)
.limit(pageSize + 1)
.select('-__v -user')
.lean();
const hasNextPage = sharedLinks.length > pageSize;
const links = sharedLinks.slice(0, pageSize);
const nextCursor = hasNextPage ? links[links.length - 1][sortBy] : undefined;
return {
sharedLinks,
pages: totalPages,
pageNumber,
pageSize,
links: links.map((link) => ({
shareId: link.shareId,
title: link?.title || 'Untitled',
isPublic: link.isPublic,
createdAt: link.createdAt,
conversationId: link.conversationId,
})),
nextCursor,
hasNextPage,
};
} catch (error) {
logger.error('[getShareByPage] Error getting shares', error);
throw new Error('Error getting shares');
}
}
/**
* Creates a new shared link
* @param {string} user - The user ID
* @param {object} shareData - The share data
* @param {string} shareData.conversationId - The conversation ID
* @returns {Promise<object>} The created shared link
*/
async function createSharedLink(user, { conversationId, ...shareData }) {
try {
const share = await SharedLink.findOne({ conversationId }).select('-_id -__v -user').lean();
if (share) {
const newConvoId = anonymizeConvoId();
const sharedConvo = anonymizeConvo(share);
return Object.assign(sharedConvo, {
conversationId: newConvoId,
messages: anonymizeMessages(share.messages, newConvoId),
});
}
const shareId = nanoid();
const messages = await getMessages({ conversationId });
const update = { ...shareData, shareId, messages, user };
const newShare = await SharedLink.findOneAndUpdate({ conversationId, user }, update, {
new: true,
upsert: true,
}).lean();
const newConvoId = anonymizeConvoId();
const sharedConvo = anonymizeConvo(newShare);
return Object.assign(sharedConvo, {
conversationId: newConvoId,
messages: anonymizeMessages(newShare.messages, newConvoId),
logger.error('[getSharedLinks] Error getting shares', {
error: error.message,
user,
});
} catch (error) {
logger.error('[createSharedLink] Error creating shared link', error);
throw new Error('Error creating shared link');
throw new ShareServiceError('Error getting shares', 'SHARES_FETCH_ERROR');
}
}
/**
* Updates an existing shared link
* @param {string} user - The user ID
* @param {object} shareData - The share data to update
* @param {string} shareData.conversationId - The conversation ID
* @returns {Promise<object>} The updated shared link
*/
async function updateSharedLink(user, { conversationId, ...shareData }) {
try {
const share = await SharedLink.findOne({ conversationId }).select('-_id -__v -user').lean();
if (!share) {
return { message: 'Share not found' };
}
const messages = await getMessages({ conversationId });
const update = { ...shareData, messages, user };
const updatedShare = await SharedLink.findOneAndUpdate({ conversationId, user }, update, {
new: true,
upsert: false,
}).lean();
const newConvoId = anonymizeConvoId();
const sharedConvo = anonymizeConvo(updatedShare);
return Object.assign(sharedConvo, {
conversationId: newConvoId,
messages: anonymizeMessages(updatedShare.messages, newConvoId),
});
} catch (error) {
logger.error('[updateSharedLink] Error updating shared link', error);
throw new Error('Error updating shared link');
}
}
/**
* Deletes a shared link
* @param {string} user - The user ID
* @param {object} params - The deletion parameters
* @param {string} params.shareId - The share ID to delete
* @returns {Promise<object>} The result of the deletion
*/
async function deleteSharedLink(user, { shareId }) {
try {
const result = await SharedLink.findOneAndDelete({ shareId, user });
return result ? { message: 'Share deleted successfully' } : { message: 'Share not found' };
} catch (error) {
logger.error('[deleteSharedLink] Error deleting shared link', error);
throw new Error('Error deleting shared link');
}
}
/**
* Deletes all shared links for a specific user
* @param {string} user - The user ID
* @returns {Promise<object>} The result of the deletion
*/
async function deleteAllSharedLinks(user) {
try {
const result = await SharedLink.deleteMany({ user });
return {
message: 'All shared links have been deleted successfully',
message: 'All shared links deleted successfully',
deletedCount: result.deletedCount,
};
} catch (error) {
logger.error('[deleteAllSharedLinks] Error deleting shared links', error);
throw new Error('Error deleting shared links');
logger.error('[deleteAllSharedLinks] Error deleting shared links', {
error: error.message,
user,
});
throw new ShareServiceError('Error deleting shared links', 'BULK_DELETE_ERROR');
}
}
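An editor's sketch of consuming the cursor-based getSharedLinks above, assuming a 'createdAt' sort in descending order; each call fetches pageSize + 1 rows internally and hasNextPage signals whether to continue:

async function listAllSharedLinks(user) {
  const pageSize = 25;
  const all = [];
  let cursor;
  let hasNext = true;
  while (hasNext) {
    const { links, nextCursor, hasNextPage } = await getSharedLinks(
      user,
      cursor,
      pageSize,
      true,
      'createdAt',
      'desc',
      '',
    );
    all.push(...links);
    cursor = nextCursor;
    hasNext = hasNextPage;
  }
  return all;
}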
async function createSharedLink(user, conversationId) {
if (!user || !conversationId) {
throw new ShareServiceError('Missing required parameters', 'INVALID_PARAMS');
}
try {
const [existingShare, conversationMessages] = await Promise.all([
SharedLink.findOne({ conversationId, isPublic: true }).select('-_id -__v -user').lean(),
getMessages({ conversationId }),
]);
if (existingShare && existingShare.isPublic) {
throw new ShareServiceError('Share already exists', 'SHARE_EXISTS');
} else if (existingShare) {
await SharedLink.deleteOne({ conversationId });
}
const conversation = await Conversation.findOne({ conversationId }).lean();
const title = conversation?.title || 'Untitled';
const shareId = nanoid();
await SharedLink.create({
shareId,
conversationId,
messages: conversationMessages,
title,
user,
});
return { shareId, conversationId };
} catch (error) {
logger.error('[createSharedLink] Error creating shared link', {
error: error.message,
user,
conversationId,
});
throw new ShareServiceError('Error creating shared link', 'SHARE_CREATE_ERROR');
}
}
async function getSharedLink(user, conversationId) {
if (!user || !conversationId) {
throw new ShareServiceError('Missing required parameters', 'INVALID_PARAMS');
}
try {
const share = await SharedLink.findOne({ conversationId, user, isPublic: true })
.select('shareId -_id')
.lean();
if (!share) {
return { shareId: null, success: false };
}
return { shareId: share.shareId, success: true };
} catch (error) {
logger.error('[getSharedLink] Error getting shared link', {
error: error.message,
user,
conversationId,
});
throw new ShareServiceError('Error getting shared link', 'SHARE_FETCH_ERROR');
}
}
async function updateSharedLink(user, shareId) {
if (!user || !shareId) {
throw new ShareServiceError('Missing required parameters', 'INVALID_PARAMS');
}
try {
const share = await SharedLink.findOne({ shareId }).select('-_id -__v -user').lean();
if (!share) {
throw new ShareServiceError('Share not found', 'SHARE_NOT_FOUND');
}
const [updatedMessages] = await Promise.all([
getMessages({ conversationId: share.conversationId }),
]);
const newShareId = nanoid();
const update = {
messages: updatedMessages,
user,
shareId: newShareId,
};
const updatedShare = await SharedLink.findOneAndUpdate({ shareId, user }, update, {
new: true,
upsert: false,
runValidators: true,
}).lean();
if (!updatedShare) {
throw new ShareServiceError('Share update failed', 'SHARE_UPDATE_ERROR');
}
anonymizeConvo(updatedShare);
return { shareId: newShareId, conversationId: updatedShare.conversationId };
} catch (error) {
logger.error('[updateSharedLink] Error updating shared link', {
error: error.message,
user,
shareId,
});
throw new ShareServiceError(
error.code === 'SHARE_UPDATE_ERROR' ? error.message : 'Error updating shared link',
error.code || 'SHARE_UPDATE_ERROR',
);
}
}
async function deleteSharedLink(user, shareId) {
if (!user || !shareId) {
throw new ShareServiceError('Missing required parameters', 'INVALID_PARAMS');
}
try {
const result = await SharedLink.findOneAndDelete({ shareId, user }).lean();
if (!result) {
return null;
}
return {
success: true,
shareId,
message: 'Share deleted successfully',
};
} catch (error) {
logger.error('[deleteSharedLink] Error deleting shared link', {
error: error.message,
user,
shareId,
});
throw new ShareServiceError('Error deleting shared link', 'SHARE_DELETE_ERROR');
}
}
module.exports = {
SharedLink,
getSharedLink,
getSharedLinks,
createSharedLink,
updateSharedLink,

View File

@@ -1,5 +1,6 @@
const tokenSchema = require('./schema/tokenSchema');
const mongoose = require('mongoose');
const { encryptV2 } = require('~/server/utils/crypto');
const tokenSchema = require('./schema/tokenSchema');
const { logger } = require('~/config');
/**
@@ -7,6 +8,32 @@ const { logger } = require('~/config');
* @type {mongoose.Model}
*/
const Token = mongoose.model('Token', tokenSchema);
/**
* Fixes the indexes for the Token collection from legacy TTL indexes to the new expiresAt index.
*/
async function fixIndexes() {
try {
const indexes = await Token.collection.indexes();
logger.debug('Existing Token Indexes:', JSON.stringify(indexes, null, 2));
const unwantedTTLIndexes = indexes.filter(
(index) => index.key.createdAt === 1 && index.expireAfterSeconds !== undefined,
);
if (unwantedTTLIndexes.length === 0) {
logger.debug('No unwanted Token indexes found.');
return;
}
for (const index of unwantedTTLIndexes) {
logger.debug(`Dropping unwanted Token index: ${index.name}`);
await Token.collection.dropIndex(index.name);
logger.debug(`Dropped Token index: ${index.name}`);
}
logger.debug('Token index cleanup completed successfully.');
} catch (error) {
logger.error('An error occurred while fixing Token indexes:', error);
}
}
fixIndexes();
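For reference, the legacy index this targets looks like the following in Token.collection.indexes() output (illustrative values):

// { v: 2, key: { createdAt: 1 }, name: 'createdAt_1', expireAfterSeconds: 900 }
// The replacement is the schema-level TTL index on expiresAt with
// expireAfterSeconds: 0, which expires each token at its own stored date.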
/**
* Creates a new Token instance.
@@ -29,8 +56,7 @@ async function createToken(tokenData) {
expiresAt,
};
const newToken = new Token(newTokenData);
return await newToken.save();
return await Token.create(newTokenData);
} catch (error) {
logger.debug('An error occurred while creating token:', error);
throw error;
@@ -42,7 +68,8 @@ async function createToken(tokenData) {
* @param {Object} query - The query to match against.
* @param {mongoose.Types.ObjectId|String} query.userId - The ID of the user.
* @param {String} query.token - The token value.
* @param {String} query.email - The email of the user.
* @param {String} [query.email] - The email of the user.
* @param {String} [query.identifier] - Unique, alternative identifier for the token.
* @returns {Promise<Object|null>} The matched Token document, or null if not found.
* @throws Will throw an error if the find operation fails.
*/
@@ -59,6 +86,9 @@ async function findToken(query) {
if (query.email) {
conditions.push({ email: query.email });
}
if (query.identifier) {
conditions.push({ identifier: query.identifier });
}
const token = await Token.findOne({
$and: conditions,
@@ -76,6 +106,8 @@ async function findToken(query) {
* @param {Object} query - The query to match against.
* @param {mongoose.Types.ObjectId|String} query.userId - The ID of the user.
* @param {String} query.token - The token value.
* @param {String} [query.email] - The email of the user.
* @param {String} [query.identifier] - Unique, alternative identifier for the token.
* @param {Object} updateData - The data to update the Token with.
* @returns {Promise<mongoose.Document|null>} The updated Token document, or null if not found.
* @throws Will throw an error if the update operation fails.
@@ -94,14 +126,20 @@ async function updateToken(query, updateData) {
* @param {Object} query - The query to match against.
* @param {mongoose.Types.ObjectId|String} query.userId - The ID of the user.
* @param {String} query.token - The token value.
* @param {String} query.email - The email of the user.
* @param {String} [query.email] - The email of the user.
* @param {String} [query.identifier] - Unique, alternative identifier for the token.
* @returns {Promise<Object>} The result of the delete operation.
* @throws Will throw an error if the delete operation fails.
*/
async function deleteTokens(query) {
try {
return await Token.deleteMany({
$or: [{ userId: query.userId }, { token: query.token }, { email: query.email }],
$or: [
{ userId: query.userId },
{ token: query.token },
{ email: query.email },
{ identifier: query.identifier },
],
});
} catch (error) {
logger.debug('An error occurred while deleting tokens:', error);
@@ -109,9 +147,46 @@ async function deleteTokens(query) {
}
}
/**
* Handles the OAuth token by creating or updating the token.
* @param {object} fields
* @param {string} fields.userId - The user's ID.
* @param {string} fields.token - The full token to store.
* @param {string} fields.identifier - Unique, alternative identifier for the token.
* @param {number} fields.expiresIn - The number of seconds until the token expires.
* @param {object} fields.metadata - Additional metadata to store with the token.
* @param {string} [fields.type="oauth"] - The type of token. Default is 'oauth'.
*/
async function handleOAuthToken({
token,
userId,
identifier,
expiresIn,
metadata,
type = 'oauth',
}) {
const encryptedToken = await encryptV2(token);
const tokenData = {
type,
userId,
metadata,
identifier,
token: encryptedToken,
expiresIn: parseInt(expiresIn, 10) || 3600,
};
const existingToken = await findToken({ userId, identifier });
if (existingToken) {
return await updateToken({ identifier }, tokenData);
} else {
return await createToken(tokenData);
}
}
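A hypothetical OAuth callback using handleOAuthToken; the tokenResponse field names (access_token, expires_in) follow the common OAuth 2.0 shape and are assumptions for illustration:

async function onOAuthCallback(userId, provider, tokenResponse) {
  return handleOAuthToken({
    userId,
    token: tokenResponse.access_token,
    identifier: `${provider}:${userId}`, // assumed identifier scheme
    expiresIn: tokenResponse.expires_in,
    metadata: { provider },
  });
}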
module.exports = {
createToken,
findToken,
createToken,
updateToken,
deleteTokens,
handleOAuthToken,
};

View File

@@ -27,6 +27,9 @@ transactionSchema.methods.calculateTokenValue = function () {
*/
transactionSchema.statics.create = async function (txData) {
const Transaction = this;
if (txData.rawAmount != null && isNaN(txData.rawAmount)) {
return;
}
const transaction = new Transaction(txData);
transaction.endpointTokenConfig = txData.endpointTokenConfig;

View File

@@ -1,5 +1,6 @@
const mongoose = require('mongoose');
const { MongoMemoryServer } = require('mongodb-memory-server');
const { Transaction } = require('./Transaction');
const Balance = require('./Balance');
const { spendTokens, spendStructuredTokens } = require('./spendTokens');
const { getMultiplier, getCacheMultiplier } = require('./tx');
@@ -346,3 +347,28 @@ describe('Structured Token Spending Tests', () => {
expect(result.completion.completion).toBeCloseTo(-50 * 15 * 1.15, 0); // Assuming multiplier is 15 and cancelRate is 1.15
});
});
describe('NaN Handling Tests', () => {
test('should skip transaction creation when rawAmount is NaN', async () => {
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000;
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'gpt-3.5-turbo';
const txData = {
user: userId,
conversationId: 'test-conversation-id',
model,
context: 'test',
endpointTokenConfig: null,
rawAmount: NaN,
tokenType: 'prompt',
};
const result = await Transaction.create(txData);
expect(result).toBeUndefined();
const balance = await Balance.findOne({ user: userId });
expect(balance.tokenCredits).toBe(initialBalance);
});
});

View File

@@ -220,4 +220,94 @@ describe('Conversation Structure Tests', () => {
}
expect(currentNode.children.length).toBe(0); // Last message should have no children
});
test('Random order dates between parent and children messages', async () => {
const userId = 'testUser';
const conversationId = 'testConversation';
// Create messages with deliberately out-of-order timestamps but sequential creation
const messages = [
{
messageId: 'parent',
parentMessageId: null,
text: 'Parent Message',
createdAt: new Date('2023-01-01T00:00:00Z'), // Make parent earliest
},
{
messageId: 'child1',
parentMessageId: 'parent',
text: 'Child Message 1',
createdAt: new Date('2023-01-01T00:01:00Z'),
},
{
messageId: 'child2',
parentMessageId: 'parent',
text: 'Child Message 2',
createdAt: new Date('2023-01-01T00:02:00Z'),
},
{
messageId: 'grandchild1',
parentMessageId: 'child1',
text: 'Grandchild Message 1',
createdAt: new Date('2023-01-01T00:03:00Z'),
},
];
// Add common properties to all messages
messages.forEach((msg) => {
msg.conversationId = conversationId;
msg.user = userId;
msg.isCreatedByUser = false;
msg.error = false;
msg.unfinished = false;
});
// Save messages with overrideTimestamp set to true
await bulkSaveMessages(messages, true);
// Retrieve messages
const retrievedMessages = await getMessages({ conversationId, user: userId });
// Debug log to see what's being returned
console.log(
'Retrieved Messages:',
retrievedMessages.map((msg) => ({
messageId: msg.messageId,
parentMessageId: msg.parentMessageId,
createdAt: msg.createdAt,
})),
);
// Build tree
const tree = buildTree({ messages: retrievedMessages });
// Debug log to see the tree structure
console.log(
'Tree structure:',
tree.map((root) => ({
messageId: root.messageId,
children: root.children.map((child) => ({
messageId: child.messageId,
children: child.children.map((grandchild) => ({
messageId: grandchild.messageId,
})),
})),
})),
);
// Verify the structure before making assertions
expect(retrievedMessages.length).toBe(4); // Should have all 4 messages
// Check if messages are properly linked
const parentMsg = retrievedMessages.find((msg) => msg.messageId === 'parent');
expect(parentMsg.parentMessageId).toBeNull(); // Parent should have null parentMessageId
const childMsg1 = retrievedMessages.find((msg) => msg.messageId === 'child1');
expect(childMsg1.parentMessageId).toBe('parent');
// Then check tree structure
expect(tree.length).toBe(1); // Should have only one root message
expect(tree[0].messageId).toBe('parent');
expect(tree[0].children.length).toBe(2); // Should have two children
});
});

View File

@@ -26,10 +26,18 @@ const {
deleteMessagesSince,
deleteMessages,
} = require('./Message');
const {
createSession,
findSession,
updateExpiration,
deleteSession,
deleteAllUserSessions,
generateRefreshToken,
countActiveSessions,
} = require('./Session');
const { getConvoTitle, getConvo, saveConvo, deleteConvos } = require('./Conversation');
const { getPreset, getPresets, savePreset, deletePresets } = require('./Preset');
const { createToken, findToken, updateToken, deleteTokens } = require('./Token');
const Session = require('./Session');
const Balance = require('./Balance');
const User = require('./User');
const Key = require('./Key');
@@ -75,8 +83,15 @@ module.exports = {
updateToken,
deleteTokens,
createSession,
findSession,
updateExpiration,
deleteSession,
deleteAllUserSessions,
generateRefreshToken,
countActiveSessions,
User,
Key,
Session,
Balance,
};

View File

@@ -35,6 +35,9 @@ const agentSchema = mongoose.Schema(
model_parameters: {
type: Object,
},
artifacts: {
type: String,
},
access_level: {
type: Number,
},

View File

@@ -28,6 +28,10 @@ const assistantSchema = mongoose.Schema(
},
file_ids: { type: [String], default: undefined },
actions: { type: [String], default: undefined },
append_current_datetime: {
type: Boolean,
default: false,
},
},
{
timestamps: true,

View File

@@ -29,22 +29,6 @@ const convoSchema = mongoose.Schema(
agent_id: {
type: String,
},
// for bingAI only
bingConversationId: {
type: String,
},
jailbreakConversationId: {
type: String,
},
conversationSignature: {
type: String,
},
clientId: {
type: String,
},
invocationId: {
type: Number,
},
tags: {
type: [String],
default: [],
@@ -53,6 +37,9 @@ const convoSchema = mongoose.Schema(
files: {
type: [String],
},
expiredAt: {
type: Date,
},
},
{ timestamps: true },
);
@@ -66,6 +53,8 @@ if (process.env.MEILI_HOST && process.env.MEILI_MASTER_KEY) {
});
}
// Create TTL index
convoSchema.index({ expiredAt: 1 }, { expireAfterSeconds: 0 });
convoSchema.index({ createdAt: 1, updatedAt: 1 });
convoSchema.index({ conversationId: 1, user: 1 }, { unique: true });
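A minimal sketch of the TTL pattern these indexes rely on: with expireAfterSeconds: 0, MongoDB's TTL monitor (which runs roughly every 60 seconds) deletes a document once its expiredAt date has passed, and documents where expiredAt is null or missing are never expired.

const mongoose = require('mongoose');

const demoSchema = new mongoose.Schema({ expiredAt: { type: Date } });
demoSchema.index({ expiredAt: 1 }, { expireAfterSeconds: 0 });
const Demo = mongoose.model('Demo', demoSchema);

async function createTemporary() {
  const expiredAt = new Date();
  expiredAt.setDate(expiredAt.getDate() + 30); // same 30-day window as the routes above
  return Demo.create({ expiredAt }); // removed automatically ~30 days from now
}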

View File

@@ -1,5 +1,5 @@
const conversationPreset = {
// endpoint: [azureOpenAI, openAI, bingAI, anthropic, chatGPTBrowser]
// endpoint: [azureOpenAI, openAI, anthropic, chatGPTBrowser]
endpoint: {
type: String,
default: null,
@@ -61,19 +61,6 @@ const conversationPreset = {
type: Number,
required: false,
},
// for bingai only
jailbreak: {
type: Boolean,
},
context: {
type: String,
},
systemMessage: {
type: String,
},
toneStyle: {
type: String,
},
file_ids: { type: [{ type: String }], default: undefined },
// deprecated
resendImages: {
@@ -130,6 +117,10 @@ const conversationPreset = {
max_tokens: {
type: Number,
},
/** omni models only */
reasoning_effort: {
type: String,
},
};
const agentOptions = {
@@ -179,12 +170,6 @@ const agentOptions = {
type: Number,
required: false,
},
context: {
type: String,
},
systemMessage: {
type: String,
},
};
module.exports = {

View File

@@ -16,7 +16,6 @@ const keySchema = mongoose.Schema({
},
expiresAt: {
type: Date,
expires: 0,
},
});

View File

@@ -62,10 +62,6 @@ const messageSchema = mongoose.Schema(
required: true,
default: false,
},
isEdited: {
type: Boolean,
default: false,
},
unfinished: {
type: Boolean,
default: false,
@@ -138,6 +134,9 @@ const messageSchema = mongoose.Schema(
default: undefined,
},
*/
expiredAt: {
type: Date,
},
},
{ timestamps: true },
);
@@ -150,7 +149,7 @@ if (process.env.MEILI_HOST && process.env.MEILI_MASTER_KEY) {
primaryKey: 'messageId',
});
}
messageSchema.index({ expiredAt: 1 }, { expireAfterSeconds: 0 });
messageSchema.index({ createdAt: 1 });
messageSchema.index({ messageId: 1, user: 1 }, { unique: true });

View File

@@ -0,0 +1,20 @@
const mongoose = require('mongoose');
const sessionSchema = mongoose.Schema({
refreshTokenHash: {
type: String,
required: true,
},
expiration: {
type: Date,
required: true,
expires: 0,
},
user: {
type: mongoose.Schema.Types.ObjectId,
ref: 'User',
required: true,
},
});
module.exports = sessionSchema;

View File

@@ -20,14 +20,6 @@ const shareSchema = mongoose.Schema(
index: true,
},
isPublic: {
type: Boolean,
default: false,
},
isVisible: {
type: Boolean,
default: false,
},
isAnonymous: {
type: Boolean,
default: true,
},

View File

@@ -10,6 +10,10 @@ const tokenSchema = new Schema({
email: {
type: String,
},
type: {
type: String,
},
identifier: {
type: String,
},
token: {
type: String,
required: true,
@@ -23,6 +27,10 @@ const tokenSchema = new Schema({
type: Date,
required: true,
},
metadata: {
type: Map,
of: Schema.Types.Mixed,
},
});
tokenSchema.index({ expiresAt: 1 }, { expireAfterSeconds: 0 });

View File

@@ -23,6 +23,7 @@ const { SystemRoles } = require('librechat-data-provider');
* @property {string} [ldapId] - Optional LDAP ID for the user
* @property {string} [githubId] - Optional GitHub ID for the user
* @property {string} [discordId] - Optional Discord ID for the user
* @property {string} [appleId] - Optional Apple ID for the user
* @property {Array} [plugins=[]] - List of plugins used by the user
* @property {Array.<MongoSession>} [refreshToken] - List of sessions with refresh tokens
* @property {Date} [expiresAt] - Optional expiration date of the file
@@ -83,33 +84,31 @@ const userSchema = mongoose.Schema(
},
googleId: {
type: String,
unique: true,
sparse: true,
index: true,
},
facebookId: {
type: String,
unique: true,
sparse: true,
index: true,
},
openidId: {
type: String,
unique: true,
sparse: true,
index: true,
},
ldapId: {
type: String,
unique: true,
sparse: true,
index: true,
},
githubId: {
type: String,
unique: true,
sparse: true,
index: true,
},
discordId: {
type: String,
unique: true,
sparse: true,
index: true,
},
appleId: {
type: String,
index: true,
},
plugins: {
type: Array,

View File

@@ -1,22 +1,50 @@
const { matchModelName } = require('../utils');
const defaultRate = 6;
/** AWS Bedrock pricing */
/**
* AWS Bedrock pricing
* source: https://aws.amazon.com/bedrock/pricing/
* */
const bedrockValues = {
// Basic llama2 patterns
'llama2-13b': { prompt: 0.75, completion: 1.0 },
'llama2-70b': { prompt: 1.95, completion: 2.56 },
'llama3-8b': { prompt: 0.3, completion: 0.6 },
'llama3-70b': { prompt: 2.65, completion: 3.5 },
'llama3-1-8b': { prompt: 0.3, completion: 0.6 },
'llama3-1-70b': { prompt: 2.65, completion: 3.5 },
'llama3-1-405b': { prompt: 5.32, completion: 16.0 },
'llama2:13b': { prompt: 0.75, completion: 1.0 },
'llama2:70b': { prompt: 1.95, completion: 2.56 },
'llama2-70b': { prompt: 1.95, completion: 2.56 },
// Basic llama3 patterns
'llama3-8b': { prompt: 0.3, completion: 0.6 },
'llama3:8b': { prompt: 0.3, completion: 0.6 },
'llama3-70b': { prompt: 2.65, completion: 3.5 },
'llama3:70b': { prompt: 2.65, completion: 3.5 },
'llama3.1:8b': { prompt: 0.3, completion: 0.6 },
'llama3.1:70b': { prompt: 2.65, completion: 3.5 },
'llama3.1:405b': { prompt: 5.32, completion: 16.0 },
// llama3-x-Nb pattern
'llama3-1-8b': { prompt: 0.22, completion: 0.22 },
'llama3-1-70b': { prompt: 0.72, completion: 0.72 },
'llama3-1-405b': { prompt: 2.4, completion: 2.4 },
'llama3-2-1b': { prompt: 0.1, completion: 0.1 },
'llama3-2-3b': { prompt: 0.15, completion: 0.15 },
'llama3-2-11b': { prompt: 0.16, completion: 0.16 },
'llama3-2-90b': { prompt: 0.72, completion: 0.72 },
// llama3.x:Nb pattern
'llama3.1:8b': { prompt: 0.22, completion: 0.22 },
'llama3.1:70b': { prompt: 0.72, completion: 0.72 },
'llama3.1:405b': { prompt: 2.4, completion: 2.4 },
'llama3.2:1b': { prompt: 0.1, completion: 0.1 },
'llama3.2:3b': { prompt: 0.15, completion: 0.15 },
'llama3.2:11b': { prompt: 0.16, completion: 0.16 },
'llama3.2:90b': { prompt: 0.72, completion: 0.72 },
// llama-3.x-Nb pattern
'llama-3.1-8b': { prompt: 0.22, completion: 0.22 },
'llama-3.1-70b': { prompt: 0.72, completion: 0.72 },
'llama-3.1-405b': { prompt: 2.4, completion: 2.4 },
'llama-3.2-1b': { prompt: 0.1, completion: 0.1 },
'llama-3.2-3b': { prompt: 0.15, completion: 0.15 },
'llama-3.2-11b': { prompt: 0.16, completion: 0.16 },
'llama-3.2-90b': { prompt: 0.72, completion: 0.72 },
'llama-3.3-70b': { prompt: 2.65, completion: 3.5 },
'mistral-7b': { prompt: 0.15, completion: 0.2 },
'mistral-small': { prompt: 0.15, completion: 0.2 },
'mixtral-8x7b': { prompt: 0.45, completion: 0.7 },
@@ -47,8 +75,9 @@ const tokenValues = Object.assign(
'4k': { prompt: 1.5, completion: 2 },
'16k': { prompt: 3, completion: 4 },
'gpt-3.5-turbo-1106': { prompt: 1, completion: 2 },
'o3-mini': { prompt: 1.1, completion: 4.4 },
'o1-mini': { prompt: 1.1, completion: 4.4 },
'o1-preview': { prompt: 15, completion: 60 },
'o1-mini': { prompt: 3, completion: 12 },
o1: { prompt: 15, completion: 60 },
'gpt-4o-mini': { prompt: 0.15, completion: 0.6 },
'gpt-4o': { prompt: 2.5, completion: 10 },
@@ -68,11 +97,19 @@ const tokenValues = Object.assign(
'claude-': { prompt: 0.8, completion: 2.4 },
'command-r-plus': { prompt: 3, completion: 15 },
'command-r': { prompt: 0.5, completion: 1.5 },
'deepseek-reasoner': { prompt: 0.55, completion: 2.19 },
deepseek: { prompt: 0.14, completion: 0.28 },
/* cohere doesn't have rates for the older command models,
so this was from https://artificialanalysis.ai/models/command-light/providers */
command: { prompt: 0.38, completion: 0.38 },
'gemini-1.5': { prompt: 7, completion: 21 }, // May 2nd, 2024 pricing
gemini: { prompt: 0.5, completion: 1.5 }, // May 2nd, 2024 pricing
'gemini-2.0-flash-lite': { prompt: 0.075, completion: 0.3 },
'gemini-2.0-flash': { prompt: 0.1, completion: 0.7 },
'gemini-2.0': { prompt: 0, completion: 0 }, // https://ai.google.dev/pricing
'gemini-1.5-flash-8b': { prompt: 0.075, completion: 0.3 },
'gemini-1.5-flash': { prompt: 0.15, completion: 0.6 },
'gemini-1.5': { prompt: 2.5, completion: 10 },
'gemini-pro-vision': { prompt: 0.5, completion: 1.5 },
gemini: { prompt: 0.5, completion: 1.5 },
},
bedrockValues,
);

View File
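These rates are dollars per million tokens, resolved by matching a model name against the pattern keys above. The matcher itself is not shown in this diff; a rough sketch of longest-match resolution, under the assumption that the most specific containing key should win:

const rates = {
  'gemini-1.5-flash-8b': { prompt: 0.075, completion: 0.3 },
  'gemini-1.5-flash': { prompt: 0.15, completion: 0.6 },
  'gemini-1.5': { prompt: 2.5, completion: 10 },
};

function resolveRate(model) {
  const matches = Object.keys(rates)
    .filter((key) => model.includes(key))
    .sort((a, b) => b.length - a.length); // longest (most specific) key wins
  return matches.length ? rates[matches[0]] : undefined;
}

// resolveRate('gemini-1.5-flash-8b-001') -> { prompt: 0.075, completion: 0.3 }
// resolveRate('gemini-1.5-pro-latest')   -> { prompt: 2.5, completion: 10 }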

@@ -263,6 +263,37 @@ describe('AWS Bedrock Model Tests', () => {
});
});
describe('Deepseek Model Tests', () => {
const deepseekModels = ['deepseek-chat', 'deepseek-coder', 'deepseek-reasoner'];
it('should return the correct prompt multipliers for all models', () => {
const results = deepseekModels.map((model) => {
const valueKey = getValueKey(model);
const multiplier = getMultiplier({ valueKey, tokenType: 'prompt' });
return tokenValues[valueKey].prompt && multiplier === tokenValues[valueKey].prompt;
});
expect(results.every(Boolean)).toBe(true);
});
it('should return the correct completion multipliers for all models', () => {
const results = deepseekModels.map((model) => {
const valueKey = getValueKey(model);
const multiplier = getMultiplier({ valueKey, tokenType: 'completion' });
return tokenValues[valueKey].completion && multiplier === tokenValues[valueKey].completion;
});
expect(results.every(Boolean)).toBe(true);
});
it('should return the correct prompt multipliers for reasoning model', () => {
const model = 'deepseek-reasoner';
const valueKey = getValueKey(model);
expect(valueKey).toBe(model);
const multiplier = getMultiplier({ valueKey, tokenType: 'prompt' });
const result = tokenValues[valueKey].prompt && multiplier === tokenValues[valueKey].prompt;
expect(result).toBe(true);
});
});
describe('getCacheMultiplier', () => {
it('should return the correct cache multiplier for a given valueKey and cacheType', () => {
expect(getCacheMultiplier({ valueKey: 'claude-3-5-sonnet', cacheType: 'write' })).toBe(
@@ -349,3 +380,81 @@ describe('getCacheMultiplier', () => {
).toBe(0.03);
});
});
describe('Google Model Tests', () => {
const googleModels = [
'gemini-2.0-flash-lite-preview-02-05',
'gemini-2.0-flash-001',
'gemini-2.0-flash-exp',
'gemini-2.0-pro-exp-02-05',
'gemini-1.5-flash-8b',
'gemini-1.5-flash-thinking',
'gemini-1.5-pro-latest',
'gemini-1.5-pro-preview-0409',
'gemini-pro-vision',
'gemini-1.0',
'gemini-pro',
];
it('should return the correct prompt and completion rates for all models', () => {
const results = googleModels.map((model) => {
const valueKey = getValueKey(model, EModelEndpoint.google);
const promptRate = getMultiplier({
model,
tokenType: 'prompt',
endpoint: EModelEndpoint.google,
});
const completionRate = getMultiplier({
model,
tokenType: 'completion',
endpoint: EModelEndpoint.google,
});
return { model, valueKey, promptRate, completionRate };
});
results.forEach(({ valueKey, promptRate, completionRate }) => {
expect(promptRate).toBe(tokenValues[valueKey].prompt);
expect(completionRate).toBe(tokenValues[valueKey].completion);
});
});
it('should map to the correct model keys', () => {
const expected = {
'gemini-2.0-flash-lite-preview-02-05': 'gemini-2.0-flash-lite',
'gemini-2.0-flash-001': 'gemini-2.0-flash',
'gemini-2.0-flash-exp': 'gemini-2.0-flash',
'gemini-2.0-pro-exp-02-05': 'gemini-2.0',
'gemini-1.5-flash-8b': 'gemini-1.5-flash-8b',
'gemini-1.5-flash-thinking': 'gemini-1.5-flash',
'gemini-1.5-pro-latest': 'gemini-1.5',
'gemini-1.5-pro-preview-0409': 'gemini-1.5',
'gemini-pro-vision': 'gemini-pro-vision',
'gemini-1.0': 'gemini',
'gemini-pro': 'gemini',
};
Object.entries(expected).forEach(([model, expectedKey]) => {
const valueKey = getValueKey(model, EModelEndpoint.google);
expect(valueKey).toBe(expectedKey);
});
});
it('should handle model names with different formats', () => {
const testCases = [
{ input: 'google/gemini-pro', expected: 'gemini' },
{ input: 'gemini-pro/google', expected: 'gemini' },
{ input: 'google/gemini-2.0-flash-lite', expected: 'gemini-2.0-flash-lite' },
];
testCases.forEach(({ input, expected }) => {
const valueKey = getValueKey(input, EModelEndpoint.google);
expect(valueKey).toBe(expected);
expect(
getMultiplier({ model: input, tokenType: 'prompt', endpoint: EModelEndpoint.google }),
).toBe(tokenValues[expected].prompt);
expect(
getMultiplier({ model: input, tokenType: 'completion', endpoint: EModelEndpoint.google }),
).toBe(tokenValues[expected].completion);
});
});
});

View File

@@ -1,6 +1,6 @@
{
"name": "@librechat/backend",
"version": "v0.7.5",
"version": "v0.7.7-rc1",
"description": "",
"scripts": {
"start": "echo 'please run this from the root directory'",
@@ -37,17 +37,18 @@
"@anthropic-ai/sdk": "^0.32.1",
"@azure/search-documents": "^12.0.0",
"@google/generative-ai": "^0.21.0",
"@googleapis/youtube": "^20.0.0",
"@keyv/mongo": "^2.1.8",
"@keyv/redis": "^2.8.1",
"@langchain/community": "^0.3.14",
"@langchain/core": "^0.3.18",
"@langchain/google-genai": "^0.1.4",
"@langchain/google-vertexai": "^0.1.2",
"@langchain/core": "^0.3.37",
"@langchain/google-genai": "^0.1.7",
"@langchain/google-vertexai": "^0.1.8",
"@langchain/textsplitters": "^0.1.0",
"@librechat/agents": "^1.8.5",
"axios": "^1.7.7",
"@librechat/agents": "^2.0.4",
"@waylaidwanderer/fetch-event-source": "^3.0.1",
"axios": "1.7.8",
"bcryptjs": "^2.4.3",
"cheerio": "^1.0.0-rc.12",
"cohere-ai": "^7.9.1",
"compression": "^1.7.4",
"connect-redis": "^7.1.0",
@@ -56,7 +57,7 @@
"cors": "^2.8.5",
"dedent": "^1.5.3",
"dotenv": "^16.0.3",
"express": "^4.21.1",
"express": "^4.21.2",
"express-mongo-sanitize": "^2.2.0",
"express-rate-limit": "^7.4.1",
"express-session": "^1.18.1",
@@ -64,7 +65,6 @@
"firebase": "^11.0.2",
"googleapis": "^126.0.1",
"handlebars": "^4.7.7",
"html": "^1.0.0",
"ioredis": "^5.3.2",
"js-yaml": "^4.1.0",
"jsonwebtoken": "^9.0.0",
@@ -73,21 +73,22 @@
"klona": "^2.0.6",
"langchain": "^0.2.19",
"librechat-data-provider": "*",
"librechat-mcp": "*",
"lodash": "^4.17.21",
"meilisearch": "^0.38.0",
"memorystore": "^1.6.7",
"mime": "^3.0.0",
"module-alias": "^2.2.3",
"mongoose": "^8.8.3",
"mongoose": "^8.9.5",
"multer": "^1.4.5-lts.1",
"nanoid": "^3.3.7",
"nodejs-gpt": "^1.37.4",
"nodemailer": "^6.9.15",
"ollama": "^0.5.0",
"openai": "^4.47.1",
"openai-chat-tokens": "^0.2.8",
"openid-client": "^5.4.2",
"passport": "^0.6.0",
"passport-custom": "^1.1.1",
"passport-apple": "^2.0.2",
"passport-discord": "^0.1.4",
"passport-facebook": "^3.0.0",
"passport-github2": "^0.1.12",
@@ -95,19 +96,19 @@
"passport-jwt": "^4.0.1",
"passport-ldapauth": "^3.0.1",
"passport-local": "^1.0.0",
"pino": "^8.12.1",
"sharp": "^0.32.6",
"tiktoken": "^1.0.15",
"traverse": "^0.6.7",
"ua-parser-js": "^1.0.36",
"winston": "^3.11.0",
"winston-daily-rotate-file": "^4.7.1",
"youtube-transcript": "^1.2.1",
"zod": "^3.22.4"
},
"devDependencies": {
"jest": "^29.7.0",
"mongodb-memory-server": "^10.0.0",
"nodemon": "^3.0.1",
"supertest": "^6.3.3"
"mongodb-memory-server": "^10.1.3",
"nodemon": "^3.0.3",
"supertest": "^7.0.0"
}
}

View File

@@ -1,8 +1,6 @@
const throttle = require('lodash/throttle');
const { getResponseSender, Constants, CacheKeys, Time } = require('librechat-data-provider');
const { getResponseSender, Constants } = require('librechat-data-provider');
const { createAbortController, handleAbortError } = require('~/server/middleware');
const { sendMessage, createOnProgress } = require('~/server/utils');
const { getLogStores } = require('~/cache');
const { saveMessage } = require('~/models');
const { logger } = require('~/config');
@@ -57,33 +55,9 @@ const AskController = async (req, res, next, initializeClient, addTitle) => {
try {
const { client } = await initializeClient({ req, res, endpointOption });
const messageCache = getLogStores(CacheKeys.MESSAGES);
const { onProgress: progressCallback, getPartialText } = createOnProgress({
onProgress: throttle(
({ text: partialText }) => {
/*
const unfinished = endpointOption.endpoint === EModelEndpoint.google ? false : true;
messageCache.set(responseMessageId, {
messageId: responseMessageId,
sender,
conversationId,
parentMessageId: overrideParentMessageId ?? userMessageId,
text: partialText,
model: client.modelOptions.model,
unfinished,
error: false,
user,
}, Time.FIVE_MINUTES);
*/
messageCache.set(responseMessageId, partialText, Time.FIVE_MINUTES);
},
3000,
{ trailing: false },
),
});
getText = getPartialText;
const { onProgress: progressCallback, getPartialText } = createOnProgress();
getText = client.getStreamText != null ? client.getStreamText.bind(client) : getPartialText;
const getAbortData = () => ({
sender,
@@ -91,7 +65,7 @@ const AskController = async (req, res, next, initializeClient, addTitle) => {
userMessagePromise,
messageId: responseMessageId,
parentMessageId: overrideParentMessageId ?? userMessageId,
text: getPartialText(),
text: getText(),
userMessage,
promptTokens,
});
@@ -181,6 +155,8 @@ const AskController = async (req, res, next, initializeClient, addTitle) => {
sender,
messageId: responseMessageId,
parentMessageId: userMessageId ?? parentMessageId,
}).catch((err) => {
logger.error('[AskController] Error in `handleAbortError`', err);
});
}
};

View File
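The controller now prefers the client's own view of the streamed text over the locally accumulated partial text, and the throttled message-cache writes are gone. The fallback binding is the key move; a small sketch of the pattern:

// Prefer a method the client exposes, binding it so `this` stays the client;
// otherwise fall back to the closure returned by createOnProgress().
function pickTextGetter(client, getPartialText) {
  return client.getStreamText != null
    ? client.getStreamText.bind(client)
    : getPartialText;
}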

@@ -6,8 +6,7 @@ const {
setAuthTokens,
requestPasswordReset,
} = require('~/server/services/AuthService');
const { hashToken } = require('~/server/utils/crypto');
const { Session, getUserById } = require('~/models');
const { findSession, getUserById, deleteAllUserSessions } = require('~/models');
const { logger } = require('~/config');
const registrationController = async (req, res) => {
@@ -45,6 +44,7 @@ const resetPasswordController = async (req, res) => {
if (resetPasswordService instanceof Error) {
return res.status(400).json(resetPasswordService);
} else {
await deleteAllUserSessions({ userId: req.body.userId });
return res.status(200).json(resetPasswordService);
}
} catch (e) {
@@ -73,11 +73,9 @@ const refreshController = async (req, res) => {
return res.status(200).send({ token, user });
}
// Hash the refresh token
const hashedToken = await hashToken(refreshToken);
// Find the session with the hashed refresh token
const session = await Session.findOne({ user: userId, refreshTokenHash: hashedToken });
const session = await findSession({ userId: userId, refreshToken: refreshToken });
if (session && session.expiration > new Date()) {
const token = await setAuthTokens(userId, res, session._id);
res.status(200).send({ token, user });

View File
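Session lookup now goes through the model layer, so the controller no longer hashes refresh tokens itself. A hypothetical sketch of what `findSession` encapsulates, assuming a SHA-256 digest stands in for the removed `hashToken` utility (the real helper may differ):

const crypto = require('crypto');

async function findSessionSketch({ userId, refreshToken }) {
  // hash the raw token; only the hash is ever stored or queried
  const refreshTokenHash = crypto.createHash('sha256').update(refreshToken).digest('hex');
  // `Session` stands in for the mongoose model the controller previously imported directly
  return Session.findOne({ user: userId, refreshTokenHash });
}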

@@ -1,8 +1,6 @@
const throttle = require('lodash/throttle');
const { getResponseSender, CacheKeys, Time } = require('librechat-data-provider');
const { getResponseSender } = require('librechat-data-provider');
const { createAbortController, handleAbortError } = require('~/server/middleware');
const { sendMessage, createOnProgress } = require('~/server/utils');
const { getLogStores } = require('~/cache');
const { saveMessage } = require('~/models');
const { logger } = require('~/config');
@@ -53,62 +51,44 @@ const EditController = async (req, res, next, initializeClient) => {
}
};
const messageCache = getLogStores(CacheKeys.MESSAGES);
const { onProgress: progressCallback, getPartialText } = createOnProgress({
generation,
onProgress: throttle(
({ text: partialText }) => {
/*
const unfinished = endpointOption.endpoint === EModelEndpoint.google ? false : true;
{
messageId: responseMessageId,
sender,
conversationId,
parentMessageId: overrideParentMessageId ?? userMessageId,
text: partialText,
model: endpointOption.modelOptions.model,
unfinished,
isEdited: true,
error: false,
user,
} */
messageCache.set(responseMessageId, partialText, Time.FIVE_MINUTES);
},
3000,
{ trailing: false },
),
});
const getAbortData = () => ({
conversationId,
userMessagePromise,
messageId: responseMessageId,
sender,
parentMessageId: overrideParentMessageId ?? userMessageId,
text: getPartialText(),
userMessage,
promptTokens,
});
const { abortController, onStart } = createAbortController(req, res, getAbortData, getReqData);
res.on('close', () => {
logger.debug('[EditController] Request closed');
if (!abortController) {
return;
} else if (abortController.signal.aborted) {
return;
} else if (abortController.requestCompleted) {
return;
}
abortController.abort();
logger.debug('[EditController] Request aborted on close');
});
let getText;
try {
const { client } = await initializeClient({ req, res, endpointOption });
getText = client.getStreamText != null ? client.getStreamText.bind(client) : getPartialText;
const getAbortData = () => ({
conversationId,
userMessagePromise,
messageId: responseMessageId,
sender,
parentMessageId: overrideParentMessageId ?? userMessageId,
text: getText(),
userMessage,
promptTokens,
});
const { abortController, onStart } = createAbortController(req, res, getAbortData, getReqData);
res.on('close', () => {
logger.debug('[EditController] Request closed');
if (!abortController) {
return;
} else if (abortController.signal.aborted) {
return;
} else if (abortController.requestCompleted) {
return;
}
abortController.abort();
logger.debug('[EditController] Request aborted on close');
});
let response = await client.sendMessage(text, {
user,
generation,
@@ -153,13 +133,15 @@ const EditController = async (req, res, next, initializeClient) => {
);
}
} catch (error) {
const partialText = getPartialText();
const partialText = getText();
handleAbortError(res, req, error, {
partialText,
conversationId,
sender,
messageId: responseMessageId,
parentMessageId: userMessageId ?? parentMessageId,
}).catch((err) => {
logger.error('[EditController] Error in `handleAbortError`', err);
});
}
};

View File

@@ -1,60 +1,7 @@
const { CacheKeys, EModelEndpoint, orderEndpointsConfig } = require('librechat-data-provider');
const { loadDefaultEndpointsConfig, loadConfigEndpoints } = require('~/server/services/Config');
const { getLogStores } = require('~/cache');
const { getEndpointsConfig } = require('~/server/services/Config');
async function endpointController(req, res) {
const cache = getLogStores(CacheKeys.CONFIG_STORE);
const cachedEndpointsConfig = await cache.get(CacheKeys.ENDPOINT_CONFIG);
if (cachedEndpointsConfig) {
res.send(cachedEndpointsConfig);
return;
}
const defaultEndpointsConfig = await loadDefaultEndpointsConfig(req);
const customConfigEndpoints = await loadConfigEndpoints(req);
/** @type {TEndpointsConfig} */
const mergedConfig = { ...defaultEndpointsConfig, ...customConfigEndpoints };
if (mergedConfig[EModelEndpoint.assistants] && req.app.locals?.[EModelEndpoint.assistants]) {
const { disableBuilder, retrievalModels, capabilities, version, ..._rest } =
req.app.locals[EModelEndpoint.assistants];
mergedConfig[EModelEndpoint.assistants] = {
...mergedConfig[EModelEndpoint.assistants],
version,
retrievalModels,
disableBuilder,
capabilities,
};
}
if (
mergedConfig[EModelEndpoint.azureAssistants] &&
req.app.locals?.[EModelEndpoint.azureAssistants]
) {
const { disableBuilder, retrievalModels, capabilities, version, ..._rest } =
req.app.locals[EModelEndpoint.azureAssistants];
mergedConfig[EModelEndpoint.azureAssistants] = {
...mergedConfig[EModelEndpoint.azureAssistants],
version,
retrievalModels,
disableBuilder,
capabilities,
};
}
if (mergedConfig[EModelEndpoint.bedrock] && req.app.locals?.[EModelEndpoint.bedrock]) {
const { availableRegions } = req.app.locals[EModelEndpoint.bedrock];
mergedConfig[EModelEndpoint.bedrock] = {
...mergedConfig[EModelEndpoint.bedrock],
availableRegions,
};
}
const endpointsConfig = orderEndpointsConfig(mergedConfig);
await cache.set(CacheKeys.ENDPOINT_CONFIG, endpointsConfig);
const endpointsConfig = await getEndpointsConfig(req);
res.send(JSON.stringify(endpointsConfig));
}

View File
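The endpoint controller collapses to a single service call. A sketch of what `getEndpointsConfig` presumably consolidates, reconstructed from the removed inline logic (the per-endpoint app.locals merging for assistants and bedrock is elided here); treat it as an outline rather than the service's exact body:

async function getEndpointsConfigSketch(req) {
  const cache = getLogStores(CacheKeys.CONFIG_STORE);
  const cached = await cache.get(CacheKeys.ENDPOINT_CONFIG);
  if (cached) {
    return cached;
  }
  const defaults = await loadDefaultEndpointsConfig(req);
  const custom = await loadConfigEndpoints(req);
  const endpointsConfig = orderEndpointsConfig({ ...defaults, ...custom });
  await cache.set(CacheKeys.ENDPOINT_CONFIG, endpointsConfig);
  return endpointsConfig;
}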

@@ -1,6 +1,8 @@
const { promises: fs } = require('fs');
const { CacheKeys, AuthType } = require('librechat-data-provider');
const { addOpenAPISpecs } = require('~/app/clients/tools/util/addOpenAPISpecs');
const { getCustomConfig } = require('~/server/services/Config');
const { availableTools } = require('~/app/clients/tools');
const { getMCPManager } = require('~/config');
const { getLogStores } = require('~/cache');
/**
@@ -57,10 +59,9 @@ const getAvailablePluginsController = async (req, res) => {
/** @type {{ filteredTools: string[], includedTools: string[] }} */
const { filteredTools = [], includedTools = [] } = req.app.locals;
const pluginManifest = await fs.readFile(req.app.locals.paths.pluginManifest, 'utf8');
const jsonData = JSON.parse(pluginManifest);
const pluginManifest = availableTools;
const uniquePlugins = filterUniquePlugins(jsonData);
const uniquePlugins = filterUniquePlugins(pluginManifest);
let authenticatedPlugins = [];
for (const plugin of uniquePlugins) {
authenticatedPlugins.push(
@@ -104,11 +105,15 @@ const getAvailableTools = async (req, res) => {
return;
}
const pluginManifest = await fs.readFile(req.app.locals.paths.pluginManifest, 'utf8');
const pluginManifest = availableTools;
const customConfig = await getCustomConfig();
if (customConfig?.mcpServers != null) {
const mcpManager = await getMCPManager();
await mcpManager.loadManifestTools(pluginManifest);
}
const jsonData = JSON.parse(pluginManifest);
/** @type {TPlugin[]} */
const uniquePlugins = filterUniquePlugins(jsonData);
const uniquePlugins = filterUniquePlugins(pluginManifest);
const authenticatedPlugins = uniquePlugins.map((plugin) => {
if (checkPluginAuth(plugin)) {
@@ -118,8 +123,12 @@ const getAvailableTools = async (req, res) => {
}
});
const toolDefinitions = req.app.locals.availableTools;
const tools = authenticatedPlugins.filter(
(plugin) => req.app.locals.availableTools[plugin.pluginKey] !== undefined,
(plugin) =>
toolDefinitions[plugin.pluginKey] !== undefined ||
(plugin.toolkit === true &&
Object.keys(toolDefinitions).some((key) => key.startsWith(`${plugin.pluginKey}_`))),
);
await cache.set(CacheKeys.TOOLS, tools);

View File
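The new filter condition handles toolkits, whose sub-tools are registered under keys prefixed with the toolkit's pluginKey. The condition above, extracted into a standalone check (tool definitions are illustrative):

const toolDefinitions = {
  image_gen: { /* ... */ },
  youtube_search: { /* ... */ },
  youtube_transcript: { /* ... */ },
};

function isAvailable(plugin) {
  return (
    toolDefinitions[plugin.pluginKey] !== undefined ||
    (plugin.toolkit === true &&
      Object.keys(toolDefinitions).some((key) => key.startsWith(`${plugin.pluginKey}_`)))
  );
}

// isAvailable({ pluginKey: 'youtube', toolkit: true }) -> true (matches 'youtube_search')
// isAvailable({ pluginKey: 'image_gen' })              -> true (direct key match)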

@@ -1,5 +1,4 @@
const {
Session,
Balance,
getFiles,
deleteFiles,
@@ -7,6 +6,7 @@ const {
deletePresets,
deleteMessages,
deleteUserById,
deleteAllUserSessions,
} = require('~/models');
const User = require('~/models/User');
const { updateUserPluginAuth, deleteUserPluginAuth } = require('~/server/services/PluginService');
@@ -112,7 +112,7 @@ const deleteUserController = async (req, res) => {
try {
await deleteMessages({ user: user.id }); // delete user messages
await Session.deleteMany({ user: user.id }); // delete user sessions
await deleteAllUserSessions({ userId: user.id }); // delete user sessions
await Transaction.deleteMany({ user: user.id }); // delete user transactions
await deleteUserKey({ userId: user.id, all: true }); // delete user keys
await Balance.deleteMany({ user: user._id }); // delete user balances

View File

@@ -1,13 +1,17 @@
const { Tools, StepTypes, imageGenTools } = require('librechat-data-provider');
const { Tools, StepTypes, imageGenTools, FileContext } = require('librechat-data-provider');
const {
EnvVar,
Providers,
GraphEvents,
getMessageId,
ToolEndHandler,
handleToolCalls,
ChatModelStreamHandler,
} = require('@librechat/agents');
const { processCodeOutput } = require('~/server/services/Files/Code/process');
const { saveBase64Image } = require('~/server/services/Files/process');
const { loadAuthValues } = require('~/app/clients/tools/util');
const { logger } = require('~/config');
const { logger, sendEvent } = require('~/config');
/** @typedef {import('@librechat/agents').Graph} Graph */
/** @typedef {import('@librechat/agents').EventHandler} EventHandler */
@@ -18,20 +22,6 @@ const { logger } = require('~/config');
/** @typedef {import('@librechat/agents').ContentAggregatorResult['aggregateContent']} ContentAggregator */
/** @typedef {import('@librechat/agents').GraphEvents} GraphEvents */
/**
* Sends message data in Server Sent Events format.
* @param {ServerResponse} res - The server response.
* @param {{ data: string | Record<string, unknown>, event?: string }} event - The message event.
* @param {string} event.event - The type of event.
* @param {string} event.data - The message to be sent.
*/
const sendEvent = (res, event) => {
if (typeof event.data === 'string' && event.data.length === 0) {
return;
}
res.write(`event: message\ndata: ${JSON.stringify(event)}\n\n`);
};
class ModelEndHandler {
/**
* @param {Array<UsageMetadata>} collectedUsage
@@ -56,13 +46,54 @@ class ModelEndHandler {
return;
}
const usage = data?.output?.usage_metadata;
if (metadata?.model) {
usage.model = metadata.model;
}
try {
if (metadata.provider === Providers.GOOGLE || graph.clientOptions?.disableStreaming) {
handleToolCalls(data?.output?.tool_calls, metadata, graph);
}
const usage = data?.output?.usage_metadata;
if (!usage) {
return;
}
if (metadata?.model) {
usage.model = metadata.model;
}
if (usage) {
this.collectedUsage.push(usage);
if (!graph.clientOptions?.disableStreaming) {
return;
}
if (!data.output.content) {
return;
}
const stepKey = graph.getStepKey(metadata);
const message_id = getMessageId(stepKey, graph) ?? '';
if (message_id) {
graph.dispatchRunStep(stepKey, {
type: StepTypes.MESSAGE_CREATION,
message_creation: {
message_id,
},
});
}
const stepId = graph.getStepIdByKey(stepKey);
const content = data.output.content;
if (typeof content === 'string') {
graph.dispatchMessageDelta(stepId, {
content: [
{
type: 'text',
text: content,
},
],
});
} else if (content.every((c) => c.type?.startsWith('text'))) {
graph.dispatchMessageDelta(stepId, {
content,
});
}
} catch (error) {
logger.error('Error handling model end event:', error);
}
}
}
@@ -191,7 +222,11 @@ function createToolEndCallback({ req, res, artifactPromises }) {
return;
}
if (imageGenTools.has(output.name) && output.artifact) {
if (!output.artifact) {
return;
}
if (imageGenTools.has(output.name)) {
artifactPromises.push(
(async () => {
const fileMetadata = Object.assign(output.artifact, {
@@ -217,10 +252,53 @@ function createToolEndCallback({ req, res, artifactPromises }) {
return;
}
if (output.name !== Tools.execute_code) {
if (output.artifact.content) {
/** @type {FormattedContent[]} */
const content = output.artifact.content;
for (const part of content) {
if (part.type !== 'image_url') {
continue;
}
const { url } = part.image_url;
artifactPromises.push(
(async () => {
const filename = `${output.tool_call_id}-image-${new Date().getTime()}`;
const file = await saveBase64Image(url, {
req,
filename,
endpoint: metadata.provider,
context: FileContext.image_generation,
});
const fileMetadata = Object.assign(file, {
messageId: metadata.run_id,
toolCallId: output.tool_call_id,
conversationId: metadata.thread_id,
});
if (!res.headersSent) {
return fileMetadata;
}
if (!fileMetadata) {
return null;
}
res.write(`event: attachment\ndata: ${JSON.stringify(fileMetadata)}\n\n`);
return fileMetadata;
})().catch((error) => {
logger.error('Error processing artifact content:', error);
return null;
}),
);
}
return;
}
if (output.name !== Tools.execute_code) {
return;
}
if (!output.artifact.files) {
return;
}
@@ -263,7 +341,6 @@ function createToolEndCallback({ req, res, artifactPromises }) {
}
module.exports = {
sendEvent,
getDefaultHandlers,
createToolEndCallback,
};

View File
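The removed `sendEvent` helper now ships from `~/config` (see the updated import) and keeps the same Server-Sent Events frame. A usage sketch based on the removed implementation:

const { sendEvent } = require('~/config');

// Writes: event: message\ndata: {"data":{"text":"partial reply"}}\n\n
sendEvent(res, { data: { text: 'partial reply' } });

// Empty string payloads are dropped rather than emitting an empty frame:
sendEvent(res, { data: '' }); // no-op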

@@ -40,6 +40,7 @@ const { createRun } = require('./run');
const { logger } = require('~/config');
/** @typedef {import('@librechat/agents').MessageContentComplex} MessageContentComplex */
/** @typedef {import('@langchain/core/runnables').RunnableConfig} RunnableConfig */
const providerParsers = {
[EModelEndpoint.openAI]: openAISchema,
@@ -59,6 +60,9 @@ const noSystemModelRegex = [/\bo1\b/gi];
class AgentClient extends BaseClient {
constructor(options = {}) {
super(null, options);
/** The current client class
* @type {string} */
this.clientName = EModelEndpoint.agents;
/** @type {'discard' | 'summarize'} */
this.contextStrategy = 'discard';
@@ -90,6 +94,14 @@ class AgentClient extends BaseClient {
this.options = Object.assign({ endpoint: options.endpoint }, clientOptions);
/** @type {string} */
this.model = this.options.agent.model_parameters.model;
/** The key for the usage object's input tokens
* @type {string} */
this.inputTokensKey = 'input_tokens';
/** The key for the usage object's output tokens
* @type {string} */
this.outputTokensKey = 'output_tokens';
/** @type {UsageMetadata} */
this.usage;
}
/**
@@ -192,6 +204,7 @@ class AgentClient extends BaseClient {
resendFiles: this.options.resendFiles,
imageDetail: this.options.imageDetail,
spec: this.options.spec,
iconURL: this.options.iconURL,
},
// TODO: PARSE OPTIONS BY PROVIDER, MAY CONTAIN SENSITIVE DATA
runOptions,
@@ -327,16 +340,18 @@ class AgentClient extends BaseClient {
this.options.agent.instructions = systemContent;
}
/** @type {Record<string, number> | undefined} */
let tokenCountMap;
if (this.contextStrategy) {
({ payload, promptTokens, messages } = await this.handleContextStrategy({
({ payload, promptTokens, tokenCountMap, messages } = await this.handleContextStrategy({
orderedMessages,
formattedMessages,
/* prefer usage_metadata from final message */
buildTokenMap: false,
}));
}
const result = {
tokenCountMap,
prompt: payload,
promptTokens,
messages,
@@ -366,8 +381,26 @@ class AgentClient extends BaseClient {
* @param {UsageMetadata[]} [params.collectedUsage=this.collectedUsage]
*/
async recordCollectedUsage({ model, context = 'message', collectedUsage = this.collectedUsage }) {
for (const usage of collectedUsage) {
await spendTokens(
if (!collectedUsage || !collectedUsage.length) {
return;
}
const input_tokens = collectedUsage[0]?.input_tokens || 0;
let output_tokens = 0;
let previousTokens = input_tokens; // Start with original input
for (let i = 0; i < collectedUsage.length; i++) {
const usage = collectedUsage[i];
if (i > 0) {
// Count new tokens generated (input_tokens minus previous accumulated tokens)
output_tokens += (Number(usage.input_tokens) || 0) - previousTokens;
}
// Add this message's output tokens
output_tokens += Number(usage.output_tokens) || 0;
// Update previousTokens to include this message's output
previousTokens += Number(usage.output_tokens) || 0;
spendTokens(
{
context,
conversationId: this.conversationId,
@@ -376,8 +409,66 @@ class AgentClient extends BaseClient {
model: usage.model ?? model ?? this.model ?? this.options.agent.model_parameters.model,
},
{ promptTokens: usage.input_tokens, completionTokens: usage.output_tokens },
);
).catch((err) => {
logger.error(
'[api/server/controllers/agents/client.js #recordCollectedUsage] Error spending tokens',
err,
);
});
}
this.usage = {
input_tokens,
output_tokens,
};
}
/**
* Get stream usage as returned by this client's API response.
* @returns {UsageMetadata} The stream usage object.
*/
getStreamUsage() {
return this.usage;
}
/**
* @param {TMessage} responseMessage
* @returns {number}
*/
getTokenCountForResponse({ content }) {
return this.getTokenCountForMessage({
role: 'assistant',
content,
});
}
/**
* Calculates the correct token count for the current user message based on the token count map and API usage.
* Edge case: If the calculation results in a negative value, it returns the original estimate.
* If revisiting a conversation with a chat history entirely composed of token estimates,
* the cumulative token count going forward should become more accurate as the conversation progresses.
* @param {Object} params - The parameters for the calculation.
* @param {Record<string, number>} params.tokenCountMap - A map of message IDs to their token counts.
* @param {string} params.currentMessageId - The ID of the current message to calculate.
* @param {OpenAIUsageMetadata} params.usage - The usage object returned by the API.
* @returns {number} The correct token count for the current user message.
*/
calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage }) {
const originalEstimate = tokenCountMap[currentMessageId] || 0;
if (!usage || typeof usage[this.inputTokensKey] !== 'number') {
return originalEstimate;
}
tokenCountMap[currentMessageId] = 0;
const totalTokensFromMap = Object.values(tokenCountMap).reduce((sum, count) => {
const numCount = Number(count);
return sum + (isNaN(numCount) ? 0 : numCount);
}, 0);
const totalInputTokens = usage[this.inputTokensKey] ?? 0;
const currentMessageTokens = totalInputTokens - totalTokensFromMap;
return currentMessageTokens > 0 ? currentMessageTokens : originalEstimate;
}
async chatCompletion({ payload, abortController = null }) {
@@ -488,12 +579,14 @@ class AgentClient extends BaseClient {
// });
// }
/** @type {Partial<RunnableConfig> & { version: 'v1' | 'v2'; run_id?: string; streamMode: string }} */
const config = {
configurable: {
thread_id: this.conversationId,
last_agent_index: this.agentConfigs?.size ?? 0,
hide_sequential_outputs: this.options.agent.hide_sequential_outputs,
},
recursionLimit: this.options.req.app.locals[EModelEndpoint.agents]?.recursionLimit,
signal: abortController.signal,
streamMode: 'values',
version: 'v2',
@@ -672,12 +765,14 @@ class AgentClient extends BaseClient {
);
});
this.recordCollectedUsage({ context: 'message' }).catch((err) => {
try {
await this.recordCollectedUsage({ context: 'message' });
} catch (err) {
logger.error(
'[api/server/controllers/agents/client.js #chatCompletion] Error recording collected usage',
err,
);
});
}
} catch (err) {
if (!abortController.signal.aborted) {
logger.error(
@@ -763,8 +858,11 @@ class AgentClient extends BaseClient {
}
}
/** Silent method, as `recordCollectedUsage` is used instead */
async recordTokenUsage() {}
getEncoding() {
return this.model?.includes('gpt-4o') ? 'o200k_base' : 'cl100k_base';
return 'o200k_base';
}
/**

View File
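The rewritten `recordCollectedUsage` accounts for the fact that, across tool-call round trips, each later request's `input_tokens` re-includes everything generated so far, so only growth beyond `previousTokens` counts as new spend. The same loop as a standalone function, with a worked example (numbers are made up):

function summarizeUsage(collectedUsage) {
  const input_tokens = collectedUsage[0]?.input_tokens || 0;
  let output_tokens = 0;
  let previousTokens = input_tokens; // start with the original prompt size
  for (let i = 0; i < collectedUsage.length; i++) {
    const usage = collectedUsage[i];
    if (i > 0) {
      // tokens carried into this request beyond what was already counted
      output_tokens += (Number(usage.input_tokens) || 0) - previousTokens;
    }
    output_tokens += Number(usage.output_tokens) || 0;
    previousTokens += Number(usage.output_tokens) || 0;
  }
  return { input_tokens, output_tokens };
}

// summarizeUsage([
//   { input_tokens: 100, output_tokens: 30 },
//   { input_tokens: 140, output_tokens: 25 }, // 140 re-includes the 130 so far
// ]) -> { input_tokens: 100, output_tokens: 65 } // 30 + (140 - 130) + 25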

@@ -143,6 +143,8 @@ const AgentController = async (req, res, next, initializeClient, addTitle) => {
sender,
messageId: responseMessageId,
parentMessageId: userMessageId ?? parentMessageId,
}).catch((err) => {
logger.error('[api/server/controllers/agents/request] Error in `handleAbortError`', err);
});
}
};

View File

@@ -41,6 +41,11 @@ async function createRun({
agent.model_parameters,
);
if (/o1(?!-(?:mini|preview)).*$/.test(llmConfig.model)) {
llmConfig.streaming = false;
llmConfig.disableStreaming = true;
}
/** @type {StandardGraphConfig} */
const graphConfig = {
signal,

View File
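The negative lookahead above targets full o1 models while leaving o1-mini and o1-preview streaming normally:

const o1Pattern = /o1(?!-(?:mini|preview)).*$/;
// o1Pattern.test('o1')            -> true  (streaming disabled)
// o1Pattern.test('o1-2024-12-17') -> true
// o1Pattern.test('o1-mini')       -> false
// o1Pattern.test('o1-preview')    -> false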

@@ -1,6 +1,12 @@
const fs = require('fs').promises;
const { nanoid } = require('nanoid');
const { FileContext, Constants, Tools, SystemRoles } = require('librechat-data-provider');
const {
FileContext,
Constants,
Tools,
SystemRoles,
actionDelimiter,
} = require('librechat-data-provider');
const {
getAgent,
createAgent,
@@ -10,6 +16,7 @@ const {
} = require('~/models/Agent');
const { uploadImageBuffer, filterFile } = require('~/server/services/Files/process');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { updateAction, getActions } = require('~/models/Action');
const { getProjectByName } = require('~/models/Project');
const { updateAgentProjects } = require('~/models/Agent');
const { deleteFileByFilter } = require('~/models/File');
@@ -173,6 +180,99 @@ const updateAgentHandler = async (req, res) => {
}
};
/**
* Duplicates an Agent based on the provided ID.
* @route POST /Agents/:id/duplicate
* @param {object} req - Express Request
* @param {object} req.params - Request params
* @param {string} req.params.id - Agent identifier.
* @returns {Agent} 201 - success response - application/json
*/
const duplicateAgentHandler = async (req, res) => {
const { id } = req.params;
const { id: userId } = req.user;
const sensitiveFields = ['api_key', 'oauth_client_id', 'oauth_client_secret'];
try {
const agent = await getAgent({ id });
if (!agent) {
return res.status(404).json({
error: 'Agent not found',
status: 'error',
});
}
const {
_id: __id,
id: _id,
author: _author,
createdAt: _createdAt,
updatedAt: _updatedAt,
...cloneData
} = agent;
const newAgentId = `agent_${nanoid()}`;
const newAgentData = Object.assign(cloneData, {
id: newAgentId,
author: userId,
});
const newActionsList = [];
const originalActions = (await getActions({ agent_id: id }, true)) ?? [];
const promises = [];
/**
* Duplicates an action and returns the new action ID.
* @param {Action} action
* @returns {Promise<string>}
*/
const duplicateAction = async (action) => {
const newActionId = nanoid();
const [domain] = action.action_id.split(actionDelimiter);
const fullActionId = `${domain}${actionDelimiter}${newActionId}`;
const newAction = await updateAction(
{ action_id: newActionId },
{
metadata: action.metadata,
agent_id: newAgentId,
user: userId,
},
);
const filteredMetadata = { ...newAction.metadata };
for (const field of sensitiveFields) {
delete filteredMetadata[field];
}
newAction.metadata = filteredMetadata;
newActionsList.push(newAction);
return fullActionId;
};
for (const action of originalActions) {
promises.push(
duplicateAction(action).catch((error) => {
logger.error('[/agents/:id/duplicate] Error duplicating Action:', error);
}),
);
}
const agentActions = await Promise.all(promises);
newAgentData.actions = agentActions;
const newAgent = await createAgent(newAgentData);
return res.status(201).json({
agent: newAgent,
actions: newActionsList,
});
} catch (error) {
logger.error('[/Agents/:id/duplicate] Error duplicating Agent:', error);
res.status(500).json({ error: error.message });
}
};
/**
* Deletes an Agent based on the provided ID.
* @route DELETE /Agents/:id
@@ -292,6 +392,7 @@ module.exports = {
createAgent: createAgentHandler,
getAgent: getAgentHandler,
updateAgent: updateAgentHandler,
duplicateAgent: duplicateAgentHandler,
deleteAgent: deleteAgentHandler,
getListAgents: getListAgentsHandler,
uploadAgentAvatar: uploadAgentAvatarHandler,

View File
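When duplicating, each action keeps its domain but gets a fresh identifier suffix, and credential fields are stripped from the copy returned to the client. The id rewrite in isolation (the delimiter's actual value comes from librechat-data-provider; '_action_' below is an assumption for illustration):

const { nanoid } = require('nanoid');
const actionDelimiter = '_action_'; // assumed value, for illustration only

const originalActionId = 'example.com_action_abc123';
const [domain] = originalActionId.split(actionDelimiter);
const fullActionId = `${domain}${actionDelimiter}${nanoid()}`;
// -> 'example.com_action_<fresh id>': the domain binding survives the copy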

@@ -1,5 +1,6 @@
const { v4 } = require('uuid');
const {
Time,
Constants,
RunStatus,
CacheKeys,
@@ -24,6 +25,7 @@ const validateAuthor = require('~/server/middleware/assistants/validateAuthor');
const { formatMessage, createVisionPrompt } = require('~/app/clients/prompts');
const { createRun, StreamRunManager } = require('~/server/services/Runs');
const { addTitle } = require('~/server/services/Endpoints/assistants');
const { createRunBody } = require('~/server/services/createRunBody');
const { getTransactions } = require('~/models/Transaction');
const checkBalance = require('~/models/checkBalance');
const { getConvo } = require('~/models/Conversation');
@@ -32,8 +34,6 @@ const { getModelMaxTokens } = require('~/utils');
const { getOpenAIClient } = require('./helpers');
const { logger } = require('~/config');
const ten_minutes = 1000 * 60 * 10;
/**
* @route POST /
* @desc Chat with an assistant
@@ -59,6 +59,7 @@ const chatV1 = async (req, res) => {
messageId: _messageId,
conversationId: convoId,
parentMessageId: _parentId = Constants.NO_PARENT,
clientTimestamp,
} = req.body;
/** @type {OpenAIClient} */
@@ -304,24 +305,14 @@ const chatV1 = async (req, res) => {
};
/** @type {CreateRunBody | undefined} */
const body = {
const body = createRunBody({
assistant_id,
model,
};
if (promptPrefix) {
body.additional_instructions = promptPrefix;
}
if (typeof endpointOption.artifactsPrompt === 'string' && endpointOption.artifactsPrompt) {
body.additional_instructions = `${body.additional_instructions ?? ''}\n${
endpointOption.artifactsPrompt
}`.trim();
}
if (instructions) {
body.instructions = instructions;
}
promptPrefix,
instructions,
endpointOption,
clientTimestamp,
});
const getRequestFileIds = async () => {
let thread_file_ids = [];
@@ -518,7 +509,7 @@ const chatV1 = async (req, res) => {
});
run_id = run.id;
await cache.set(cacheKey, `${thread_id}:${run_id}`, ten_minutes);
await cache.set(cacheKey, `${thread_id}:${run_id}`, Time.TEN_MINUTES);
sendInitialResponse();
// todo: retry logic
@@ -529,7 +520,7 @@ const chatV1 = async (req, res) => {
/** @type {{[AssistantStreamEvents.ThreadRunCreated]: (event: ThreadRunCreated) => Promise<void>}} */
const handlers = {
[AssistantStreamEvents.ThreadRunCreated]: async (event) => {
await cache.set(cacheKey, `${thread_id}:${event.data.id}`, ten_minutes);
await cache.set(cacheKey, `${thread_id}:${event.data.id}`, Time.TEN_MINUTES);
run_id = event.data.id;
sendInitialResponse();
},

View File
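Both chat controllers now build the run body through a shared helper. An assumed shape for `createRunBody`, reconstructed from the inline logic it replaces (how `clientTimestamp` is folded in is not shown in this diff):

function createRunBodySketch({ assistant_id, model, promptPrefix, instructions, endpointOption }) {
  const body = { assistant_id, model };
  if (promptPrefix) {
    body.additional_instructions = promptPrefix;
  }
  if (typeof endpointOption.artifactsPrompt === 'string' && endpointOption.artifactsPrompt) {
    body.additional_instructions =
      `${body.additional_instructions ?? ''}\n${endpointOption.artifactsPrompt}`.trim();
  }
  if (instructions) {
    body.instructions = instructions;
  }
  return body;
}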

@@ -23,6 +23,7 @@ const { createErrorHandler } = require('~/server/controllers/assistants/errors')
const validateAuthor = require('~/server/middleware/assistants/validateAuthor');
const { createRun, StreamRunManager } = require('~/server/services/Runs');
const { addTitle } = require('~/server/services/Endpoints/assistants');
const { createRunBody } = require('~/server/services/createRunBody');
const { getTransactions } = require('~/models/Transaction');
const checkBalance = require('~/models/checkBalance');
const { getConvo } = require('~/models/Conversation');
@@ -31,8 +32,6 @@ const { getModelMaxTokens } = require('~/utils');
const { getOpenAIClient } = require('./helpers');
const { logger } = require('~/config');
const ten_minutes = 1000 * 60 * 10;
/**
* @route POST /
* @desc Chat with an assistant
@@ -58,6 +57,7 @@ const chatV2 = async (req, res) => {
messageId: _messageId,
conversationId: convoId,
parentMessageId: _parentId = Constants.NO_PARENT,
clientTimestamp,
} = req.body;
/** @type {OpenAIClient} */
@@ -186,22 +186,14 @@ const chatV2 = async (req, res) => {
};
/** @type {CreateRunBody | undefined} */
const body = {
const body = createRunBody({
assistant_id,
model,
};
if (promptPrefix) {
body.additional_instructions = promptPrefix;
}
if (typeof endpointOption.artifactsPrompt === 'string' && endpointOption.artifactsPrompt) {
body.additional_instructions = `${body.additional_instructions ?? ''}\n${endpointOption.artifactsPrompt}`.trim();
}
if (instructions) {
body.instructions = instructions;
}
promptPrefix,
instructions,
endpointOption,
clientTimestamp,
});
const getRequestFileIds = async () => {
let thread_file_ids = [];
@@ -361,7 +353,7 @@ const chatV2 = async (req, res) => {
});
run_id = run.id;
await cache.set(cacheKey, `${thread_id}:${run_id}`, ten_minutes);
await cache.set(cacheKey, `${thread_id}:${run_id}`, Time.TEN_MINUTES);
sendInitialResponse();
// todo: retry logic
@@ -372,7 +364,7 @@ const chatV2 = async (req, res) => {
/** @type {{[AssistantStreamEvents.ThreadRunCreated]: (event: ThreadRunCreated) => Promise<void>}} */
const handlers = {
[AssistantStreamEvents.ThreadRunCreated]: async (event) => {
await cache.set(cacheKey, `${thread_id}:${event.data.id}`, ten_minutes);
await cache.set(cacheKey, `${thread_id}:${event.data.id}`, Time.TEN_MINUTES);
run_id = event.data.id;
sendInitialResponse();
},
@@ -405,16 +397,6 @@ const chatV2 = async (req, res) => {
response = streamRunManager;
response.text = streamRunManager.intermediateText;
const messageCache = getLogStores(CacheKeys.MESSAGES);
messageCache.set(
responseMessageId,
{
complete: true,
text: response.text,
},
Time.FIVE_MINUTES,
);
};
await processRun();

View File

@@ -1,5 +1,4 @@
const {
CacheKeys,
SystemRoles,
EModelEndpoint,
defaultOrderQuery,
@@ -9,7 +8,7 @@ const {
initializeClient: initAzureClient,
} = require('~/server/services/Endpoints/azureAssistants');
const { initializeClient } = require('~/server/services/Endpoints/assistants');
const { getLogStores } = require('~/cache');
const { getEndpointsConfig } = require('~/server/services/Config');
/**
* @param {Express.Request} req
@@ -23,11 +22,8 @@ const getCurrentVersion = async (req, endpoint) => {
version = `v${req.body.version}`;
}
if (!version && endpoint) {
const cache = getLogStores(CacheKeys.CONFIG_STORE);
const cachedEndpointsConfig = await cache.get(CacheKeys.ENDPOINT_CONFIG);
version = `v${
cachedEndpointsConfig?.[endpoint]?.version ?? defaultAssistantsVersion[endpoint]
}`;
const endpointsConfig = await getEndpointsConfig(req);
version = `v${endpointsConfig?.[endpoint]?.version ?? defaultAssistantsVersion[endpoint]}`;
}
if (!version?.startsWith('v') && version.length !== 2) {
throw new Error(`[${req.baseUrl}] Invalid version: ${version}`);

View File

@@ -6,6 +6,7 @@ const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { deleteAssistantActions } = require('~/server/services/ActionService');
const { updateAssistantDoc, getAssistants } = require('~/models/Assistant');
const { getOpenAIClient, fetchAssistants } = require('./helpers');
const { manifestToolMap } = require('~/app/clients/tools');
const { deleteFileByFilter } = require('~/models/File');
const { logger } = require('~/config');
@@ -19,8 +20,15 @@ const createAssistant = async (req, res) => {
try {
const { openai } = await getOpenAIClient({ req, res });
const { tools = [], endpoint, conversation_starters, ...assistantData } = req.body;
const {
tools = [],
endpoint,
conversation_starters,
append_current_datetime,
...assistantData
} = req.body;
delete assistantData.conversation_starters;
delete assistantData.append_current_datetime;
assistantData.tools = tools
.map((tool) => {
@@ -28,9 +36,21 @@ const createAssistant = async (req, res) => {
return tool;
}
return req.app.locals.availableTools[tool];
const toolDefinitions = req.app.locals.availableTools;
const toolDef = toolDefinitions[tool];
if (!toolDef && manifestToolMap[tool] && manifestToolMap[tool].toolkit === true) {
return (
Object.entries(toolDefinitions)
.filter(([key]) => key.startsWith(`${tool}_`))
// eslint-disable-next-line no-unused-vars
.map(([_, val]) => val)
);
}
return toolDef;
})
.filter((tool) => tool);
.filter((tool) => tool)
.flat();
let azureModelIdentifier = null;
if (openai.locals?.azureOptions) {
@@ -49,6 +69,9 @@ const createAssistant = async (req, res) => {
if (conversation_starters) {
createData.conversation_starters = conversation_starters;
}
if (append_current_datetime !== undefined) {
createData.append_current_datetime = append_current_datetime;
}
const document = await updateAssistantDoc({ assistant_id: assistant.id }, createData);
@@ -60,6 +83,10 @@ const createAssistant = async (req, res) => {
assistant.conversation_starters = document.conversation_starters;
}
if (append_current_datetime !== undefined) {
assistant.append_current_datetime = append_current_datetime;
}
logger.debug('/assistants/', assistant);
res.status(201).json(assistant);
} catch (error) {
@@ -102,16 +129,33 @@ const patchAssistant = async (req, res) => {
await validateAuthor({ req, openai });
const assistant_id = req.params.id;
const { endpoint: _e, conversation_starters, ...updateData } = req.body;
const {
endpoint: _e,
conversation_starters,
append_current_datetime,
...updateData
} = req.body;
updateData.tools = (updateData.tools ?? [])
.map((tool) => {
if (typeof tool !== 'string') {
return tool;
}
return req.app.locals.availableTools[tool];
const toolDefinitions = req.app.locals.availableTools;
const toolDef = toolDefinitions[tool];
if (!toolDef && manifestToolMap[tool] && manifestToolMap[tool].toolkit === true) {
return (
Object.entries(toolDefinitions)
.filter(([key]) => key.startsWith(`${tool}_`))
// eslint-disable-next-line no-unused-vars
.map(([_, val]) => val)
);
}
return toolDef;
})
.filter((tool) => tool);
.filter((tool) => tool)
.flat();
if (openai.locals?.azureOptions && updateData.model) {
updateData.model = openai.locals.azureOptions.azureOpenAIApiDeploymentName;
@@ -127,6 +171,11 @@ const patchAssistant = async (req, res) => {
updatedAssistant.conversation_starters = conversationStartersUpdate.conversation_starters;
}
if (append_current_datetime !== undefined) {
await updateAssistantDoc({ assistant_id }, { append_current_datetime });
updatedAssistant.append_current_datetime = append_current_datetime;
}
res.json(updatedAssistant);
} catch (error) {
logger.error('[/assistants/:id] Error updating assistant', error);
@@ -219,6 +268,7 @@ const getAssistantDocuments = async (req, res) => {
conversation_starters: 1,
createdAt: 1,
updatedAt: 1,
append_current_datetime: 1,
},
);

View File

@@ -2,6 +2,7 @@ const { ToolCallTypes } = require('librechat-data-provider');
const validateAuthor = require('~/server/middleware/assistants/validateAuthor');
const { validateAndUpdateTool } = require('~/server/services/ActionService');
const { updateAssistantDoc } = require('~/models/Assistant');
const { manifestToolMap } = require('~/app/clients/tools');
const { getOpenAIClient } = require('./helpers');
const { logger } = require('~/config');
@@ -16,8 +17,15 @@ const createAssistant = async (req, res) => {
/** @type {{ openai: OpenAIClient }} */
const { openai } = await getOpenAIClient({ req, res });
const { tools = [], endpoint, conversation_starters, ...assistantData } = req.body;
const {
tools = [],
endpoint,
conversation_starters,
append_current_datetime,
...assistantData
} = req.body;
delete assistantData.conversation_starters;
delete assistantData.append_current_datetime;
assistantData.tools = tools
.map((tool) => {
@@ -25,9 +33,21 @@ const createAssistant = async (req, res) => {
return tool;
}
return req.app.locals.availableTools[tool];
const toolDefinitions = req.app.locals.availableTools;
const toolDef = toolDefinitions[tool];
if (!toolDef && manifestToolMap[tool] && manifestToolMap[tool].toolkit === true) {
return (
Object.entries(toolDefinitions)
.filter(([key]) => key.startsWith(`${tool}_`))
// eslint-disable-next-line no-unused-vars
.map(([_, val]) => val)
);
}
return toolDef;
})
.filter((tool) => tool);
.filter((tool) => tool)
.flat();
let azureModelIdentifier = null;
if (openai.locals?.azureOptions) {
@@ -46,6 +66,9 @@ const createAssistant = async (req, res) => {
if (conversation_starters) {
createData.conversation_starters = conversation_starters;
}
if (append_current_datetime !== undefined) {
createData.append_current_datetime = append_current_datetime;
}
const document = await updateAssistantDoc({ assistant_id: assistant.id }, createData);
@@ -56,6 +79,9 @@ const createAssistant = async (req, res) => {
if (document.conversation_starters) {
assistant.conversation_starters = document.conversation_starters;
}
if (append_current_datetime !== undefined) {
assistant.append_current_datetime = append_current_datetime;
}
logger.debug('/assistants/', assistant);
res.status(201).json(assistant);
@@ -89,11 +115,40 @@ const updateAssistant = async ({ req, openai, assistant_id, updateData }) => {
delete updateData.conversation_starters;
}
if (updateData?.append_current_datetime !== undefined) {
await updateAssistantDoc(
{ assistant_id: assistant_id },
{ append_current_datetime: updateData.append_current_datetime },
);
delete updateData.append_current_datetime;
}
let hasFileSearch = false;
for (const tool of updateData.tools ?? []) {
let actualTool = typeof tool === 'string' ? req.app.locals.availableTools[tool] : tool;
const toolDefinitions = req.app.locals.availableTools;
let actualTool = typeof tool === 'string' ? toolDefinitions[tool] : tool;
if (!actualTool) {
if (!actualTool && manifestToolMap[tool] && manifestToolMap[tool].toolkit === true) {
actualTool = Object.entries(toolDefinitions)
.filter(([key]) => key.startsWith(`${tool}_`))
// eslint-disable-next-line no-unused-vars
.map(([_, val]) => val);
} else if (!actualTool) {
continue;
}
if (Array.isArray(actualTool)) {
for (const subTool of actualTool) {
if (!subTool.function) {
tools.push(subTool);
continue;
}
const updatedTool = await validateAndUpdateTool({ req, tool: subTool, assistant_id });
if (updatedTool) {
tools.push(updatedTool);
}
}
continue;
}

View File

@@ -1,14 +1,32 @@
const cookies = require('cookie');
const { Issuer } = require('openid-client');
const { logoutUser } = require('~/server/services/AuthService');
const { isEnabled } = require('~/server/utils');
const { logger } = require('~/config');
const logoutController = async (req, res) => {
const refreshToken = req.headers.cookie ? cookies.parse(req.headers.cookie).refreshToken : null;
try {
const logout = await logoutUser(req.user._id, refreshToken);
const logout = await logoutUser(req, refreshToken);
const { status, message } = logout;
res.clearCookie('refreshToken');
return res.status(status).send({ message });
const response = { message };
if (
req.user.openidId != null &&
isEnabled(process.env.OPENID_USE_END_SESSION_ENDPOINT) &&
process.env.OPENID_ISSUER
) {
const issuer = await Issuer.discover(process.env.OPENID_ISSUER);
const redirect = issuer.metadata.end_session_endpoint;
if (!redirect) {
logger.warn(
'[logoutController] end_session_endpoint not found in OpenID issuer metadata. Please verify that the issuer is correct.',
);
} else {
response.redirect = redirect;
}
}
return res.status(status).send(response);
} catch (err) {
logger.error('[logoutController]', err);
return res.status(500).json({ message: err.message });

View File
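When the user signed in via OpenID and OPENID_USE_END_SESSION_ENDPOINT is enabled, the logout response now carries the issuer's end-session URL so the client can terminate the SSO session as well. A hypothetical client-side handler (endpoint path and field names assumed from the controller above):

async function handleLogout() {
  const res = await fetch('/api/auth/logout', { method: 'POST' });
  const { redirect } = await res.json();
  if (redirect) {
    // RP-initiated logout: hand the browser to the OpenID provider
    window.location.href = redirect;
  }
}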

@@ -84,6 +84,7 @@ const startServer = async () => {
app.use('/oauth', routes.oauth);
/* API Endpoints */
app.use('/api/auth', routes.auth);
app.use('/api/actions', routes.actions);
app.use('/api/keys', routes.keys);
app.use('/api/user', routes.user);
app.use('/api/search', routes.search);

View File

@@ -75,8 +75,9 @@ const createAbortController = (req, res, getAbortData, getReqData) => {
const abortKey = userMessage?.conversationId ?? req.user.id;
const prevRequest = abortControllers.get(abortKey);
const { overrideUserMessageId } = req?.body ?? {};
if (prevRequest && prevRequest?.abortController) {
if (overrideUserMessageId != null && prevRequest && prevRequest?.abortController) {
const data = prevRequest.abortController.getAbortData();
getReqData({ userMessage: data?.userMessage });
const addedAbortKey = `${abortKey}:${responseMessageId}`;

View File

@@ -27,6 +27,10 @@ async function abortRun(req, res) {
const cacheKey = `${req.user.id}:${conversationId}`;
const cache = getLogStores(CacheKeys.ABORT_KEYS);
const runValues = await cache.get(cacheKey);
if (!runValues) {
logger.warn('[abortRun] Run not found in cache', { cacheKey });
return res.status(204).send({ message: 'Run not found' });
}
const [thread_id, run_id] = runValues.split(':');
if (!run_id) {

View File

@@ -63,6 +63,10 @@ async function buildEndpointOption(req, res, next) {
}
try {
currentModelSpec.preset.spec = spec;
if (currentModelSpec.iconURL != null && currentModelSpec.iconURL !== '') {
currentModelSpec.preset.iconURL = currentModelSpec.iconURL;
}
parsedBody = parseCompactConvo({
endpoint,
endpointType,
@@ -79,7 +83,7 @@ async function buildEndpointOption(req, res, next) {
const builder = isAgents ? (...args) => endpointFn(req, ...args) : endpointFn;
// TODO: use object params
req.body.endpointOption = builder(endpoint, parsedBody, endpointType);
req.body.endpointOption = await builder(endpoint, parsedBody, endpointType);
// TODO: use `getModelsConfig` only when necessary
const modelsConfig = await getModelsConfig(req);

View File

@@ -1,4 +1,4 @@
const { isDomainAllowed } = require('~/server/services/AuthService');
const { isEmailDomainAllowed } = require('~/server/services/domains');
const { logger } = require('~/config');
/**
@@ -14,7 +14,7 @@ const { logger } = require('~/config');
*/
const checkDomainAllowed = async (req, res, next = () => {}) => {
const email = req?.user?.email;
if (email && !(await isDomainAllowed(email))) {
if (email && !(await isEmailDomainAllowed(email))) {
logger.error(`[Social Login] [Social Login not allowed] [Email: ${email}]`);
return res.redirect('/login');
} else {

View File

@@ -1,3 +1,4 @@
jest.mock('~/cache/getLogStores');
const request = require('supertest');
const express = require('express');
const routes = require('../');

View File

@@ -0,0 +1,136 @@
const express = require('express');
const jwt = require('jsonwebtoken');
const { getAccessToken } = require('~/server/services/TokenService');
const { logger, getFlowStateManager } = require('~/config');
const { getLogStores } = require('~/cache');
const router = express.Router();
const JWT_SECRET = process.env.JWT_SECRET;
/**
* Handles the OAuth callback and exchanges the authorization code for tokens.
*
* @route GET /actions/:action_id/oauth/callback
* @param {string} req.params.action_id - The ID of the action.
* @param {string} req.query.code - The authorization code returned by the provider.
* @param {string} req.query.state - The state token to verify the authenticity of the request.
* @returns {void} Sends a success message after updating the action with OAuth tokens.
*/
router.get('/:action_id/oauth/callback', async (req, res) => {
const { action_id } = req.params;
const { code, state } = req.query;
const flowManager = await getFlowStateManager(getLogStores);
let identifier = action_id;
try {
let decodedState;
try {
decodedState = jwt.verify(state, JWT_SECRET);
} catch (err) {
await flowManager.failFlow(identifier, 'oauth', 'Invalid or expired state parameter');
return res.status(400).send('Invalid or expired state parameter');
}
if (decodedState.action_id !== action_id) {
await flowManager.failFlow(identifier, 'oauth', 'Mismatched action ID in state parameter');
return res.status(400).send('Mismatched action ID in state parameter');
}
if (!decodedState.user) {
await flowManager.failFlow(identifier, 'oauth', 'Invalid user ID in state parameter');
return res.status(400).send('Invalid user ID in state parameter');
}
identifier = `${decodedState.user}:${action_id}`;
const flowState = await flowManager.getFlowState(identifier, 'oauth');
if (!flowState) {
throw new Error('OAuth flow not found');
}
const tokenData = await getAccessToken({
code,
userId: decodedState.user,
identifier,
client_url: flowState.metadata.client_url,
redirect_uri: flowState.metadata.redirect_uri,
/** Encrypted values */
encrypted_oauth_client_id: flowState.metadata.encrypted_oauth_client_id,
encrypted_oauth_client_secret: flowState.metadata.encrypted_oauth_client_secret,
});
await flowManager.completeFlow(identifier, 'oauth', tokenData);
res.send(`
<!DOCTYPE html>
<html>
<head>
<title>Authentication Successful</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<style>
body {
font-family: ui-sans-serif, system-ui, -apple-system, BlinkMacSystemFont;
background-color: rgb(249, 250, 251);
margin: 0;
padding: 2rem;
display: flex;
justify-content: center;
align-items: center;
min-height: 100vh;
}
.card {
background-color: white;
border-radius: 0.5rem;
padding: 2rem;
max-width: 28rem;
width: 100%;
box-shadow: 0 4px 6px -1px rgb(0 0 0 / 0.1), 0 2px 4px -2px rgb(0 0 0 / 0.1);
text-align: center;
}
.heading {
color: rgb(17, 24, 39);
font-size: 1.875rem;
font-weight: 700;
margin: 0 0 1rem;
}
.description {
color: rgb(75, 85, 99);
font-size: 0.875rem;
margin: 0.5rem 0;
}
.countdown {
color: rgb(99, 102, 241);
font-weight: 500;
}
</style>
</head>
<body>
<div class="card">
<h1 class="heading">Authentication Successful</h1>
<p class="description">
Your authentication was successful. This window will close in
<span class="countdown" id="countdown">3</span> seconds.
</p>
</div>
<script>
let secondsLeft = 3;
const countdownElement = document.getElementById('countdown');
const countdown = setInterval(() => {
secondsLeft--;
countdownElement.textContent = secondsLeft;
if (secondsLeft <= 0) {
clearInterval(countdown);
window.close();
}
}, 1000);
</script>
</body>
</html>
`);
} catch (error) {
logger.error('Error in OAuth callback:', error);
await flowManager.failFlow(identifier, 'oauth', error);
res.status(500).send('Authentication failed. Please try again.');
}
});
module.exports = router;

Some files were not shown because too many files have changed in this diff.