Compare commits

...

75 Commits

Author SHA1 Message Date
Danny Avila
600d21780b v0.7.5 (#4541) 2024-10-24 17:32:55 -04:00
Danny Avila
3f3b5929e9 🛡️ fix: Minor Vulnerabilities (#4543)
* fix: ReDoS in ChatGPT Import

* ci: should correctly process citations from real ChatGPT data

* ci: Add ReDoS vulnerability test for processAssistantMessage

* refactor: Update thread management and citation handling

* refactor(validateImageRequest): robust validation

* refactor(Prompt.js): update name search regex to escape special characters

* refactor(Preset): exclude user from preset update to prevent mass assignment

* refactor(files.js): Improve file deletion process

* ci: updated validateImageRequest.spec.js

* a11y: plugin pagination

* refactor(CreatePromptForm.tsx): Improve input field styling

* chore(Prompts): typing and accessibility

* fix: prompt creation access role check

* chore: remove duplicate jsdocs
2024-10-24 15:50:48 -04:00
Danny Avila
094a40dbb0 🌏 i18n: Added Missing Localizations (Ar, De, Es, Fr, It, Jp, Ko, Ru, Zh) (#4540)
* chore: remove comparisons

* feat: use prompt caching for translations

* chore: wip translation readme

* i18n: korean translations

* refactor: use promises for faster translation processing

* refactor: update translation model to 'claude-3-5-sonnet-20241022'

* refactor: optimize sleep duration for translation processing

* i18n: add missing keys

* refactor: standardize language names, each written in its own respective language

* Refactor translation instructions in README.md
2024-10-24 10:48:57 -04:00
Danny Avila
840851cb0f 🍎 fix: Update "Enter to send" behavior for Mac users (#4539)
* fix: Update useTextarea to handle Ctrl+Enter on Mac

* fix: Update language files for message sending behavior
2024-10-24 09:49:10 -04:00
J'aimemin
c346596131 🌏 i18n: modify username min length in Ko.ts (3→2) (#4532) 2024-10-24 09:45:38 -04:00
Danny Avila
e0e393b8a4 🎚️ fix: Google top_k Slider Step to Integers (#4537) 2024-10-24 09:24:38 -04:00
Danny Avila
2996058fa2 🔘 a11y: Switch Contrast and File Input Key Events to WCAG (#4536)
* 🔘 a11y: Improve Contrast of Switch/Toggles to WCAG Standard

* refactor: Improve file attachment accessibility in Chat Input component

* refactor: clear input ref value before clicks
2024-10-24 09:12:49 -04:00
Danny Avila
655f63714b 👓 fix: Assistants Vision Prompt Error Handling (legacy) (#4529)
* fix: vision prompt error handling

* fix: Update model reference in chatV1 controller

* remove model reference
2024-10-23 15:21:22 -04:00
Danny Avila
4da35b9cf5 🤖 feat: Add support for claude-3-5-sonnet-20241022 (#4510) 2024-10-22 16:45:26 -04:00
Danny Avila
ebe3e7f796 🖼️ fix: Avatar Handling for Agents and Assistants (#4507)
The changes include:
- In the agent controller:
  - Removed the parsing of the avatar metadata from the request body.
  - Fetched the avatar data from the agent object using the agent ID.
  - Updated the error logging when fetching the agent.
  - Updated the deleteFileByFilter function to include the user ID when deleting the old avatar file.

- In the assistant controller:
  - Removed the parsing of the metadata from the request body.
  - Fetched the metadata from the assistant object using the assistant ID.
  - Updated the error logging when fetching the assistant.
  - Updated the deleteFileByFilter function to include the user ID when deleting the old avatar file.
2024-10-22 14:53:45 -04:00
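A minimal sketch of the flow described above; the module paths, helper names, and signatures are assumptions based on the summary, not verbatim repository code.

// Sketch only: paths and signatures are assumed from the summary above.
const { getAgent, updateAgent } = require('~/models/Agent');
const { deleteFileByFilter } = require('~/models/File');

async function replaceAgentAvatar(req, agent_id, newAvatar) {
  // Avatar metadata is read from the stored agent, not parsed from the request body.
  const agent = await getAgent({ id: agent_id });
  const previousAvatar = agent?.avatar;
  if (previousAvatar?.filepath) {
    // The user id is now included in the filter when deleting the old avatar file.
    await deleteFileByFilter({ user: req.user.id, filepath: previousAvatar.filepath });
  }
  return updateAgent({ id: agent_id }, { avatar: newAvatar });
}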
Danny Avila
ec922986a9 🤖 fix: Address Minor Agent Issues (#4483)
* fix(Agents): remove test code in openAI/llm.js

* refactor: add use of enums in encodeAndFormat

* fix: image attachment payload formatting for agents

* chore: imports
2024-10-21 09:41:04 -04:00
Danny Avila
a6fbe7591a chore: bump librechat-data-provider 2024-10-21 09:38:40 -04:00
Danny Avila
f121439960 🔐 refactor: Unverified User Verification Logic (#4482) 2024-10-21 07:51:45 -04:00
Marco Beretta
4d4a6b53f1 🎨 style: UI Style Enhancements and Refactor for Improved Consistency and Layout (#4471)
* 🎨 style: adjust padding and class names in UI components

* 🎨 style: update ExportModal export button, update Export button hover style, refactor ChatForm style and fixed isRTL styles, update AttachFile position

* 🎨 style: remove redundant border classes in SettingsTabs components for cleaner UI

* 🎨 style: refactor Account component, extract DisplayUsernameMessages, and remove redundant border classes for cleaner layout

* 🎨 style: conditionally render Dropdown in ForkSettings component for improved UI responsiveness

* 🎨 style: replace DropdownNoState with Dropdown in voice selection components for consistency

* 🎨 style: update Settings component layout for better responsiveness on large screens

* 🎨 style: remove redundant margin-top classes for cleaner layout in various components
2024-10-20 11:29:47 -04:00
Danny Avila
ecf5699513 🧪 chore: raise max temperature to 2 for OpenAI/Custom Endpoints 2024-10-19 08:39:15 -04:00
Alex Wegener
e25c16cd4f 🖼️ feat: Add dat.gui to Artifacts UI libs (#4344) 2024-10-19 08:34:57 -04:00
Marco Beretta
8f3de7d11f 🎨 refactor: UI style (#4438)
* feat: Refactor ChatForm and StopButton components for improved styling and localization

* feat: Refactor AudioRecorder, ChatForm, AttachFile, and SendButton components for improved styling and layout

* feat: Add RevokeAllKeys component and update styling for buttons and inputs

* feat: Refactor ClearChats component and update ClearConvos functionality for improved clarity and user experience

* feat: Remove ClearConvos component and update related imports and functionality in Avatar and DeleteCacheButton components

* feat: Rename DeleteCacheButton to DeleteCache and update related imports; enhance confirmation message in localization

* feat: Update ChatForm layout for RTL support and improve component structure

* feat: Adjust ChatForm layout for improved RTL support and alignment

* feat: Refactor Bookmark components to use new UI elements and improve styling

* feat: Update FileSearch and ShareAgent components for improved button styling and layout

* feat: Update ChatForm and TextareaHeader styles for improved UI consistency

* feat: Refactor Nav components for improved styling and layout adjustments

* feat: Update button sizes and padding for improved UI consistency across chat components

* feat: Remove ClearChatsButton test file as part of code cleanup
2024-10-19 08:30:52 -04:00
adrianfagerland
20fb7f05ae 🔃 refactor: rename all server endpoints to use same file names (#4364) 2024-10-19 08:24:07 -04:00
Yuki Matsukura
f3e2bd0a12 🐋 chore: remove Docker version syntax as it's no longer needed (#4375) 2024-10-19 08:22:26 -04:00
Danny Avila
b85c6206ab 🤖 fix: Minor Assistants Issues (#4436)
* refactor(OpenAIClient): titleChatCompletion try/catch

* fix: remove duplicate concatenation as it seems to be handled by the client SDK now

* fix: assistants image upload

* chore: imports order
2024-10-16 15:04:10 -04:00
Danny Avila
65888c274a ⬆️ feat: Cancel chat file uploads; fix: Assistant uploads (#4433)
* refactor: move file mutations to dedicated file, improve typing

* refactor(ChatForm): utilize FileFormWrapper to consolidate file upload logic/rendering to single parent

* refactor: better TSX hierarchies between AttachFile and FileFormWrapper

* refactor: `abortUpload` WIP

* fix: file debugging and file upload issues

* refactor: reject promise outright if axios intercepted error does not include response property

* chore: bump data-provider version to 0.7.428

* refactor: Add return type to localize function in Translation.ts

* refactor: allow message file attachment upload request cancellations, and add localizations for file upload errors

* refactor: include Azure OpenAI in paramEndpoints set

* fix: assistant form uploads and better typing

* refactor: consolidate logic
2024-10-16 11:24:40 -04:00
Danny Avila
0870acd086 📦 chore: npm package audit (#4424)
* chore: bump cookie dependencies

* chore: bump express, express-session, express-rate-limit, and all vulnerable `cookie` dependencies
2024-10-15 20:05:22 -04:00
Danny Avila
c54a57019e 🕒 feat: Add 5-second timeout for Fetching Model Lists (#4423)
* refactor: add 5 second timeout for fetching AI provider model lists

* ci: fix test due to recent changes
2024-10-15 19:37:41 -04:00
Marco Beretta
ef118009f6 feat: Add GOOGLE_LOC environment variable (#4395) 2024-10-15 18:10:48 -04:00
Hongkai Ye
bf5b87e0b2 🪙 feat: Update token value for gpt-4o (#4387) 2024-10-11 08:27:29 -04:00
Danny Avila
bab0152c58 🤖 feat: Enhance Assistant Model Handling for Model Specs (#4390)
* chore: cleanup type issues in client/src/utils/endpoints

* refactor: use Constant enum for 'new' conversationId

* refactor: select assistant model if not provided for model spec
2024-10-11 08:20:32 -04:00
Abhijith E A
2846779603 🔨 fix(AzureOpenAI): o1 model, stream parameter check (#4381) 2024-10-10 11:10:15 -04:00
Danny Avila
873e0473ec 🧠 feat: Implement O1 Model Support for Max Tokens Handling (#4376) 2024-10-10 02:36:36 -04:00
Hanna Daoud
bdc2fd307f 🔨 fix(ToolCall): Check output string type before performing .toLowerCase() (#4324) 2024-10-08 01:26:03 -04:00
Jürgen Walter
5da7766fad 💬 fix: adjust regex in ModelService to recognize o1 models
The API query to OpenAI returns a list of models whose names are filtered using a regex. The regex did not yet account for model names starting with o1- (see the sketch after this entry).
2024-10-07 13:33:43 -04:00
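An illustrative sketch of that kind of prefix filter; the actual pattern lives in ModelService and may differ.

// Illustrative only: the real regex in api/server/services/ModelService.js may differ.
// A prefix filter like `before` silently drops o1 models from the fetched list;
// extending it to match names starting with "o1" keeps them.
const before = /^(gpt-|chatgpt-|text-davinci)/;
const after = /^(gpt-|chatgpt-|text-davinci|o1)/;

const fetched = ['gpt-4o', 'o1-preview', 'o1-mini', 'whisper-1'];
console.log(fetched.filter((name) => before.test(name))); // ['gpt-4o']
console.log(fetched.filter((name) => after.test(name)));  // ['gpt-4o', 'o1-preview', 'o1-mini']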
Danny Avila
519df46e1f 🪨 feat: RAG API Support for AWS Bedrock (#4322)
* feat: bedrock/agents legacy rag api file processing

* refactor: use agent instructions for build message options
2024-10-03 08:53:28 -04:00
Danny Avila
104341e0e7 🖼️ fix: Prevent Empty Avatar Source (#4321) 2024-10-03 07:42:15 -04:00
Pranshu Mahajan
cb0b69e807 🪖 refactor: Helm chart release workflow (#4311) 2024-10-03 07:20:56 -04:00
normunds-wipo
77bcb80e00 🛠️ fix: Remove expiresAt field when setting expiry to "never" (#4294) 2024-10-03 07:17:21 -04:00
bijucyborg
ee5b96a7c8 🔖 fix: bookmark error using CosmosDB - Added index to position field in schema (#4296) 2024-10-03 07:15:27 -04:00
Danny Avila
2ca257dfb9 ⚙️ fix: minor issues related to agents (#4297)
* chore: deprecate `web-browser` tool

* fix: edit agent permission
2024-10-01 11:11:15 -04:00
Danny Avila
2ce8647540 👷 refactor(removeNullishValues): allow empty strings configured in parameters (#4291) 2024-09-30 18:15:16 -04:00
Danny Avila
ad74350036 🚧 chore: merge latest dev build (#4288)
* fix: agent initialization, add `collectedUsage` handling

* style: improve side panel styling

* refactor(loadAgent): Optimize order of agent project ID retrieval

* feat: code execution

* fix: typing issues

* feat: ExecuteCode content part

* refactor: use local state for default collapsed state of analysis content parts

* fix: code parsing in ExecuteCode component

* chore: bump agents package, export loadAuthValues

* refactor: Update handleTools.js to use EnvVar for code execution tool authentication

* WIP

* feat: download code outputs

* fix(useEventHandlers): type issues

* feat: backend handling for code outputs

* Refactor: Remove console.log statement in Part.tsx

* refactor: add attachments to TMessage/messageSchema

* WIP: prelim handling for code outputs

* feat: attachments rendering

* refactor: improve attachments rendering

* fix: attachments, nullish edge case, handle attachments from event stream, bump agents package

* fix filename download

* fix: tool assignment for 'run code' on agent creation

* fix: image handling by adding attachments

* refactor: prevent agent creation without provider/model

* refactor: remove unnecessary space in agent creation success message

* refactor: select first model if selecting provider from empty on form

* fix: Agent avatar bug

* fix: `defaultAgentFormValues` causing boolean typing issue and TypeError

* fix: capabilities counting as tools, causing duplication of them

* fix: formatted messages edge case where consecutive content text-type parts, with the latter having tool_call_ids, would cause consecutive AI messages to be created. Furthermore, content could not be an array for tool_use messages (Anthropic limitation)

* chore: bump @librechat/agents dependency to version 1.6.9

* feat: bedrock agents

* feat: new Agents icon

* feat: agent titling

* feat: agent landing

* refactor: allow sharing agent globally only if user is admin or author

* feat: initial AgentPanelSkeleton

* feat: AgentPanelSkeleton

* feat: collaborative agents

* chore: add potential authorName as part of schema

* chore: Remove unnecessary console.log statement

* WIP: agent model parameters

* chore: ToolsDialog typing and tool related localization changes

* refactor: update tool instance type (latest langchain class), and rename google tool to 'google' proper

* chore: add back tools

* feat: Agent knowledge files upload

* refactor: better verbiage for disabled knowledge

* chore: debug logs for file deletions

* chore: debug logs for file deletions

* feat: upload/delete agent knowledge/file-search files

* feat: file search UI for agents

* feat: first pass, file search tool

* chore: update default agent capabilities and info
2024-09-30 17:17:57 -04:00
Danny Avila
f33e75e2ee 🏷️ fix: Ensure modelLabel Field Usage for ModelSpecs (custom/openAI endpoints) pt. 3 (#4228) 2024-09-24 11:34:42 -04:00
Danny Avila
9e371d6157 🧹 chore: bump vite-plugin-pwa to ^0.20.5, and use overrides to address CVE-2024-47068 (#4226)
* chore: bump vite-plugin-pwa to `^0.20.5`, and use `overrides` to address CVE-2024-47068

* chore: bump data-provider version
2024-09-24 10:36:11 -04:00
Danny Avila
ba1014a038 🏷️ fix: Ensure modelLabel Field Usage for ModelSpecs pt. 2 (#4225)
* fix: ensure modelSpec presets have endpointType defined, add `modelLabel` to openAISchema

* chore: bump rollup due to CVE-2024-47068
2024-09-24 09:52:22 -04:00
Danny Avila
6f498eee0f 🏷️ fix: Ensure modelLabel Field Usage for ModelSpecs/GPTPlugins (#4224) 2024-09-24 07:27:11 -04:00
Danny Avila
321260e3c7 🔄 refactor: Apply Config Preset for Model Spec Enforcement (#4214) 2024-09-23 21:47:49 -04:00
Danny Avila
17e59349ff 📎 feat: Attachment Handling for v1/completions (#4205)
* refactor: add handling of attachments in v1/completions method

* ci: update OpenAIClient.test.js
2024-09-23 11:03:28 -04:00
Danny Avila
4328a25b6b 🧹 fix: Resolve Unarchive Conversation Bug, Archive Pagination (#4189)
* feat: add cleanup service for 'bugged' conversations (empty/nullish conversationIds)

* fix(ArchivedChatsTable): typing and minor styling issues

* fix: properly archive conversations

* fix: archive convo application crash

* chore: remove unused `useEffect`

* fix: add basic navigation

* chore: typing
2024-09-22 17:21:50 -04:00
Marco Beretta
2d62eca612 👐 style: Improve a11y/theming for Settings Dialog, Dropdown Menus; fix: SearchBar focus issues (#4091)
* fix: cursor pointer not applying correct in the root component

* fix: add cursor-not-allowed to disabled state in SendButton component

* feat: update Dropdown to ariakit and changed LLM error's style

* feat: switched to ariakit's Dropdown and style improvements

* feat: archive updates

* refactor: delete conversations in archive

* refactor: settings

* add cool settings animation

* a11y: settings update

* style: update settings

* style: settings account settings menu; a11y(AccountSettings): switched to AriaKit

* a11y: account settings update

* style: update my files dialog

* fix: tests

* chore: remove console.log()

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2024-09-21 22:45:50 -04:00
Danny Avila
eba2c9a032 📅 fix: Conversation grouping and labeling for prior years (#4180) 2024-09-21 18:50:04 -04:00
Danny Avila
b0a48fd693 📧 feat: LDAP Authentication Enhancement for Email Handling (#4177)
* allow other LDAP fields besides "mail", or fall back to a made-up email

* chore(ldap): add detailed logging for email fallback scenarios

---------

Co-authored-by: Maxim Bonnaerens <maxim@bonnaerens.be>
2024-09-21 10:44:27 -04:00
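A rough sketch of the described fallback; the attribute names and fallback domain are assumptions, not the repository's exact LDAP strategy.

// Rough sketch with assumed names: LDAP_EMAIL selects an alternative attribute,
// and a synthetic address is used when the directory entry has no email at all.
function resolveLdapEmail(userinfo) {
  const emailField = process.env.LDAP_EMAIL || 'mail';
  const username = userinfo[process.env.LDAP_USERNAME || 'uid'];
  return userinfo[emailField] || `${username}@ldap.local`; // fallback domain is an assumption
}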
Riya Amemiya
561650d6f9 🐛 fix(analytics): prevent multiple GTM initializations (#4174)
* feat(types): Add global window interface for Google Tag Manager

* refactor(Chat/Footer): Move GTM initialization to useEffect for better lifecycle management

* fix(hooks): add useEffect to initialize TagManager conditionally
2024-09-21 10:30:40 -04:00
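A minimal sketch of the guarded initialization described above, assuming react-gtm-module and a hypothetical gtmId argument; not the exact hook in the repository.

// Sketch only: the guard prevents re-initializing GTM on re-render.
import { useEffect } from 'react';
import TagManager from 'react-gtm-module';

export default function useGoogleTagManager(gtmId) {
  useEffect(() => {
    // window.google_tag_manager is set by GTM itself once it has loaded.
    if (!gtmId || window.google_tag_manager) {
      return;
    }
    TagManager.initialize({ gtmId });
  }, [gtmId]);
}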
Danny Avila
c1c13a69dc 🗂️ fix: Optimize Conversation Grouping and Sorting (#4173)
* chore: remove double import of TrashIcon

* fix(convos): eslint warnings

* ci(convos): add test for month sorting

* fix(convos): grouping by month in chronological order instead of alphabetical, optimize sort

* ci: additional tests for conversation sorting

* chore: fix eslint disable rule

* chore: imports, use constant enum for 'new' value

* fix: test dependent on current date
2024-09-21 10:20:30 -04:00
Danny Avila
44458d3832 🔖 fix: URI Encoding for Bookmarks (#4172)
* fix: never defined AcceptTermsMutationOptions

* fix: lack of URI encoding in tag mutations
2024-09-21 09:06:02 -04:00
Danny Avila
be44caaab1 🖋️ feat: Add option to render User Messages as Markdown (#4170) 2024-09-20 20:29:42 -04:00
Danny Avila
42b7373ddc 🎨 fix: Terms and Conditions Modal Styling (#4169)
* fix: modal styling issue, where buttons in light mode are not accessible/visible

* refactor: use MarkdownLite instead

* chore: make inner content accessible
2024-09-20 18:33:56 -04:00
Vesna Tan
d096c281ba 👐 a11y: New Chat button - focus, mobile label, collapsed sidebar label (#4069) 2024-09-19 20:32:04 -04:00
Danny Avila
94d1afee84 🛡️ chore: address several npm vulnerabilities (#4151)
* chore: bump express to 4.21.0 to address CVE-2024-45590 and CVE-2024-43796

* chore: npm audit fix

* chore: uninstall unused `ws` dependency

* chore: bump nodemailer to 6.9.15

* chore: bump mongoose to v7.3.3

* chore: bump lint-staged for micromatch upgrade

* chore: bump axios to 1.7.7

* chore: npm audit fix for mongodb/mongoose vulns
2024-09-19 20:28:32 -04:00
Danny Avila
f7341336dd 🤖 ci: Dependabot for Security Updates (#4134) 2024-09-19 18:22:03 -04:00
Danny Avila
fd056d2e9c 🤖 ci: Dependabot for Security Updates (#4134) 2024-09-19 18:16:18 -04:00
Danny Avila
486db5722b 🤖 ci: Configure Dependabot for Security Updates (#4134) 2024-09-19 18:14:11 -04:00
Danny Avila
33f80cd70c 🤖 ci: Configure Dependabot for Security Updates (#4134) 2024-09-19 18:11:06 -04:00
Danny Avila
3ea2d908e0 🛠️ fix: getStreamUsage Method in OpenAIClient (#4133) 2024-09-19 18:07:49 -04:00
Danny Avila
5f28682314 🔧 fix: OpenAIClient Response Handling for Legacy /v1/completions (#4128) 2024-09-19 13:20:29 -04:00
Danny Avila
8dc5b320bc 📊 refactor: use Parameters from Side Panel for OpenAI, Anthropic, and Custom endpoints (#4092)
* feat: openai parameters

* refactor: anthropic/bedrock params, add preset params for openai, and add azure params

* refactor: use 'compact' schemas for anthropic/openai

* refactor: ensure custom endpoints are properly recognized as valid param endpoints

* refactor: update paramEndpoints check in BaseClient.js

* chore: optimize logging by omitting modelsConfig

* refactor: update label casing in baseDefinitions combobox items

* fix: remove 'stop' model options when using o1 series models

* refactor(AnthropicClient): remove default `stop` value

* refactor: reset params on parameters change

* refactor: remove unused default parameter value map introduced in prior commit

* fix: 'min' typo for 'max' value

* refactor: preset settings

* refactor: replace dropdown for image detail with slider; remove `preventDelayedUpdate` condition from DynamicSlider

* fix: localizations for freq./pres. penalty

* Refactor maxOutputTokens to use coerceNumber in tConversationSchema

* refactor(AnthropicClient): use `getModelMaxOutputTokens`
2024-09-17 22:25:54 -04:00
Danny Avila
ebdbfe8427 🛠️ fix: Chrome App Crash on Endpoint Selection in Edit Preset Dialog (#4096) 2024-09-17 15:33:12 -04:00
Danny Avila
fc887ba847 📁 feat: Add C# Support for Native File Search (#4058) 2024-09-15 13:14:33 -04:00
Danny Avila
ab82966210 🔐 feat: Enhance Bedrock Credential Handling (#4051) 2024-09-14 12:52:35 -04:00
Danny Avila
f1ae267850 🪙 fix: usage check for reasoning_tokens 2024-09-13 09:51:09 -04:00
Daniel
c792e3279f 🍪 fix: input validation for lang cookie (#4024)
Co-authored-by: DanielAlt <daniel.altenburg@proton.me>
2024-09-13 09:00:59 -04:00
Marco Beretta
4ef5ae6f71 💡 style: switched to Ariakit's tooltip (#3748)
* initial Tooltip implementation and test

* style(tooltip): L/R sidePanel and Nav

* style(tooltip): unarchive button; refactor: `useArchiveHandler` and `ArchiveButton`

* style(tooltip): Delete button

* refactor: remove unused className prop in DeleteButton component

* style(tooltip): finish final tooltip and fix bookmark edit and delete button

* refactor(ui): remove TooltipTest and DropDownMenu component and unused imports

* style: update mobile UI

* fix: sidePanel icon not showing

* feat(AttachFile): add tooltip

* fix(NavToggle): remove button
Without this button, keyboard users no longer have to press twice to change focus.
Also, tooltips don't trigger when the button holds focus.

* fix: right side panel issue with double button

* fix: merge issues

* fix: sharedLink table issue

* chore: update ariakit and framer-motion version

* a11y: kb toggle for sidebar

* feat: tooltip for some buttons
2024-09-13 08:59:09 -04:00
Danny Avila
e293ff63f9 🪨 feat: AWS Bedrock Default Credentials Chain (#4038)
* feat: use AWS cascading default providers if credentials are omitted

- Environment variables exposed via process.env
- SSO credentials from the token cache
- Web identity token credentials
- Shared credentials and config ini files
- The EC2/ECS Instance Metadata Service

The default credential provider will invoke one provider at a time and only continue to the next if no credentials have been located. For example, if the process finds values defined via the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, the files at ~/.aws/credentials and ~/.aws/config will not be read, nor will any messages be sent to the Instance Metadata Service.

* fix: usage check in OpenAIClient

* refactor: Improve usage check in OpenAIClient
2024-09-13 08:53:50 -04:00
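A minimal sketch of relying on the default chain described in the commit above, assuming @aws-sdk/client-bedrock-runtime and the region variable name shown; omitting the credentials option lets the SDK walk the provider chain on its own.

// Sketch only: the env var name and client options are assumptions, not the repository's exact code.
const { BedrockRuntimeClient } = require('@aws-sdk/client-bedrock-runtime');

const client = new BedrockRuntimeClient({
  region: process.env.BEDROCK_AWS_DEFAULT_REGION || 'us-east-1',
  // No `credentials` option: the SDK's default Node.js provider chain resolves credentials
  // (env vars, SSO token cache, web identity, shared ini files, instance metadata), in order.
});

module.exports = client;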
Danny Avila
45b42830a5 🚀 feat: o1 (#4019)
* feat: o1 default response sender string

* feat: add o1 models to default openai models list, add `no_system_messages` error type; refactor: use error type as localization key

* refactor(MessageEndpointIcon): differentiate openAI icon model color for o1 models

* refactor(AnthropicClient): use new input/output tokens keys; add prompt caching for claude-3-opus

* refactor(BaseClient): to use new input/output tokens keys; update typedefs

* feat: initial o1 model handling, including token cost complexity

* EXPERIMENTAL: special handling for o1 model with custom instructions
2024-09-12 18:15:43 -04:00
Danny Avila
9a393be012 🪨 fix: Formatting Edge Case Handling for Bedrock Messages (#4016)
* refactor: Remove console.log statement in SelectDropDown component

* fix(bedrock): edge case - message.content as string creating message formatting issue
2024-09-12 06:06:22 -04:00
Marco Beretta
c3dc03b063 🔐 fix: token not using webcrypto (#4005)
* fix: token

* style: auth pages updated `|` color
2024-09-11 22:25:14 -04:00
Yuichi Oneda
aea01f0bc5 🚀 feat: Banner (#3952)
* feat: Add banner schema and model

* feat: Add optional JwtAuth

To handle the conditional logic with and without authentication within the model.

* feat: Add an endpoint to retrieve a banner

* feat: Add implementation for client to use banner and access API

* feat: Display a banner on UI

* feat: Script for updating and deleting banners

* style: Update banner style

* fix: Adjust the height when the banner is displayed

* fix: failed specs
2024-09-11 09:34:25 -04:00
Sebastian Diez
07e5531b5b ⚙️ fix: Ensure Azure AI Search TOP is a number (#3891)
Allows configuration through the AZURE_AI_SEARCH_SEARCH_OPTION_TOP environment variable
2024-09-11 09:21:33 -04:00
Marco Beretta
35a89bfa99 🔐 style: update auth and loading screen (#3875)
* style: improve auth UI

* style(SocialButton): fix hover style

* remove testing files

* fix: package-lock

* feat: loading screen color based on theme

* fix: handle `system` style on loading screen

* fix(ThemeSelector): Correct icon and text color handling for `system` theme

* remove test file
2024-09-11 09:20:19 -04:00
415 changed files with 13743 additions and 36003 deletions

View File

@@ -1,5 +1,3 @@
version: "3.8"
services:
app:
build:

View File

@@ -82,7 +82,7 @@ PROXY=
#============#
ANTHROPIC_API_KEY=user_provided
# ANTHROPIC_MODELS=claude-3-5-sonnet-20240620,claude-3-opus-20240229,claude-3-sonnet-20240229,claude-3-haiku-20240307,claude-2.1,claude-2,claude-1.2,claude-1,claude-1-100k,claude-instant-1,claude-instant-1-100k
# ANTHROPIC_MODELS=claude-3-5-sonnet-20241022,claude-3-5-sonnet-latest,claude-3-5-sonnet-20240620,claude-3-opus-20240229,claude-3-sonnet-20240229,claude-3-haiku-20240307,claude-2.1,claude-2,claude-1.2,claude-1,claude-1-100k,claude-instant-1,claude-instant-1-100k
# ANTHROPIC_REVERSE_PROXY=
#============#
@@ -146,6 +146,8 @@ GOOGLE_KEY=user_provided
# GOOGLE_TITLE_MODEL=gemini-pro
# GOOGLE_LOC=us-central1
# Google Safety Settings
# NOTE: These settings apply to both Vertex AI and Gemini API (AI Studio)
#
@@ -412,6 +414,7 @@ LDAP_CA_CERT_PATH=
# LDAP_LOGIN_USES_USERNAME=true
# LDAP_ID=
# LDAP_USERNAME=
# LDAP_EMAIL=
# LDAP_FULL_NAME=
#========================#

View File

@@ -1,47 +0,0 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
- package-ecosystem: "npm" # See documentation for possible values
directory: "/api" # Location of package manifests
target-branch: "dev"
versioning-strategy: increase-if-necessary
schedule:
interval: "weekly"
allow:
# Allow both direct and indirect updates for all packages
- dependency-type: "all"
commit-message:
prefix: "npm api prod"
prefix-development: "npm api dev"
include: "scope"
- package-ecosystem: "npm" # See documentation for possible values
directory: "/client" # Location of package manifests
target-branch: "dev"
versioning-strategy: increase-if-necessary
schedule:
interval: "weekly"
allow:
# Allow both direct and indirect updates for all packages
- dependency-type: "all"
commit-message:
prefix: "npm client prod"
prefix-development: "npm client dev"
include: "scope"
- package-ecosystem: "npm" # See documentation for possible values
directory: "/" # Location of package manifests
target-branch: "dev"
versioning-strategy: increase-if-necessary
schedule:
interval: "weekly"
allow:
# Allow both direct and indirect updates for all packages
- dependency-type: "all"
commit-message:
prefix: "npm all prod"
prefix-development: "npm all dev"
include: "scope"

View File

@@ -25,11 +25,9 @@ jobs:
- name: Install Helm
uses: azure/setup-helm@v4
env:
GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
- name: Run chart-releaser
uses: helm/chart-releaser-action@v1.6.0
with:
charts_dir: helmchart
env:
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

View File

@@ -1,4 +1,4 @@
# v0.7.5-rc2
# v0.7.5
# Base node image
FROM node:20-alpine AS node

View File

@@ -1,5 +1,5 @@
# Dockerfile.multi
# v0.7.5-rc2
# v0.7.5
# Base for all builds
FROM node:20-alpine AS base

View File

@@ -17,8 +17,8 @@ const {
parseParamFromPrompt,
createContextHandlers,
} = require('./prompts');
const { getModelMaxTokens, getModelMaxOutputTokens, matchModelName } = require('~/utils');
const { spendTokens, spendStructuredTokens } = require('~/models/spendTokens');
const { getModelMaxTokens, matchModelName } = require('~/utils');
const { sleep } = require('~/server/utils');
const BaseClient = require('./BaseClient');
const { logger } = require('~/config');
@@ -64,6 +64,12 @@ class AnthropicClient extends BaseClient {
/** Whether or not the model supports Prompt Caching
* @type {boolean} */
this.supportsCacheControl;
/** The key for the usage object's input tokens
* @type {string} */
this.inputTokensKey = 'input_tokens';
/** The key for the usage object's output tokens
* @type {string} */
this.outputTokensKey = 'output_tokens';
}
setOptions(options) {
@@ -114,7 +120,14 @@ class AnthropicClient extends BaseClient {
this.options.maxContextTokens ??
getModelMaxTokens(this.modelOptions.model, EModelEndpoint.anthropic) ??
100000;
this.maxResponseTokens = this.modelOptions.maxOutputTokens || 1500;
this.maxResponseTokens =
this.modelOptions.maxOutputTokens ??
getModelMaxOutputTokens(
this.modelOptions.model,
this.options.endpointType ?? this.options.endpoint,
this.options.endpointTokenConfig,
) ??
1500;
this.maxPromptTokens =
this.options.maxPromptTokens || this.maxContextTokens - this.maxResponseTokens;
@@ -138,17 +151,6 @@ class AnthropicClient extends BaseClient {
this.endToken = '';
this.gptEncoder = this.constructor.getTokenizer('cl100k_base');
if (!this.modelOptions.stop) {
const stopTokens = [this.startToken];
if (this.endToken && this.endToken !== this.startToken) {
stopTokens.push(this.endToken);
}
stopTokens.push(`${this.userLabel}`);
stopTokens.push('<|diff_marker|>');
this.modelOptions.stop = stopTokens;
}
return this;
}
@@ -200,7 +202,7 @@ class AnthropicClient extends BaseClient {
}
/**
* Calculates the correct token count for the current message based on the token count map and API usage.
* Calculates the correct token count for the current user message based on the token count map and API usage.
* Edge case: If the calculation results in a negative value, it returns the original estimate.
* If revisiting a conversation with a chat history entirely composed of token estimates,
* the cumulative token count going forward should become more accurate as the conversation progresses.
@@ -208,7 +210,7 @@ class AnthropicClient extends BaseClient {
* @param {Record<string, number>} params.tokenCountMap - A map of message IDs to their token counts.
* @param {string} params.currentMessageId - The ID of the current message to calculate.
* @param {AnthropicStreamUsage} params.usage - The usage object returned by the API.
* @returns {number} The correct token count for the current message.
* @returns {number} The correct token count for the current user message.
*/
calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage }) {
const originalEstimate = tokenCountMap[currentMessageId] || 0;
@@ -680,7 +682,14 @@ class AnthropicClient extends BaseClient {
*/
checkPromptCacheSupport(modelName) {
const modelMatch = matchModelName(modelName, EModelEndpoint.anthropic);
if (modelMatch === 'claude-3-5-sonnet' || modelMatch === 'claude-3-haiku') {
if (modelMatch.includes('claude-3-5-sonnet-latest')) {
return false;
}
if (
modelMatch === 'claude-3-5-sonnet' ||
modelMatch === 'claude-3-haiku' ||
modelMatch === 'claude-3-opus'
) {
return true;
}
return false;

View File

@@ -3,7 +3,7 @@ const fetch = require('node-fetch');
const {
supportsBalanceCheck,
isAgentsEndpoint,
paramEndpoints,
isParamEndpoint,
ErrorTypes,
Constants,
CacheKeys,
@@ -42,6 +42,14 @@ class BaseClient {
this.conversationId;
/** @type {string} */
this.responseMessageId;
/** @type {TAttachment[]} */
this.attachments;
/** The key for the usage object's input tokens
* @type {string} */
this.inputTokensKey = 'prompt_tokens';
/** The key for the usage object's output tokens
* @type {string} */
this.outputTokensKey = 'completion_tokens';
}
setOptions() {
@@ -582,7 +590,10 @@ class BaseClient {
if (typeof completion === 'string') {
responseMessage.text = addSpaceIfNeeded(generation) + completion;
} else if (Array.isArray(completion) && paramEndpoints.has(this.options.endpoint)) {
} else if (
Array.isArray(completion) &&
isParamEndpoint(this.options.endpoint, this.options.endpointType)
) {
responseMessage.text = '';
responseMessage.content = completion;
} else if (Array.isArray(completion)) {
@@ -604,8 +615,8 @@ class BaseClient {
* @type {StreamUsage | null} */
const usage = this.getStreamUsage != null ? this.getStreamUsage() : null;
if (usage != null && Number(usage.output_tokens) > 0) {
responseMessage.tokenCount = usage.output_tokens;
if (usage != null && Number(usage[this.outputTokensKey]) > 0) {
responseMessage.tokenCount = usage[this.outputTokensKey];
completionTokens = responseMessage.tokenCount;
await this.updateUserMessageTokenCount({ usage, tokenCountMap, userMessage, opts });
} else {
@@ -620,6 +631,10 @@ class BaseClient {
await this.userMessagePromise;
}
if (this.artifactPromises) {
responseMessage.attachments = (await Promise.all(this.artifactPromises)).filter((a) => a);
}
this.responsePromise = this.saveMessageToDatabase(responseMessage, saveOptions, user);
const messageCache = getLogStores(CacheKeys.MESSAGES);
messageCache.set(
@@ -655,7 +670,7 @@ class BaseClient {
/** @type {boolean} */
const shouldUpdateCount =
this.calculateCurrentTokenCount != null &&
Number(usage.input_tokens) > 0 &&
Number(usage[this.inputTokensKey]) > 0 &&
(this.options.resendFiles ||
(!this.options.resendFiles && !this.options.attachments?.length)) &&
!this.options.promptPrefix;

View File

@@ -1,19 +1,21 @@
const Keyv = require('keyv');
const crypto = require('crypto');
const { CohereClient } = require('cohere-ai');
const { fetchEventSource } = require('@waylaidwanderer/fetch-event-source');
const { encoding_for_model: encodingForModel, get_encoding: getEncoding } = require('tiktoken');
const {
ImageDetail,
EModelEndpoint,
resolveHeaders,
CohereConstants,
mapModelToAzureConfig,
} = require('librechat-data-provider');
const { CohereClient } = require('cohere-ai');
const { encoding_for_model: encodingForModel, get_encoding: getEncoding } = require('tiktoken');
const { fetchEventSource } = require('@waylaidwanderer/fetch-event-source');
const { extractBaseURL, constructAzureURL, genAzureChatCompletion } = require('~/utils');
const { createContextHandlers } = require('./prompts');
const { createCoherePayload } = require('./llm');
const { Agent, ProxyAgent } = require('undici');
const BaseClient = require('./BaseClient');
const { logger } = require('~/config');
const { extractBaseURL, constructAzureURL, genAzureChatCompletion } = require('~/utils');
const CHATGPT_MODEL = 'gpt-3.5-turbo';
const tokenizersCache = {};
@@ -612,21 +614,66 @@ ${botMessage.message}
async buildPrompt(messages, { isChatGptModel = false, promptPrefix = null }) {
promptPrefix = (promptPrefix || this.options.promptPrefix || '').trim();
// Handle attachments and create augmentedPrompt
if (this.options.attachments) {
const attachments = await this.options.attachments;
const lastMessage = messages[messages.length - 1];
if (this.message_file_map) {
this.message_file_map[lastMessage.messageId] = attachments;
} else {
this.message_file_map = {
[lastMessage.messageId]: attachments,
};
}
const files = await this.addImageURLs(lastMessage, attachments);
this.options.attachments = files;
this.contextHandlers = createContextHandlers(this.options.req, lastMessage.text);
}
if (this.message_file_map) {
this.contextHandlers = createContextHandlers(
this.options.req,
messages[messages.length - 1].text,
);
}
// Calculate image token cost and process embedded files
messages.forEach((message, i) => {
if (this.message_file_map && this.message_file_map[message.messageId]) {
const attachments = this.message_file_map[message.messageId];
for (const file of attachments) {
if (file.embedded) {
this.contextHandlers?.processFile(file);
continue;
}
messages[i].tokenCount =
(messages[i].tokenCount || 0) +
this.calculateImageTokenCost({
width: file.width,
height: file.height,
detail: this.options.imageDetail ?? ImageDetail.auto,
});
}
}
});
if (this.contextHandlers) {
this.augmentedPrompt = await this.contextHandlers.createContext();
promptPrefix = this.augmentedPrompt + promptPrefix;
}
if (promptPrefix) {
// If the prompt prefix doesn't end with the end token, add it.
if (!promptPrefix.endsWith(`${this.endToken}`)) {
promptPrefix = `${promptPrefix.trim()}${this.endToken}\n\n`;
}
promptPrefix = `${this.startToken}Instructions:\n${promptPrefix}`;
} else {
const currentDateString = new Date().toLocaleDateString('en-us', {
year: 'numeric',
month: 'long',
day: 'numeric',
});
promptPrefix = `${this.startToken}Instructions:\nYou are ChatGPT, a large language model trained by OpenAI. Respond conversationally.\nCurrent date: ${currentDateString}${this.endToken}\n\n`;
}
const promptSuffix = `${this.startToken}${this.chatGptLabel}:\n`; // Prompt ChatGPT to respond.
const instructionsPayload = {
@@ -714,10 +761,6 @@ ${botMessage.message}
this.maxResponseTokens,
);
if (this.options.debug) {
console.debug(`Prompt : ${prompt}`);
}
if (isChatGptModel) {
return { prompt: [instructionsPayload, messagePayload], context };
}

View File

@@ -28,7 +28,7 @@ const {
} = require('./prompts');
const BaseClient = require('./BaseClient');
const loc = 'us-central1';
const loc = process.env.GOOGLE_LOC || 'us-central1';
const publisher = 'google';
const endpointPrefix = `https://${loc}-aiplatform.googleapis.com`;
// const apiEndpoint = loc + '-aiplatform.googleapis.com';
@@ -593,6 +593,8 @@ class GoogleClient extends BaseClient {
createLLM(clientOptions) {
const model = clientOptions.modelName ?? clientOptions.model;
clientOptions.location = loc;
clientOptions.endpoint = `${loc}-aiplatform.googleapis.com`;
if (this.project_id && this.isTextModel) {
logger.debug('Creating Google VertexAI client');
return new GoogleVertexAI(clientOptions);

View File

@@ -60,7 +60,9 @@ class OllamaClient {
try {
const ollamaEndpoint = deriveBaseURL(baseURL);
/** @type {Promise<AxiosResponse<OllamaListResponse>>} */
const response = await axios.get(`${ollamaEndpoint}/api/tags`);
const response = await axios.get(`${ollamaEndpoint}/api/tags`, {
timeout: 5000,
});
models = response.data.models.map((tag) => tag.name);
return models;
} catch (error) {

View File

@@ -19,6 +19,7 @@ const {
constructAzureURL,
getModelMaxTokens,
genAzureChatCompletion,
getModelMaxOutputTokens,
} = require('~/utils');
const {
truncateText,
@@ -64,6 +65,11 @@ class OpenAIClient extends BaseClient {
/** @type {string | undefined} - The API Completions URL */
this.completionsUrl;
/** @type {OpenAIUsageMetadata | undefined} */
this.usage;
/** @type {boolean|undefined} */
this.isO1Model;
}
// TODO: PluginsClient calls this 3x, unneeded
@@ -94,6 +100,8 @@ class OpenAIClient extends BaseClient {
this.options.modelOptions,
);
this.isO1Model = /\bo1\b/i.test(this.modelOptions.model);
this.defaultVisionModel = this.options.visionModel ?? 'gpt-4-vision-preview';
if (typeof this.options.attachments?.then === 'function') {
this.options.attachments.then((attachments) => this.checkVisionRequest(attachments));
@@ -138,7 +146,8 @@ class OpenAIClient extends BaseClient {
const { model } = this.modelOptions;
this.isChatCompletion = this.useOpenRouter || !!reverseProxy || model.includes('gpt');
this.isChatCompletion =
/\bo1\b/i.test(model) || model.includes('gpt') || this.useOpenRouter || !!reverseProxy;
this.isChatGptModel = this.isChatCompletion;
if (
model.includes('text-davinci') ||
@@ -169,7 +178,14 @@ class OpenAIClient extends BaseClient {
logger.debug('[OpenAIClient] maxContextTokens', this.maxContextTokens);
}
this.maxResponseTokens = this.modelOptions.max_tokens || 1024;
this.maxResponseTokens =
this.modelOptions.max_tokens ??
getModelMaxOutputTokens(
model,
this.options.endpointType ?? this.options.endpoint,
this.options.endpointTokenConfig,
) ??
1024;
this.maxPromptTokens =
this.options.maxPromptTokens || this.maxContextTokens - this.maxResponseTokens;
@@ -187,8 +203,8 @@ class OpenAIClient extends BaseClient {
model: this.modelOptions.model,
endpoint: this.options.endpoint,
endpointType: this.options.endpointType,
chatGptLabel: this.options.chatGptLabel,
modelDisplayLabel: this.options.modelDisplayLabel,
chatGptLabel: this.options.chatGptLabel || this.options.modelLabel,
});
this.userLabel = this.options.userLabel || 'User';
@@ -533,7 +549,7 @@ class OpenAIClient extends BaseClient {
promptPrefix = this.augmentedPrompt + promptPrefix;
}
if (promptPrefix) {
if (promptPrefix && this.isO1Model !== true) {
promptPrefix = `Instructions:\n${promptPrefix.trim()}`;
instructions = {
role: 'system',
@@ -561,6 +577,16 @@ class OpenAIClient extends BaseClient {
messages,
};
/** EXPERIMENTAL */
if (promptPrefix && this.isO1Model === true) {
const lastUserMessageIndex = payload.findLastIndex((message) => message.role === 'user');
if (lastUserMessageIndex !== -1) {
payload[
lastUserMessageIndex
].content = `${promptPrefix}\n${payload[lastUserMessageIndex].content}`;
}
}
if (tokenCountMap) {
tokenCountMap.instructions = instructions?.tokenCount;
result.tokenCountMap = tokenCountMap;
@@ -621,6 +647,12 @@ class OpenAIClient extends BaseClient {
if (completionResult && typeof completionResult === 'string') {
reply = completionResult;
} else if (
completionResult &&
typeof completionResult === 'object' &&
Array.isArray(completionResult.choices)
) {
reply = completionResult.choices[0]?.text?.replace(this.endToken, '');
}
} else if (typeof opts.onProgress === 'function' || this.options.useChatCompletion) {
reply = await this.chatCompletion({
@@ -810,27 +842,27 @@ class OpenAIClient extends BaseClient {
}
const titleChatCompletion = async () => {
modelOptions.model = model;
try {
modelOptions.model = model;
if (this.azure) {
modelOptions.model = process.env.AZURE_OPENAI_DEFAULT_MODEL ?? modelOptions.model;
this.azureEndpoint = genAzureChatCompletion(this.azure, modelOptions.model, this);
}
if (this.azure) {
modelOptions.model = process.env.AZURE_OPENAI_DEFAULT_MODEL ?? modelOptions.model;
this.azureEndpoint = genAzureChatCompletion(this.azure, modelOptions.model, this);
}
const instructionsPayload = [
{
role: this.options.titleMessageRole ?? (this.isOllama ? 'user' : 'system'),
content: `Please generate ${titleInstruction}
const instructionsPayload = [
{
role: this.options.titleMessageRole ?? (this.isOllama ? 'user' : 'system'),
content: `Please generate ${titleInstruction}
${convo}
||>Title:`,
},
];
},
];
const promptTokens = this.getTokenCountForMessage(instructionsPayload[0]);
const promptTokens = this.getTokenCountForMessage(instructionsPayload[0]);
try {
let useChatCompletion = true;
if (this.options.reverseProxyUrl === CohereConstants.API_URL) {
@@ -885,6 +917,60 @@ ${convo}
return title;
}
/**
* Get stream usage as returned by this client's API response.
* @returns {OpenAIUsageMetadata} The stream usage object.
*/
getStreamUsage() {
if (
this.usage &&
typeof this.usage === 'object' &&
'completion_tokens_details' in this.usage &&
this.usage.completion_tokens_details &&
typeof this.usage.completion_tokens_details === 'object' &&
'reasoning_tokens' in this.usage.completion_tokens_details
) {
const outputTokens = Math.abs(
this.usage.completion_tokens_details.reasoning_tokens - this.usage[this.outputTokensKey],
);
return {
...this.usage.completion_tokens_details,
[this.inputTokensKey]: this.usage[this.inputTokensKey],
[this.outputTokensKey]: outputTokens,
};
}
return this.usage;
}
/**
* Calculates the correct token count for the current user message based on the token count map and API usage.
* Edge case: If the calculation results in a negative value, it returns the original estimate.
* If revisiting a conversation with a chat history entirely composed of token estimates,
* the cumulative token count going forward should become more accurate as the conversation progresses.
* @param {Object} params - The parameters for the calculation.
* @param {Record<string, number>} params.tokenCountMap - A map of message IDs to their token counts.
* @param {string} params.currentMessageId - The ID of the current message to calculate.
* @param {OpenAIUsageMetadata} params.usage - The usage object returned by the API.
* @returns {number} The correct token count for the current user message.
*/
calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage }) {
const originalEstimate = tokenCountMap[currentMessageId] || 0;
if (!usage || typeof usage[this.inputTokensKey] !== 'number') {
return originalEstimate;
}
tokenCountMap[currentMessageId] = 0;
const totalTokensFromMap = Object.values(tokenCountMap).reduce((sum, count) => {
const numCount = Number(count);
return sum + (isNaN(numCount) ? 0 : numCount);
}, 0);
const totalInputTokens = usage[this.inputTokensKey] ?? 0;
const currentMessageTokens = totalInputTokens - totalTokensFromMap;
return currentMessageTokens > 0 ? currentMessageTokens : originalEstimate;
}
async summarizeMessages({ messagesToRefine, remainingContextTokens }) {
logger.debug('[OpenAIClient] Summarizing messages...');
let context = messagesToRefine;
@@ -1000,7 +1086,16 @@ ${convo}
}
}
async recordTokenUsage({ promptTokens, completionTokens, context = 'message' }) {
/**
* @param {object} params
* @param {number} params.promptTokens
* @param {number} params.completionTokens
* @param {OpenAIUsageMetadata} [params.usage]
* @param {string} [params.model]
* @param {string} [params.context='message']
* @returns {Promise<void>}
*/
async recordTokenUsage({ promptTokens, completionTokens, usage, context = 'message' }) {
await spendTokens(
{
context,
@@ -1011,6 +1106,24 @@ ${convo}
},
{ promptTokens, completionTokens },
);
if (
usage &&
typeof usage === 'object' &&
'reasoning_tokens' in usage &&
typeof usage.reasoning_tokens === 'number'
) {
await spendTokens(
{
context: 'reasoning',
model: this.modelOptions.model,
conversationId: this.conversationId,
user: this.user ?? this.options.req.user?.id,
endpointTokenConfig: this.options.endpointTokenConfig,
},
{ completionTokens: usage.reasoning_tokens },
);
}
}
getTokenCountForResponse(response) {
@@ -1117,6 +1230,11 @@ ${convo}
opts.defaultHeaders = { ...opts.defaultHeaders, 'api-key': this.apiKey };
}
if (this.isO1Model === true && modelOptions.max_tokens != null) {
modelOptions.max_completion_tokens = modelOptions.max_tokens;
delete modelOptions.max_tokens;
}
if (process.env.OPENAI_ORGANIZATION) {
opts.organization = process.env.OPENAI_ORGANIZATION;
}
@@ -1191,6 +1309,11 @@ ${convo}
/** @type {(value: void | PromiseLike<void>) => void} */
let streamResolve;
if (modelOptions.stream && this.isO1Model) {
delete modelOptions.stream;
delete modelOptions.stop;
}
if (modelOptions.stream) {
streamPromise = new Promise((resolve) => {
streamResolve = resolve;
@@ -1269,6 +1392,8 @@ ${convo}
}
const { choices } = chatCompletion;
this.usage = chatCompletion.usage;
if (!Array.isArray(choices) || choices.length === 0) {
logger.warn('[OpenAIClient] Chat completion response has no choices');
return intermediateReply.join('');

View File

@@ -0,0 +1,285 @@
const { ToolMessage } = require('@langchain/core/messages');
const { ContentTypes } = require('librechat-data-provider');
const { HumanMessage, AIMessage, SystemMessage } = require('langchain/schema');
const { formatAgentMessages } = require('./formatMessages');
describe('formatAgentMessages', () => {
it('should format simple user and AI messages', () => {
const payload = [
{ role: 'user', content: 'Hello' },
{ role: 'assistant', content: 'Hi there!' },
];
const result = formatAgentMessages(payload);
expect(result).toHaveLength(2);
expect(result[0]).toBeInstanceOf(HumanMessage);
expect(result[1]).toBeInstanceOf(AIMessage);
});
it('should handle system messages', () => {
const payload = [{ role: 'system', content: 'You are a helpful assistant.' }];
const result = formatAgentMessages(payload);
expect(result).toHaveLength(1);
expect(result[0]).toBeInstanceOf(SystemMessage);
});
it('should format messages with content arrays', () => {
const payload = [
{
role: 'user',
content: [{ type: ContentTypes.TEXT, [ContentTypes.TEXT]: 'Hello' }],
},
];
const result = formatAgentMessages(payload);
expect(result).toHaveLength(1);
expect(result[0]).toBeInstanceOf(HumanMessage);
});
it('should handle tool calls and create ToolMessages', () => {
const payload = [
{
role: 'assistant',
content: [
{
type: ContentTypes.TEXT,
[ContentTypes.TEXT]: 'Let me check that for you.',
tool_call_ids: ['123'],
},
{
type: ContentTypes.TOOL_CALL,
tool_call: {
id: '123',
name: 'search',
args: '{"query":"weather"}',
output: 'The weather is sunny.',
},
},
],
},
];
const result = formatAgentMessages(payload);
expect(result).toHaveLength(2);
expect(result[0]).toBeInstanceOf(AIMessage);
expect(result[1]).toBeInstanceOf(ToolMessage);
expect(result[0].tool_calls).toHaveLength(1);
expect(result[1].tool_call_id).toBe('123');
});
it('should handle multiple content parts in assistant messages', () => {
const payload = [
{
role: 'assistant',
content: [
{ type: ContentTypes.TEXT, [ContentTypes.TEXT]: 'Part 1' },
{ type: ContentTypes.TEXT, [ContentTypes.TEXT]: 'Part 2' },
],
},
];
const result = formatAgentMessages(payload);
expect(result).toHaveLength(1);
expect(result[0]).toBeInstanceOf(AIMessage);
expect(result[0].content).toHaveLength(2);
});
it('should throw an error for invalid tool call structure', () => {
const payload = [
{
role: 'assistant',
content: [
{
type: ContentTypes.TOOL_CALL,
tool_call: {
id: '123',
name: 'search',
args: '{"query":"weather"}',
output: 'The weather is sunny.',
},
},
],
},
];
expect(() => formatAgentMessages(payload)).toThrow('Invalid tool call structure');
});
it('should handle tool calls with non-JSON args', () => {
const payload = [
{
role: 'assistant',
content: [
{ type: ContentTypes.TEXT, [ContentTypes.TEXT]: 'Checking...', tool_call_ids: ['123'] },
{
type: ContentTypes.TOOL_CALL,
tool_call: {
id: '123',
name: 'search',
args: 'non-json-string',
output: 'Result',
},
},
],
},
];
const result = formatAgentMessages(payload);
expect(result).toHaveLength(2);
expect(result[0].tool_calls[0].args).toBe('non-json-string');
});
it('should handle complex tool calls with multiple steps', () => {
const payload = [
{
role: 'assistant',
content: [
{
type: ContentTypes.TEXT,
[ContentTypes.TEXT]: 'I\'ll search for that information.',
tool_call_ids: ['search_1'],
},
{
type: ContentTypes.TOOL_CALL,
tool_call: {
id: 'search_1',
name: 'search',
args: '{"query":"weather in New York"}',
output: 'The weather in New York is currently sunny with a temperature of 75°F.',
},
},
{
type: ContentTypes.TEXT,
[ContentTypes.TEXT]: 'Now, I\'ll convert the temperature.',
tool_call_ids: ['convert_1'],
},
{
type: ContentTypes.TOOL_CALL,
tool_call: {
id: 'convert_1',
name: 'convert_temperature',
args: '{"temperature": 75, "from": "F", "to": "C"}',
output: '23.89°C',
},
},
{ type: ContentTypes.TEXT, [ContentTypes.TEXT]: 'Here\'s your answer.' },
],
},
];
const result = formatAgentMessages(payload);
expect(result).toHaveLength(5);
expect(result[0]).toBeInstanceOf(AIMessage);
expect(result[1]).toBeInstanceOf(ToolMessage);
expect(result[2]).toBeInstanceOf(AIMessage);
expect(result[3]).toBeInstanceOf(ToolMessage);
expect(result[4]).toBeInstanceOf(AIMessage);
// Check first AIMessage
expect(result[0].content).toBe('I\'ll search for that information.');
expect(result[0].tool_calls).toHaveLength(1);
expect(result[0].tool_calls[0]).toEqual({
id: 'search_1',
name: 'search',
args: { query: 'weather in New York' },
});
// Check first ToolMessage
expect(result[1].tool_call_id).toBe('search_1');
expect(result[1].name).toBe('search');
expect(result[1].content).toBe(
'The weather in New York is currently sunny with a temperature of 75°F.',
);
// Check second AIMessage
expect(result[2].content).toBe('Now, I\'ll convert the temperature.');
expect(result[2].tool_calls).toHaveLength(1);
expect(result[2].tool_calls[0]).toEqual({
id: 'convert_1',
name: 'convert_temperature',
args: { temperature: 75, from: 'F', to: 'C' },
});
// Check second ToolMessage
expect(result[3].tool_call_id).toBe('convert_1');
expect(result[3].name).toBe('convert_temperature');
expect(result[3].content).toBe('23.89°C');
// Check final AIMessage
expect(result[4].content).toStrictEqual([
{ [ContentTypes.TEXT]: 'Here\'s your answer.', type: ContentTypes.TEXT },
]);
});
it.skip('should not produce two consecutive assistant messages and format content correctly', () => {
const payload = [
{ role: 'user', content: 'Hello' },
{
role: 'assistant',
content: [{ type: ContentTypes.TEXT, [ContentTypes.TEXT]: 'Hi there!' }],
},
{
role: 'assistant',
content: [{ type: ContentTypes.TEXT, [ContentTypes.TEXT]: 'How can I help you?' }],
},
{ role: 'user', content: 'What\'s the weather?' },
{
role: 'assistant',
content: [
{
type: ContentTypes.TEXT,
[ContentTypes.TEXT]: 'Let me check that for you.',
tool_call_ids: ['weather_1'],
},
{
type: ContentTypes.TOOL_CALL,
tool_call: {
id: 'weather_1',
name: 'check_weather',
args: '{"location":"New York"}',
output: 'Sunny, 75°F',
},
},
],
},
{
role: 'assistant',
content: [
{ type: ContentTypes.TEXT, [ContentTypes.TEXT]: 'Here\'s the weather information.' },
],
},
];
const result = formatAgentMessages(payload);
// Check correct message count and types
expect(result).toHaveLength(6);
expect(result[0]).toBeInstanceOf(HumanMessage);
expect(result[1]).toBeInstanceOf(AIMessage);
expect(result[2]).toBeInstanceOf(HumanMessage);
expect(result[3]).toBeInstanceOf(AIMessage);
expect(result[4]).toBeInstanceOf(ToolMessage);
expect(result[5]).toBeInstanceOf(AIMessage);
// Check content of messages
expect(result[0].content).toStrictEqual([
{ [ContentTypes.TEXT]: 'Hello', type: ContentTypes.TEXT },
]);
expect(result[1].content).toStrictEqual([
{ [ContentTypes.TEXT]: 'Hi there!', type: ContentTypes.TEXT },
{ [ContentTypes.TEXT]: 'How can I help you?', type: ContentTypes.TEXT },
]);
expect(result[2].content).toStrictEqual([
{ [ContentTypes.TEXT]: 'What\'s the weather?', type: ContentTypes.TEXT },
]);
expect(result[3].content).toBe('Let me check that for you.');
expect(result[4].content).toBe('Sunny, 75°F');
expect(result[5].content).toStrictEqual([
{ [ContentTypes.TEXT]: 'Here\'s the weather information.', type: ContentTypes.TEXT },
]);
// Check that there are no consecutive AIMessages
const messageTypes = result.map((message) => message.constructor);
for (let i = 0; i < messageTypes.length - 1; i++) {
expect(messageTypes[i] === AIMessage && messageTypes[i + 1] === AIMessage).toBe(false);
}
// Additional check to ensure the consecutive assistant messages were combined
expect(result[1].content).toHaveLength(2);
});
});

View File

@@ -142,6 +142,9 @@ const formatAgentMessages = (payload) => {
const messages = [];
for (const message of payload) {
if (typeof message.content === 'string') {
message.content = [{ type: ContentTypes.TEXT, [ContentTypes.TEXT]: message.content }];
}
if (message.role !== 'assistant') {
messages.push(formatMessage({ message, langChain: true }));
continue;
@@ -152,10 +155,22 @@ const formatAgentMessages = (payload) => {
for (const part of message.content) {
if (part.type === ContentTypes.TEXT && part.tool_call_ids) {
// If there's pending content, add it as an AIMessage
/*
If there's pending content, it needs to be aggregated as a single string to prepare for tool calls.
For Anthropic models, the "tool_calls" field on a message is only respected if content is a string.
*/
if (currentContent.length > 0) {
messages.push(new AIMessage({ content: currentContent }));
let content = currentContent.reduce((acc, curr) => {
if (curr.type === ContentTypes.TEXT) {
return `${acc}${curr[ContentTypes.TEXT]}\n`;
}
return acc;
}, '');
content = `${content}\n${part[ContentTypes.TEXT] ?? ''}`.trim();
lastAIMessage = new AIMessage({ content });
messages.push(lastAIMessage);
currentContent = [];
continue;
}
// Create a new AIMessage with this text and prepare for tool calls

View File

@@ -201,10 +201,10 @@ describe('AnthropicClient', () => {
);
});
it('should add beta header for claude-3-5-sonnet model', () => {
it('should add "max-tokens" & "prompt-caching" beta header for claude-3-5-sonnet model', () => {
const client = new AnthropicClient('test-api-key');
const modelOptions = {
model: 'claude-3-5-sonnet-20240307',
model: 'claude-3-5-sonnet-20241022',
};
client.setOptions({ modelOptions, promptCache: true });
const anthropicClient = client.getClient(modelOptions);
@@ -215,7 +215,7 @@ describe('AnthropicClient', () => {
);
});
it('should add beta header for claude-3-haiku model', () => {
it('should add "prompt-caching" beta header for claude-3-haiku model', () => {
const client = new AnthropicClient('test-api-key');
const modelOptions = {
model: 'claude-3-haiku-2028',
@@ -229,6 +229,30 @@ describe('AnthropicClient', () => {
);
});
it('should add "prompt-caching" beta header for claude-3-opus model', () => {
const client = new AnthropicClient('test-api-key');
const modelOptions = {
model: 'claude-3-opus-2028',
};
client.setOptions({ modelOptions, promptCache: true });
const anthropicClient = client.getClient(modelOptions);
expect(anthropicClient._options.defaultHeaders).toBeDefined();
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
'prompt-caching-2024-07-31',
);
});
it('should not add beta header for claude-3-5-sonnet-latest model', () => {
const client = new AnthropicClient('test-api-key');
const modelOptions = {
model: 'anthropic/claude-3-5-sonnet-latest',
};
client.setOptions({ modelOptions, promptCache: true });
const anthropicClient = client.getClient(modelOptions);
expect(anthropicClient.defaultHeaders).not.toHaveProperty('anthropic-beta');
});
it('should not add beta header for other models', () => {
const client = new AnthropicClient('test-api-key');
client.setOptions({

View File

@@ -611,15 +611,7 @@ describe('OpenAIClient', () => {
expect(getCompletion).toHaveBeenCalled();
expect(getCompletion.mock.calls.length).toBe(1);
const currentDateString = new Date().toLocaleDateString('en-us', {
year: 'numeric',
month: 'long',
day: 'numeric',
});
expect(getCompletion.mock.calls[0][0]).toBe(
`||>Instructions:\nYou are ChatGPT, a large language model trained by OpenAI. Respond conversationally.\nCurrent date: ${currentDateString}\n\n||>User:\nHi mom!\n||>Assistant:\n`,
);
expect(getCompletion.mock.calls[0][0]).toBe('||>User:\nHi mom!\n||>Assistant:\n');
expect(fetchEventSource).toHaveBeenCalled();
expect(fetchEventSource.mock.calls.length).toBe(1);
@@ -701,4 +693,70 @@ describe('OpenAIClient', () => {
expect(client.modelOptions.stop).toBeUndefined();
});
});
describe('getStreamUsage', () => {
it('should return this.usage when completion_tokens_details is null', () => {
const client = new OpenAIClient('test-api-key', defaultOptions);
client.usage = {
completion_tokens_details: null,
prompt_tokens: 10,
completion_tokens: 20,
};
client.inputTokensKey = 'prompt_tokens';
client.outputTokensKey = 'completion_tokens';
const result = client.getStreamUsage();
expect(result).toEqual(client.usage);
});
it('should return this.usage when completion_tokens_details is missing reasoning_tokens', () => {
const client = new OpenAIClient('test-api-key', defaultOptions);
client.usage = {
completion_tokens_details: {
other_tokens: 5,
},
prompt_tokens: 10,
completion_tokens: 20,
};
client.inputTokensKey = 'prompt_tokens';
client.outputTokensKey = 'completion_tokens';
const result = client.getStreamUsage();
expect(result).toEqual(client.usage);
});
it('should calculate output tokens correctly when completion_tokens_details is present with reasoning_tokens', () => {
const client = new OpenAIClient('test-api-key', defaultOptions);
client.usage = {
completion_tokens_details: {
reasoning_tokens: 30,
other_tokens: 5,
},
prompt_tokens: 10,
completion_tokens: 20,
};
client.inputTokensKey = 'prompt_tokens';
client.outputTokensKey = 'completion_tokens';
const result = client.getStreamUsage();
expect(result).toEqual({
reasoning_tokens: 30,
other_tokens: 5,
prompt_tokens: 10,
completion_tokens: 10, // |30 - 20| = 10
});
});
it('should return this.usage when it is undefined', () => {
const client = new OpenAIClient('test-api-key', defaultOptions);
client.usage = undefined;
const result = client.getStreamUsage();
expect(result).toBeUndefined();
});
});
});
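
The getStreamUsage implementation itself is not part of this diff; the following sketch only captures the behavior the tests above assert, under the assumption that the output token count is the absolute difference between reasoning_tokens and the reported completion_tokens:

// Sketch only; the real method on OpenAIClient may differ in detail.
getStreamUsage() {
  const details = this.usage?.completion_tokens_details;
  if (details == null || details.reasoning_tokens == null) {
    return this.usage;
  }
  const output = Math.abs(details.reasoning_tokens - this.usage[this.outputTokensKey]);
  return {
    ...details,
    [this.inputTokensKey]: this.usage[this.inputTokensKey],
    [this.outputTokensKey]: output,
  };
}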

View File

@@ -77,7 +77,7 @@ class AzureAISearch extends StructuredTool {
try {
const searchOption = {
queryType: this.queryType,
top: this.top,
top: typeof this.top === 'string' ? Number(this.top) : this.top,
};
if (this.select) {
searchOption.select = this.select.split(',');

View File

@@ -25,7 +25,6 @@ module.exports = {
// Basic Tools
CodeBrew,
AzureAiSearch,
GoogleSearchAPI,
WolframAlphaAPI,
OpenAICreateImage,
StableDiffusionAPI,
@@ -37,8 +36,9 @@ module.exports = {
CodeSherpa,
StructuredSD,
StructuredACS,
GoogleSearchAPI,
CodeSherpaTools,
TraversaalSearch,
StructuredWolfram,
TavilySearchResults,
TraversaalSearch,
};

View File

@@ -1,9 +1,9 @@
const { z } = require('zod');
const { StructuredTool } = require('langchain/tools');
const { Tool } = require('@langchain/core/tools');
const { SearchClient, AzureKeyCredential } = require('@azure/search-documents');
const { logger } = require('~/config');
class AzureAISearch extends StructuredTool {
class AzureAISearch extends Tool {
// Constants for default values
static DEFAULT_API_VERSION = '2023-11-01';
static DEFAULT_QUERY_TYPE = 'simple';
@@ -83,7 +83,7 @@ class AzureAISearch extends StructuredTool {
try {
const searchOption = {
queryType: this.queryType,
top: this.top,
top: typeof this.top === 'string' ? Number(this.top) : this.top,
};
if (this.select) {
searchOption.select = this.select.split(',');

View File

@@ -2,7 +2,7 @@ const { z } = require('zod');
const path = require('path');
const OpenAI = require('openai');
const { v4: uuidv4 } = require('uuid');
const { Tool } = require('langchain/tools');
const { Tool } = require('@langchain/core/tools');
const { HttpsProxyAgent } = require('https-proxy-agent');
const { FileContext } = require('librechat-data-provider');
const { getImageBasename } = require('~/server/services/Files/images');

View File

@@ -4,11 +4,12 @@ const { getEnvironmentVariable } = require('@langchain/core/utils/env');
class GoogleSearchResults extends Tool {
static lc_name() {
return 'GoogleSearchResults';
return 'google';
}
constructor(fields = {}) {
super(fields);
this.name = 'google';
this.envVarApiKey = 'GOOGLE_SEARCH_API_KEY';
this.envVarSearchEngineId = 'GOOGLE_CSE_ID';
this.override = fields.override ?? false;

View File

@@ -5,12 +5,12 @@ const path = require('path');
const axios = require('axios');
const sharp = require('sharp');
const { v4: uuidv4 } = require('uuid');
const { StructuredTool } = require('langchain/tools');
const { Tool } = require('@langchain/core/tools');
const { FileContext } = require('librechat-data-provider');
const paths = require('~/config/paths');
const { logger } = require('~/config');
class StableDiffusionAPI extends StructuredTool {
class StableDiffusionAPI extends Tool {
constructor(fields) {
super();
/** @type {string} User ID */

View File

@@ -1,10 +1,10 @@
/* eslint-disable no-useless-escape */
const axios = require('axios');
const { z } = require('zod');
const { StructuredTool } = require('langchain/tools');
const { Tool } = require('@langchain/core/tools');
const { logger } = require('~/config');
class WolframAlphaAPI extends StructuredTool {
class WolframAlphaAPI extends Tool {
constructor(fields) {
super();
/* Used to initialize the Tool without necessary variables. */

View File

@@ -0,0 +1,104 @@
const { z } = require('zod');
const axios = require('axios');
const { tool } = require('@langchain/core/tools');
const { Tools, EToolResources } = require('librechat-data-provider');
const { getFiles } = require('~/models/File');
const { logger } = require('~/config');
/**
*
* @param {Object} options
* @param {ServerRequest} options.req
* @param {Agent['tool_resources']} options.tool_resources
* @returns
*/
const createFileSearchTool = async (options) => {
const { req, tool_resources } = options;
const file_ids = tool_resources?.[EToolResources.file_search]?.file_ids ?? [];
const files = (await getFiles({ file_id: { $in: file_ids } })).map((file) => ({
file_id: file.file_id,
filename: file.filename,
}));
const fileList = files.map((file) => `- ${file.filename}`).join('\n');
const toolDescription = `Performs a semantic search based on a natural language query across the following files:\n${fileList}`;
const FileSearch = tool(
async ({ query }) => {
if (files.length === 0) {
return 'No files to search. Instruct the user to add files for the search.';
}
const jwtToken = req.headers.authorization.split(' ')[1];
if (!jwtToken) {
return 'There was an error authenticating the file search request.';
}
const queryPromises = files.map((file) =>
axios
.post(
`${process.env.RAG_API_URL}/query`,
{
file_id: file.file_id,
query,
k: 5,
},
{
headers: {
Authorization: `Bearer ${jwtToken}`,
'Content-Type': 'application/json',
},
},
)
.catch((error) => {
logger.error(
`Error encountered in \`file_search\` while querying file_id ${file._id}:`,
error,
);
return null;
}),
);
const results = await Promise.all(queryPromises);
const validResults = results.filter((result) => result !== null);
if (validResults.length === 0) {
return 'No results found or errors occurred while searching the files.';
}
const formattedResults = validResults
.flatMap((result) =>
result.data.map(([docInfo, relevanceScore]) => ({
filename: docInfo.metadata.source.split('/').pop(),
content: docInfo.page_content,
relevanceScore,
})),
)
.sort((a, b) => b.relevanceScore - a.relevanceScore);
const formattedString = formattedResults
.map(
(result) =>
`File: ${result.filename}\nRelevance: ${result.relevanceScore.toFixed(4)}\nContent: ${
result.content
}\n`,
)
.join('\n---\n');
return formattedString;
},
{
name: Tools.file_search,
description: toolDescription,
schema: z.object({
query: z
.string()
.describe(
'A natural language query to search for relevant information in the files. Be specific and use keywords related to the information you\'re looking for. The query will be used for semantic similarity matching against the file contents.',
),
}),
},
);
return FileSearch;
};
module.exports = createFileSearchTool;
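
A hypothetical usage sketch for the new tool, written as it would appear inside an async request handler; it assumes `EToolResources.file_search` resolves to the string 'file_search' and uses the standard `.invoke` interface of tools created with the langchain `tool()` helper. The file id is illustrative:

// req is the authenticated Express request (Authorization header required)
const fileSearch = await createFileSearchTool({
  req,
  tool_resources: {
    file_search: { file_ids: ['file-abc123'] }, // illustrative id
  },
});
// Returns a formatted string of ranked excerpts, or a notice if nothing matched
const results = await fileSearch.invoke({
  query: 'What does the contract say about renewal terms?',
});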

View File

@@ -1,8 +1,8 @@
const { Tools } = require('librechat-data-provider');
const { ZapierToolKit } = require('langchain/agents');
const { Calculator } = require('langchain/tools/calculator');
const { WebBrowser } = require('langchain/tools/webbrowser');
const { SerpAPI, ZapierNLAWrapper } = require('langchain/tools');
const { OpenAIEmbeddings } = require('langchain/embeddings/openai');
const { createCodeExecutionTool, EnvVar } = require('@librechat/agents');
const { getUserPluginAuthValue } = require('~/server/services/PluginService');
const {
availableTools,
@@ -24,16 +24,11 @@ const {
StructuredWolfram,
TavilySearchResults,
} = require('../');
const createFileSearchTool = require('./createFileSearchTool');
const { loadToolSuite } = require('./loadToolSuite');
const { loadSpecs } = require('./loadSpecs');
const { logger } = require('~/config');
const getOpenAIKey = async (options, user) => {
let openAIApiKey = options.openAIApiKey ?? process.env.OPENAI_API_KEY;
openAIApiKey = openAIApiKey === 'user_provided' ? null : openAIApiKey;
return openAIApiKey || (await getUserPluginAuthValue(user, 'OPENAI_API_KEY'));
};
/**
* Validates the availability and authentication of tools for a user based on environment variables or user-specific plugin authentication values.
* Tools without required authentication or with valid authentication are considered valid.
@@ -97,6 +92,45 @@ const validateTools = async (user, tools = []) => {
}
};
const loadAuthValues = async ({ userId, authFields }) => {
let authValues = {};
/**
* Finds the first non-empty value for the given authentication field, supporting alternate fields.
* @param {string[]} fields Array of strings representing the authentication fields. Supports alternate fields delimited by "||".
* @returns {Promise<{ authField: string, authValue: string} | null>} An object containing the authentication field and value, or null if not found.
*/
const findAuthValue = async (fields) => {
for (const field of fields) {
let value = process.env[field];
if (value) {
return { authField: field, authValue: value };
}
try {
value = await getUserPluginAuthValue(userId, field);
} catch (err) {
if (field === fields[fields.length - 1] && !value) {
throw err;
}
}
if (value) {
return { authField: field, authValue: value };
}
}
return null;
};
for (let authField of authFields) {
const fields = authField.split('||');
const result = await findAuthValue(fields);
if (result) {
authValues[result.authField] = result.authValue;
}
}
return authValues;
};
/**
* Initializes a tool with authentication values for the given user, supporting alternate authentication fields.
* Authentication fields can have alternates separated by "||", and the first defined variable will be used.
@@ -109,41 +143,7 @@ const validateTools = async (user, tools = []) => {
*/
const loadToolWithAuth = (userId, authFields, ToolConstructor, options = {}) => {
return async function () {
let authValues = {};
/**
* Finds the first non-empty value for the given authentication field, supporting alternate fields.
* @param {string[]} fields Array of strings representing the authentication fields. Supports alternate fields delimited by "||".
* @returns {Promise<{ authField: string, authValue: string} | null>} An object containing the authentication field and value, or null if not found.
*/
const findAuthValue = async (fields) => {
for (const field of fields) {
let value = process.env[field];
if (value) {
return { authField: field, authValue: value };
}
try {
value = await getUserPluginAuthValue(userId, field);
} catch (err) {
if (field === fields[fields.length - 1] && !value) {
throw err;
}
}
if (value) {
return { authField: field, authValue: value };
}
}
return null;
};
for (let authField of authFields) {
const fields = authField.split('||');
const result = await findAuthValue(fields);
if (result) {
authValues[result.authField] = result.authValue;
}
}
const authValues = await loadAuthValues({ userId, authFields });
return new ToolConstructor({ ...options, ...authValues, userId });
};
};
@@ -169,8 +169,6 @@ const loadTools = async ({
traversaal_search: TraversaalSearch,
};
const openAIApiKey = await getOpenAIKey(options, user);
const customConstructors = {
e2b_code_interpreter: async () => {
if (!functions) {
@@ -183,7 +181,6 @@ const loadTools = async ({
user,
options: {
model,
openAIApiKey,
...options,
},
});
@@ -200,14 +197,6 @@ const loadTools = async ({
options,
});
},
'web-browser': async () => {
// let openAIApiKey = options.openAIApiKey ?? process.env.OPENAI_API_KEY;
// openAIApiKey = openAIApiKey === 'user_provided' ? null : openAIApiKey;
// openAIApiKey = openAIApiKey || (await getUserPluginAuthValue(user, 'OPENAI_API_KEY'));
const browser = new WebBrowser({ model, embeddings: new OpenAIEmbeddings({ openAIApiKey }) });
browser.description_for_model = browser.description;
return browser;
},
serpapi: async () => {
let apiKey = process.env.SERPAPI_API_KEY;
if (!apiKey) {
@@ -264,6 +253,22 @@ const loadTools = async ({
const remainingTools = [];
for (const tool of tools) {
if (tool === Tools.execute_code) {
const authValues = await loadAuthValues({
userId: user.id,
authFields: [EnvVar.CODE_API_KEY],
});
requestedTools[tool] = () =>
createCodeExecutionTool({
user_id: user.id,
...authValues,
});
continue;
} else if (tool === Tools.file_search) {
requestedTools[tool] = () => createFileSearchTool(options);
continue;
}
if (customConstructors[tool]) {
requestedTools[tool] = customConstructors[tool];
continue;
@@ -331,6 +336,7 @@ const loadTools = async ({
module.exports = {
loadToolWithAuth,
loadAuthValues,
validateTools,
loadTools,
};
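
As a usage sketch for the newly exported `loadAuthValues` helper, following the "||" alternate-field convention described in its JSDoc; the environment-variable names below are illustrative, not taken from this diff:

// Resolves each field from process.env first, then falls back to the user's
// stored plugin auth value; alternates are tried left to right.
const authValues = await loadAuthValues({
  userId: user.id,
  authFields: ['AZURE_AI_SEARCH_API_KEY||AZURE_SEARCH_KEY'],
});
// e.g. { AZURE_AI_SEARCH_API_KEY: '...' } if the primary variable is set,
// or { AZURE_SEARCH_KEY: '...' } if only the alternate resolves.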

View File

@@ -1,8 +1,9 @@
const { validateTools, loadTools } = require('./handleTools');
const { validateTools, loadTools, loadAuthValues } = require('./handleTools');
const handleOpenAIErrors = require('./handleOpenAIErrors');
module.exports = {
handleOpenAIErrors,
loadAuthValues,
validateTools,
loadTools,
};

View File

@@ -1,11 +1,14 @@
const mongoose = require('mongoose');
const { SystemRoles } = require('librechat-data-provider');
const { GLOBAL_PROJECT_NAME } = require('librechat-data-provider').Constants;
const { CONFIG_STORE, STARTUP_CONFIG } = require('librechat-data-provider').CacheKeys;
const {
getProjectByName,
addAgentIdsToProject,
removeAgentIdsFromProject,
removeAgentFromAllProjects,
} = require('./Project');
const getLogStores = require('~/cache/getLogStores');
const agentSchema = require('./schema/agent');
const Agent = mongoose.model('agent', agentSchema);
@@ -30,6 +33,43 @@ const createAgent = async (agentData) => {
*/
const getAgent = async (searchParameter) => await Agent.findOne(searchParameter).lean();
/**
* Load an agent based on the provided ID
*
* @param {Object} params
* @param {ServerRequest} params.req
* @param {string} params.agent_id
* @returns {Promise<Agent|null>} The agent document as a plain object, or null if not found.
*/
const loadAgent = async ({ req, agent_id }) => {
const agent = await getAgent({
id: agent_id,
});
if (agent.author.toString() === req.user.id) {
return agent;
}
if (!agent.projectIds) {
return null;
}
const cache = getLogStores(CONFIG_STORE);
/** @type {TStartupConfig} */
const cachedStartupConfig = await cache.get(STARTUP_CONFIG);
let { instanceProjectId } = cachedStartupConfig ?? {};
if (!instanceProjectId) {
instanceProjectId = (await getProjectByName(GLOBAL_PROJECT_NAME, '_id'))._id.toString();
}
for (const projectObjectId of agent.projectIds) {
const projectId = projectObjectId.toString();
if (projectId === instanceProjectId) {
return agent;
}
}
};
/**
* Update an agent with new data without overwriting existing
* properties, or create a new agent if it doesn't exist.
@@ -41,10 +81,76 @@ const getAgent = async (searchParameter) => await Agent.findOne(searchParameter)
* @returns {Promise<Agent>} The updated or newly created agent document as a plain object.
*/
const updateAgent = async (searchParameter, updateData) => {
const options = { new: true, upsert: true };
const options = { new: true, upsert: false };
return await Agent.findOneAndUpdate(searchParameter, updateData, options).lean();
};
/**
* Modifies an agent with the resource file id.
* @param {object} params
* @param {ServerRequest} params.req
* @param {string} params.agent_id
* @param {string} params.tool_resource
* @param {string} params.file_id
* @returns {Promise<Agent>} The updated agent.
*/
const addAgentResourceFile = async ({ agent_id, tool_resource, file_id }) => {
const searchParameter = { id: agent_id };
const agent = await getAgent(searchParameter);
if (!agent) {
throw new Error('Agent not found for adding resource file');
}
const tool_resources = agent.tool_resources || {};
if (!tool_resources[tool_resource]) {
tool_resources[tool_resource] = { file_ids: [] };
}
if (!tool_resources[tool_resource].file_ids.includes(file_id)) {
tool_resources[tool_resource].file_ids.push(file_id);
}
const updateData = { tool_resources };
return await updateAgent(searchParameter, updateData);
};
/**
* Removes a resource file id from an agent.
* @param {object} params
* @param {ServerRequest} params.req
* @param {string} params.agent_id
* @param {string} params.tool_resource
* @param {string} params.file_id
* @returns {Promise<Agent>} The updated agent.
*/
const removeAgentResourceFile = async ({ agent_id, tool_resource, file_id }) => {
const searchParameter = { id: agent_id };
const agent = await getAgent(searchParameter);
if (!agent) {
throw new Error('Agent not found for removing resource file');
}
const tool_resources = agent.tool_resources || {};
if (tool_resources[tool_resource] && tool_resources[tool_resource].file_ids) {
tool_resources[tool_resource].file_ids = tool_resources[tool_resource].file_ids.filter(
(id) => id !== file_id,
);
if (tool_resources[tool_resource].file_ids.length === 0) {
delete tool_resources[tool_resource];
}
}
const updateData = { tool_resources };
return await updateAgent(searchParameter, updateData);
};
/**
* Deletes an agent based on the provided ID.
*
@@ -79,12 +185,25 @@ const getListAgents = async (searchParameter) => {
query = { $or: [globalQuery, query] };
}
const agents = await Agent.find(query, {
id: 1,
name: 1,
avatar: 1,
projectIds: 1,
}).lean();
const agents = (
await Agent.find(query, {
id: 1,
_id: 0,
name: 1,
avatar: 1,
author: 1,
projectIds: 1,
isCollaborative: 1,
}).lean()
).map((agent) => {
if (agent.author?.toString() !== author) {
delete agent.author;
}
if (agent.author) {
agent.author = agent.author.toString();
}
return agent;
});
const hasMore = agents.length > 0;
const firstId = agents.length > 0 ? agents[0].id : null;
@@ -102,13 +221,15 @@ const getListAgents = async (searchParameter) => {
* Updates the projects associated with an agent, adding and removing project IDs as specified.
* This function also updates the corresponding projects to include or exclude the agent ID.
*
* @param {string} agentId - The ID of the agent to update.
* @param {string[]} [projectIds] - Array of project IDs to add to the agent.
* @param {string[]} [removeProjectIds] - Array of project IDs to remove from the agent.
* @param {Object} params - Parameters for updating the agent's projects.
* @param {import('librechat-data-provider').TUser} params.user - Parameters for updating the agent's projects.
* @param {string} params.agentId - The ID of the agent to update.
* @param {string[]} [params.projectIds] - Array of project IDs to add to the agent.
* @param {string[]} [params.removeProjectIds] - Array of project IDs to remove from the agent.
* @returns {Promise<MongoAgent>} The updated agent document.
* @throws {Error} If there's an error updating the agent or projects.
*/
const updateAgentProjects = async (agentId, projectIds, removeProjectIds) => {
const updateAgentProjects = async ({ user, agentId, projectIds, removeProjectIds }) => {
const updateOps = {};
if (removeProjectIds && removeProjectIds.length > 0) {
@@ -129,14 +250,36 @@ const updateAgentProjects = async (agentId, projectIds, removeProjectIds) => {
return await getAgent({ id: agentId });
}
return await updateAgent({ id: agentId }, updateOps);
const updateQuery = { id: agentId, author: user.id };
if (user.role === SystemRoles.ADMIN) {
delete updateQuery.author;
}
const updatedAgent = await updateAgent(updateQuery, updateOps);
if (updatedAgent) {
return updatedAgent;
}
if (updateOps.$addToSet) {
for (const projectId of projectIds) {
await removeAgentIdsFromProject(projectId, [agentId]);
}
} else if (updateOps.$pull) {
for (const projectId of removeProjectIds) {
await addAgentIdsToProject(projectId, [agentId]);
}
}
return await getAgent({ id: agentId });
};
module.exports = {
createAgent,
getAgent,
loadAgent,
createAgent,
updateAgent,
deleteAgent,
getListAgents,
updateAgentProjects,
addAgentResourceFile,
removeAgentResourceFile,
};
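
A hypothetical call illustrating the tool_resources bookkeeping added above; the identifiers are illustrative only:

const { addAgentResourceFile, removeAgentResourceFile } = require('~/models/Agent');

// Attach a file to the agent's file_search resource (the id is added only once)
const updated = await addAgentResourceFile({
  agent_id: 'agent_abc123',
  tool_resource: 'file_search',
  file_id: 'file-xyz789',
});
// updated.tool_resources.file_search.file_ids now includes 'file-xyz789'

// Detach it again; the resource entry is dropped once its file_ids array is empty
await removeAgentResourceFile({
  agent_id: 'agent_abc123',
  tool_resource: 'file_search',
  file_id: 'file-xyz789',
});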

api/models/Banner.js (new file, 27 lines)
View File

@@ -0,0 +1,27 @@
const Banner = require('./schema/banner');
const logger = require('~/config/winston');
/**
* Retrieves the current active banner.
* @returns {Promise<Object|null>} The active banner object or null if no active banner is found.
*/
const getBanner = async (user) => {
try {
const now = new Date();
const banner = await Banner.findOne({
displayFrom: { $lte: now },
$or: [{ displayTo: { $gte: now } }, { displayTo: null }],
type: 'banner',
}).lean();
if (!banner || banner.isPublic || user) {
return banner;
}
return null;
} catch (error) {
logger.error('[getBanners] Error getting banners', error);
throw new Error('Error getting banners');
}
};
module.exports = { getBanner };

View File

@@ -31,9 +31,39 @@ const getConvo = async (user, conversationId) => {
}
};
const deleteNullOrEmptyConversations = async () => {
try {
const filter = {
$or: [
{ conversationId: null },
{ conversationId: '' },
{ conversationId: { $exists: false } },
],
};
const result = await Conversation.deleteMany(filter);
// Delete associated messages
const messageDeleteResult = await deleteMessages(filter);
logger.info(
`[deleteNullOrEmptyConversations] Deleted ${result.deletedCount} conversations and ${messageDeleteResult.deletedCount} messages`,
);
return {
conversations: result,
messages: messageDeleteResult,
};
} catch (error) {
logger.error('[deleteNullOrEmptyConversations] Error deleting conversations', error);
throw new Error('Error deleting conversations with null or empty conversationId');
}
};
module.exports = {
Conversation,
searchConversation,
deleteNullOrEmptyConversations,
/**
* Saves a conversation to the database.
* @param {Object} req - The request object.

View File

@@ -38,7 +38,8 @@ module.exports = {
savePreset: async (user, { presetId, newPresetId, defaultPreset, ...preset }) => {
try {
const setter = { $set: {} };
const update = { presetId, ...preset };
const { user: _, ...cleanPreset } = preset;
const update = { presetId, ...cleanPreset };
if (preset.tools && Array.isArray(preset.tools)) {
update.tools =
preset.tools

View File

@@ -7,6 +7,7 @@ const {
removeGroupFromAllProjects,
} = require('./Project');
const { Prompt, PromptGroup } = require('./schema/promptSchema');
const { escapeRegExp } = require('~/server/utils');
const { logger } = require('~/config');
/**
@@ -106,7 +107,7 @@ const getAllPromptGroups = async (req, filter) => {
let searchShared = true;
let searchSharedOnly = false;
if (name) {
query.name = new RegExp(name, 'i');
query.name = new RegExp(escapeRegExp(name), 'i');
}
if (!query.category) {
delete query.category;
@@ -159,7 +160,7 @@ const getPromptGroups = async (req, filter) => {
let searchShared = true;
let searchSharedOnly = false;
if (name) {
query.name = new RegExp(name, 'i');
query.name = new RegExp(escapeRegExp(name), 'i');
}
if (!query.category) {
delete query.category;
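
The `escapeRegExp` helper is imported from `~/server/utils` but its body is not part of this diff; a typical implementation, shown here only as a sketch, escapes every regex metacharacter so user-supplied names are matched literally:

function escapeRegExp(str) {
  // Backslash-escape regex metacharacters to prevent ReDoS / pattern injection
  return str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}
// new RegExp(escapeRegExp('c++ (draft)'), 'i') matches the literal text "c++ (draft)"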

View File

@@ -1,6 +1,5 @@
const crypto = require('crypto');
const bcrypt = require('bcryptjs');
const mongoose = require('mongoose');
const { getRandomValues, hashToken } = require('~/server/utils/crypto');
const { createToken, findToken } = require('./Token');
const logger = require('~/config/winston');
@@ -18,8 +17,8 @@ const logger = require('~/config/winston');
*/
const createInvite = async (email) => {
try {
let token = crypto.randomBytes(32).toString('hex');
const hash = bcrypt.hashSync(token, 10);
const token = await getRandomValues(32);
const hash = await hashToken(token);
const encodedToken = encodeURIComponent(token);
const fakeUserId = new mongoose.Types.ObjectId();
@@ -50,7 +49,7 @@ const createInvite = async (email) => {
const getInvite = async (encodedToken, email) => {
try {
const token = decodeURIComponent(encodedToken);
const hash = bcrypt.hashSync(token, 10);
const hash = await hashToken(token);
const invite = await findToken({ token: hash, email });
if (!invite) {
@@ -59,7 +58,7 @@ const getInvite = async (encodedToken, email) => {
return invite;
} catch (error) {
logger.error('[getInvite] Error getting invite', error);
logger.error('[getInvite] Error getting invite:', error);
return { error: true, message: error.message };
}
};
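
The `getRandomValues` and `hashToken` helpers live in `~/server/utils/crypto` and are not shown in this diff; a plausible shape, assuming a deterministic digest so the same token hashes to the same value when `getInvite` re-hashes it for lookup, is:

const crypto = require('crypto');

// Sketches only; the project's actual helpers may differ.
async function getRandomValues(length) {
  return crypto.randomBytes(length).toString('hex');
}

async function hashToken(token) {
  // Unlike bcrypt with a random salt, a deterministic hash lets findToken
  // match the stored value by re-hashing the presented token.
  return crypto.createHash('sha256').update(token).digest('hex');
}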

View File

@@ -5,6 +5,7 @@ const agentSchema = mongoose.Schema(
id: {
type: String,
index: true,
unique: true,
required: true,
},
name: {
@@ -44,10 +45,6 @@ const agentSchema = mongoose.Schema(
tool_kwargs: {
type: [{ type: mongoose.Schema.Types.Mixed }],
},
file_ids: {
type: [String],
default: undefined,
},
actions: {
type: [String],
default: undefined,
@@ -57,6 +54,22 @@ const agentSchema = mongoose.Schema(
ref: 'User',
required: true,
},
authorName: {
type: String,
default: undefined,
},
isCollaborative: {
type: Boolean,
default: undefined,
},
conversation_starters: {
type: [String],
default: [],
},
tool_resources: {
type: mongoose.Schema.Types.Mixed,
default: {},
},
projectIds: {
type: [mongoose.Schema.Types.ObjectId],
ref: 'Project',

View File

@@ -0,0 +1,36 @@
const mongoose = require('mongoose');
const bannerSchema = mongoose.Schema(
{
bannerId: {
type: String,
required: true,
},
message: {
type: String,
required: true,
},
displayFrom: {
type: Date,
required: true,
default: Date.now,
},
displayTo: {
type: Date,
},
type: {
type: String,
enum: ['banner', 'popup'],
default: 'banner',
},
isPublic: {
type: Boolean,
default: false,
},
},
{ timestamps: true },
);
const Banner = mongoose.model('Banner', bannerSchema);
module.exports = Banner;

View File

@@ -21,6 +21,7 @@ const conversationTagSchema = mongoose.Schema(
position: {
type: Number,
default: 0,
index: true,
},
},
{ timestamps: true },

View File

@@ -115,6 +115,29 @@ const messageSchema = mongoose.Schema(
iconURL: {
type: String,
},
attachments: { type: [{ type: mongoose.Schema.Types.Mixed }], default: undefined },
/*
attachments: {
type: [
{
file_id: String,
filename: String,
filepath: String,
expiresAt: Date,
width: Number,
height: Number,
type: String,
conversationId: String,
messageId: {
type: String,
required: true,
},
toolCallId: String,
},
],
default: undefined,
},
*/
},
{ timestamps: true },
);

View File

@@ -37,9 +37,12 @@ const tokenValues = Object.assign(
'4k': { prompt: 1.5, completion: 2 },
'16k': { prompt: 3, completion: 4 },
'gpt-3.5-turbo-1106': { prompt: 1, completion: 2 },
'gpt-4o-2024-08-06': { prompt: 2.5, completion: 10 },
'o1-preview': { prompt: 15, completion: 60 },
'o1-mini': { prompt: 3, completion: 12 },
o1: { prompt: 15, completion: 60 },
'gpt-4o-mini': { prompt: 0.15, completion: 0.6 },
'gpt-4o': { prompt: 5, completion: 15 },
'gpt-4o': { prompt: 2.5, completion: 10 },
'gpt-4o-2024-05-13': { prompt: 5, completion: 15 },
'gpt-4-1106': { prompt: 10, completion: 30 },
'gpt-3.5-turbo-0125': { prompt: 0.5, completion: 1.5 },
'claude-3-opus': { prompt: 15, completion: 75 },
@@ -95,8 +98,14 @@ const getValueKey = (model, endpoint) => {
return 'gpt-3.5-turbo-1106';
} else if (modelName.includes('gpt-3.5')) {
return '4k';
} else if (modelName.includes('gpt-4o-2024-08-06')) {
return 'gpt-4o-2024-08-06';
} else if (modelName.includes('o1-preview')) {
return 'o1-preview';
} else if (modelName.includes('o1-mini')) {
return 'o1-mini';
} else if (modelName.includes('o1')) {
return 'o1';
} else if (modelName.includes('gpt-4o-2024-05-13')) {
return 'gpt-4o-2024-05-13';
} else if (modelName.includes('gpt-4o-mini')) {
return 'gpt-4o-mini';
} else if (modelName.includes('gpt-4o')) {

View File

@@ -50,8 +50,10 @@ describe('getValueKey', () => {
});
it('should return "gpt-4o" for model type of "gpt-4o"', () => {
expect(getValueKey('gpt-4o-2024-05-13')).toBe('gpt-4o');
expect(getValueKey('gpt-4o-2024-08-06')).toBe('gpt-4o');
expect(getValueKey('gpt-4o-2024-08-06-0718')).toBe('gpt-4o');
expect(getValueKey('openai/gpt-4o')).toBe('gpt-4o');
expect(getValueKey('openai/gpt-4o-2024-08-06')).toBe('gpt-4o');
expect(getValueKey('gpt-4o-turbo')).toBe('gpt-4o');
expect(getValueKey('gpt-4o-0125')).toBe('gpt-4o');
});
@@ -60,14 +62,14 @@ describe('getValueKey', () => {
expect(getValueKey('gpt-4o-mini-2024-07-18')).toBe('gpt-4o-mini');
expect(getValueKey('openai/gpt-4o-mini')).toBe('gpt-4o-mini');
expect(getValueKey('gpt-4o-mini-0718')).toBe('gpt-4o-mini');
expect(getValueKey('gpt-4o-2024-08-06-0718')).not.toBe('gpt-4o');
expect(getValueKey('gpt-4o-2024-08-06-0718')).not.toBe('gpt-4o-mini');
});
it('should return "gpt-4o-2024-08-06" for model type of "gpt-4o-2024-08-06"', () => {
expect(getValueKey('gpt-4o-2024-08-06-2024-07-18')).toBe('gpt-4o-2024-08-06');
expect(getValueKey('openai/gpt-4o-2024-08-06')).toBe('gpt-4o-2024-08-06');
expect(getValueKey('gpt-4o-2024-08-06-0718')).toBe('gpt-4o-2024-08-06');
expect(getValueKey('gpt-4o-2024-08-06-0718')).not.toBe('gpt-4o');
it('should return "gpt-4o-2024-05-13" for model type of "gpt-4o-2024-05-13"', () => {
expect(getValueKey('gpt-4o-2024-05-13')).toBe('gpt-4o-2024-05-13');
expect(getValueKey('openai/gpt-4o-2024-05-13')).toBe('gpt-4o-2024-05-13');
expect(getValueKey('gpt-4o-2024-05-13-0718')).toBe('gpt-4o-2024-05-13');
expect(getValueKey('gpt-4o-2024-05-13-0718')).not.toBe('gpt-4o');
});
it('should return "gpt-4o" for model type of "chatgpt-4o"', () => {
@@ -134,7 +136,7 @@ describe('getMultiplier', () => {
});
it('should return the correct multiplier for gpt-4o', () => {
const valueKey = getValueKey('gpt-4o-2024-05-13');
const valueKey = getValueKey('gpt-4o-2024-08-06');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-4o'].prompt);
expect(getMultiplier({ valueKey, tokenType: 'completion' })).toBe(
tokenValues['gpt-4o'].completion,

View File

@@ -1,6 +1,6 @@
{
"name": "@librechat/backend",
"version": "v0.7.5-rc2",
"version": "v0.7.5",
"description": "",
"scripts": {
"start": "echo 'please run this from the root directory'",
@@ -43,22 +43,22 @@
"@langchain/core": "^0.2.18",
"@langchain/google-genai": "^0.0.11",
"@langchain/google-vertexai": "^0.0.17",
"@librechat/agents": "^1.5.2",
"axios": "^1.3.4",
"@librechat/agents": "^1.6.9",
"axios": "^1.7.7",
"bcryptjs": "^2.4.3",
"cheerio": "^1.0.0-rc.12",
"cohere-ai": "^7.9.1",
"compression": "^1.7.4",
"connect-redis": "^7.1.0",
"cookie": "^0.5.0",
"cookie-parser": "^1.4.6",
"cookie": "^0.7.2",
"cookie-parser": "^1.4.7",
"cors": "^2.8.5",
"dedent": "^1.5.3",
"dotenv": "^16.0.3",
"express": "^4.18.2",
"express": "^4.21.1",
"express-mongo-sanitize": "^2.2.0",
"express-rate-limit": "^6.9.0",
"express-session": "^1.17.3",
"express-rate-limit": "^7.4.1",
"express-session": "^1.18.1",
"file-type": "^18.7.0",
"firebase": "^10.6.0",
"googleapis": "^126.0.1",
@@ -76,11 +76,11 @@
"meilisearch": "^0.38.0",
"mime": "^3.0.0",
"module-alias": "^2.2.3",
"mongoose": "^7.1.1",
"mongoose": "^7.3.3",
"multer": "^1.4.5-lts.1",
"nanoid": "^3.3.7",
"nodejs-gpt": "^1.37.4",
"nodemailer": "^6.9.4",
"nodemailer": "^6.9.15",
"ollama": "^0.5.0",
"openai": "^4.47.1",
"openai-chat-tokens": "^0.2.8",
@@ -101,7 +101,6 @@
"ua-parser-js": "^1.0.36",
"winston": "^3.11.0",
"winston-daily-rotate-file": "^4.7.1",
"ws": "^8.17.0",
"zod": "^3.22.4"
},
"devDependencies": {

View File

@@ -16,7 +16,12 @@ const AskController = async (req, res, next, initializeClient, addTitle) => {
overrideParentMessageId = null,
} = req.body;
logger.debug('[AskController]', { text, conversationId, ...endpointOption });
logger.debug('[AskController]', {
text,
conversationId,
...endpointOption,
modelsConfig: endpointOption.modelsConfig ? 'exists' : '',
});
let userMessage;
let userMessagePromise;

View File

@@ -25,6 +25,7 @@ const EditController = async (req, res, next, initializeClient) => {
isContinued,
conversationId,
...endpointOption,
modelsConfig: endpointOption.modelsConfig ? 'exists' : '',
});
let userMessage;

View File

@@ -1,8 +1,13 @@
const { Tools } = require('librechat-data-provider');
const { GraphEvents, ToolEndHandler, ChatModelStreamHandler } = require('@librechat/agents');
const { processCodeOutput } = require('~/server/services/Files/Code/process');
const { logger } = require('~/config');
/** @typedef {import('@librechat/agents').Graph} Graph */
/** @typedef {import('@librechat/agents').EventHandler} EventHandler */
/** @typedef {import('@librechat/agents').ModelEndData} ModelEndData */
/** @typedef {import('@librechat/agents').ToolEndData} ToolEndData */
/** @typedef {import('@librechat/agents').ToolEndCallback} ToolEndCallback */
/** @typedef {import('@librechat/agents').ChatModelStreamHandler} ChatModelStreamHandler */
/** @typedef {import('@librechat/agents').ContentAggregatorResult['aggregateContent']} ContentAggregator */
/** @typedef {import('@librechat/agents').GraphEvents} GraphEvents */
@@ -58,11 +63,12 @@ class ModelEndHandler {
* @param {Object} options - The options object.
* @param {ServerResponse} options.res - The options object.
* @param {ContentAggregator} options.aggregateContent - The options object.
* @param {ToolEndCallback} options.toolEndCallback - Callback to use when tool ends.
* @param {Array<UsageMetadata>} options.collectedUsage - The list of collected usage metadata.
* @returns {Record<string, t.EventHandler>} The default handlers.
* @throws {Error} If the request is not found.
*/
function getDefaultHandlers({ res, aggregateContent, collectedUsage }) {
function getDefaultHandlers({ res, aggregateContent, toolEndCallback, collectedUsage }) {
if (!res || !aggregateContent) {
throw new Error(
`[getDefaultHandlers] Missing required options: res: ${!res}, aggregateContent: ${!aggregateContent}`,
@@ -70,7 +76,7 @@ function getDefaultHandlers({ res, aggregateContent, collectedUsage }) {
}
const handlers = {
[GraphEvents.CHAT_MODEL_END]: new ModelEndHandler(collectedUsage),
[GraphEvents.TOOL_END]: new ToolEndHandler(),
[GraphEvents.TOOL_END]: new ToolEndHandler(toolEndCallback),
[GraphEvents.CHAT_MODEL_STREAM]: new ChatModelStreamHandler(),
[GraphEvents.ON_RUN_STEP]: {
/**
@@ -121,7 +127,67 @@ function getDefaultHandlers({ res, aggregateContent, collectedUsage }) {
return handlers;
}
/**
*
* @param {Object} params
* @param {ServerRequest} params.req
* @param {ServerResponse} params.res
* @param {Promise<MongoFile | { filename: string; filepath: string; expires: number;} | null>[]} params.artifactPromises
* @returns {ToolEndCallback} The tool end callback.
*/
function createToolEndCallback({ req, res, artifactPromises }) {
/**
* @type {ToolEndCallback}
*/
return async (data, metadata) => {
const output = data?.output;
if (!output) {
return;
}
if (output.name !== Tools.execute_code) {
return;
}
const { tool_call_id, artifact } = output;
if (!artifact.files) {
return;
}
for (const file of artifact.files) {
const { id, name } = file;
artifactPromises.push(
(async () => {
const fileMetadata = await processCodeOutput({
req,
id,
name,
toolCallId: tool_call_id,
messageId: metadata.run_id,
sessionId: artifact.session_id,
conversationId: metadata.thread_id,
});
if (!res.headersSent) {
return fileMetadata;
}
if (!fileMetadata) {
return null;
}
res.write(`event: attachment\ndata: ${JSON.stringify(fileMetadata)}\n\n`);
return fileMetadata;
})().catch((error) => {
logger.error('Error processing code output:', error);
return null;
}),
);
}
};
}
module.exports = {
sendEvent,
getDefaultHandlers,
createToolEndCallback,
};
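
A sketch of how a caller might wire the two exports above together; `aggregateContent` and `collectedUsage` are assumed to come from the agents package and the client, respectively, and are not defined in this diff:

// Hypothetical caller-side wiring
const artifactPromises = [];
const toolEndCallback = createToolEndCallback({ req, res, artifactPromises });
const handlers = getDefaultHandlers({
  res,
  aggregateContent,
  toolEndCallback,
  collectedUsage,
});
// After the run completes, any code-output attachments resolve here:
const attachments = (await Promise.all(artifactPromises)).filter(Boolean);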

View File

@@ -10,7 +10,10 @@
const { Callback, createMetadataAggregator } = require('@librechat/agents');
const {
Constants,
VisionModes,
openAISchema,
EModelEndpoint,
anthropicSchema,
bedrockOutputParser,
providerEndpointMap,
removeNullishValues,
@@ -35,11 +38,10 @@ const { logger } = require('~/config');
/** @typedef {import('@librechat/agents').MessageContentComplex} MessageContentComplex */
// const providerSchemas = {
// [EModelEndpoint.bedrock]: true,
// };
const providerParsers = {
[EModelEndpoint.openAI]: openAISchema,
[EModelEndpoint.azureOpenAI]: openAISchema,
[EModelEndpoint.anthropic]: anthropicSchema,
[EModelEndpoint.bedrock]: bedrockOutputParser,
};
@@ -57,10 +59,11 @@ class AgentClient extends BaseClient {
this.run;
const {
maxContextTokens,
modelOptions = {},
contentParts,
collectedUsage,
artifactPromises,
maxContextTokens,
modelOptions = {},
...clientOptions
} = options;
@@ -70,6 +73,8 @@ class AgentClient extends BaseClient {
this.contentParts = contentParts;
/** @type {Array<UsageMetadata>} */
this.collectedUsage = collectedUsage;
/** @type {ArtifactPromises} */
this.artifactPromises = artifactPromises;
this.options = Object.assign({ endpoint: options.endpoint }, clientOptions);
}
@@ -180,10 +185,10 @@ class AgentClient extends BaseClient {
);
}
getBuildMessagesOptions(opts) {
getBuildMessagesOptions() {
return {
instructions: opts.instructions,
additional_instructions: opts.additional_instructions,
instructions: this.options.agent.instructions,
additional_instructions: this.options.agent.additional_instructions,
};
}
@@ -192,6 +197,7 @@ class AgentClient extends BaseClient {
this.options.req,
attachments,
this.options.agent.provider,
VisionModes.agents,
);
message.image_urls = image_urls.length ? image_urls : undefined;
return files;
@@ -210,8 +216,6 @@ class AgentClient extends BaseClient {
});
let payload;
/** @type {{ role: string; name: string; content: string } | undefined} */
let systemMessage;
/** @type {number | undefined} */
let promptTokens;
@@ -259,21 +263,21 @@ class AgentClient extends BaseClient {
}
/* If message has files, calculate image token cost */
// if (this.message_file_map && this.message_file_map[message.messageId]) {
// const attachments = this.message_file_map[message.messageId];
// for (const file of attachments) {
// if (file.embedded) {
// this.contextHandlers?.processFile(file);
// continue;
// }
if (this.message_file_map && this.message_file_map[message.messageId]) {
const attachments = this.message_file_map[message.messageId];
for (const file of attachments) {
if (file.embedded) {
this.contextHandlers?.processFile(file);
continue;
}
// orderedMessages[i].tokenCount += this.calculateImageTokenCost({
// width: file.width,
// height: file.height,
// detail: this.options.imageDetail ?? ImageDetail.auto,
// });
// }
// }
// orderedMessages[i].tokenCount += this.calculateImageTokenCost({
// width: file.width,
// height: file.height,
// detail: this.options.imageDetail ?? ImageDetail.auto,
// });
}
}
return formattedMessage;
});
@@ -284,20 +288,7 @@ class AgentClient extends BaseClient {
}
if (systemContent) {
systemContent = `${systemContent.trim()}`;
systemMessage = {
role: 'system',
name: 'instructions',
content: systemContent,
};
if (this.contextStrategy) {
const instructionTokens = this.getTokenCountForMessage(systemMessage);
if (instructionTokens >= 0) {
const firstMessageTokens = orderedMessages[0].tokenCount ?? 0;
orderedMessages[0].tokenCount = firstMessageTokens + instructionTokens;
}
}
this.options.agent.instructions = systemContent;
}
if (this.contextStrategy) {
@@ -477,7 +468,6 @@ class AgentClient extends BaseClient {
provider: providerEndpointMap[this.options.agent.provider],
thread_id: this.conversationId,
},
run_id: this.responseMessageId,
signal: abortController.signal,
streamMode: 'values',
version: 'v2',

View File

@@ -45,10 +45,9 @@ async function createRun({
);
const graphConfig = {
runId,
llmConfig,
tools,
toolMap,
llmConfig,
instructions: agent.instructions,
additional_instructions: agent.additional_instructions,
};
@@ -59,6 +58,7 @@ async function createRun({
}
return Run.create({
runId,
graphConfig,
customHandlers,
});

View File

@@ -1,5 +1,5 @@
const { nanoid } = require('nanoid');
const { FileContext, Constants } = require('librechat-data-provider');
const { FileContext, Constants, Tools, SystemRoles } = require('librechat-data-provider');
const {
getAgent,
createAgent,
@@ -14,6 +14,11 @@ const { updateAgentProjects } = require('~/models/Agent');
const { deleteFileByFilter } = require('~/models/File');
const { logger } = require('~/config');
const systemTools = {
[Tools.execute_code]: true,
[Tools.file_search]: true,
};
/**
* Creates an Agent.
* @route POST /Agents
@@ -27,9 +32,17 @@ const createAgentHandler = async (req, res) => {
const { tools = [], provider, name, description, instructions, model, ...agentData } = req.body;
const { id: userId } = req.user;
agentData.tools = tools
.map((tool) => (typeof tool === 'string' ? req.app.locals.availableTools[tool] : tool))
.filter(Boolean);
agentData.tools = [];
for (const tool of tools) {
if (req.app.locals.availableTools[tool]) {
agentData.tools.push(tool);
}
if (systemTools[tool]) {
agentData.tools.push(tool);
}
}
Object.assign(agentData, {
author: userId,
@@ -80,10 +93,24 @@ const getAgentHandler = async (req, res) => {
return res.status(404).json({ error: 'Agent not found' });
}
agent.author = agent.author.toString();
agent.isCollaborative = !!agent.isCollaborative;
if (agent.author !== author) {
delete agent.author;
}
if (!agent.isCollaborative && agent.author !== author && req.user.role !== SystemRoles.ADMIN) {
return res.status(200).json({
id: agent.id,
name: agent.name,
avatar: agent.avatar,
author: agent.author,
projectIds: agent.projectIds,
isCollaborative: agent.isCollaborative,
});
}
return res.status(200).json(agent);
} catch (error) {
logger.error('[/Agents/:id] Error retrieving agent', error);
@@ -106,12 +133,29 @@ const updateAgentHandler = async (req, res) => {
const { projectIds, removeProjectIds, ...updateData } = req.body;
let updatedAgent;
const query = { id, author: req.user.id };
if (req.user.role === SystemRoles.ADMIN) {
delete query.author;
}
if (Object.keys(updateData).length > 0) {
updatedAgent = await updateAgent({ id, author: req.user.id }, updateData);
updatedAgent = await updateAgent(query, updateData);
}
if (projectIds || removeProjectIds) {
updatedAgent = await updateAgentProjects(id, projectIds, removeProjectIds);
updatedAgent = await updateAgentProjects({
user: req.user,
agentId: id,
projectIds,
removeProjectIds,
});
}
if (updatedAgent.author) {
updatedAgent.author = updatedAgent.author.toString();
}
if (updatedAgent.author !== req.user.id) {
delete updatedAgent.author;
}
return res.json(updatedAgent);
@@ -182,8 +226,6 @@ const uploadAgentAvatarHandler = async (req, res) => {
return res.status(400).json({ message: 'Agent ID is required' });
}
let { avatar: _avatar = '{}' } = req.body;
const image = await uploadImageBuffer({
req,
context: FileContext.avatar,
@@ -192,10 +234,12 @@ const uploadAgentAvatarHandler = async (req, res) => {
},
});
let _avatar;
try {
_avatar = JSON.parse(_avatar);
const agent = await getAgent({ id: agent_id });
_avatar = agent.avatar;
} catch (error) {
logger.error('[/avatar/:agent_id] Error parsing avatar', error);
logger.error('[/avatar/:agent_id] Error fetching agent', error);
_avatar = {};
}
@@ -203,7 +247,7 @@ const uploadAgentAvatarHandler = async (req, res) => {
const { deleteFile } = getStrategyFunctions(_avatar.source);
try {
await deleteFile(req, { filepath: _avatar.filepath });
await deleteFileByFilter({ filepath: _avatar.filepath });
await deleteFileByFilter({ user: req.user.id, filepath: _avatar.filepath });
} catch (error) {
logger.error('[/avatar/:agent_id] Error deleting old avatar', error);
}

View File

@@ -314,7 +314,9 @@ const chatV1 = async (req, res) => {
}
if (typeof endpointOption.artifactsPrompt === 'string' && endpointOption.artifactsPrompt) {
body.additional_instructions = `${body.additional_instructions ?? ''}\n${endpointOption.artifactsPrompt}`.trim();
body.additional_instructions = `${body.additional_instructions ?? ''}\n${
endpointOption.artifactsPrompt
}`.trim();
}
if (instructions) {
@@ -371,11 +373,14 @@ const chatV1 = async (req, res) => {
visionMessage.content = createVisionPrompt(plural);
visionMessage = formatMessage({ message: visionMessage, endpoint: EModelEndpoint.openAI });
visionPromise = openai.chat.completions.create({
model: 'gpt-4-vision-preview',
messages: [visionMessage],
max_tokens: 4000,
});
visionPromise = openai.chat.completions
.create({
messages: [visionMessage],
max_tokens: 4000,
})
.catch((error) => {
logger.error('[/assistants/chat/] Error creating vision prompt', error);
});
const pluralized = plural ? 's' : '';
body.additional_instructions = `${

View File

@@ -241,7 +241,6 @@ const getAssistantDocuments = async (req, res) => {
* @param {string} req.params.assistant_id - The ID of the assistant.
* @param {Express.Multer.File} req.file - The avatar image file.
* @param {object} req.body - Request body
* @param {string} [req.body.metadata] - Optional metadata for the assistant's avatar.
* @returns {Object} 200 - success response - application/json
*/
const uploadAssistantAvatar = async (req, res) => {
@@ -251,7 +250,6 @@ const uploadAssistantAvatar = async (req, res) => {
return res.status(400).json({ message: 'Assistant ID is required' });
}
let { metadata: _metadata = '{}' } = req.body;
const { openai } = await getOpenAIClient({ req, res });
await validateAuthor({ req, openai });
@@ -263,10 +261,15 @@ const uploadAssistantAvatar = async (req, res) => {
},
});
let _metadata;
try {
_metadata = JSON.parse(_metadata);
const assistant = await openai.beta.assistants.retrieve(assistant_id);
if (assistant) {
_metadata = assistant.metadata;
}
} catch (error) {
logger.error('[/avatar/:assistant_id] Error parsing metadata', error);
logger.error('[/avatar/:assistant_id] Error fetching assistant', error);
_metadata = {};
}
@@ -274,7 +277,7 @@ const uploadAssistantAvatar = async (req, res) => {
const { deleteFile } = getStrategyFunctions(_metadata.avatar_source);
try {
await deleteFile(req, { filepath: _metadata.avatar });
await deleteFileByFilter({ filepath: _metadata.avatar });
await deleteFileByFilter({ user: req.user.id, filepath: _metadata.avatar });
} catch (error) {
logger.error('[/avatar/:assistant_id] Error deleting old avatar', error);
}

View File

@@ -149,7 +149,6 @@ const updateAssistant = async ({ req, openai, assistant_id, updateData }) => {
* @param {string} params.assistant_id
* @param {string} params.tool_resource
* @param {string} params.file_id
* @param {AssistantUpdateParams} params.updateData
* @returns {Promise<Assistant>} The updated assistant.
*/
const addResourceFileId = async ({ req, openai, assistant_id, tool_resource, file_id }) => {

View File

@@ -106,6 +106,7 @@ const startServer = async () => {
app.use('/api/share', routes.share);
app.use('/api/roles', routes.roles);
app.use('/api/agents', routes.agents);
app.use('/api/banner', routes.banner);
app.use('/api/bedrock', routes.bedrock);
app.use('/api/tags', routes.tags);
@@ -113,7 +114,8 @@ const startServer = async () => {
app.use((req, res) => {
// Replace lang attribute in index.html with lang from cookies or accept-language header
const lang = req.cookies.lang || req.headers['accept-language']?.split(',')[0] || 'en-US';
const updatedIndexHtml = indexHTML.replace(/lang="en-US"/g, `lang="${lang}"`);
const saneLang = lang.replace(/"/g, '&quot;'); // sanitize untrusted user input
const updatedIndexHtml = indexHTML.replace(/lang="en-US"/g, `lang="${saneLang}"`);
res.send(updatedIndexHtml);
});

View File

@@ -173,6 +173,10 @@ const handleAbortError = async (res, req, error, data) => {
errorText = `{"type":"${ErrorTypes.INVALID_REQUEST}"}`;
}
if (error?.message?.includes('does not support \'system\'')) {
errorText = `{"type":"${ErrorTypes.NO_SYSTEM_MESSAGES}"}`;
}
const respondWithError = async (partialText) => {
let options = {
sender,

View File

@@ -10,7 +10,6 @@ const openAI = require('~/server/services/Endpoints/openAI');
const agents = require('~/server/services/Endpoints/agents');
const custom = require('~/server/services/Endpoints/custom');
const google = require('~/server/services/Endpoints/google');
const enforceModelSpec = require('./enforceModelSpec');
const { handleError } = require('~/server/utils');
const buildFunction = {
@@ -28,7 +27,7 @@ const buildFunction = {
async function buildEndpointOption(req, res, next) {
const { endpoint, endpointType } = req.body;
const parsedBody = parseCompactConvo({ endpoint, endpointType, conversation: req.body });
let parsedBody = parseCompactConvo({ endpoint, endpointType, conversation: req.body });
if (req.app.locals.modelSpecs?.list && req.app.locals.modelSpecs?.enforce) {
/** @type {{ list: TModelSpec[] }}*/
@@ -57,10 +56,11 @@ async function buildEndpointOption(req, res, next) {
});
}
const isValidModelSpec = enforceModelSpec(currentModelSpec, parsedBody);
if (!isValidModelSpec) {
return handleError(res, { text: 'Model spec mismatch' });
}
parsedBody = parseCompactConvo({
endpoint,
endpointType,
conversation: currentModelSpec.preset,
});
}
const endpointFn = buildFunction[endpointType ?? endpoint];

View File

@@ -1,58 +0,0 @@
const interchangeableKeys = new Map([
['chatGptLabel', ['modelLabel']],
['modelLabel', ['chatGptLabel']],
]);
/**
* Middleware to enforce the model spec for a conversation
* @param {TModelSpec} modelSpec - The model spec to enforce
* @param {TConversation} parsedBody - The parsed body of the conversation
* @returns {boolean} - Whether the model spec is enforced
*/
const enforceModelSpec = (modelSpec, parsedBody) => {
for (const [key, value] of Object.entries(modelSpec.preset)) {
if (key === 'endpoint') {
continue;
}
if (!checkMatch(key, value, parsedBody)) {
return false;
}
}
return true;
};
/**
* Checks if there is a match for the given key and value in the parsed body
* or any of its interchangeable keys, including deep comparison for objects and arrays.
* @param {string} key
* @param {any} value
* @param {object} parsedBody
* @returns {boolean}
*/
const checkMatch = (key, value, parsedBody) => {
const isEqual = (a, b) => {
if (Array.isArray(a) && Array.isArray(b)) {
return a.length === b.length && a.every((val, index) => isEqual(val, b[index]));
} else if (typeof a === 'object' && typeof b === 'object' && a !== null && b !== null) {
const keysA = Object.keys(a);
const keysB = Object.keys(b);
return keysA.length === keysB.length && keysA.every((k) => isEqual(a[k], b[k]));
}
return a === b;
};
if (isEqual(parsedBody[key], value)) {
return true;
}
if (interchangeableKeys.has(key)) {
return interchangeableKeys
.get(key)
.some((interchangeableKey) => isEqual(parsedBody[interchangeableKey], value));
}
return false;
};
module.exports = enforceModelSpec;

View File

@@ -1,47 +0,0 @@
// enforceModelSpec.test.js
const enforceModelSpec = require('./enforceModelSpec');
describe('enforceModelSpec function', () => {
test('returns true when all model specs match parsed body directly', () => {
const modelSpec = { preset: { title: 'Dialog', status: 'Active' } };
const parsedBody = { title: 'Dialog', status: 'Active' };
expect(enforceModelSpec(modelSpec, parsedBody)).toBe(true);
});
test('returns true when model specs match via interchangeable keys', () => {
const modelSpec = { preset: { chatGptLabel: 'GPT-4' } };
const parsedBody = { modelLabel: 'GPT-4' };
expect(enforceModelSpec(modelSpec, parsedBody)).toBe(true);
});
test('returns false if any key value does not match', () => {
const modelSpec = { preset: { language: 'English', level: 'Advanced' } };
const parsedBody = { language: 'Spanish', level: 'Advanced' };
expect(enforceModelSpec(modelSpec, parsedBody)).toBe(false);
});
test('ignores the \'endpoint\' key in model spec', () => {
const modelSpec = { preset: { endpoint: 'ignored', feature: 'Special' } };
const parsedBody = { feature: 'Special' };
expect(enforceModelSpec(modelSpec, parsedBody)).toBe(true);
});
test('handles nested objects correctly', () => {
const modelSpec = { preset: { details: { time: 'noon', location: 'park' } } };
const parsedBody = { details: { time: 'noon', location: 'park' } };
expect(enforceModelSpec(modelSpec, parsedBody)).toBe(true);
});
test('handles arrays within objects', () => {
const modelSpec = { preset: { tags: ['urgent', 'important'] } };
const parsedBody = { tags: ['urgent', 'important'] };
expect(enforceModelSpec(modelSpec, parsedBody)).toBe(true);
});
test('fails when arrays in objects do not match', () => {
const modelSpec = { preset: { tags: ['urgent', 'important'] } };
const parsedBody = { tags: ['important', 'urgent'] }; // Different order
expect(enforceModelSpec(modelSpec, parsedBody)).toBe(false);
});
});

View File

@@ -0,0 +1,17 @@
const passport = require('passport');
// This middleware does not require authentication,
// but if the user is authenticated, it will set the user object.
const optionalJwtAuth = (req, res, next) => {
passport.authenticate('jwt', { session: false }, (err, user) => {
if (err) {
return next(err);
}
if (user) {
req.user = user;
}
next();
})(req, res, next);
};
module.exports = optionalJwtAuth;
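
A sketch of how this middleware might be mounted on a route; the handler body is hypothetical, though the banner route added later in this changeset does import `optionalJwtAuth` alongside `getBanner`:

// Hypothetical wiring: anonymous requests pass through, while authenticated
// requests get req.user populated for getBanner's visibility check.
router.get('/', optionalJwtAuth, async (req, res) => {
  const banner = await getBanner(req.user);
  res.json(banner);
});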

View File

@@ -3,6 +3,7 @@ const validateImageRequest = require('~/server/middleware/validateImageRequest')
describe('validateImageRequest middleware', () => {
let req, res, next;
const validObjectId = '65cfb246f7ecadb8b1e8036b';
beforeEach(() => {
req = {
@@ -43,7 +44,7 @@ describe('validateImageRequest middleware', () => {
test('should return 403 if refresh token is expired', () => {
const expiredToken = jwt.sign(
{ id: '123', exp: Math.floor(Date.now() / 1000) - 3600 },
{ id: validObjectId, exp: Math.floor(Date.now() / 1000) - 3600 },
process.env.JWT_REFRESH_SECRET,
);
req.headers.cookie = `refreshToken=${expiredToken}`;
@@ -54,22 +55,34 @@ describe('validateImageRequest middleware', () => {
test('should call next() for valid image path', () => {
const validToken = jwt.sign(
{ id: '123', exp: Math.floor(Date.now() / 1000) + 3600 },
{ id: validObjectId, exp: Math.floor(Date.now() / 1000) + 3600 },
process.env.JWT_REFRESH_SECRET,
);
req.headers.cookie = `refreshToken=${validToken}`;
req.originalUrl = '/images/123/example.jpg';
req.originalUrl = `/images/${validObjectId}/example.jpg`;
validateImageRequest(req, res, next);
expect(next).toHaveBeenCalled();
});
test('should return 403 for invalid image path', () => {
const validToken = jwt.sign(
{ id: '123', exp: Math.floor(Date.now() / 1000) + 3600 },
{ id: validObjectId, exp: Math.floor(Date.now() / 1000) + 3600 },
process.env.JWT_REFRESH_SECRET,
);
req.headers.cookie = `refreshToken=${validToken}`;
req.originalUrl = '/images/456/example.jpg';
req.originalUrl = '/images/65cfb246f7ecadb8b1e8036c/example.jpg'; // Different ObjectId
validateImageRequest(req, res, next);
expect(res.status).toHaveBeenCalledWith(403);
expect(res.send).toHaveBeenCalledWith('Access Denied');
});
test('should return 403 for invalid ObjectId format', () => {
const validToken = jwt.sign(
{ id: validObjectId, exp: Math.floor(Date.now() / 1000) + 3600 },
process.env.JWT_REFRESH_SECRET,
);
req.headers.cookie = `refreshToken=${validToken}`;
req.originalUrl = '/images/123/example.jpg'; // Invalid ObjectId
validateImageRequest(req, res, next);
expect(res.status).toHaveBeenCalledWith(403);
expect(res.send).toHaveBeenCalledWith('Access Denied');
@@ -78,16 +91,16 @@ describe('validateImageRequest middleware', () => {
// File traversal tests
test('should prevent file traversal attempts', () => {
const validToken = jwt.sign(
{ id: '123', exp: Math.floor(Date.now() / 1000) + 3600 },
{ id: validObjectId, exp: Math.floor(Date.now() / 1000) + 3600 },
process.env.JWT_REFRESH_SECRET,
);
req.headers.cookie = `refreshToken=${validToken}`;
const traversalAttempts = [
'/images/123/../../../etc/passwd',
'/images/123/..%2F..%2F..%2Fetc%2Fpasswd',
'/images/123/image.jpg/../../../etc/passwd',
'/images/123/%2e%2e%2f%2e%2e%2f%2e%2e%2fetc%2fpasswd',
`/images/${validObjectId}/../../../etc/passwd`,
`/images/${validObjectId}/..%2F..%2F..%2Fetc%2Fpasswd`,
`/images/${validObjectId}/image.jpg/../../../etc/passwd`,
`/images/${validObjectId}/%2e%2e%2f%2e%2e%2f%2e%2e%2fetc%2fpasswd`,
];
traversalAttempts.forEach((attempt) => {
@@ -101,11 +114,11 @@ describe('validateImageRequest middleware', () => {
test('should handle URL encoded characters in valid paths', () => {
const validToken = jwt.sign(
{ id: '123', exp: Math.floor(Date.now() / 1000) + 3600 },
{ id: validObjectId, exp: Math.floor(Date.now() / 1000) + 3600 },
process.env.JWT_REFRESH_SECRET,
);
req.headers.cookie = `refreshToken=${validToken}`;
req.originalUrl = '/images/123/image%20with%20spaces.jpg';
req.originalUrl = `/images/${validObjectId}/image%20with%20spaces.jpg`;
validateImageRequest(req, res, next);
expect(next).toHaveBeenCalled();
});

View File

@@ -2,6 +2,24 @@ const cookies = require('cookie');
const jwt = require('jsonwebtoken');
const { logger } = require('~/config');
const OBJECT_ID_LENGTH = 24;
const OBJECT_ID_PATTERN = /^[0-9a-f]{24}$/i;
/**
* Validates if a string is a valid MongoDB ObjectId
* @param {string} id - String to validate
* @returns {boolean} - Whether string is a valid ObjectId format
*/
function isValidObjectId(id) {
if (typeof id !== 'string') {
return false;
}
if (id.length !== OBJECT_ID_LENGTH) {
return false;
}
return OBJECT_ID_PATTERN.test(id);
}
/**
* Middleware to validate image request.
* Must be set by `secureImageLinks` via custom config file.
@@ -25,6 +43,11 @@ function validateImageRequest(req, res, next) {
return res.status(403).send('Access Denied');
}
if (!isValidObjectId(payload.id)) {
logger.warn('[validateImageRequest] Invalid User ID');
return res.status(403).send('Access Denied');
}
const currentTimeInSeconds = Math.floor(Date.now() / 1000);
if (payload.exp < currentTimeInSeconds) {
logger.warn('[validateImageRequest] Refresh token expired');
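To make the new guard concrete, a few hypothetical inputs and the results the isValidObjectId helper above would produce:

// Illustrative checks against the isValidObjectId helper above:
isValidObjectId('65cfb246f7ecadb8b1e8036b'); // true: exactly 24 hex characters
isValidObjectId('123');                      // false: wrong length
isValidObjectId('65cfb246f7ecadb8b1e8036g'); // false: 'g' is not a hex digit
isValidObjectId(12345);                      // false: not a string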

View File

@@ -10,6 +10,7 @@ const {
} = require('~/server/middleware');
const { initializeClient } = require('~/server/services/Endpoints/agents');
const AgentController = require('~/server/controllers/agents/request');
const addTitle = require('~/server/services/Endpoints/agents/title');
router.post('/abort', handleAbort());
@@ -28,7 +29,7 @@ router.post(
buildEndpointOption,
setHeaders,
async (req, res, next) => {
await AgentController(req, res, next, initializeClient);
await AgentController(req, res, next, initializeClient, addTitle);
},
);

View File

@@ -2,6 +2,7 @@ const multer = require('multer');
const express = require('express');
const { PermissionTypes, Permissions } = require('librechat-data-provider');
const { requireJwtAuth, generateCheckAccess } = require('~/server/middleware');
const { getAvailableTools } = require('~/server/controllers/PluginController');
const v1 = require('~/server/controllers/agents/v1');
const actions = require('./actions');
@@ -36,9 +37,7 @@ router.use('/actions', actions);
* @route GET /agents/tools
* @returns {TPlugin[]} 200 - application/json
*/
router.use('/tools', (req, res) => {
res.json([]);
});
router.use('/tools', getAvailableTools);
/**
* Creates an agent.

View File

@@ -0,0 +1,15 @@
const express = require('express');
const { getBanner } = require('~/models/Banner');
const optionalJwtAuth = require('~/server/middleware/optionalJwtAuth');
const router = express.Router();
router.get('/', optionalJwtAuth, async (req, res) => {
try {
res.status(200).send(await getBanner(req.user));
} catch (error) {
res.status(500).json({ message: 'Error getting banner' });
}
});
module.exports = router;
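A hypothetical client-side call against the new endpoint, assuming the router is mounted at /api/banner; the shape of getBanner's payload is not shown in this diff:

async function fetchBanner() {
  // Works with or without a JWT cookie thanks to optionalJwtAuth.
  const res = await fetch('/api/banner', { credentials: 'include' });
  return res.ok ? res.text() : null;
}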

View File

@@ -10,7 +10,7 @@ const {
} = require('~/server/middleware');
const { initializeClient } = require('~/server/services/Endpoints/bedrock');
const AgentController = require('~/server/controllers/agents/request');
const addTitle = require('~/server/services/Endpoints/bedrock/title');
const addTitle = require('~/server/services/Endpoints/agents/title');
router.post('/abort', handleAbort());

View File

@@ -109,8 +109,14 @@ router.post('/clear', async (req, res) => {
router.post('/update', async (req, res) => {
const update = req.body.arg;
if (!update.conversationId) {
return res.status(400).json({ error: 'conversationId is required' });
}
try {
const dbResponse = await saveConvo(req, update, { context: 'POST /api/convos/update' });
const dbResponse = await saveConvo(req, update, {
context: `POST /api/convos/update ${update.conversationId}`,
});
res.status(201).json(dbResponse);
} catch (error) {
logger.error('Error updating conversation', error);

View File

@@ -1,18 +1,22 @@
const fs = require('fs').promises;
const express = require('express');
const { EnvVar } = require('@librechat/agents');
const {
isUUID,
checkOpenAIStorage,
FileSources,
EModelEndpoint,
isAgentsEndpoint,
checkOpenAIStorage,
} = require('librechat-data-provider');
const {
filterFile,
processFileUpload,
processDeleteRequest,
processAgentFileUpload,
} = require('~/server/services/Files/process');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { getOpenAIClient } = require('~/server/controllers/assistants/helpers');
const { loadAuthValues } = require('~/app/clients/tools/util');
const { getFiles } = require('~/models/File');
const { logger } = require('~/config');
@@ -62,8 +66,20 @@ router.delete('/', async (req, res) => {
return;
}
await processDeleteRequest({ req, files });
const fileIds = files.map((file) => file.file_id);
const userFiles = await getFiles({ file_id: { $in: fileIds }, user: req.user.id });
if (userFiles.length !== files.length) {
return res.status(403).json({ message: 'You can only delete your own files' });
}
await processDeleteRequest({ req, files: userFiles });
logger.debug(
`[/files] Files deleted successfully: ${files
.filter((f) => f.file_id)
.map((f) => f.file_id)
.join(', ')}`,
);
res.status(200).json({ message: 'Files deleted successfully' });
} catch (error) {
logger.error('[/files] Error deleting files:', error);
@@ -71,6 +87,36 @@ router.delete('/', async (req, res) => {
}
});
router.get('/code/download/:sessionId/:fileId', async (req, res) => {
try {
const { sessionId, fileId } = req.params;
const logPrefix = `Session ID: ${sessionId} | File ID: ${fileId} | Code output download requested by user `;
logger.debug(logPrefix);
if (!sessionId || !fileId) {
return res.status(400).send('Bad request');
}
const { getDownloadStream } = getStrategyFunctions(FileSources.execute_code);
if (!getDownloadStream) {
logger.warn(
`${logPrefix} has no stream method implemented for ${FileSources.execute_code} source`,
);
return res.status(501).send('Not Implemented');
}
const result = await loadAuthValues({ userId: req.user.id, authFields: [EnvVar.CODE_API_KEY] });
/** @type {AxiosResponse<ReadableStream> | undefined} */
const response = await getDownloadStream(`${sessionId}/${fileId}`, result[EnvVar.CODE_API_KEY]);
res.set(response.headers);
response.data.pipe(res);
} catch (error) {
logger.error('Error downloading file:', error);
res.status(500).send('Error downloading file');
}
});
router.get('/download/:userId/:file_id', async (req, res) => {
try {
const { userId, file_id } = req.params;
@@ -154,6 +200,10 @@ router.post('/', async (req, res) => {
metadata.temp_file_id = metadata.file_id;
metadata.file_id = req.file_id;
if (isAgentsEndpoint(metadata.endpoint)) {
return await processAgentFileUpload({ req, res, file, metadata });
}
await processFileUpload({ req, res, file, metadata });
} catch (error) {
let message = 'Error processing file';
@@ -177,7 +227,7 @@ router.post('/', async (req, res) => {
try {
await fs.unlink(file.path);
} catch (error) {
logger.error('[/files/images] Error deleting file after file processing:', error);
logger.error('[/files] Error deleting file after file processing:', error);
}
}
});
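A hypothetical browser-side call to the new code-output download route, assuming the files router is mounted at /api/files; the session and file identifiers are placeholders:

async function downloadCodeOutput(sessionId, fileId) {
  const res = await fetch(`/api/files/code/download/${sessionId}/${fileId}`, {
    credentials: 'include',
  });
  if (!res.ok) {
    throw new Error(`Download failed: ${res.status}`);
  }
  return res.blob(); // the streamed file contents
}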

View File

@@ -24,6 +24,7 @@ const edit = require('./edit');
const keys = require('./keys');
const user = require('./user');
const ask = require('./ask');
const banner = require('./banner');
module.exports = {
ask,
@@ -52,4 +53,5 @@ module.exports = {
assistants,
categories,
staticRoute,
banner,
};

View File

@@ -134,7 +134,7 @@ const createPrompt = async (req, res) => {
}
};
router.post('/', createPrompt);
router.post('/', checkPromptCreate, createPrompt);
/**
* Updates a prompt group

View File

@@ -61,7 +61,8 @@ router.post('/', async (req, res) => {
*/
router.put('/:tag', async (req, res) => {
try {
const tag = await updateConversationTag(req.user.id, req.params.tag, req.body);
const decodedTag = decodeURIComponent(req.params.tag);
const tag = await updateConversationTag(req.user.id, decodedTag, req.body);
if (tag) {
res.status(200).json(tag);
} else {
@@ -81,7 +82,8 @@ router.put('/:tag', async (req, res) => {
*/
router.delete('/:tag', async (req, res) => {
try {
const tag = await deleteConversationTag(req.user.id, req.params.tag);
const decodedTag = decodeURIComponent(req.params.tag);
const tag = await deleteConversationTag(req.user.id, decodedTag);
if (tag) {
res.status(200).json(tag);
} else {
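The decode step pairs with clients that percent-encode the tag in the URL; a hypothetical call, assuming the tags router is mounted at /api/tags:

async function deleteTag(tag) {
  // e.g. deleteTag('work / urgent') -> DELETE /api/tags/work%20%2F%20urgent
  const res = await fetch(`/api/tags/${encodeURIComponent(tag)}`, {
    method: 'DELETE',
    credentials: 'include',
  });
  return res.ok;
}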

View File

@@ -8,6 +8,7 @@ const { loadDefaultInterface } = require('./start/interface');
const { azureConfigSetup } = require('./start/azureOpenAI');
const { loadAndFormatTools } = require('./ToolService');
const { initializeRoles } = require('~/models/Role');
const { cleanup } = require('./cleanup');
const paths = require('~/config/paths');
/**
@@ -17,6 +18,7 @@ const paths = require('~/config/paths');
* @param {Express.Application} app - The Express application object.
*/
const AppService = async (app) => {
cleanup();
await initializeRoles();
/** @type {TCustomConfig}*/
const config = (await loadCustomConfig()) ?? {};

View File

@@ -11,7 +11,7 @@ const {
deleteUserById,
} = require('~/models/userMethods');
const { createToken, findToken, deleteTokens, Session } = require('~/models');
const { sendEmail, checkEmailConfig } = require('~/server/utils');
const { isEnabled, checkEmailConfig, sendEmail } = require('~/server/utils');
const { registerSchema } = require('~/strategies/validators');
const { hashToken } = require('~/server/utils/crypto');
const isDomainAllowed = require('./isDomainAllowed');
@@ -188,7 +188,8 @@ const registerUser = async (user, additionalData = {}) => {
};
const emailEnabled = checkEmailConfig();
const newUser = await createUser(newUserData, false, true);
const disableTTL = isEnabled(process.env.ALLOW_UNVERIFIED_EMAIL_LOGIN);
const newUser = await createUser(newUserData, disableTTL, true);
newUserId = newUser._id;
if (emailEnabled && !newUser.emailVerified) {
await sendVerificationEmail({
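For context, isEnabled is the shared helper used to read boolean-ish environment variables; a rough behavioral sketch (not the exact source) of how it gates ALLOW_UNVERIFIED_EMAIL_LOGIN here:

// Rough sketch of the helper's behavior: true, 'true', and 'TRUE' count as enabled.
function isEnabled(value) {
  if (typeof value === 'boolean') {
    return value;
  }
  if (typeof value === 'string') {
    return value.trim().toLowerCase() === 'true';
  }
  return false;
}

// As named in the diff above: when ALLOW_UNVERIFIED_EMAIL_LOGIN is enabled,
// disableTTL is true, so the unverified-account expiry is skipped at creation.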

View File

@@ -45,8 +45,14 @@ module.exports = {
AZURE_ASSISTANTS_BASE_URL,
EModelEndpoint.azureAssistants,
),
[EModelEndpoint.bedrock]: generateConfig(process.env.BEDROCK_AWS_SECRET_ACCESS_KEY),
[EModelEndpoint.bedrock]: generateConfig(
process.env.BEDROCK_AWS_SECRET_ACCESS_KEY ?? process.env.BEDROCK_AWS_DEFAULT_REGION,
),
/* key will be part of separate config */
[EModelEndpoint.agents]: generateConfig(process.env.I_AM_A_TEAPOT),
[EModelEndpoint.agents]: generateConfig(
process.env.EXPERIMENTAL_AGENTS,
undefined,
EModelEndpoint.agents,
),
},
};

View File

@@ -1,13 +1,12 @@
const { getAgent } = require('~/models/Agent');
const { loadAgent } = require('~/models/Agent');
const { logger } = require('~/config');
const buildOptions = (req, endpoint, parsedBody) => {
const { agent_id, instructions, spec, ...model_parameters } = parsedBody;
const agentPromise = getAgent({
id: agent_id,
// TODO: better author handling
author: req.user.id,
const agentPromise = loadAgent({
req,
agent_id,
}).catch((error) => {
logger.error(`[/agents/:${agent_id}] Error retrieving agent during build options step`, error);
return undefined;

View File

@@ -14,14 +14,16 @@ const { tool } = require('@langchain/core/tools');
const { createContentAggregator } = require('@librechat/agents');
const {
EModelEndpoint,
providerEndpointMap,
getResponseSender,
providerEndpointMap,
} = require('librechat-data-provider');
const { getDefaultHandlers } = require('~/server/controllers/agents/callbacks');
// for testing purposes
// const createTavilySearchTool = require('~/app/clients/tools/structured/TavilySearch');
const initAnthropic = require('~/server/services/Endpoints/anthropic/initializeClient');
const initOpenAI = require('~/server/services/Endpoints/openAI/initializeClient');
const {
getDefaultHandlers,
createToolEndCallback,
} = require('~/server/controllers/agents/callbacks');
const initAnthropic = require('~/server/services/Endpoints/anthropic/initialize');
const initOpenAI = require('~/server/services/Endpoints/openAI/initialize');
const getBedrockOptions = require('~/server/services/Endpoints/bedrock/options');
const { loadAgentTools } = require('~/server/services/ToolService');
const AgentClient = require('~/server/controllers/agents/client');
const { getModelMaxTokens } = require('~/utils');
@@ -50,6 +52,7 @@ const providerConfigMap = {
[EModelEndpoint.openAI]: initOpenAI,
[EModelEndpoint.azureOpenAI]: initOpenAI,
[EModelEndpoint.anthropic]: initAnthropic,
[EModelEndpoint.bedrock]: getBedrockOptions,
};
const initializeClient = async ({ req, res, endpointOption }) => {
@@ -58,34 +61,33 @@ const initializeClient = async ({ req, res, endpointOption }) => {
}
// TODO: use endpointOption to determine options/modelOptions
/** @type {Array<UsageMetadata>} */
const collectedUsage = [];
/** @type {ArtifactPromises} */
const artifactPromises = [];
const { contentParts, aggregateContent } = createContentAggregator();
const eventHandlers = getDefaultHandlers({ res, aggregateContent });
// const tools = [createTavilySearchTool()];
// const tools = [_getWeather];
// const tool_calls = [{ name: 'getPeople_action_swapi---dev' }];
// const tool_calls = [{ name: 'dalle' }];
// const tool_calls = [{ name: 'getItmOptions_action_YWlhcGkzLn' }];
// const tool_calls = [{ name: 'tavily_search_results_json' }];
// const tool_calls = [
// { name: 'searchListings_action_emlsbG93NT' },
// { name: 'searchAddress_action_emlsbG93NT' },
// { name: 'searchMLS_action_emlsbG93NT' },
// { name: 'searchCoordinates_action_emlsbG93NT' },
// { name: 'searchUrl_action_emlsbG93NT' },
// { name: 'getPropertyDetails_action_emlsbG93NT' },
// ];
const toolEndCallback = createToolEndCallback({ req, res, artifactPromises });
const eventHandlers = getDefaultHandlers({
res,
aggregateContent,
toolEndCallback,
collectedUsage,
});
if (!endpointOption.agent) {
throw new Error('No agent promise provided');
}
/** @type {Agent} */
/** @type {Agent | null} */
const agent = await endpointOption.agent;
if (!agent) {
throw new Error('Agent not found');
}
const { tools, toolMap } = await loadAgentTools({
req,
tools: agent.tools,
agent_id: agent.id,
tool_resources: agent.tool_resources,
// openAIApiKey: process.env.OPENAI_API_KEY,
});
@@ -121,8 +123,11 @@ const initializeClient = async ({ req, res, endpointOption }) => {
contentParts,
modelOptions,
eventHandlers,
collectedUsage,
artifactPromises,
endpoint: EModelEndpoint.agents,
configOptions: options.configOptions,
attachments: endpointOption.attachments,
maxContextTokens:
agent.max_context_tokens ??
getModelMaxTokens(modelOptions.model, providerEndpointMap[agent.provider]),

View File

@@ -33,7 +33,7 @@ const addTitle = async (req, { text, response, client }) => {
conversationId: response.conversationId,
title,
},
{ context: 'api/server/services/Endpoints/bedrock/title.js' },
{ context: 'api/server/services/Endpoints/agents/title.js' },
);
};

View File

@@ -1,6 +1,6 @@
const addTitle = require('./addTitle');
const buildOptions = require('./buildOptions');
const initializeClient = require('./initializeClient');
const addTitle = require('./title');
const buildOptions = require('./build');
const initializeClient = require('./initialize');
module.exports = {
addTitle,

View File

@@ -1,6 +1,6 @@
const addTitle = require('./addTitle');
const buildOptions = require('./buildOptions');
const initializeClient = require('./initializeClient');
const addTitle = require('./title');
const buildOptions = require('./build');
const initializeClient = require('./initalize');
module.exports = {
addTitle,

View File

@@ -2,7 +2,7 @@
const { HttpsProxyAgent } = require('https-proxy-agent');
const { ErrorTypes } = require('librechat-data-provider');
const { getUserKey, getUserKeyExpiry, getUserKeyValues } = require('~/server/services/UserService');
const initializeClient = require('./initializeClient');
const initializeClient = require('./initalize');
// const { OpenAIClient } = require('~/app');
jest.mock('~/server/services/UserService', () => ({

View File

@@ -1,5 +1,5 @@
const buildOptions = require('./buildOptions');
const initializeClient = require('./initializeClient');
const buildOptions = require('./build');
const initializeClient = require('./initialize');
module.exports = {
buildOptions,

View File

@@ -2,7 +2,7 @@
const { HttpsProxyAgent } = require('https-proxy-agent');
const { ErrorTypes } = require('librechat-data-provider');
const { getUserKey, getUserKeyExpiry, getUserKeyValues } = require('~/server/services/UserService');
const initializeClient = require('./initializeClient');
const initializeClient = require('./initialize');
// const { OpenAIClient } = require('~/app');
jest.mock('~/server/services/UserService', () => ({

View File

@@ -19,7 +19,7 @@ const getOptions = async ({ req, endpointOption }) => {
const expiresAt = req.body.key;
const isUserProvided = BEDROCK_AWS_SECRET_ACCESS_KEY === AuthType.USER_PROVIDED;
const credentials = isUserProvided
let credentials = isUserProvided
? await getUserKey({ userId: req.user.id, name: EModelEndpoint.bedrock })
: {
accessKeyId: BEDROCK_AWS_ACCESS_KEY_ID,
@@ -30,6 +30,14 @@ const getOptions = async ({ req, endpointOption }) => {
throw new Error('Bedrock credentials not provided. Please provide them again.');
}
if (
!isUserProvided &&
(credentials.accessKeyId === undefined || credentials.accessKeyId === '') &&
(credentials.secretAccessKey === undefined || credentials.secretAccessKey === '')
) {
credentials = undefined;
}
if (expiresAt && isUserProvided) {
checkUserKeyExpiry(expiresAt, EModelEndpoint.bedrock);
}
@@ -53,7 +61,6 @@ const getOptions = async ({ req, endpointOption }) => {
/** @type {import('@librechat/agents').BedrockConverseClientOptions} */
const requestOptions = Object.assign(
{
credentials,
model: endpointOption.model,
region: BEDROCK_AWS_DEFAULT_REGION,
streaming: true,
@@ -72,6 +79,10 @@ const getOptions = async ({ req, endpointOption }) => {
endpointOption.model_parameters,
);
if (credentials) {
requestOptions.credentials = credentials;
}
const configOptions = {};
if (PROXY) {
configOptions.httpAgent = new HttpsProxyAgent(PROXY);
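The net effect of the credentials change, sketched below under the assumption that omitting credentials lets the AWS SDK fall back to its default provider chain (environment variables, shared config, or instance roles); this is illustrative, not the actual source:

// Illustrative only: mirror of the fallback logic added above.
function resolveBedrockCredentials(isUserProvided, credentials = {}) {
  const { accessKeyId, secretAccessKey } = credentials;
  if (!isUserProvided && !accessKeyId && !secretAccessKey) {
    // No explicit keys configured: leave credentials undefined so the SDK resolves them.
    return undefined;
  }
  return credentials;
}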

View File

@@ -3,6 +3,7 @@ const generateArtifactsPrompt = require('~/app/clients/prompts/artifacts');
const buildOptions = (endpoint, parsedBody, endpointType) => {
const {
modelLabel,
chatGptLabel,
promptPrefix,
maxContextTokens,
@@ -17,6 +18,7 @@ const buildOptions = (endpoint, parsedBody, endpointType) => {
const endpointOption = removeNullishValues({
endpoint,
endpointType,
modelLabel,
chatGptLabel,
promptPrefix,
resendFiles,

View File

@@ -1,5 +1,5 @@
const initializeClient = require('./initializeClient');
const buildOptions = require('./buildOptions');
const initializeClient = require('./initialize');
const buildOptions = require('./build');
module.exports = {
initializeClient,

View File

@@ -1,6 +1,6 @@
const addTitle = require('./addTitle');
const buildOptions = require('./buildOptions');
const initializeClient = require('./initializeClient');
const addTitle = require('./title');
const buildOptions = require('./build');
const initializeClient = require('./initialize');
module.exports = {
addTitle,

View File

@@ -1,6 +1,6 @@
// file deepcode ignore HardcodedNonCryptoSecret: No hardcoded secrets
const { getUserKey } = require('~/server/services/UserService');
const initializeClient = require('./initializeClient');
const initializeClient = require('./initialize');
const { GoogleClient } = require('~/app');
jest.mock('~/server/services/UserService', () => ({

View File

@@ -3,7 +3,7 @@ const getLogStores = require('~/cache/getLogStores');
const { isEnabled } = require('~/server/utils');
const { saveConvo } = require('~/models');
const { logger } = require('~/config');
const initializeClient = require('./initializeClient');
const initializeClient = require('./initialize');
const addTitle = async (req, { text, response, client }) => {
const { TITLE_CONVO = 'true' } = process.env ?? {};

View File

@@ -3,6 +3,7 @@ const generateArtifactsPrompt = require('~/app/clients/prompts/artifacts');
const buildOptions = (endpoint, parsedBody) => {
const {
modelLabel,
chatGptLabel,
promptPrefix,
agentOptions,
@@ -16,10 +17,10 @@ const buildOptions = (endpoint, parsedBody) => {
} = parsedBody;
const endpointOption = removeNullishValues({
endpoint,
tools:
tools
.map((tool) => tool?.pluginKey ?? tool)
.filter((toolName) => typeof toolName === 'string'),
tools: tools
.map((tool) => tool?.pluginKey ?? tool)
.filter((toolName) => typeof toolName === 'string'),
modelLabel,
chatGptLabel,
promptPrefix,
agentOptions,

View File

@@ -1,5 +1,5 @@
const buildOptions = require('./buildOptions');
const initializeClient = require('./initializeClient');
const buildOptions = require('./build');
const initializeClient = require('./initialize');
module.exports = {
buildOptions,

View File

@@ -1,7 +1,7 @@
// gptPlugins/initializeClient.spec.js
const { EModelEndpoint, ErrorTypes, validateAzureGroups } = require('librechat-data-provider');
const { getUserKey, getUserKeyValues } = require('~/server/services/UserService');
const initializeClient = require('./initializeClient');
const initializeClient = require('./initialize');
const { PluginsClient } = require('~/app');
// Mock getUserKey since it's the only function we want to mock

View File

@@ -3,6 +3,7 @@ const generateArtifactsPrompt = require('~/app/clients/prompts/artifacts');
const buildOptions = (endpoint, parsedBody) => {
const {
modelLabel,
chatGptLabel,
promptPrefix,
maxContextTokens,
@@ -17,6 +18,7 @@ const buildOptions = (endpoint, parsedBody) => {
const endpointOption = removeNullishValues({
endpoint,
modelLabel,
chatGptLabel,
promptPrefix,
resendFiles,

View File

@@ -1,6 +1,6 @@
const addTitle = require('./addTitle');
const buildOptions = require('./buildOptions');
const initializeClient = require('./initializeClient');
const addTitle = require('./title');
const buildOptions = require('./build');
const initializeClient = require('./initialize');
module.exports = {
addTitle,

View File

@@ -130,7 +130,7 @@ const initializeClient = async ({
if (optionsOnly) {
const requestOptions = Object.assign(
{
modelOptions: endpointOption.modelOptions,
modelOptions: endpointOption.model_parameters,
},
clientOptions,
);

Some files were not shown because too many files have changed in this diff.