Compare commits


83 Commits

Author SHA1 Message Date
Danny Avila
020995514e v0.7.5-rc2 (#3976)
*  v0.7.5-rc2

* docs: update README

* refactor(settings): Update rememberForkOption default value

* a11y: proper screen reader announcements for content blocks

* Update version to 0.7.423 in package-lock.json and packages/data-provider/package.json

* chore: rename rememberForkOption -> rememberDefaultFork to apply new default value

* fix: headlessui menu stealing focus from Settings Dialog when pressing Enter
2024-09-10 19:00:27 -04:00
Marco Beretta
d6c0121b19 ⌨️ a11y(Settings): Improved Keyboard Navigation & Consistent Styling (#3975)
* feat: settings tab accessible

* refactor: cleanup unused code

* refactor: improve accessibility and user experience in ChatDirection component

* style: focus ring primary class

* improve a11y of avatar dialog

* style: a11y improvements for Settings

* style: focus ring primary class in OriginalDialog component

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2024-09-10 15:11:39 -04:00
Danny Avila
1a1e6850a3 🪨 fix: Minor AWS Bedrock/Misc. Improvements (#3974)
* refactor(EditMessage): avoid manipulation of native paste handling, leverage react-hook-form for textarea changes

* style: apply better theming for MinimalIcon

* fix(useVoicesQuery/useCustomConfigSpeechQuery): make sure to only try request once per render

* feat: edit message content parts

* fix(useCopyToClipboard): handle both assistants and agents content blocks

* refactor: remove save & submit and update text content correctly

* chore(.env.example/config): exclude unsupported bedrock models

* feat: artifacts for aws bedrock

* fix: export options for bedrock conversations
2024-09-10 12:56:19 -04:00
Danny Avila
341e086d70 🛠️ fix: Completion Edge Cases & Improve Error Handling UX (#3968)
* fix: edge cases concerning completion response as an array

* refactor: improve invalid request error UX
2024-09-09 20:58:15 -04:00
Danny Avila
0148b9b097 🔒 refactor: Apply interface settings to all Roles (#3967) 2024-09-09 20:15:08 -04:00
Danny Avila
748b41eda4 🔒 feat: RBAC for Multi-Convo Feature (#3964)
* fix: remove duplicate keys in German language translations

* wip: multi-convo role permissions

* ci: Update loadDefaultInterface tests due to MULTI_CONVO

* ci: update Role.spec.js with tests for MULTI_CONVO permission type

* fix: Update ContentParts component to handle undefined content array

* feat: render Multi-Convo based on UI permissions
2024-09-09 16:29:24 -04:00
Danny Avila
d59b62174f 🪨 feat: AWS Bedrock support (#3935)
* feat: Add BedrockIcon component to SVG library

* feat: EModelEndpoint.bedrock

* feat: first pass, bedrock chat. note: AgentClient is returning `agents` as conversation.endpoint

* fix: declare endpoint in initialization step

* chore: Update @librechat/agents dependency to version 1.4.5

* feat: backend content aggregation for agents/bedrock

* feat: abort agent requests

* feat: AWS Bedrock icons

* WIP: agent provider schema parsing

* chore: Update EditIcon props type

* refactor(useGenerationsByLatest): make agents and bedrock editable

* refactor: non-assistant message content, parts

* fix: Bedrock response `sender`

* fix: use endpointOption.model_parameters not endpointOption.modelOptions

* fix: types for step handler

* refactor: Update Agents.ToolCallDelta type

* refactor: Remove unnecessary assignment of parentMessageId in AskController

* refactor: remove unnecessary assignment of parentMessageId (agent request handler)

* fix(bedrock/agents): message regeneration

* refactor: dynamic form elements using react-hook-form Controllers

* fix: agent icons/labels for messages

* fix: agent actions

* fix: use of new dynamic tags causing application crash

* refactor: dynamic settings touch-ups

* refactor: update Slider component to allow custom track class name

* refactor: update DynamicSlider component styles

* refactor: use Constants value for GLOBAL_PROJECT_NAME (enum)

* feat: agent share global methods/controllers

* fix: agents query

* fix: `getResponseModel`

* fix: share prompt a11y issue

* refactor: update SharePrompt dialog theme styles

* refactor: explicit typing for SharePrompt

* feat: add agent roles/permissions

* chore: update @librechat/agents dependency to version 1.4.7 for tool_call_ids edge case

* fix(Anthropic): messages.X.content.Y.tool_use.input: Input should be a valid dictionary

* fix: handle text parts with tool_call_ids and empty text

* fix: role initialization

* refactor: don't make instructions required

* refactor: improve typing of Text part

* fix: setShowStopButton for agents route

* chore: remove params for now

* fix: add streamBuffer and streamRate to help prevent 'Overloaded' errors from Anthropic API

* refactor: remove console.log statement in ContentRender component

* chore: typing, rename Context to Delete Button

* chore(DeleteButton): logging

* refactor(Action): make accessible

* style(Action): improve a11y again

* refactor: remove use/mention of mongoose sessions

* feat: first pass, sharing agents

* feat: visual indicator for global agent, remove author when serving to non-author

* wip: params

* chore: fix typing issues

* fix(schemas): typing

* refactor: improve accessibility of ListCard component and fix console React warning

* wip: reset templates for non-legacy new convos

* Revert "wip: params"

This reverts commit f8067e91d4.

* Revert "refactor: dynamic form elements using react-hook-form Controllers"

This reverts commit 2150c4815d.

* fix(Parameters): types and parameter effect update to only update local state to parameters

* refactor: optimize useDebouncedInput hook for better performance

* feat: first pass, anthropic bedrock params

* chore: paramEndpoints check for endpointType too

* fix: maxTokens to use coerceNumber.optional(),

* feat: extra chat model params

* chore: reduce code repetition

* refactor: improve preset title handling in SaveAsPresetDialog component

* refactor: improve preset handling in HeaderOptions component

* chore: improve typing, replace legacy dialog for SaveAsPresetDialog

* feat: save as preset from parameters panel

* fix: multi-search in select dropdown when using Option type

* refactor: update default showDefault value to false in Dynamic components

* feat: Bedrock presets settings

* chore: config, fix agents schema, update config version

* refactor: update AWS region variable name in bedrock options endpoint to BEDROCK_AWS_DEFAULT_REGION

* refactor: update baseEndpointSchema in config.ts to include baseURL property

* refactor: update createRun function to include req parameter and set streamRate based on provider

* feat: availableRegions via config

* refactor: remove unused demo agent controller file

* WIP: title

* Update @librechat/agents to version 1.5.0

* chore: addTitle.js to handle empty responseText

* feat: support images and titles

* feat: context token updates

* Refactor BaseClient test to use expect.objectContaining

* refactor: add model select, remove header options params, move side panel params below prompts

* chore: update models list, catch title error

* feat: model service for bedrock models (env)

* chore: Remove verbose debug log in AgentClient class following stream

* feat(bedrock): track token spend; fix: token rates, value key mapping for AWS models

* refactor: handle streamRate in `handleLLMNewToken` callback

* chore: AWS Bedrock example config in `.env.example`

* refactor: Rename bedrockMeta to bedrockGeneral in settings.ts and use for AI21 and Amazon Bedrock providers

* refactor: Update `.env.example` with AWS Bedrock model IDs URL and additional notes

* feat: titleModel support for bedrock

* refactor: Update `.env.example` with additional notes for AWS Bedrock model IDs
2024-09-09 12:06:59 -04:00
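
The streamBuffer/streamRate items above describe pacing token emission to avoid 'Overloaded' errors from the Anthropic API. A minimal sketch of that throttling idea, assuming a generic async token source; the names below (streamWithRate, onToken) are illustrative, not LibreChat's actual API:

    // Sketch: pace token emission by sleeping `streamRate` ms between tokens.
    // `streamWithRate` and `onToken` are illustrative names, not the actual API.
    const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

    async function streamWithRate(
      tokens: AsyncIterable<string>,
      onToken: (token: string) => void,
      streamRate = 25, // ms between tokens; example value only
    ): Promise<void> {
      for await (const token of tokens) {
        onToken(token);
        if (streamRate > 0) {
          await sleep(streamRate); // throttle so the provider/clients are not flooded
        }
      }
    }
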
Raí Santos
8c14360263 🌍 i18n: Improved Portuguese language translations (#3947)
Co-authored-by: RaiSantos <itzraisu@gmail.com>
2024-09-07 08:18:56 -04:00
Marlon
d0dc858e2d 🌍 i18n: Improved German language translations (#3924) 2024-09-05 21:06:20 -04:00
hide361
9cf390c657 🌍 i18n: Update Japanese translation (#3877) 2024-09-05 21:06:01 -04:00
Hervey
14199d5521 🌍 i18n: Updated Chinese Translation (#3871)
* 🌍 : Updated Chinese Translation

* 🌍 : Updated Chinese Translation
2024-09-05 21:05:40 -04:00
Vesna Tan
b9197f90c6 👐 a11y: Misc. Improvements (#3910)
* fix focus for cancel button in convo delete modal window #3829

* add aria-hidden and aria-label to X and Check svg/button respectively and updated OGDialogClose focus color

* update rename, newchat, newchat icon, ConvoOptions icon
2024-09-05 14:30:17 -04:00
Danny Avila
9ec665dd2c 🪟 fix: Windows Build (npm) (#3889)
* chore: package-lock.json

* chore: remove shadcn files (temp)

* refactor: language comparisons script

* fix: resolve package-lock file for windows compatibility

* chore: Enable Windows unit tests for frontend

* refactor: move shadcn components to data-provider
2024-09-02 10:01:09 -04:00
Danny Avila
136599081c 🧩 fix: plugins build options, prevent undefined tools error (#3876) 2024-09-01 08:35:05 -04:00
Danny Avila
a0291ed155 🚧 chore: merge latest dev build to main repo (#3844)
* agents - phase 1 (#30)

* chore: copy assistant files

* feat: frontend and data-provider

* feat: backend get endpoint test

* fix(MessageEndpointIcon): switched to AgentName and AgentAvatar

* fix: small fixes

* fix: agent endpoint config

* fix: show Agent Builder

* chore: install agentus

* chore: initial scaffolding for agents

* fix: updated Assistant logic to Agent Logic for some Agent components

* WIP first pass, demo of agent package

* WIP: initial backend infra for agents

* fix: agent list error

* wip: agents routing

* chore: Refactor useSSE hook to handle different data events

* wip: correctly emit events

* chore: Update @librechat/agentus npm dependency to version 1.0.9

* remove comment

* first pass: streaming agent text

* chore: Remove @librechat/agentus root-level workspace npm dependency

* feat: Agent Schema and Model

* fix: content handling fixes

* fix: content message save

* WIP: new content data

* fix: run step issue with tool calls

* chore: Update @librechat/agentus npm dependency to version 1.1.5

* feat: update controller and agent routes

* wip: initial backend tool and tool error handling support

* wip: tool chunks

* chore: Update @librechat/agentus npm dependency to version 1.1.7

* chore: update tool_call typing, add test conditions and logs

* fix: create agent

* fix: create agent

* first pass: render completed content parts

* fix: remove logging, fix step handler typing

* chore: Update @librechat/agentus npm dependency to version 1.1.9

* refactor: cleanup maps on unmount

* chore: Update BaseClient.js to safely count tokens for string, number, and boolean values

* fix: support subsequent messages with tool_calls

* chore: export order

* fix: select agent

* fix: tool call types and handling

* chore: switch to anthropic for testing

* fix: AgentSelect

* refactor: experimental: OpenAIClient to use array for intermediateReply

* fix(useSSE): revert old condition for streaming legacy client tokens

* fix: lint

* revert `agent_id` to `id`

* chore: update localization keys for agent-related components

* feat: zod schema handling for actions

* refactor(actions): if no params, no zodSchema

* chore: Update @librechat/agentus npm dependency to version 1.2.1

* feat: first pass, actions

* refactor: empty schema for actions without params

* feat: Update createRun function to accept additional options

* fix: message payload formatting; feat: add more client options

* fix: ToolCall component rendering when action has no args but has output

* refactor(ToolCall): allow non-stringy args

* WIP: first pass, correctly formatted tool_calls between providers

* refactor: Remove duplicate import of 'roles' module

* refactor: Exclude 'vite.config.ts' from TypeScript compilation

* refactor: fix agent related types
- no need to use endpoint/model fields for identifying agent metadata
- add `provider` distinction for agent-configured 'endpoint'
- no need for agent-endpoint map
- reduce complexity of tools as functions into tools as string[]
- fix types related to above changes
- reduce unnecessary variables for queries/mutations and corresponding react-query keys

* refactor: Add tools and tool_kwargs fields to agent schema

* refactor: Remove unused code and update dependencies

* refactor: Update updateAgentHandler to use req.body directly

* refactor: Update AgentSelect component to use localized hooks

* refactor: Update agent schema to include tools and provider fields

* refactor(AgentPanel): add scrollbar gutter, add provider field to form, fix agent schema required values

* refactor: Update AgentSwitcher component to use selectedAgentId instead of selectedAgent

* refactor: Update AgentPanel component to include alternateName import and defaultAgentFormValues

* refactor(SelectDropDown): allow setting value as option while still supporting legacy usage (string values only)

* refactor: SelectDropdown changes - Only necessary when the available values are objects with label/value fields and the selected value is expected to be a string.

* refactor: TypeError issues and handle provider as option

* feat: Add placeholder for provider selection in AgentPanel component

* refactor: Update agent schema to include author and provider fields

* fix: show expected 'create agent' placeholder when creating agent

* chore: fix localization strings, hide capabilities form for now

* chore: typing

* refactor: import order and use compact agents schema for now

* chore: typing

* refactor: Update AgentForm type to use AgentCapabilities

* fix agent form agent selection issues

* feat: responsive agent selection

* fix: Handle cancelled fetch in useSelectAgent hook

* fix: reset agent form on accordion close/open

* feat: Add agent_id to default conversation for agents endpoint

* feat: agents endpoint request handling

* refactor: reset conversation model on agent select

* refactor: add `additional_instructions` to conversation schema, organize other fields

* chore: casing

* chore: types

* refactor(loadAgentTools): explicitly pass agent_id, do not pass `model` to loadAgentTools for now, load action sets by agent_id

* WIP: initial draft of real agent client initialization

* WIP: first pass, anthropic agent requests

* feat: remember last selected agent

* feat: openai and azure connected

* fix: prioritize agent model for runs unless an explicit override model is passed from client

* feat: Agent Actions

* fix: save agent id to convo

* feat: model panel (#29)

* feat: model panel

* bring back comments

* fix: method still null

* fix: AgentPanel FormContext

* feat: add more parameters

* fix: style issues; refactor: Agent Controller

* fix: cherry-pick

* fix: Update AgentAvatar component to use AssistantIcon instead of BrainCircuit

* feat: OGDialog for delete agent; feat(assistant): update Agent types, introduced `model_parameters`

* feat: icon and general `model_parameters` update

* feat: use react-hook-form better

* fix: agent builder form reset issue when switching panels

* refactor: modularize agent builder form

---------

Co-authored-by: Danny Avila <danny@librechat.ai>

* fix: AgentPanel and ModelPanel type issues and use `useFormContext` and `watch` instead of `methods` directly and `useWatch`.

* fix: tool call issues due to invalid input (anthropic) of empty string

* fix: handle empty text in Part component

---------

Co-authored-by: Marco Beretta <81851188+berry-13@users.noreply.github.com>

* refactor: remove form ModelPanel and fixed nested ternary expressions in AgentConfig

* fix: Model Parameters not saved correctly

* refactor: remove console log

* feat: avatar upload and get for Agents (#36)

Co-authored-by: Marco Beretta <81851188+berry-13@users.noreply.github.com>

* chore: update to public package

* fix: typing, optional chaining

* fix: cursor not showing for content parts

* chore: conditionally enable agents

* ci: fix azure test

* ci: fix frontend tests, fix eslint api

* refactor: Remove unused errorContentPart variable

* continue of the agent message PR (#40)

* last fixes

* fix: agentMap

* pr merge test  (#41)

* fix: model icon not fetching correctly

* remove console logs

* feat: agent name

* refactor: pass documentsMap as a prop to allow re-render of assistant form

* refactor: pass documentsMap as a prop to allow re-render of assistant form

* chore: Bump version to 0.7.419

* fix: TypeError: Cannot read properties of undefined (reading 'id')

* refactor: update AgentSwitcher component to use ControlCombobox instead of Combobox

---------

Co-authored-by: Marco Beretta <81851188+berry-13@users.noreply.github.com>
2024-08-31 16:33:51 -04:00
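
One item in the merge above updates BaseClient.js to safely count tokens for string, number, and boolean values. A tiny sketch of that guard, with getTokenCount standing in for whatever tokenizer call is actually used:

    // Sketch: coerce primitive non-string values to strings before counting tokens.
    // `getTokenCount` is a stand-in for the real tokenizer function.
    function countTokenValue(
      value: string | number | boolean,
      getTokenCount: (text: string) => number,
    ): number {
      const text = typeof value === 'string' ? value : String(value);
      return getTokenCount(text);
    }
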
Max Sanna
618be4bf2b ⚖️ feat: Terms and Conditions Dialog (#3712)
* Added UI for Terms and Conditions Modal Dialogue

* Handled the logout on not accepting

* Added logic for terms acceptance

* Add terms and conditions modal

* Fixed bug on terms and conditions modal, clicking out of it won't close it now

* Added acceptance of Terms to Database

* Removed unnecessary api endpoints from index.js

* Added NPM script to reset terms acceptance

* Added translations, markdown terms and samples

* Merged terms and conditions modal feature

* feat/Modal Terms and Conditions Dialog

* Amendments as requested by maintainers

* Reset package-lock (again)
2024-08-31 16:08:04 -04:00
Marco Beretta
79f9cd5a4d 💬 feat: assistant conversation starter (#3699)
* feat: initial UI convoStart

* fix: ConvoStarter UI

* fix: convoStarters bug

* feat: Add input field focus on conversation starters

* style: conversation starter UI update

* feat: apply fixes for starters

* style: update conversationStarters UI and fixed typo

* general UI update

* feat: Add onClick functionality to ConvoStarter component

* fix: quick fix test

* fix(AssistantSelect): remove object check

* fix: updateAssistant `conversation_starters` var

* chore: remove starter autofocus

* fix: no empty conversation starters, always show input, use Constants value for max count

* style: Update defaultTextPropsLabel styles, for a11y placeholder

* refactor: Update ConvoStarter component styles and class names for a11y and theme

* refactor: convostarter, move plus button to within persistent element

* fix: types

* chore: Update landing page assistant description styling with theming

* chore: assistant types

* refactor: documents routes

* refactor: optimize conversation starter mutations/queries

* refactor: Update listAllAssistants return type to Promise<Array<Assistant>>

* feat: edit existing starters

* feat(convo-starters): enhance ConvoStarter component and add animations

    - Update ConvoStarter component styling for better visual appeal
    - Implement fade-in animation for smoother appearance
    - Add hover effect with background color change
    - Improve text overflow handling with line-clamp and text-balance
    - Ensure responsive design for various screen sizes

* feat(assistant): add conversation starters to assistant builder

- Add localization strings for conversation starters
- Update mobile.css with shake animation for max starters reached
- Enhance user experience with tooltips and dynamic input handling

* refactor: select specific fields for assistant documents fetch

* refactor: remove endpoint query key, fetch all assistant docs for now, add conversation_starters to v1 methods

* refactor: add document filters based on endpoint config

* fix: starters not applied during creation

* refactor: update AssistantSelect component to handle undefined lastSelectedModels

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2024-08-31 13:42:20 -04:00
Max Sanna
63b80c3067 🗣️ fix: Azure OpenAI STT (#3731)
* Fix for Azure OpenAI STT

* chore(STTService): imports order

---------

Co-authored-by: Danny Avila <danacordially@gmail.com>
2024-08-30 15:11:15 -04:00
Danny Avila
7536e649d4 🚨 feat: Implement INPUT_LENGTH Error Type (#3866)
* feat: CONTEXT_LENGTH error type

* chore: rename error type

* chore: import order
2024-08-30 15:01:29 -04:00
Danny Avila
6936d0059f 🎨 refactor: Prevent Font Asset Hashing in Vite Config (#3865) 2024-08-30 13:56:49 -04:00
Danny Avila
0a359aa705 👐 a11y: Accessible Conversation Menu Options (#3864)
* fix: type issues

* feat: Fix document title setting in Conversation component

* style: new chat theme

* fix: No keyboard access to chat menus in the chat history #3788

* fix: Menu button in the chat history area does not indicate its state #3823

* refactor: use ariakit for DropdownPopup

* style: update sticky z-index in NewChat component

* style: update ConvoOptions menu button styling
2024-08-30 13:39:30 -04:00
Jacob Colyvan
2ce4f66218 🎙️ a11y: update html lang attribute (#3636)
* refactor: remove duplicate localStorage lang call

* refactor: use cookies to handle langcode

* feat: override index.html lang w/ cookie pref

* refactor: only read index on server start

* refactor: rename lang cookie & localstorage as backup

* refactor: use atomWithLocalStorage in language store

* fix: forced reflow warning in language select
2024-08-30 06:57:29 -04:00
Danny Avila
a0042317b2 📝 docs: Update README.md 2024-08-30 06:46:46 -04:00
Danny Avila
dc40e577af 🔊 refactor: Optimize Aria-Live Announcements for macOS VoiceOver (#3851) 2024-08-30 00:14:37 -04:00
Danny Avila
757b6d3275 🔨 refactor: Add Cache Busting to index.html (#3824) 2024-08-28 13:52:14 -04:00
Danny Avila
3b61322459 🚀 feat: Enhance PWA and asset caching strategy (#3822)
* chore: Update VitePWA registerType to 'autoUpdate', add cache busting to static file outputs

* chore: disable windows frontend workflow for now
2024-08-28 12:13:14 -04:00
Danny Avila
7c1ee242eb 🪄 feat: Code Artifacts (#3798)
* feat: Add CodeArtifacts component to Beta settings tab

* chore: Update npm dependency to @codesandbox/sandpack-react@2.18.2

* WIP: artifacts first pass

* WIP first pass remark-directive

* chore: revert markdown to original component + new artifacts rendering

* refactor: first pass rewrite

* refactor: add throttling

* first pass styling

* style: Add Radix Tabs, more styling changes

* feat: second pass

* style: code styling

* fix: package markdown fixes

* feat: Add useEffect hook to Artifacts component for visibility control, slide in animation

* fix: only set artifact if there is content

* refactor: typing and make latest artifact active if the number of artifacts changed

* feat: artifacts + shadcnui

* feat: Add Copy Code button to Artifacts component

* feat: first pass streaming updates

* refactor: optimize ordering of artifacts in Artifacts component

* refactor: optimize ordering of artifacts and add latest artifact activation in Artifacts component

* refactor: add order prop to Artifact

* feat: update to latest, use update time for ordering

* refactor: optimize ordering of artifacts and activate latest artifact in Artifacts component

* wip: remove thinking text and artifact formatting if empty

* refactor: optimize Markdown rendering and add support for code artifacts

* feat: global state for current artifact Id and set on artifact preview click

* refactor: Rename CodePreview component to ArtifactButton

* refactor: apply growth to artifact frame so artifact preview can take full space

* refactor: remove artifactIdsState

* refactor: nullify artifact state and reset on empty conversation

* feat: reset artifact state on conversation change

* feat: artifacts system prompt in backend

* refactor: update UI artifact toggle label to match localization key

* style: remove ArtifactButton inline-block styling

* feat: memoize ArtifactPreview, add html support

* refactor: abstract out components

* chore: bump react-resizable-panel

* refactor: resizable panel order props

* fix: side panel resizing crashes

* style: temporarily remove scrolling, add better styling

* chore: remove thinking for now

* chore: preprocess artifacts for now

* feat: Add auto scrolling to CodeMarkdown (artifacts)

* feat: autoswitch to preview

* feat: auto switch to code, adjust prompt, remove unused code

* feat: refresh button

* feat: open/close artifacts

* wip: mermaid

* refactor: w-fit Artifact button

* chore: organize code

* feat: first pass mermaid

* refactor: improve panning logic in MermaidDiagram component

* feat: center/zoom on first render

* refactor: add centering with reset button

* style: mermaid styling

* refactor: add back MermaidDiagram

* fix: static/html template

* fix: mermaid

* add examples to artifacts prompt

* refactor: fix CodeBar plugin prop logic

* refactor: remove unnecessary mention of artifacts when not requested

* fix: remove preprocessCodeArtifacts function and fix imports

* feat: improve artifacts guidelines and remove unnecessary mentions

* refactor: improve artifacts guidelines and remove unnecessary mentions

* chore: uninstall unused packages

* chore: bump vite

* chore: update three dependency to version 0.167.1

* refactor: move beta settings, add additional artifacts toggles

* feat: artifacts mode toggles

* refactor: adjust prompt

* feat: shadcnui instructions

* feat: code artifacts custom prompt mode

* chore: Update artifacts UI labels and instructions localizations

* refactor: Remove unused code in Markdown component
2024-08-27 17:03:16 -04:00
Danny Avila
80e1bdc282 v0.7.5-rc1 (#3807) 2024-08-27 15:01:41 -04:00
Danny Avila
a267f6e0da 🔍 fix: USE_REDIS condition, Markdown list counter, code highlights (#3806)
* fix: markdown rehype highlight accidental removal

* fix: USE_REDIS condition check

* fix: markdown list counter
2024-08-27 14:49:43 -04:00
Danny Avila
f742b9972e 🏷️ chore: Add Unofficial Naming Variation for Claude-3.5-Sonnet (#3800) 2024-08-27 09:07:04 -04:00
Danny Avila
bbaa0ee1cf 🚚 chore: Remove client-dist volume from deploy-compose.yml (#3799) 2024-08-27 07:18:47 -04:00
Marco Beretta
62881fee54 🔧 fix: handle missing custom config speech (#3790)
* feat: Update speech settings retrieval logic to handle missing custom configuration

This commit updates the logic in the Speech component and the getCustomConfigSpeech function to handle the case where the custom configuration is missing. Previously, if no custom configuration was found, an error would occur. Now, the code checks for the presence of the custom configuration and returns a message indicating that no custom configuration was found. This improves the robustness of the application and provides a better user experience.

* refactor: changed response message when no custom config is found
2024-08-27 06:09:04 -04:00
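
A minimal sketch of the fallback behavior described above: instead of throwing when no custom configuration exists, the speech settings lookup reports that nothing was found. The config shape and handler form here are assumptions, not the actual LibreChat code:

    // Sketch: return a message instead of erroring when custom speech config is absent.
    // `CustomConfig` and `loadCustomConfig` are illustrative assumptions.
    interface CustomConfig {
      speech?: { tts?: unknown; stt?: unknown };
    }

    async function getCustomConfigSpeech(
      loadCustomConfig: () => Promise<CustomConfig | undefined>,
    ): Promise<{ message: string } | NonNullable<CustomConfig['speech']>> {
      const customConfig = await loadCustomConfig();
      if (!customConfig || !customConfig.speech) {
        return { message: 'not_found' }; // previously this path threw an error
      }
      return customConfig.speech;
    }
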
Danny Avila
34fd960d54 🧹 chore: remove legacy markdown code (#3789) 2024-08-26 17:53:19 -04:00
Danny Avila
5694ad4e55 🧠 feat: Prompt caching switch, prompt query params; refactor: static cache, prompt/markdown styling, trim copied code, switch new chat to convo URL (#3784)
* refactor: Update staticCache to use oneDayInSeconds for sMaxAge and maxAge

* refactor: role updates

* style: first pass cursor

* style: Update nested list styles in style.css

* feat: setIsSubmitting to true in message handler to prevent edge case where submitting turns false during message stream

* feat: Add logic to redirect to conversation page after creating a new conversation

* refactor: Trim code string before copying in CodeBlock component

* feat: configSchema bookmarks and presets defaults

* feat: Update loadDefaultInterface to handle undefined config

* refactor: use  for compression check

* feat: first pass, query params

* fix: styling issues for prompt cards

* feat: anthropic prompt caching UI switch

* chore: Update static file cache control defaults/comments in .env.example

* ci: fix tests

* ci: fix tests

* chore: use "submitting" class for server error connection suspense fallback
2024-08-26 15:34:46 -04:00
Fuegovic
bd701c197e 🌀 feat: Known Endpoints - Unify (#3778)
Signed-off-by: Fuegovic <fueg@live.ca>
2024-08-25 19:10:25 -04:00
Danny Avila
9bfe40bfcd ⏺️ style: Better Markdown Lists (#3777)
* style: markdown lists

* style: markdown lists, second pass

* style: update nested list styles in style.css
2024-08-25 16:33:25 -04:00
Fuegovic
07f520100d 🐋 feat: Known Endpoints: DeepSeek (#3776)
Signed-off-by: Fuegovic <fueg@live.ca>
2024-08-25 15:33:03 -04:00
Danny Avila
d52e81bde6 🐋 ci: Dockerfile.multi rewrite, maintain package integrity pt. 2 2024-08-24 15:09:38 -04:00
Danny Avila
88d8706757 🐋 ci: Dockerfile.multi rewrite, maintain package integrity (#3772)
* chore: touch server

* chore: dockerfile.multi first pass

* refactor: remove prod-stage build, share dist files instead
2024-08-24 15:00:51 -04:00
Danny Avila
d54458b3a6 🧮 feat: Improve structured token spending and testing; fix: Anthropic Cache Spend (#3766)
* chore: update jest and mongodb-memory-server dependencies

* fix: Anthropic edge case of increasing balance

* refactor: Update token calculations in Transaction model/spec

* refactor: `spendTokens` always record transactions, add Tx related tests

* feat: Add error handling for CHECK_BALANCE environment variable

* feat: Update import path for Balance model in Balance controller

* chore: remove logging

* refactor: Improve structured token spend logging in spendTokens.js

* ci: add unit tests for spend token

* ci: Improve structured token spend unit testing

* chore: improve logging phrase for no tx spent from balance
2024-08-24 04:36:08 -04:00
Danny Avila
ea5140ff0f 🧮 feat: Improve LaTeX rendering consistency (#3763)
* refactor: simplify LaTeX pre-processing for more consistent rendering, disables `singleDollarTextMath`

* refactor: disable singleDollarTextMath in all markdown components

* wip: first pass

* refactor: preserve code blocks and convert rather than preserve LaTeX delimiters

* refactor: remove unused escapeDollarNumber function from latex.ts
2024-08-23 13:45:27 -04:00
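
The commits above convert LaTeX delimiters rather than preserving them, while leaving code blocks untouched and disabling singleDollarTextMath. A rough sketch of that kind of pre-processing step; the regexes and function name are assumptions and may differ from the real latex.ts:

    // Sketch: convert \[ ... \] and \( ... \) delimiters to $-style math,
    // skipping fenced code blocks so their contents are never rewritten.
    function preprocessLaTeX(text: string): string {
      const codeBlocks: string[] = [];
      const withoutCode = text.replace(/```[\s\S]*?```/g, (block) => {
        codeBlocks.push(block);
        return `\u0000CODE${codeBlocks.length - 1}\u0000`;
      });

      const converted = withoutCode
        .replace(/\\\[([\s\S]*?)\\\]/g, (_m, inner) => `$$${inner}$$`) // \[ ... \] -> $$ ... $$
        .replace(/\\\((.*?)\\\)/g, (_m, inner) => `$${inner}$`); // \( ... \) -> $ ... $

      // Restore the untouched code blocks.
      return converted.replace(/\u0000CODE(\d+)\u0000/g, (_m, i) => codeBlocks[Number(i)]);
    }
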
Danny Avila
967e8a1e92 🍎 refactor(a11y): Optimize Live Region Announcements for Apple VoiceOver (#3762)
* refactor: first pass rewrite

* refactor: update CLEAR_DELAY to 5000 milliseconds in LiveAnnouncer.tsx

* refactor: assertive messages to clear queue immediately, fix circular useCallback dependency issue

* chore: comment
2024-08-23 10:22:16 -04:00
Danny Avila
f86e9dd04c 🔖 feat: Enhance Bookmarks UX, add RBAC, toggle via librechat.yaml (#3747)
* chore: update package version to 0.7.416

* chore: Update Role.js imports order

* refactor: move updateTagsInConvo to tags route, add RBAC for tags

* refactor: add updateTagsInConvoOptions

* fix: loading state for bookmark form

* refactor: update primaryText class in TitleButton component

* refactor: remove duplicate bookmarks and theming

* refactor: update EditIcon component to use React.forwardRef

* refactor: add _id field to tConversationTagSchema

* refactor: remove promises

* refactor: move mutation logic from BookmarkForm -> BookmarkEditDialog

* refactor: update button class in BookmarkForm component

* fix: conversation mutations and add better logging to useConversationTagMutation

* refactor: update logger message in BookmarkEditDialog component

* refactor: improve UI consistency in BookmarkNav and NewChat components

* refactor: update logger message in BookmarkEditDialog component

* refactor: Add tags prop to BookmarkForm component

* refactor: Update BookmarkForm to avoid tag mutation if the tag already exists; also close dialog on submission programmatically

* refactor: general role helper function to support updating access permissions for different permission types

* refactor: Update getLatestText function to handle undefined values in message.content

* refactor: Update useHasAccess hook to handle null role values for authenticated users

* feat: toggle bookmarks access

* refactor: Update PromptsCommand to handle access permissions for prompts

* feat: updateConversationSelector

* refactor: rename `vars` to `tagToDelete` for clarity

* fix: prevent recreation of deleted tags in BookmarkMenu on Item Click

* ci: mock updateBookmarksAccess function

* ci: mock updateBookmarksAccess function
2024-08-22 17:09:05 -04:00
Danny Avila
366e4c5adb 🔧 fix: EndpointIcon crash when using @ mention command (#3742) 2024-08-22 10:49:51 -04:00
Danny Avila
596ecc6969 🔐 feat: Toggle Access to Prompts via librechat.yaml (#3735)
* chore: update CONFIG_VERSION to '1.1.6'

* chore: update package version to 0.7.415

* feat: toggle USER role access to prompts via librechat.yaml

* refactor: set prompts to true when loadDefaultInterface returns true

* ci(AppService): mock updatePromptsAccess
2024-08-21 19:57:34 -04:00
Marco Beretta
0c5568b80b ⌨️ style(a11y): kb access for LLM endpoint menu; refactor: style (#3714) 2024-08-21 18:19:33 -04:00
Danny Avila
98b437edd5 🖱️ fix: Message Scrolling UX; refactor: Frontend UX/DX Optimizations (#3733)
* refactor(DropdownPopup): set MenuButton `as` prop to `div` to prevent React warning: validateDOMNesting(...): <button> cannot appear as a descendant of <button>

* refactor: memoize ChatGroupItem and ControlCombobox components

* refactor(OpenAIClient): await stream process finish before finalCompletion event handling

* refactor: update useSSE.ts typing to handle null and undefined values in data properties

* refactor: set abort scroll to false on SSE connection open

* refactor: improve logger functionality with filter support

* refactor: update handleScroll typing in MessageContainer component

* refactor: update logger.dir call in useChatFunctions to log 'message_stream' tag format instead of the entire submission object as first arg

* refactor: fix null check for message object in Message component

* refactor: throttle handleScroll to help prevent auto-scrolling issues on new message requests; fix type issues within useMessageProcess

* refactor: add abortScrollByIndex logging effect

* refactor: update MessageIcon and Icon components to use React.memo for performance optimization

* refactor: memoize ConvoIconURL component for performance optimization

* chore: type issues

* chore: update package version to 0.7.414
2024-08-21 18:18:45 -04:00
Marco Beretta
ba9c351435 🔧 fix: add clear all button to bookmark navigation items (#3721) 2024-08-20 19:07:51 -04:00
Konstantin
58334dffea 🌏i18n: Improve Russian translation (#3718)
* Small correction

* Small correction 1

* Small corrections 2
2024-08-20 18:51:02 -04:00
Danny Avila
45479d468e fix(staticCache): NODE_ENV check 2024-08-20 07:32:30 -04:00
Danny Avila
14ddfec9b7 🧹 chore: address minor issues (#3710)
* refactor(OpenAIClient): improve error handling in chat completion

Improve error handling in the chat completion process to handle scenarios where the response has no choices or the message is undefined. This ensures that the intermediate reply is returned in such cases. Also, add logging statements for better debugging.

* refactor(Markdown): only show blinking cursor for latest message

* refactor: add submitting selector to prevent blinking cursor for empty, non-streaming, non-latest messages
2024-08-20 07:00:43 -04:00
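
A small sketch of the defensive handling described in the commit above: when the completion response has no choices or the message is undefined, fall back to the intermediate reply accumulated while streaming. The types are simplified stand-ins for the real response shape:

    // Sketch: fall back to the streamed intermediate reply when the final
    // completion response is unusable (no choices / undefined message).
    interface ChatCompletionLike {
      choices?: Array<{ message?: { content?: string | null } }>;
    }

    function resolveReply(response: ChatCompletionLike, intermediateReply: string[]): string {
      const message = response.choices?.[0]?.message;
      if (!message || message.content == null) {
        return intermediateReply.join(''); // return whatever was streamed so far
      }
      return message.content;
    }
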
Marco Beretta
5de3b8a148 🏷️ a11y: add aria-label to close button and aria-hidden to SVG (#3698)
Fixes #3641

Add accessible name to the close button for added conversations and hide SVG from assistive technology.

* Add `aria-label="Close added conversation"` to the `button` element.
* Add `aria-hidden="true"` to the `svg` element.

---

For more details, open the [Copilot Workspace session](https://copilot-workspace.githubnext.com/danny-avila/LibreChat/issues/3641?shareId=XXXX-XXXX-XXXX-XXXX).
2024-08-19 18:08:29 -04:00
Marco Beretta
d4c0f7267a 🔑 fix(AuthService): properly handle reading and deletion of password reset token (#3697) 2024-08-19 17:55:33 -04:00
Danny Avila
cebb3751c1 📣 a11y: Better Screen Reader Announcements (#3693)
* refactor: Improve LiveAnnouncer component

The LiveAnnouncer component in the client/src/a11y/LiveAnnouncer.tsx file has been refactored to improve its functionality. The component now processes text in chunks for better performance and adds a minimum announcement delay to prevent overlapping announcements. Additionally, the component now properly clears the announcement message and ID before setting a new one. These changes enhance the accessibility and user experience of the LiveAnnouncer component.

* refactor: manage only 2 LiveAnnouncer aria-live elements, queue assertive/polite together

* refactor: use localizations for event announcements

* refactor: update minimum announcement delay in LiveAnnouncer component

* refactor: replace *`_

* chore(useContentHandler): typing

* chore: more type fixes and safely announce final message
2024-08-19 03:51:17 -04:00
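
A compact sketch of the queueing behavior described above: announcements are queued and spaced by a minimum delay so one message is not overwritten by the next, and assertive messages clear the queue first. The delay value and the setLiveRegionText setter are illustrative; the real component manages aria-live elements in React:

    // Sketch: queue screen-reader announcements with a minimum gap between them.
    // `setLiveRegionText` stands in for writing to an aria-live element.
    const MIN_ANNOUNCEMENT_DELAY = 1500; // ms; example value only

    function createAnnouncer(setLiveRegionText: (text: string) => void) {
      const queue: string[] = [];
      let draining = false;

      const drain = async (): Promise<void> => {
        draining = true;
        while (queue.length > 0) {
          const next = queue.shift() as string;
          setLiveRegionText(next);
          await new Promise((resolve) => setTimeout(resolve, MIN_ANNOUNCEMENT_DELAY));
        }
        draining = false;
      };

      return (message: string, assertive = false): void => {
        if (assertive) {
          queue.length = 0; // assertive messages clear the queue immediately
        }
        queue.push(message);
        if (!draining) {
          void drain();
        }
      };
    }
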
Danny Avila
598e2be225 🗨️ refactor(VariableForm): use InputCombobox, fix Dropdown Variables (#3692)
* feat: Add SimpleCombobox component

* feat: Add labelClassName and add manual focus handling

* feat: Update VariableForm component to use SimpleCombobox

The VariableForm component in the client/src/components/Prompts/Groups/VariableForm.tsx file has been updated to use the SimpleCombobox component instead of the InputWithDropdown component. This change improves the functionality and styling of the form.

* chore: Update VariableForm component placeholder text

* refactor: Improve VariableForm component

The VariableForm component in the client/src/components/Prompts/Groups/VariableForm.tsx file has been refactored to improve its functionality. The `parseFieldConfig` function now trims the `variable` string before processing it. Additionally, the `onSubmit` function now properly escapes potential regex special chars that may cause issues when replacing text

* refactor: Improve VariableForm using ariakit helpers/custom fields, open menu on input focus

* refactor: rename SimpleCombobox to InputCombobox
2024-08-18 22:23:19 -04:00
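
The VariableForm notes above mention trimming the variable name and escaping regex special characters before replacing placeholders. A small sketch of that step, assuming the `{{variable}}` placeholder syntax; the function names are illustrative:

    // Sketch: escape regex metacharacters so a user-supplied variable name can be
    // used safely inside a RegExp when filling {{variable}} placeholders.
    function escapeRegExp(value: string): string {
      return value.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
    }

    function fillVariable(template: string, variable: string, value: string): string {
      const name = escapeRegExp(variable.trim());
      const pattern = new RegExp(`{{\\s*${name}\\s*}}`, 'g');
      // Use a replacer function so `$` sequences in the value are not interpreted.
      return template.replace(pattern, () => value);
    }

    // e.g. fillVariable('Hello {{ name }}!', 'name', 'Ada') -> 'Hello Ada!'
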
Danny Avila
8ca1e4f94f 🐛 fix: Anthropic Prompt Caching Edge Case (#3690) 2024-08-18 19:03:03 -04:00
Danny Avila
87d95a9d82 📱 fix: Resolve Android Device and Accessibility Issues of Sidebar Combobox (#3689)
* chore: Update @ariakit/react dependency to version 0.4.8

* refactor: Fix Combobox Android issue with radix-ui

* fix: Improve scrolling behavior by setting abort scroll state to false after scrolling to end

* wip: first pass switcher rewrite

* feat: Add button width calculation for ComboboxComponent

* refactor: Update ComboboxComponent styling for improved layout and appearance

* refactor: Update AssistantSwitcher component to handle null values for assistant names and avatar URLs

* refactor: Update ModelSwitcher component to use SimpleCombobox for improved functionality and styling

* refactor: Update Switcher Separator styling for improved layout and appearance

* refactor: Improve accessibility by adding aria-label to ComboboxComponent select items

* refactor: rename SimpleCombobox -> ControlCombobox
2024-08-18 19:02:46 -04:00
Konstantin
b22f1c166e 🌏 i18n: Improve Russian translation (#3674) 2024-08-18 13:23:13 -04:00
Danny Avila
683702d555 🤖 refactor: Remove Default Model Params for All Endpoints (#3682)
* refactor: use parseCompactConvo in buildOptions, and generate no default values for the API to avoid weird model behavior with defaults

* refactor: OTHER - always show cursor when markdown component is empty (preferable to not)

* refactor(OpenAISettings): use config object for setting defaults app-wide

* refactor: Use removeNullishValues in buildOptions for ALL  endpoints

* fix: add missing conversationId to title methods for transactions; refactor(GoogleClient): model options, set no default, add todo note for recording token usage

* fix: at minimum set a model default, as is required by API (edge case)
2024-08-18 06:00:03 -04:00
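
A minimal sketch of the removeNullishValues idea referenced above: strip null/undefined entries from the built options so no default parameters get sent to the API. This illustrates the concept, not the exact data-provider implementation:

    // Sketch: drop null/undefined fields so only explicitly set params are sent.
    function removeNullishValues<T extends Record<string, unknown>>(obj: T): Partial<T> {
      return Object.fromEntries(
        Object.entries(obj).filter(([, value]) => value !== null && value !== undefined),
      ) as Partial<T>;
    }

    // e.g. removeNullishValues({ model: 'gpt-4o', temperature: undefined, top_p: null })
    //      -> { model: 'gpt-4o' }
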
Danny Avila
d3a20357e9 🧪 feat: Prompt Dropdown Variable; style: Add Markdown Support (#3681)
* feat: Add extended inputs for prompts library variables

* feat: Add maxRows prop to VariableForm input field

* 📩 feat: invite user (#3012)

* feat: basic invite-user script

* feat: add invite user functionality and registration validation middleware

* fix: invite user fixes

* refactor: consolidate direct model access to a central place of functions

* style(Registration): add spinner to continue button

* refactor: import order

* feat: improve invite user script and error handling

* fix: merge conflict

* refactor: remove `console.log` and use `logger`

* fix: token operation and checkinvite issues

* bring back comment and remove console log

* fix: return invalid token when token is not found

* fix: getInvite fix

* refactor: Update Token.js to use async/await syntax for update and delete operations

* feat: Refactor Token.js to use async/await syntax for createToken and findToken functions

* refactor(inviteUser): define functions outside of module.exports

* Update AuthService.js

---------

Co-authored-by: Danny Avila <danny@librechat.ai>

* style: improve OpenAI.tsx input field focus styling

* refactor: update import statement in Input.tsx

* refactor: remove multi-line

* refactor:  update placeholder text to use localization

* style: new dropdown variable info and markdown styling for info

* Add ReactMarkdown

* chore: styling, import order

* refactor: update ReactMarkdown usage in VariableForm

* style: remove markdown class

* refactor: update mobile styling and use code renderer

* style(InputWithDropDown): update focus trigger style

* style(OptionsPopover): update Save As Preset `focus` and `dark:bg`

---------

Co-authored-by: Konstantin Meshcheryakov <kmeshcheryakov@klika-tech.com>
Co-authored-by: Marco Beretta <81851188+berry-13@users.noreply.github.com>
Co-authored-by: bsu3338 <bsu3338@users.noreply.github.com>
2024-08-18 05:52:05 -04:00
Marco Beretta
bbb9324447 📩 feat: invite user (#3012)
* feat: basic invite-user script

* feat: add invite user functionality and registration validation middleware

* fix: invite user fixes

* refactor: consolidate direct model access to a central place of functions

* style(Registration): add spinner to continue button

* refactor: import order

* feat: improve invite user script and error handling

* fix: merge conflict

* refactor: remove `console.log` and use `logger`

* fix: token operation and checkinvite issues

* bring back comment and remove console log

* fix: return invalid token when token is not found

* fix: getInvite fix

* refactor: Update Token.js to use async/await syntax for update and delete operations

* feat: Refactor Token.js to use async/await syntax for createToken and findToken functions

* refactor(inviteUser): define functions outside of module.exports

* Update AuthService.js

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2024-08-18 00:23:38 -04:00
Danny Avila
a45b384bbc 💾 feat: Anthropic Prompt Caching (#3670)
* wip: initial cache control implementation, add typing for transactions handling

* feat: first pass of Anthropic Prompt Caching

* feat: standardize stream usage as pass in when calculating token counts

* feat: Add getCacheMultiplier function to calculate cache multiplier for different valueKeys and cacheTypes

* chore: imports order

* refactor: token usage recording in AnthropicClient, no need to "correct" as we have the correct amount

* feat: more accurate token counting using stream usage data

* feat: Improve token counting accuracy with stream usage data

* refactor: ensure more accurate than not token estimations if custom instructions or files are not being resent with every request

* refactor: cleanup updateUserMessageTokenCount to allow transactions to be as accurate as possible even if we shouldn't update user message token counts

* ci: fix tests
2024-08-17 03:24:09 -04:00
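
The prompt-caching work above adds a getCacheMultiplier helper so cached prompt writes and reads can be priced differently from regular tokens. A hedged sketch of that lookup; the value keys and multipliers below are placeholder examples, not LibreChat's actual rates:

    // Sketch: look up a token-cost multiplier by model value key and cache type.
    // All keys and numbers are placeholder examples, not real pricing data.
    type CacheType = 'write' | 'read';

    const cacheTokenValues: Record<string, Record<CacheType, number>> = {
      'claude-3-5-sonnet': { write: 3.75, read: 0.3 },
      'claude-3-haiku': { write: 0.3, read: 0.03 },
    };

    function getCacheMultiplier(valueKey: string, cacheType: CacheType): number | null {
      return cacheTokenValues[valueKey]?.[cacheType] ?? null;
    }
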
Marco Beretta
9f4c516615 fix: Export Button Content Shift; chore: bump axios and add logging (#3668)
* chore: bump axios version

* fix: export button glitch

* fix: remove console logs
2024-08-16 18:03:27 -04:00
Danny Avila
16c9aed1bb 🤖 feat: Recognize chatgpt-4o-latest, update default OpenAI Models (#3667)
* 🤖 feat: recognize chatgpt-4o-latest, update default OpenAI models

* chore: Bump data-provider version to 0.7.412
2024-08-16 15:28:17 -04:00
Danny Avila
3826af5909 🤲 a11y: Keyboard Accessibility for Account Settings (#3666)
* refactor(NavLinks -> AccountSettings): add theming, fix minor type issues, and rename component

* fix: closes #3499 - Account settings menu is not keyboard accessible

* fix(DataTable): type issue causing application crash, update contextMap key for storage
2024-08-16 15:09:03 -04:00
Arthur Barrett
6ad65ff065 🔧 fix: Delete Archived Chat z-index issue (#3643) 2024-08-16 13:53:38 -04:00
Danny Avila
91fc89076f 🤲 a11y: Sidebar Text Contrast (#3665)
* fix: use theme class for valid contrast for light/dark, conversation group names, fix type issues

* fix: searchbar text contrast styling, closes #3469

* style: add theming for prompt cards, fix a11y contrast issues

* a11y: address placeholder text contrast issues in LLM controls section

* chore: Add conditional check for pull request repository in a11y workflow

* style: Update text color contrast to WCAG standard for placeholder and focus states in AssistantPanel components
2024-08-16 13:50:47 -04:00
Danny Avila
f8a5dad265 🦙 fix: Update Title Message Role for Ollama if None Provided (#3663) 2024-08-16 04:54:37 -04:00
Marco Beretta
96581d56df 🖼️ style: Conversation Menu and Dialogs update (#3601)
* feat: new dropdown

* fix: maintain popover active when open

* fix: update DeleteButton and ShareButton component to use useState for managing dialog state

* BREAKING: style improvement of base Button component

* style: update export button

* a11y: ExportAndShareButton

* add border

* quick style fix

* fix: flick issue on convo

* fix: DropDown opens when renaming

* chore: update radix-ui/react-dropdown-menu to latest

* small fix

* style: bookmarks update

* reorder export modal

* feat: improved dropdowns

* style: a lot of changes; header, bookmarks, export, nav, convo, convoOptions

* fix: small style issues

* fix: button

* fix: bookmarks header menu

* fix: dropdown close glitch

* feat: Improve accessibility and keyboard navigation in ModelSpec component

* fix: Nav related type issues

* style: ConvoOptions theming and focus ring

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2024-08-16 04:30:14 -04:00
Vesna Tan
7f50d2f7c0 🤲 a11y: Add landmark to footer, labels updates, and sidebar text contrast (#3630)
* add landmark contentinfo to footer

* fix aria-label ai model menuitem for issue #3587

* changed group/today text color to #535353
2024-08-16 03:27:56 -04:00
Akash
93cf7bab02 🤲 a11y: Update Chat History aria-label & Controls role (#3604) 2024-08-16 03:12:09 -04:00
Yuichi Oneda
e47c3f40f0 🔧 fix: Bookmark Order Adjustment When Moving Up (#3634) 2024-08-16 03:01:35 -04:00
Danny Avila
0ba08b15de chore: Add conditional check for pull request repository in a11y workflow 2024-08-16 02:50:23 -04:00
Danny Avila
16e1f74a6c fix(useTextToSpeechBrowser): handle undefined window.speechSynthesis on some android devices 2024-08-16 02:49:59 -04:00
Danny Avila
dba704079c 🔀 refactor: Modularize TTS Logic for Improved Browser support (#3657)
* WIP: message audio refactor

* WIP: use MessageAudio by provider

* fix: Update MessageAudio component to use TTSEndpoints enum

* feat: Update useTextToSpeechBrowser hook to handle errors and improve error logging

* feat: Add voice dropdown components for different TTS engines

* docs: update incorrect `voices` example

changed `voice: ''` to `voices: ['alloy']`

* feat: Add browser support check for Edge TTS engine component with error toast if not supported

---------

Co-authored-by: Marco Beretta <81851188+berry-13@users.noreply.github.com>
2024-08-15 11:34:25 -04:00
Danny Avila
bcde0beb47 🛠️ feat: Azure OpenAI Assistants File Downloads (#3653) 2024-08-15 09:10:12 -04:00
Marco Beretta
741e3e8395 🚀 Update README - v0.7.4 2024-08-15 00:40:59 -04:00
Danny Avila
dc8d30ad90 🎧 fix(TTS): Improve State of audio playback, hook patterns, and fix undefined MediaSource (#3632) 2024-08-13 12:08:55 -04:00
Danny Avila
e3ebcfd2b1 🎙️ fix: Optimize and Fix Browser TTS Incompatibility (firefox) (#3627)
* fix: 'disable' MsEdgeTTS on unsupported browser (firefox)

* refactor: only pass necessary props to HoverButton MessageAudio

* refactor: Fix conditional comparison operators in MessageAudio component

* refactor: Remove console.log statement in MessageAudio component
2024-08-13 04:14:37 -04:00
Danny Avila
6655304753 🎙️ a11y: Screen Reader Support for Dynamic Content Updates (#3625)
* WIP: first pass, hooks

* wip: isStream arg

* feat: first pass, dynamic content updates, screen reader announcements

* chore: unrelated, styling redundancy
2024-08-13 03:04:27 -04:00
Danny Avila
05696233a9 🎛️ fix: Improve Frontend Practices for Audio Settings (#3624)
* refactor: do not call await inside useCallbacks, rely on updates for dropdown

* fix: remember last selected voice

* refactor: Update Speech component to use TypeScript in useCallback

* refactor: Update Dropdown component styles to match header theme
2024-08-13 02:42:49 -04:00
Marco Beretta
8cbb6ba166 📧 fix: @/+command timing for click selections - closes #3613 (#3617) 2024-08-12 09:06:23 -04:00
Danny Avila
02847af580 📜 refactor: Optimize Longer Message Thread Performance (#3610) 2024-08-11 06:08:08 -04:00
539 changed files with 36812 additions and 9998 deletions


@@ -65,6 +65,7 @@ PROXY=
# ANYSCALE_API_KEY=
# APIPIE_API_KEY=
# COHERE_API_KEY=
# DEEPSEEK_API_KEY=
# DATABRICKS_API_KEY=
# FIREWORKS_API_KEY=
# GROQ_API_KEY=
@@ -74,6 +75,7 @@ PROXY=
# PERPLEXITY_API_KEY=
# SHUTTLEAI_API_KEY=
# TOGETHERAI_API_KEY=
# UNIFY_API_KEY=
#============#
# Anthropic #
@@ -109,6 +111,26 @@ ANTHROPIC_API_KEY=user_provided
BINGAI_TOKEN=user_provided
# BINGAI_HOST=https://cn.bing.com
#=================#
# AWS Bedrock #
#=================#
# BEDROCK_AWS_DEFAULT_REGION=us-east-1 # A default region must be provided
# BEDROCK_AWS_ACCESS_KEY_ID=someAccessKey
# BEDROCK_AWS_SECRET_ACCESS_KEY=someSecretAccessKey
# Note: This example list is not meant to be exhaustive. If omitted, all known, supported model IDs will be included for you.
# BEDROCK_AWS_MODELS=anthropic.claude-3-5-sonnet-20240620-v1:0,meta.llama3-1-8b-instruct-v1:0
# See all Bedrock model IDs here: https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html#model-ids-arns
# Notes on specific models:
# The following models are not support due to not supporting streaming:
# ai21.j2-mid-v1
# The following models are not support due to not supporting conversation history:
# ai21.j2-ultra-v1, cohere.command-text-v14, cohere.command-light-text-v14
#============#
# Google #
#============#
@@ -147,7 +169,7 @@ GOOGLE_KEY=user_provided
#============#
OPENAI_API_KEY=user_provided
# OPENAI_MODELS=gpt-4o,gpt-4o-mini,gpt-3.5-turbo-0125,gpt-3.5-turbo-0301,gpt-3.5-turbo,gpt-4,gpt-4-0613,gpt-4-vision-preview,gpt-3.5-turbo-0613,gpt-3.5-turbo-16k-0613,gpt-4-0125-preview,gpt-4-turbo-preview,gpt-4-1106-preview,gpt-3.5-turbo-1106,gpt-3.5-turbo-instruct,gpt-3.5-turbo-instruct-0914,gpt-3.5-turbo-16k
# OPENAI_MODELS=gpt-4o,chatgpt-4o-latest,gpt-4o-mini,gpt-3.5-turbo-0125,gpt-3.5-turbo-0301,gpt-3.5-turbo,gpt-4,gpt-4-0613,gpt-4-vision-preview,gpt-3.5-turbo-0613,gpt-3.5-turbo-16k-0613,gpt-4-0125-preview,gpt-4-turbo-preview,gpt-4-1106-preview,gpt-3.5-turbo-1106,gpt-3.5-turbo-instruct,gpt-3.5-turbo-instruct-0914,gpt-3.5-turbo-16k
DEBUG_OPENAI=false
@@ -429,10 +451,10 @@ ALLOW_SHARED_LINKS_PUBLIC=true
# Static File Cache Control #
#==============================#
# Leave commented out to use default of 1 month for max-age and 1 week for s-maxage
# Leave commented out to use defaults: 1 day (86400 seconds) for s-maxage and 2 days (172800 seconds) for max-age
# NODE_ENV must be set to production for these to take effect
# STATIC_CACHE_MAX_AGE=604800
# STATIC_CACHE_S_MAX_AGE=259200
# STATIC_CACHE_MAX_AGE=172800
# STATIC_CACHE_S_MAX_AGE=86400
# If you have another service in front of your LibreChat doing compression, disable express based compression here
# DISABLE_COMPRESSION=true
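
For context on the cache-control values above, here is a minimal sketch of how such env values could be applied to statically served files with Express; it is an illustration under those assumptions, not LibreChat's actual staticCache middleware:

    // Sketch: apply max-age / s-maxage headers to static assets, using the
    // env defaults described above (1 day s-maxage, 2 days max-age).
    import express from 'express';

    const oneDayInSeconds = 86400;
    const sMaxAge = Number(process.env.STATIC_CACHE_S_MAX_AGE) || oneDayInSeconds;
    const maxAge = Number(process.env.STATIC_CACHE_MAX_AGE) || oneDayInSeconds * 2;

    const staticCache = (staticPath: string) =>
      express.static(staticPath, {
        setHeaders: (res) => {
          if (process.env.NODE_ENV === 'production') {
            res.setHeader('Cache-Control', `public, max-age=${maxAge}, s-maxage=${sMaxAge}`);
          }
        },
      });

    // usage: app.use(staticCache('client/dist'));
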


@@ -4,14 +4,23 @@ on:
pull_request:
paths:
- 'client/src/**'
workflow_dispatch:
inputs:
run_workflow:
description: 'Set to true to run this workflow'
required: true
default: 'false'
jobs:
axe-linter:
runs-on: ubuntu-latest
if: >
(github.event_name == 'pull_request' && github.event.pull_request.head.repo.full_name == 'danny-avila/LibreChat') ||
(github.event_name == 'workflow_dispatch' && github.event.inputs.run_workflow == 'true')
steps:
- uses: actions/checkout@v4
- uses: dequelabs/axe-linter-action@v1
with:
api_key: ${{ secrets.AXE_LINTER_API_KEY }}
github_token: ${{ secrets.GITHUB_TOKEN }}
github_token: ${{ secrets.GITHUB_TOKEN }}


@@ -53,4 +53,4 @@ jobs:
- name: Run unit tests
run: npm run test:ci --verbose
working-directory: client
working-directory: client

.vscode/launch.json (new file, 16 lines)

@@ -0,0 +1,16 @@
{
"version": "0.2.0",
"configurations": [
{
"type": "node",
"request": "launch",
"name": "Launch LibreChat (debug)",
"skipFiles": ["<node_internals>/**"],
"program": "${workspaceFolder}/api/server/index.js",
"env": {
"NODE_ENV": "production"
},
"console": "integratedTerminal"
}
]
}


@@ -1,4 +1,4 @@
# v0.7.4
# v0.7.5-rc2
# Base node image
FROM node:20-alpine AS node


@@ -1,43 +1,44 @@
# v0.7.4
# Dockerfile.multi
# v0.7.5-rc2
# Build API, Client and Data Provider
# Base for all builds
FROM node:20-alpine AS base
WORKDIR /app
RUN apk --no-cache add curl
RUN npm config set fetch-retry-maxtimeout 600000 && \
npm config set fetch-retries 5 && \
npm config set fetch-retry-mintimeout 15000
COPY package*.json ./
COPY packages/data-provider/package*.json ./packages/data-provider/
COPY client/package*.json ./client/
COPY api/package*.json ./api/
RUN npm ci
# Build data-provider
FROM base AS data-provider-build
WORKDIR /app/packages/data-provider
COPY ./packages/data-provider ./
RUN npm install; npm cache clean --force
COPY packages/data-provider ./
RUN npm run build
RUN npm prune --production
# React client build
# Client build
FROM base AS client-build
WORKDIR /app/client
COPY ./client/package*.json ./
# Copy data-provider to client's node_modules
COPY --from=data-provider-build /app/packages/data-provider/ /app/client/node_modules/librechat-data-provider/
RUN npm install; npm cache clean --force
COPY ./client/ ./
COPY client ./
COPY --from=data-provider-build /app/packages/data-provider/dist /app/packages/data-provider/dist
ENV NODE_OPTIONS="--max-old-space-size=2048"
RUN npm run build
RUN npm prune --production
# Node API setup
# API setup (including client dist)
FROM base AS api-build
WORKDIR /app
COPY api ./api
COPY config ./config
COPY --from=data-provider-build /app/packages/data-provider/dist ./packages/data-provider/dist
COPY --from=client-build /app/client/dist ./client/dist
WORKDIR /app/api
COPY api/package*.json ./
COPY api/ ./
# Copy helper scripts
COPY config/ ./
# Copy data-provider to API's node_modules
COPY --from=data-provider-build /app/packages/data-provider/ /app/api/node_modules/librechat-data-provider/
RUN npm install --include prod; npm cache clean --force
COPY --from=client-build /app/client/dist /app/client/dist
RUN npm prune --production
EXPOSE 3080
ENV HOST=0.0.0.0
CMD ["node", "server/index.js"]
# Nginx setup
FROM nginx:1.27.0-alpine AS prod-stage
COPY ./client/nginx.conf /etc/nginx/conf.d/default.conf
CMD ["nginx", "-g", "daemon off;"]


@@ -42,9 +42,11 @@
- 🖥️ UI matching ChatGPT, including Dark mode, Streaming, and latest updates
- 🤖 AI model selection:
- OpenAI, Azure OpenAI, BingAI, ChatGPT, Google Vertex AI, Anthropic (Claude), Plugins, Assistants API (including Azure Assistants)
- Anthropic (Claude), AWS Bedrock, OpenAI, Azure OpenAI, BingAI, ChatGPT, Google Vertex AI, Plugins, Assistants API (including Azure Assistants)
- ✅ Compatible across both **[Remote & Local AI services](https://www.librechat.ai/docs/configuration/librechat_yaml/ai_endpoints):**
- groq, Ollama, Cohere, Mistral AI, Apple MLX, koboldcpp, OpenRouter, together.ai, Perplexity, ShuttleAI, and more
- 🪄 Generative UI with **[Code Artifacts](https://youtu.be/GfTj7O4gmd0?si=WJbdnemZpJzBrJo3)**
- Create React, HTML code, and Mermaid diagrams right in chat
- 💾 Create, Save, & Share Custom Presets
- 🔀 Switch between AI Endpoints and Presets, mid-chat
- 🔄 Edit, Resubmit, and Continue Messages with Conversation branching
@@ -81,7 +83,7 @@ LibreChat brings together the future of assistant AIs with the revolutionary tec
With LibreChat, you no longer need to opt for ChatGPT Plus and can instead use free or pay-per-call APIs. We welcome contributions, cloning, and forking to enhance the capabilities of this advanced chatbot platform.
[![Watch the video](https://img.youtube.com/vi/bSVHEbVPNl4/maxresdefault.jpg)](https://www.youtube.com/watch?v=bSVHEbVPNl4)
[![Watch the video](https://raw.githubusercontent.com/LibreChat-AI/librechat.ai/main/public/images/changelog/v0.7.4.png)](https://www.youtube.com/watch?v=cvosUxogdpI)
Click on the thumbnail to open the video☝
---

View File

@@ -12,12 +12,13 @@ const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const {
truncateText,
formatMessage,
addCacheControl,
titleFunctionPrompt,
parseParamFromPrompt,
createContextHandlers,
} = require('./prompts');
const spendTokens = require('~/models/spendTokens');
const { getModelMaxTokens } = require('~/utils');
const { spendTokens, spendStructuredTokens } = require('~/models/spendTokens');
const { getModelMaxTokens, matchModelName } = require('~/utils');
const { sleep } = require('~/server/utils');
const BaseClient = require('./BaseClient');
const { logger } = require('~/config');
@@ -32,6 +33,7 @@ function delayBeforeRetry(attempts, baseDelay = 1000) {
return new Promise((resolve) => setTimeout(resolve, baseDelay * attempts));
}
const tokenEventTypes = new Set(['message_start', 'message_delta']);
const { legacy } = anthropicSettings;
class AnthropicClient extends BaseClient {
@@ -44,6 +46,24 @@ class AnthropicClient extends BaseClient {
? options.contextStrategy.toLowerCase()
: 'discard';
this.setOptions(options);
/** @type {string | undefined} */
this.systemMessage;
/** @type {AnthropicMessageStartEvent | undefined} */
this.message_start;
/** @type {AnthropicMessageDeltaEvent | undefined} */
this.message_delta;
/** Whether the model is part of the Claude 3 Family
* @type {boolean} */
this.isClaude3;
/** Whether to use Messages API or Completions API
* @type {boolean} */
this.useMessages;
/** Whether or not the model is limited to the legacy amount of output tokens
* @type {boolean} */
this.isLegacyOutput;
/** Whether or not the model supports Prompt Caching
* @type {boolean} */
this.supportsCacheControl;
}
setOptions(options) {
@@ -63,14 +83,19 @@ class AnthropicClient extends BaseClient {
this.options = options;
}
const modelOptions = this.options.modelOptions || {};
this.modelOptions = {
...modelOptions,
model: modelOptions.model || anthropicSettings.model.default,
};
this.modelOptions = Object.assign(
{
model: anthropicSettings.model.default,
},
this.modelOptions,
this.options.modelOptions,
);
this.isClaude3 = this.modelOptions.model.includes('claude-3');
this.isLegacyOutput = !this.modelOptions.model.includes('claude-3-5-sonnet');
const modelMatch = matchModelName(this.modelOptions.model, EModelEndpoint.anthropic);
this.isClaude3 = modelMatch.startsWith('claude-3');
this.isLegacyOutput = !modelMatch.startsWith('claude-3-5-sonnet');
this.supportsCacheControl =
this.options.promptCache && this.checkPromptCacheSupport(modelMatch);
if (
this.isLegacyOutput &&
@@ -147,19 +172,74 @@ class AnthropicClient extends BaseClient {
options.baseURL = this.options.reverseProxyUrl;
}
if (requestOptions?.model && requestOptions.model.includes('claude-3-5-sonnet')) {
if (
this.supportsCacheControl &&
requestOptions?.model &&
requestOptions.model.includes('claude-3-5-sonnet')
) {
options.defaultHeaders = {
'anthropic-beta': 'max-tokens-3-5-sonnet-2024-07-15',
'anthropic-beta': 'max-tokens-3-5-sonnet-2024-07-15,prompt-caching-2024-07-31',
};
} else if (this.supportsCacheControl) {
options.defaultHeaders = {
'anthropic-beta': 'prompt-caching-2024-07-31',
};
}
return new Anthropic(options);
}
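/*
 * Illustrative sketch (not part of the diff): with `promptCache` enabled and a
 * cache-capable model, the header branches above resolve roughly as follows.
 * The model names are examples taken from checkPromptCacheSupport, not an
 * exhaustive list.
 *
 *   // claude-3-5-sonnet -> both beta features
 *   // { 'anthropic-beta': 'max-tokens-3-5-sonnet-2024-07-15,prompt-caching-2024-07-31' }
 *   // claude-3-haiku -> prompt caching only
 *   // { 'anthropic-beta': 'prompt-caching-2024-07-31' }
 */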
getTokenCountForResponse(response) {
/**
* Get stream usage as returned by this client's API response.
* @returns {AnthropicStreamUsage} The stream usage object.
*/
getStreamUsage() {
const inputUsage = this.message_start?.message?.usage ?? {};
const outputUsage = this.message_delta?.usage ?? {};
return Object.assign({}, inputUsage, outputUsage);
}
/**
* Calculates the correct token count for the current message based on the token count map and API usage.
* Edge case: If the calculation results in a negative value, it returns the original estimate.
* If revisiting a conversation with a chat history entirely composed of token estimates,
* the cumulative token count going forward should become more accurate as the conversation progresses.
* @param {Object} params - The parameters for the calculation.
* @param {Record<string, number>} params.tokenCountMap - A map of message IDs to their token counts.
* @param {string} params.currentMessageId - The ID of the current message to calculate.
* @param {AnthropicStreamUsage} params.usage - The usage object returned by the API.
* @returns {number} The correct token count for the current message.
*/
calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage }) {
const originalEstimate = tokenCountMap[currentMessageId] || 0;
if (!usage || typeof usage.input_tokens !== 'number') {
return originalEstimate;
}
tokenCountMap[currentMessageId] = 0;
const totalTokensFromMap = Object.values(tokenCountMap).reduce((sum, count) => {
const numCount = Number(count);
return sum + (isNaN(numCount) ? 0 : numCount);
}, 0);
const totalInputTokens =
(usage.input_tokens ?? 0) +
(usage.cache_creation_input_tokens ?? 0) +
(usage.cache_read_input_tokens ?? 0);
const currentMessageTokens = totalInputTokens - totalTokensFromMap;
return currentMessageTokens > 0 ? currentMessageTokens : originalEstimate;
}
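/*
 * Worked sketch with assumed numbers (not from the diff): suppose the API reports
 * usage = { input_tokens: 40, cache_creation_input_tokens: 160, cache_read_input_tokens: 0 }
 * and the prior estimates are tokenCountMap = { 'msg-a': 60, 'msg-b': 75, 'msg-current': 50 }.
 *
 *   // the current message is zeroed, so totalTokensFromMap = 60 + 75 = 135
 *   // totalInputTokens = 40 + 160 + 0 = 200
 *   // corrected count = 200 - 135 = 65 (positive, so it replaces the estimate of 50)
 *   // a non-positive result would fall back to the original estimate instead
 */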
/**
* Get Token Count for LibreChat Message
* @param {TMessage} responseMessage
* @returns {number}
*/
getTokenCountForResponse(responseMessage) {
return this.getTokenCountForMessage({
role: 'assistant',
content: response.text,
content: responseMessage.text,
});
}
@@ -212,7 +292,38 @@ class AnthropicClient extends BaseClient {
return files;
}
async recordTokenUsage({ promptTokens, completionTokens, model, context = 'message' }) {
/**
* @param {object} params
* @param {number} params.promptTokens
* @param {number} params.completionTokens
* @param {AnthropicStreamUsage} [params.usage]
* @param {string} [params.model]
* @param {string} [params.context='message']
* @returns {Promise<void>}
*/
async recordTokenUsage({ promptTokens, completionTokens, usage, model, context = 'message' }) {
if (usage != null && usage?.input_tokens != null) {
const input = usage.input_tokens ?? 0;
const write = usage.cache_creation_input_tokens ?? 0;
const read = usage.cache_read_input_tokens ?? 0;
await spendStructuredTokens(
{
context,
user: this.user,
conversationId: this.conversationId,
model: model ?? this.modelOptions.model,
endpointTokenConfig: this.options.endpointTokenConfig,
},
{
promptTokens: { input, write, read },
completionTokens,
},
);
return;
}
await spendTokens(
{
context,
@@ -381,7 +492,10 @@ class AnthropicClient extends BaseClient {
identityPrefix = `${identityPrefix}\nYou are ${this.options.modelLabel}`;
}
let promptPrefix = (this.options.promptPrefix || '').trim();
let promptPrefix = (this.options.promptPrefix ?? '').trim();
if (typeof this.options.artifactsPrompt === 'string' && this.options.artifactsPrompt) {
promptPrefix = `${promptPrefix ?? ''}\n${this.options.artifactsPrompt}`.trim();
}
if (promptPrefix) {
// If the prompt prefix doesn't end with the end token, add it.
if (!promptPrefix.endsWith(`${this.endToken}`)) {
@@ -560,6 +674,18 @@ class AnthropicClient extends BaseClient {
: await client.completions.create(options);
}
/**
* @param {string} modelName
* @returns {boolean}
*/
checkPromptCacheSupport(modelName) {
const modelMatch = matchModelName(modelName, EModelEndpoint.anthropic);
if (modelMatch === 'claude-3-5-sonnet' || modelMatch === 'claude-3-haiku') {
return true;
}
return false;
}
async sendCompletion(payload, { onProgress, abortController }) {
if (!abortController) {
abortController = new AbortController();
@@ -606,10 +732,22 @@ class AnthropicClient extends BaseClient {
requestOptions.max_tokens_to_sample = maxOutputTokens || 1500;
}
if (this.systemMessage) {
if (this.systemMessage && this.supportsCacheControl === true) {
requestOptions.system = [
{
type: 'text',
text: this.systemMessage,
cache_control: { type: 'ephemeral' },
},
];
} else if (this.systemMessage) {
requestOptions.system = this.systemMessage;
}
if (this.supportsCacheControl === true && this.useMessages) {
requestOptions.messages = addCacheControl(requestOptions.messages);
}
logger.debug('[AnthropicClient]', { ...requestOptions });
const handleChunk = (currentChunk) => {
@@ -639,6 +777,11 @@ class AnthropicClient extends BaseClient {
for await (const completion of response) {
// Handle each completion as before
const type = completion?.type ?? '';
if (tokenEventTypes.has(type)) {
logger.debug(`[AnthropicClient] ${type}`, completion);
this[type] = completion;
}
if (completion?.delta?.text) {
handleChunk(completion.delta.text);
} else if (completion.completion) {
@@ -680,8 +823,10 @@ class AnthropicClient extends BaseClient {
getSaveOptions() {
return {
maxContextTokens: this.options.maxContextTokens,
artifacts: this.options.artifacts,
promptPrefix: this.options.promptPrefix,
modelLabel: this.options.modelLabel,
promptCache: this.options.promptCache,
resendFiles: this.options.resendFiles,
iconURL: this.options.iconURL,
greeting: this.options.greeting,
@@ -727,6 +872,8 @@ class AnthropicClient extends BaseClient {
*/
async titleConvo({ text, responseText = '' }) {
let title = 'New Chat';
this.message_delta = undefined;
this.message_start = undefined;
const convo = `<initial_message>
${truncateText(text)}
</initial_message>

View File

@@ -1,6 +1,14 @@
const crypto = require('crypto');
const fetch = require('node-fetch');
const { supportsBalanceCheck, Constants, CacheKeys, Time } = require('librechat-data-provider');
const {
supportsBalanceCheck,
isAgentsEndpoint,
paramEndpoints,
ErrorTypes,
Constants,
CacheKeys,
Time,
} = require('librechat-data-provider');
const { getMessages, saveMessage, updateMessage, saveConvo } = require('~/models');
const { addSpaceIfNeeded, isEnabled } = require('~/server/utils');
const checkBalance = require('~/models/checkBalance');
@@ -28,6 +36,12 @@ class BaseClient {
this.userMessagePromise;
/** @type {ClientDatabaseSavePromise} */
this.responsePromise;
/** @type {string} */
this.user;
/** @type {string} */
this.conversationId;
/** @type {string} */
this.responseMessageId;
}
setOptions() {
@@ -54,10 +68,33 @@ class BaseClient {
throw new Error('Subclasses attempted to call summarizeMessages without implementing it');
}
async getTokenCountForResponse(response) {
logger.debug('`[BaseClient] recordTokenUsage` not implemented.', response);
/**
* @returns {string}
*/
getResponseModel() {
if (isAgentsEndpoint(this.options.endpoint) && this.options.agent && this.options.agent.id) {
return this.options.agent.id;
}
return this.modelOptions.model;
}
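/*
 * Sketch (example values): for an agents request, the saved response message's
 * `model` field becomes the agent id (e.g. 'agent_abc123', a hypothetical id);
 * for every other endpoint it remains the literal model name from modelOptions.
 */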
/**
* Abstract method to get the token count for a message. Subclasses must implement this method.
* @param {TMessage} responseMessage
* @returns {number}
*/
getTokenCountForResponse(responseMessage) {
logger.debug('`[BaseClient] getTokenCountForResponse` not implemented.', responseMessage);
}
/**
* Abstract method to record token usage. Subclasses must implement this method.
* If a correction to the token usage is needed, the method should return an object with the corrected token counts.
* @param {number} promptTokens
* @param {number} completionTokens
* @returns {Promise<void>}
*/
async recordTokenUsage({ promptTokens, completionTokens }) {
logger.debug('`[BaseClient] recordTokenUsage` not implemented.', {
promptTokens,
@@ -143,6 +180,8 @@ class BaseClient {
this.currentMessages[this.currentMessages.length - 1].messageId = head;
}
this.responseMessageId = responseMessageId;
return {
...opts,
user,
@@ -191,6 +230,7 @@ class BaseClient {
userMessage,
conversationId,
responseMessageId,
sender: this.sender,
});
}
@@ -329,7 +369,12 @@ class BaseClient {
};
}
async handleContextStrategy({ instructions, orderedMessages, formattedMessages }) {
async handleContextStrategy({
instructions,
orderedMessages,
formattedMessages,
buildTokenMap = true,
}) {
let _instructions;
let tokenCount;
@@ -371,9 +416,10 @@ class BaseClient {
const latestMessage = orderedWithInstructions[orderedWithInstructions.length - 1];
if (payload.length === 0 && !shouldSummarize && latestMessage) {
throw new Error(
`Prompt token count of ${latestMessage.tokenCount} exceeds max token count of ${this.maxContextTokens}.`,
);
const info = `${latestMessage.tokenCount} / ${this.maxContextTokens}`;
const errorMessage = `{ "type": "${ErrorTypes.INPUT_LENGTH}", "info": "${info}" }`;
logger.warn(`Prompt token count exceeds max token count (${info}).`);
throw new Error(errorMessage);
}
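/*
 * Sketch (assumed numbers): the thrown message is a JSON string the client UI can
 * parse, e.g. '{ "type": "<ErrorTypes.INPUT_LENGTH>", "info": "4321 / 4096" }',
 * where 4321 would be the latest message's token count and 4096 the max context tokens.
 */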
if (usePrevSummary) {
@@ -398,19 +444,23 @@ class BaseClient {
maxContextTokens: this.maxContextTokens,
});
let tokenCountMap = orderedWithInstructions.reduce((map, message, index) => {
const { messageId } = message;
if (!messageId) {
/** @type {Record<string, number> | undefined} */
let tokenCountMap;
if (buildTokenMap) {
tokenCountMap = orderedWithInstructions.reduce((map, message, index) => {
const { messageId } = message;
if (!messageId) {
return map;
}
if (shouldSummarize && index === summaryIndex && !usePrevSummary) {
map.summaryMessage = { ...summaryMessage, messageId, tokenCount: summaryTokenCount };
}
map[messageId] = orderedWithInstructions[index].tokenCount;
return map;
}
if (shouldSummarize && index === summaryIndex && !usePrevSummary) {
map.summaryMessage = { ...summaryMessage, messageId, tokenCount: summaryTokenCount };
}
map[messageId] = orderedWithInstructions[index].tokenCount;
return map;
}, {});
}, {});
}
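/*
 * Sketch of the resulting shape (message ids are made up): with three ordered
 * messages the map might look like
 *   { 'msg-1': 60, 'msg-2': 75, 'msg-3': 50 }
 * plus a `summaryMessage` entry (with its own tokenCount) only when a new summary
 * was generated this turn. When buildTokenMap is false, the map stays undefined
 * and the token accounting block further down is skipped.
 */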
const promptTokens = this.maxContextTokens - remainingContextTokens;
@@ -512,6 +562,7 @@ class BaseClient {
});
}
/** @type {string|string[]|undefined} */
const completion = await this.sendCompletion(payload, opts);
this.abortController.requestCompleted = true;
@@ -521,28 +572,54 @@ class BaseClient {
parentMessageId: userMessage.messageId,
isCreatedByUser: false,
isEdited,
model: this.modelOptions.model,
model: this.getResponseModel(),
sender: this.sender,
text: addSpaceIfNeeded(generation) + completion,
promptTokens,
iconURL: this.options.iconURL,
endpoint: this.options.endpoint,
...(this.metadata ?? {}),
};
if (typeof completion === 'string') {
responseMessage.text = addSpaceIfNeeded(generation) + completion;
} else if (Array.isArray(completion) && paramEndpoints.has(this.options.endpoint)) {
responseMessage.text = '';
responseMessage.content = completion;
} else if (Array.isArray(completion)) {
responseMessage.text = addSpaceIfNeeded(generation) + completion.join('');
}
if (
tokenCountMap &&
this.recordTokenUsage &&
this.getTokenCountForResponse &&
this.getTokenCount
) {
responseMessage.tokenCount = this.getTokenCountForResponse(responseMessage);
const completionTokens = this.getTokenCount(completion);
await this.recordTokenUsage({ promptTokens, completionTokens });
let completionTokens;
/**
* Metadata about input/output costs for the current message. The client
* should provide a function to get the current stream usage metadata; if not,
* use the legacy token estimations.
* @type {StreamUsage | null} */
const usage = this.getStreamUsage != null ? this.getStreamUsage() : null;
if (usage != null && Number(usage.output_tokens) > 0) {
responseMessage.tokenCount = usage.output_tokens;
completionTokens = responseMessage.tokenCount;
await this.updateUserMessageTokenCount({ usage, tokenCountMap, userMessage, opts });
} else {
responseMessage.tokenCount = this.getTokenCountForResponse(responseMessage);
completionTokens = this.getTokenCount(completion);
}
await this.recordTokenUsage({ promptTokens, completionTokens, usage });
}
if (this.userMessagePromise) {
await this.userMessagePromise;
}
this.responsePromise = this.saveMessageToDatabase(responseMessage, saveOptions, user);
const messageCache = getLogStores(CacheKeys.MESSAGES);
messageCache.set(
@@ -557,6 +634,66 @@ class BaseClient {
return responseMessage;
}
/**
* Stream usage should only be used for user message token count re-calculation if:
* - The stream usage is available, with input tokens greater than 0,
* - the client provides a function to calculate the current token count,
* - files are resent with every message (the default), or `resendFiles` is `false` and there are no attachments,
* - the `promptPrefix` (custom instructions) is not set.
*
* In all other cases, the legacy token estimations are more accurate.
*
* TODO: include system messages in the `orderedMessages` accounting, potentially as a
* separate message in the UI. ChatGPT does this through "hidden" system messages.
* @param {object} params
* @param {StreamUsage} params.usage
* @param {Record<string, number>} params.tokenCountMap
* @param {TMessage} params.userMessage
* @param {object} params.opts
*/
async updateUserMessageTokenCount({ usage, tokenCountMap, userMessage, opts }) {
/** @type {boolean} */
const shouldUpdateCount =
this.calculateCurrentTokenCount != null &&
Number(usage.input_tokens) > 0 &&
(this.options.resendFiles ||
(!this.options.resendFiles && !this.options.attachments?.length)) &&
!this.options.promptPrefix;
if (!shouldUpdateCount) {
return;
}
const userMessageTokenCount = this.calculateCurrentTokenCount({
currentMessageId: userMessage.messageId,
tokenCountMap,
usage,
});
if (userMessageTokenCount === userMessage.tokenCount) {
return;
}
userMessage.tokenCount = userMessageTokenCount;
/*
Note: `AskController` saves the user message, so we update the count of its `userMessage` reference
*/
if (typeof opts?.getReqData === 'function') {
opts.getReqData({
userMessage,
});
}
/*
Note: we update the user message to be sure it gets the calculated token count;
though `AskController` saves the user message, EditController does not
*/
await this.userMessagePromise;
await this.updateMessageInDatabase({
messageId: userMessage.messageId,
tokenCount: userMessageTokenCount,
});
}
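/*
 * Sketch (assumed option values): the recalculation only runs when stream usage
 * reflects the full context, e.g.
 *   resendFiles: true, no promptPrefix            -> count is corrected
 *   resendFiles: false with attachments present   -> legacy estimate kept
 *   promptPrefix: 'Always answer in French'       -> legacy estimate kept
 */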
async loadHistory(conversationId, parentMessageId = null) {
logger.debug('[BaseClient] Loading history:', { conversationId, parentMessageId });
@@ -644,6 +781,10 @@ class BaseClient {
return { message: savedMessage, conversation };
}
/**
* Update a message in the database.
* @param {Partial<TMessage>} message
*/
async updateMessageInDatabase(message) {
await updateMessage(this.options.req, message);
}
@@ -767,8 +908,12 @@ class BaseClient {
processValue(nestedValue);
}
} else {
} else if (typeof value === 'string') {
numTokens += this.getTokenCount(value);
} else if (typeof value === 'number') {
numTokens += this.getTokenCount(value.toString());
} else if (typeof value === 'boolean') {
numTokens += this.getTokenCount(value.toString());
}
};
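/*
 * Sketch (assumed input): for a nested value such as
 *   { type: 'text', text: 'Hello', index: 0, cached: false }
 * the handler recurses into the object, tokenizes the string field, and now
 * stringifies the number and boolean fields before counting them, instead of
 * passing non-string values straight to the tokenizer.
 */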

View File

@@ -120,19 +120,7 @@ class GoogleClient extends BaseClient {
.filter((ex) => ex)
.filter((obj) => obj.input.content !== '' && obj.output.content !== '');
const modelOptions = this.options.modelOptions || {};
this.modelOptions = {
...modelOptions,
// set some good defaults (check for undefined in some cases because they may be 0)
model: modelOptions.model || settings.model.default,
temperature:
typeof modelOptions.temperature === 'undefined'
? settings.temperature.default
: modelOptions.temperature,
topP: typeof modelOptions.topP === 'undefined' ? settings.topP.default : modelOptions.topP,
topK: typeof modelOptions.topK === 'undefined' ? settings.topK.default : modelOptions.topK,
// stop: modelOptions.stop // no stop method for now
};
this.modelOptions = this.options.modelOptions || {};
this.options.attachments?.then((attachments) => this.checkVisionRequest(attachments));
@@ -402,8 +390,13 @@ class GoogleClient extends BaseClient {
parameters: this.modelOptions,
};
if (this.options.promptPrefix) {
payload.instances[0].context = this.options.promptPrefix;
let promptPrefix = (this.options.promptPrefix ?? '').trim();
if (typeof this.options.artifactsPrompt === 'string' && this.options.artifactsPrompt) {
promptPrefix = `${promptPrefix ?? ''}\n${this.options.artifactsPrompt}`.trim();
}
if (promptPrefix) {
payload.instances[0].context = promptPrefix;
}
if (this.options.examples.length > 0) {
@@ -457,7 +450,10 @@ class GoogleClient extends BaseClient {
identityPrefix = `${identityPrefix}\nYou are ${this.options.modelLabel}`;
}
let promptPrefix = (this.options.promptPrefix || '').trim();
let promptPrefix = (this.options.promptPrefix ?? '').trim();
if (typeof this.options.artifactsPrompt === 'string' && this.options.artifactsPrompt) {
promptPrefix = `${promptPrefix ?? ''}\n${this.options.artifactsPrompt}`.trim();
}
if (promptPrefix) {
// If the prompt prefix doesn't end with the end token, add it.
if (!promptPrefix.endsWith(`${this.endToken}`)) {
@@ -682,11 +678,16 @@ class GoogleClient extends BaseClient {
contents: _payload,
};
let promptPrefix = (this.options.promptPrefix ?? '').trim();
if (typeof this.options.artifactsPrompt === 'string' && this.options.artifactsPrompt) {
promptPrefix = `${promptPrefix ?? ''}\n${this.options.artifactsPrompt}`.trim();
}
if (promptPrefix.length) {
requestOptions.systemInstruction = {
parts: [
{
text: this.options.promptPrefix,
text: promptPrefix,
},
],
};
@@ -779,11 +780,16 @@ class GoogleClient extends BaseClient {
contents: _payload,
};
let promptPrefix = (this.options.promptPrefix ?? '').trim();
if (typeof this.options.artifactsPrompt === 'string' && this.options.artifactsPrompt) {
promptPrefix = `${promptPrefix ?? ''}\n${this.options.artifactsPrompt}`.trim();
}
if (promptPrefix.length) {
requestOptions.systemInstruction = {
parts: [
{
text: this.options.promptPrefix,
text: promptPrefix,
},
],
};
@@ -808,7 +814,7 @@ class GoogleClient extends BaseClient {
});
reply = titleResponse.content;
// TODO: RECORD TOKEN USAGE
return reply;
}
}
@@ -854,6 +860,7 @@ class GoogleClient extends BaseClient {
getSaveOptions() {
return {
artifacts: this.options.artifacts,
promptPrefix: this.options.promptPrefix,
modelLabel: this.options.modelLabel,
iconURL: this.options.iconURL,

View File

@@ -6,6 +6,7 @@ const {
ImageDetail,
EModelEndpoint,
resolveHeaders,
openAISettings,
ImageDetailCost,
CohereConstants,
getResponseSender,
@@ -27,9 +28,9 @@ const {
createContextHandlers,
} = require('./prompts');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const { spendTokens } = require('~/models/spendTokens');
const { isEnabled, sleep } = require('~/server/utils');
const { handleOpenAIErrors } = require('./tools/util');
const spendTokens = require('~/models/spendTokens');
const { createLLM, RunManager } = require('./llm');
const ChatGPTClient = require('./ChatGPTClient');
const { summaryBuffer } = require('./memory');
@@ -85,26 +86,13 @@ class OpenAIClient extends BaseClient {
this.apiKey = this.options.openaiApiKey;
}
const modelOptions = this.options.modelOptions || {};
if (!this.modelOptions) {
this.modelOptions = {
...modelOptions,
model: modelOptions.model || 'gpt-3.5-turbo',
temperature:
typeof modelOptions.temperature === 'undefined' ? 0.8 : modelOptions.temperature,
top_p: typeof modelOptions.top_p === 'undefined' ? 1 : modelOptions.top_p,
presence_penalty:
typeof modelOptions.presence_penalty === 'undefined' ? 1 : modelOptions.presence_penalty,
stop: modelOptions.stop,
};
} else {
// Update the modelOptions if it already exists
this.modelOptions = {
...this.modelOptions,
...modelOptions,
};
}
this.modelOptions = Object.assign(
{
model: openAISettings.model.default,
},
this.modelOptions,
this.options.modelOptions,
);
this.defaultVisionModel = this.options.visionModel ?? 'gpt-4-vision-preview';
if (typeof this.options.attachments?.then === 'function') {
@@ -413,6 +401,7 @@ class OpenAIClient extends BaseClient {
getSaveOptions() {
return {
artifacts: this.options.artifacts,
maxContextTokens: this.options.maxContextTokens,
chatGptLabel: this.options.chatGptLabel,
promptPrefix: this.options.promptPrefix,
@@ -475,6 +464,9 @@ class OpenAIClient extends BaseClient {
let promptTokens;
promptPrefix = (promptPrefix || this.options.promptPrefix || '').trim();
if (typeof this.options.artifactsPrompt === 'string' && this.options.artifactsPrompt) {
promptPrefix = `${promptPrefix ?? ''}\n${this.options.artifactsPrompt}`.trim();
}
if (this.options.attachments) {
const attachments = await this.options.attachments;
@@ -827,7 +819,7 @@ class OpenAIClient extends BaseClient {
const instructionsPayload = [
{
role: this.options.titleMessageRole ?? 'system',
role: this.options.titleMessageRole ?? (this.isOllama ? 'user' : 'system'),
content: `Please generate ${titleInstruction}
${convo}
@@ -1031,7 +1023,7 @@ ${convo}
async chatCompletion({ payload, onProgress, abortController = null }) {
let error = null;
const errorCallback = (err) => (error = err);
let intermediateReply = '';
const intermediateReply = [];
try {
if (!abortController) {
abortController = new AbortController();
@@ -1194,7 +1186,15 @@ ${convo}
}
let UnexpectedRoleError = false;
/** @type {Promise<void>} */
let streamPromise;
/** @type {(value: void | PromiseLike<void>) => void} */
let streamResolve;
if (modelOptions.stream) {
streamPromise = new Promise((resolve) => {
streamResolve = resolve;
});
const stream = await openai.beta.chat.completions
.stream({
...modelOptions,
@@ -1206,26 +1206,30 @@ ${convo}
.on('error', (err) => {
handleOpenAIErrors(err, errorCallback, 'stream');
})
.on('finalChatCompletion', (finalChatCompletion) => {
.on('finalChatCompletion', async (finalChatCompletion) => {
const finalMessage = finalChatCompletion?.choices?.[0]?.message;
if (finalMessage && finalMessage?.role !== 'assistant') {
if (!finalMessage) {
return;
}
await streamPromise;
if (finalMessage?.role !== 'assistant') {
finalChatCompletion.choices[0].message.role = 'assistant';
}
if (finalMessage && !finalMessage?.content?.trim()) {
finalChatCompletion.choices[0].message.content = intermediateReply;
if (typeof finalMessage.content !== 'string' || finalMessage.content.trim() === '') {
finalChatCompletion.choices[0].message.content = intermediateReply.join('');
}
})
.on('finalMessage', (message) => {
if (message?.role !== 'assistant') {
stream.messages.push({ role: 'assistant', content: intermediateReply });
stream.messages.push({ role: 'assistant', content: intermediateReply.join('') });
UnexpectedRoleError = true;
}
});
for await (const chunk of stream) {
const token = chunk.choices[0]?.delta?.content || '';
intermediateReply += token;
intermediateReply.push(token);
onProgress(token);
if (abortController.signal.aborted) {
stream.controller.abort();
@@ -1235,6 +1239,8 @@ ${convo}
await sleep(streamRate);
}
streamResolve();
if (!UnexpectedRoleError) {
chatCompletion = await stream.finalChatCompletion().catch((err) => {
handleOpenAIErrors(err, errorCallback, 'finalChatCompletion');
@@ -1262,19 +1268,29 @@ ${convo}
throw new Error('Chat completion failed');
}
const { message, finish_reason } = chatCompletion.choices[0];
if (chatCompletion) {
this.metadata = { finish_reason };
const { choices } = chatCompletion;
if (!Array.isArray(choices) || choices.length === 0) {
logger.warn('[OpenAIClient] Chat completion response has no choices');
return intermediateReply.join('');
}
const { message, finish_reason } = choices[0] ?? {};
this.metadata = { finish_reason };
logger.debug('[OpenAIClient] chatCompletion response', chatCompletion);
if (!message?.content?.trim() && intermediateReply.length) {
if (!message) {
logger.warn('[OpenAIClient] Message is undefined in chatCompletion response');
return intermediateReply.join('');
}
if (typeof message.content !== 'string' || message.content.trim() === '') {
const reply = intermediateReply.join('');
logger.debug(
'[OpenAIClient] chatCompletion: using intermediateReply due to empty message.content',
{ intermediateReply },
{ intermediateReply: reply },
);
return intermediateReply;
return reply;
}
return message.content;
@@ -1283,7 +1299,7 @@ ${convo}
err?.message?.includes('abort') ||
(err instanceof OpenAI.APIError && err?.message?.includes('abort'))
) {
return intermediateReply;
return intermediateReply.join('');
}
if (
err?.message?.includes(
@@ -1298,10 +1314,10 @@ ${convo}
(err instanceof OpenAI.OpenAIError && err?.message?.includes('missing finish_reason'))
) {
logger.error('[OpenAIClient] Known OpenAI error:', err);
return intermediateReply;
return intermediateReply.join('');
} else if (err instanceof OpenAI.APIError) {
if (intermediateReply) {
return intermediateReply;
if (intermediateReply.length > 0) {
return intermediateReply.join('');
} else {
throw err;
}

View File

@@ -42,6 +42,7 @@ class PluginsClient extends OpenAIClient {
getSaveOptions() {
return {
artifacts: this.options.artifacts,
chatGptLabel: this.options.chatGptLabel,
promptPrefix: this.options.promptPrefix,
tools: this.options.tools,
@@ -145,16 +146,22 @@ class PluginsClient extends OpenAIClient {
// initialize agent
const initializer = this.functionsAgent ? initializeFunctionsAgent : initializeCustomAgent;
let customInstructions = (this.options.promptPrefix ?? '').trim();
if (typeof this.options.artifactsPrompt === 'string' && this.options.artifactsPrompt) {
customInstructions = `${customInstructions ?? ''}\n${this.options.artifactsPrompt}`.trim();
}
this.executor = await initializer({
model,
signal,
pastMessages,
tools: this.tools,
customInstructions,
verbose: this.options.debug,
returnIntermediateSteps: true,
customName: this.options.chatGptLabel,
currentDateString: this.currentDateString,
customInstructions: this.options.promptPrefix,
callbackManager: CallbackManager.fromHandlers({
async handleAgentAction(action, runId) {
handleAction(action, runId, onAgentAction);

View File

@@ -1,5 +1,5 @@
const { createStartHandler } = require('~/app/clients/callbacks');
const spendTokens = require('~/models/spendTokens');
const { spendTokens } = require('~/models/spendTokens');
const { logger } = require('~/config');
class RunManager {

View File

@@ -8,7 +8,7 @@ const { isEnabled } = require('~/server/utils');
* @param {Object} options - The options for creating the LLM.
* @param {ModelOptions} options.modelOptions - The options specific to the model, including modelName, temperature, presence_penalty, frequency_penalty, and other model-related settings.
* @param {ConfigOptions} options.configOptions - Configuration options for the API requests, including proxy settings and custom headers.
* @param {Callbacks} options.callbacks - Callback functions for managing the lifecycle of the LLM, including token buffers, context, and initial message count.
* @param {Callbacks} [options.callbacks] - Callback functions for managing the lifecycle of the LLM, including token buffers, context, and initial message count.
* @param {boolean} [options.streaming=false] - Determines if the LLM should operate in streaming mode.
* @param {string} options.openAIApiKey - The API key for OpenAI, used for authentication.
* @param {AzureOptions} [options.azure={}] - Optional Azure-specific configurations. If provided, Azure configurations take precedence over OpenAI configurations.

View File

@@ -0,0 +1,43 @@
/**
* Anthropic API: Adds cache control to the appropriate user messages in the payload.
* @param {Array<AnthropicMessage>} messages - The array of message objects.
* @returns {Array<AnthropicMessage>} - The updated array of message objects with cache control added.
*/
function addCacheControl(messages) {
if (!Array.isArray(messages) || messages.length < 2) {
return messages;
}
const updatedMessages = [...messages];
let userMessagesModified = 0;
for (let i = updatedMessages.length - 1; i >= 0 && userMessagesModified < 2; i--) {
const message = updatedMessages[i];
if (message.role !== 'user') {
continue;
}
if (typeof message.content === 'string') {
message.content = [
{
type: 'text',
text: message.content,
cache_control: { type: 'ephemeral' },
},
];
userMessagesModified++;
} else if (Array.isArray(message.content)) {
for (let j = message.content.length - 1; j >= 0; j--) {
if (message.content[j].type === 'text') {
message.content[j].cache_control = { type: 'ephemeral' };
userMessagesModified++;
break;
}
}
}
}
return updatedMessages;
}
module.exports = addCacheControl;
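/*
 * Usage sketch (message shapes are illustrative): the Anthropic client calls this
 * right before dispatching a cache-enabled request, as in
 * `requestOptions.messages = addCacheControl(requestOptions.messages)` above.
 *
 *   const messages = [
 *     { role: 'user', content: 'Hello' },
 *     { role: 'assistant', content: 'Hi there' },
 *     { role: 'user', content: 'How are you?' },
 *   ];
 *   // both user turns end up with a { type: 'ephemeral' } cache_control
 *   // breakpoint on their last text block
 *   const cached = addCacheControl(messages);
 */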

View File

@@ -0,0 +1,227 @@
const addCacheControl = require('./addCacheControl');
describe('addCacheControl', () => {
test('should add cache control to the last two user messages with array content', () => {
const messages = [
{ role: 'user', content: [{ type: 'text', text: 'Hello' }] },
{ role: 'assistant', content: [{ type: 'text', text: 'Hi there' }] },
{ role: 'user', content: [{ type: 'text', text: 'How are you?' }] },
{ role: 'assistant', content: [{ type: 'text', text: 'I\'m doing well, thanks!' }] },
{ role: 'user', content: [{ type: 'text', text: 'Great!' }] },
];
const result = addCacheControl(messages);
expect(result[0].content[0]).not.toHaveProperty('cache_control');
expect(result[2].content[0].cache_control).toEqual({ type: 'ephemeral' });
expect(result[4].content[0].cache_control).toEqual({ type: 'ephemeral' });
});
test('should add cache control to the last two user messages with string content', () => {
const messages = [
{ role: 'user', content: 'Hello' },
{ role: 'assistant', content: 'Hi there' },
{ role: 'user', content: 'How are you?' },
{ role: 'assistant', content: 'I\'m doing well, thanks!' },
{ role: 'user', content: 'Great!' },
];
const result = addCacheControl(messages);
expect(result[0].content).toBe('Hello');
expect(result[2].content[0]).toEqual({
type: 'text',
text: 'How are you?',
cache_control: { type: 'ephemeral' },
});
expect(result[4].content[0]).toEqual({
type: 'text',
text: 'Great!',
cache_control: { type: 'ephemeral' },
});
});
test('should handle mixed string and array content', () => {
const messages = [
{ role: 'user', content: 'Hello' },
{ role: 'assistant', content: 'Hi there' },
{ role: 'user', content: [{ type: 'text', text: 'How are you?' }] },
];
const result = addCacheControl(messages);
expect(result[0].content[0]).toEqual({
type: 'text',
text: 'Hello',
cache_control: { type: 'ephemeral' },
});
expect(result[2].content[0].cache_control).toEqual({ type: 'ephemeral' });
});
test('should handle less than two user messages', () => {
const messages = [
{ role: 'user', content: 'Hello' },
{ role: 'assistant', content: 'Hi there' },
];
const result = addCacheControl(messages);
expect(result[0].content[0]).toEqual({
type: 'text',
text: 'Hello',
cache_control: { type: 'ephemeral' },
});
expect(result[1].content).toBe('Hi there');
});
test('should return original array if no user messages', () => {
const messages = [
{ role: 'assistant', content: 'Hi there' },
{ role: 'assistant', content: 'How can I help?' },
];
const result = addCacheControl(messages);
expect(result).toEqual(messages);
});
test('should handle empty array', () => {
const messages = [];
const result = addCacheControl(messages);
expect(result).toEqual([]);
});
test('should handle non-array input', () => {
const messages = 'not an array';
const result = addCacheControl(messages);
expect(result).toBe('not an array');
});
test('should not modify assistant messages', () => {
const messages = [
{ role: 'user', content: 'Hello' },
{ role: 'assistant', content: 'Hi there' },
{ role: 'user', content: 'How are you?' },
];
const result = addCacheControl(messages);
expect(result[1].content).toBe('Hi there');
});
test('should handle multiple content items in user messages', () => {
const messages = [
{
role: 'user',
content: [
{ type: 'text', text: 'Hello' },
{ type: 'image', url: 'http://example.com/image.jpg' },
{ type: 'text', text: 'This is an image' },
],
},
{ role: 'assistant', content: 'Hi there' },
{ role: 'user', content: 'How are you?' },
];
const result = addCacheControl(messages);
expect(result[0].content[0]).not.toHaveProperty('cache_control');
expect(result[0].content[1]).not.toHaveProperty('cache_control');
expect(result[0].content[2].cache_control).toEqual({ type: 'ephemeral' });
expect(result[2].content[0]).toEqual({
type: 'text',
text: 'How are you?',
cache_control: { type: 'ephemeral' },
});
});
test('should handle an array with mixed content types', () => {
const messages = [
{ role: 'user', content: 'Hello' },
{ role: 'assistant', content: 'Hi there' },
{ role: 'user', content: [{ type: 'text', text: 'How are you?' }] },
{ role: 'assistant', content: 'I\'m doing well, thanks!' },
{ role: 'user', content: 'Great!' },
];
const result = addCacheControl(messages);
expect(result[0].content).toEqual('Hello');
expect(result[2].content[0]).toEqual({
type: 'text',
text: 'How are you?',
cache_control: { type: 'ephemeral' },
});
expect(result[4].content).toEqual([
{
type: 'text',
text: 'Great!',
cache_control: { type: 'ephemeral' },
},
]);
expect(result[1].content).toBe('Hi there');
expect(result[3].content).toBe('I\'m doing well, thanks!');
});
test('should handle edge case with multiple content types', () => {
const messages = [
{
role: 'user',
content: [
{
type: 'image',
source: { type: 'base64', media_type: 'image/png', data: 'some_base64_string' },
},
{
type: 'image',
source: { type: 'base64', media_type: 'image/png', data: 'another_base64_string' },
},
{ type: 'text', text: 'what do all these images have in common' },
],
},
{ role: 'assistant', content: 'I see multiple images.' },
{ role: 'user', content: 'Correct!' },
];
const result = addCacheControl(messages);
expect(result[0].content[0]).not.toHaveProperty('cache_control');
expect(result[0].content[1]).not.toHaveProperty('cache_control');
expect(result[0].content[2].cache_control).toEqual({ type: 'ephemeral' });
expect(result[2].content[0]).toEqual({
type: 'text',
text: 'Correct!',
cache_control: { type: 'ephemeral' },
});
});
test('should handle user message with no text block', () => {
const messages = [
{
role: 'user',
content: [
{
type: 'image',
source: { type: 'base64', media_type: 'image/png', data: 'some_base64_string' },
},
{
type: 'image',
source: { type: 'base64', media_type: 'image/png', data: 'another_base64_string' },
},
],
},
{ role: 'assistant', content: 'I see two images.' },
{ role: 'user', content: 'Correct!' },
];
const result = addCacheControl(messages);
expect(result[0].content[0]).not.toHaveProperty('cache_control');
expect(result[0].content[1]).not.toHaveProperty('cache_control');
expect(result[2].content[0]).toEqual({
type: 'text',
text: 'Correct!',
cache_control: { type: 'ephemeral' },
});
});
});

View File

@@ -0,0 +1,527 @@
const dedent = require('dedent');
const { EModelEndpoint, ArtifactModes } = require('librechat-data-provider');
const { generateShadcnPrompt } = require('~/app/clients/prompts/shadcn-docs/generate');
const { components } = require('~/app/clients/prompts/shadcn-docs/components');
// eslint-disable-next-line no-unused-vars
const artifactsPromptV1 = dedent`The assistant can create and reference artifacts during conversations.
Artifacts are for substantial, self-contained content that users might modify or reuse, displayed in a separate UI window for clarity.
# Good artifacts are...
- Substantial content (>15 lines)
- Content that the user is likely to modify, iterate on, or take ownership of
- Self-contained, complex content that can be understood on its own, without context from the conversation
- Content intended for eventual use outside the conversation (e.g., reports, emails, presentations)
- Content likely to be referenced or reused multiple times
# Don't use artifacts for...
- Simple, informational, or short content, such as brief code snippets, mathematical equations, or small examples
- Primarily explanatory, instructional, or illustrative content, such as examples provided to clarify a concept
- Suggestions, commentary, or feedback on existing artifacts
- Conversational or explanatory content that doesn't represent a standalone piece of work
- Content that is dependent on the current conversational context to be useful
- Content that is unlikely to be modified or iterated upon by the user
- Requests from users that appear to be one-off questions
# Usage notes
- One artifact per message unless specifically requested
- Prefer in-line content (don't use artifacts) when possible. Unnecessary use of artifacts can be jarring for users.
- If a user asks the assistant to "draw an SVG" or "make a website," the assistant does not need to explain that it doesn't have these capabilities. Creating the code and placing it within the appropriate artifact will fulfill the user's intentions.
- If asked to generate an image, the assistant can offer an SVG instead. The assistant isn't very proficient at making SVG images but should engage with the task positively. Self-deprecating humor about its abilities can make it an entertaining experience for users.
- The assistant errs on the side of simplicity and avoids overusing artifacts for content that can be effectively presented within the conversation.
- Always provide complete, specific, and fully functional content without any placeholders, ellipses, or 'remains the same' comments.
<artifact_instructions>
When collaborating with the user on creating content that falls into compatible categories, the assistant should follow these steps:
1. Create the artifact using the following format:
:::artifact{identifier="unique-identifier" type="mime-type" title="Artifact Title"}
\`\`\`
Your artifact content here
\`\`\`
:::
2. Assign an identifier to the \`identifier\` attribute. For updates, reuse the prior identifier. For new artifacts, the identifier should be descriptive and relevant to the content, using kebab-case (e.g., "example-code-snippet"). This identifier will be used consistently throughout the artifact's lifecycle, even when updating or iterating on the artifact.
3. Include a \`title\` attribute to provide a brief title or description of the content.
4. Add a \`type\` attribute to specify the type of content the artifact represents. Assign one of the following values to the \`type\` attribute:
- HTML: "text/html"
- The user interface can render single file HTML pages placed within the artifact tags. HTML, JS, and CSS should be in a single file when using the \`text/html\` type.
- Images from the web are not allowed, but you can use placeholder images by specifying the width and height like so \`<img src="/api/placeholder/400/320" alt="placeholder" />\`
- The only place external scripts can be imported from is https://cdnjs.cloudflare.com
- Mermaid Diagrams: "application/vnd.mermaid"
- The user interface will render Mermaid diagrams placed within the artifact tags.
- React Components: "application/vnd.react"
- Use this for displaying either: React elements, e.g. \`<strong>Hello World!</strong>\`, React pure functional components, e.g. \`() => <strong>Hello World!</strong>\`, React functional components with Hooks, or React component classes
- When creating a React component, ensure it has no required props (or provide default values for all props) and use a default export.
- Use Tailwind classes for styling. DO NOT USE ARBITRARY VALUES (e.g. \`h-[600px]\`).
- Base React is available to be imported. To use hooks, first import it at the top of the artifact, e.g. \`import { useState } from "react"\`
- The lucide-react@0.263.1 library is available to be imported. e.g. \`import { Camera } from "lucide-react"\` & \`<Camera color="red" size={48} />\`
- The recharts charting library is available to be imported, e.g. \`import { LineChart, XAxis, ... } from "recharts"\` & \`<LineChart ...><XAxis dataKey="name"> ...\`
- The assistant can use prebuilt components from the \`shadcn/ui\` library after it is imported: \`import { Alert, AlertDescription, AlertTitle, AlertDialog, AlertDialogAction } from '/components/ui/alert';\`. If using components from the shadcn/ui library, the assistant mentions this to the user and offers to help them install the components if necessary.
- Components MUST be imported from \`/components/ui/name\` and NOT from \`/components/name\` or \`@/components/ui/name\`.
- NO OTHER LIBRARIES (e.g. zod, hookform) ARE INSTALLED OR ABLE TO BE IMPORTED.
- Images from the web are not allowed, but you can use placeholder images by specifying the width and height like so \`<img src="/api/placeholder/400/320" alt="placeholder" />\`
- If you are unable to follow the above requirements for any reason, don't use artifacts and use regular code blocks instead, which will not attempt to render the component.
5. Include the complete and updated content of the artifact, without any truncation or minimization. Don't use "// rest of the code remains the same...".
6. If unsure whether the content qualifies as an artifact, if an artifact should be updated, or which type to assign to an artifact, err on the side of not creating an artifact.
7. Always use triple backticks (\`\`\`) to enclose the content within the artifact, regardless of the content type.
</artifact_instructions>
Here are some examples of correct usage of artifacts:
<examples>
<example_docstring>
This example demonstrates how to create a Mermaid artifact for a simple flow chart.
</example_docstring>
<example>
<user_query>Can you create a simple flow chart showing the process of making tea using Mermaid?</user_query>
<assistant_response>
Sure! Here's a simple flow chart depicting the process of making tea using Mermaid syntax:
:::artifact{identifier="tea-making-flowchart" type="application/vnd.mermaid" title="Flow chart: Making Tea"}
\`\`\`mermaid
graph TD
A[Start] --> B{Water boiled?}
B -->|Yes| C[Add tea leaves to cup]
B -->|No| D[Boil water]
D --> B
C --> E[Pour boiling water into cup]
E --> F[Steep tea for desired time]
F --> G[Remove tea leaves]
G --> H[Add milk or sugar, if desired]
H --> I[Enjoy your tea!]
I --> J[End]
\`\`\`
:::
This flow chart uses Mermaid syntax to visualize the steps involved in making a cup of tea. Here's a brief explanation of the process:
1. Start
2. Check if water is boiled
3. If not boiled, boil the water
4. Once water is boiled, add tea leaves to the cup
5. Pour boiling water into the cup
6. Steep the tea for the desired time
7. Remove the tea leaves
8. Optionally add milk or sugar
9. Enjoy your tea!
10. End
This chart provides a clear visual representation of the tea-making process. You can easily modify or expand this chart if you want to add more details or steps to the process. Let me know if you'd like any changes or have any questions!
</assistant_response>
</example>
</examples>`;
const artifactsPrompt = dedent`The assistant can create and reference artifacts during conversations.
Artifacts are for substantial, self-contained content that users might modify or reuse, displayed in a separate UI window for clarity.
# Good artifacts are...
- Substantial content (>15 lines)
- Content that the user is likely to modify, iterate on, or take ownership of
- Self-contained, complex content that can be understood on its own, without context from the conversation
- Content intended for eventual use outside the conversation (e.g., reports, emails, presentations)
- Content likely to be referenced or reused multiple times
# Don't use artifacts for...
- Simple, informational, or short content, such as brief code snippets, mathematical equations, or small examples
- Primarily explanatory, instructional, or illustrative content, such as examples provided to clarify a concept
- Suggestions, commentary, or feedback on existing artifacts
- Conversational or explanatory content that doesn't represent a standalone piece of work
- Content that is dependent on the current conversational context to be useful
- Content that is unlikely to be modified or iterated upon by the user
- Requests from users that appear to be one-off questions
# Usage notes
- One artifact per message unless specifically requested
- Prefer in-line content (don't use artifacts) when possible. Unnecessary use of artifacts can be jarring for users.
- If a user asks the assistant to "draw an SVG" or "make a website," the assistant does not need to explain that it doesn't have these capabilities. Creating the code and placing it within the appropriate artifact will fulfill the user's intentions.
- If asked to generate an image, the assistant can offer an SVG instead. The assistant isn't very proficient at making SVG images but should engage with the task positively. Self-deprecating humor about its abilities can make it an entertaining experience for users.
- The assistant errs on the side of simplicity and avoids overusing artifacts for content that can be effectively presented within the conversation.
- Always provide complete, specific, and fully functional content for artifacts without any snippets, placeholders, ellipses, or 'remains the same' comments.
- If an artifact is not necessary or requested, the assistant should not mention artifacts at all, and respond to the user accordingly.
<artifact_instructions>
When collaborating with the user on creating content that falls into compatible categories, the assistant should follow these steps:
1. Create the artifact using the following format:
:::artifact{identifier="unique-identifier" type="mime-type" title="Artifact Title"}
\`\`\`
Your artifact content here
\`\`\`
:::
2. Assign an identifier to the \`identifier\` attribute. For updates, reuse the prior identifier. For new artifacts, the identifier should be descriptive and relevant to the content, using kebab-case (e.g., "example-code-snippet"). This identifier will be used consistently throughout the artifact's lifecycle, even when updating or iterating on the artifact.
3. Include a \`title\` attribute to provide a brief title or description of the content.
4. Add a \`type\` attribute to specify the type of content the artifact represents. Assign one of the following values to the \`type\` attribute:
- HTML: "text/html"
- The user interface can render single file HTML pages placed within the artifact tags. HTML, JS, and CSS should be in a single file when using the \`text/html\` type.
- Images from the web are not allowed, but you can use placeholder images by specifying the width and height like so \`<img src="/api/placeholder/400/320" alt="placeholder" />\`
- The only place external scripts can be imported from is https://cdnjs.cloudflare.com
- SVG: "image/svg+xml"
- The user interface will render the Scalable Vector Graphics (SVG) image within the artifact tags.
- The assistant should specify the viewbox of the SVG rather than defining a width/height
- Mermaid Diagrams: "application/vnd.mermaid"
- The user interface will render Mermaid diagrams placed within the artifact tags.
- React Components: "application/vnd.react"
- Use this for displaying either: React elements, e.g. \`<strong>Hello World!</strong>\`, React pure functional components, e.g. \`() => <strong>Hello World!</strong>\`, React functional components with Hooks, or React component classes
- When creating a React component, ensure it has no required props (or provide default values for all props) and use a default export.
- Use Tailwind classes for styling. DO NOT USE ARBITRARY VALUES (e.g. \`h-[600px]\`).
- Base React is available to be imported. To use hooks, first import it at the top of the artifact, e.g. \`import { useState } from "react"\`
- The lucide-react@0.394.0 library is available to be imported. e.g. \`import { Camera } from "lucide-react"\` & \`<Camera color="red" size={48} />\`
- The recharts charting library is available to be imported, e.g. \`import { LineChart, XAxis, ... } from "recharts"\` & \`<LineChart ...><XAxis dataKey="name"> ...\`
- The three.js library is available to be imported, e.g. \`import * as THREE from "three";\`
- The date-fns library is available to be imported, e.g. \`import { compareAsc, format } from "date-fns";\`
- The react-day-picker library is available to be imported, e.g. \`import { DayPicker } from "react-day-picker";\`
- The assistant can use prebuilt components from the \`shadcn/ui\` library after it is imported: \`import { Alert, AlertDescription, AlertTitle, AlertDialog, AlertDialogAction } from '/components/ui/alert';\`. If using components from the shadcn/ui library, the assistant mentions this to the user and offers to help them install the components if necessary.
- Components MUST be imported from \`/components/ui/name\` and NOT from \`/components/name\` or \`@/components/ui/name\`.
- NO OTHER LIBRARIES (e.g. zod, hookform) ARE INSTALLED OR ABLE TO BE IMPORTED.
- Images from the web are not allowed, but you can use placeholder images by specifying the width and height like so \`<img src="/api/placeholder/400/320" alt="placeholder" />\`
- When iterating on code, ensure that the code is complete and functional without any snippets, placeholders, or ellipses.
- If you are unable to follow the above requirements for any reason, don't use artifacts and use regular code blocks instead, which will not attempt to render the component.
5. Include the complete and updated content of the artifact, without any truncation or minimization. Don't use "// rest of the code remains the same...".
6. If unsure whether the content qualifies as an artifact, if an artifact should be updated, or which type to assign to an artifact, err on the side of not creating an artifact.
7. Always use triple backticks (\`\`\`) to enclose the content within the artifact, regardless of the content type.
</artifact_instructions>
Here are some examples of correct usage of artifacts:
<examples>
<example_docstring>
This example demonstrates how to create a Mermaid artifact for a simple flow chart.
</example_docstring>
<example>
<user_query>Can you create a simple flow chart showing the process of making tea using Mermaid?</user_query>
<assistant_response>
Sure! Here's a simple flow chart depicting the process of making tea using Mermaid syntax:
:::artifact{identifier="tea-making-flowchart" type="application/vnd.mermaid" title="Flow chart: Making Tea"}
\`\`\`mermaid
graph TD
A[Start] --> B{Water boiled?}
B -->|Yes| C[Add tea leaves to cup]
B -->|No| D[Boil water]
D --> B
C --> E[Pour boiling water into cup]
E --> F[Steep tea for desired time]
F --> G[Remove tea leaves]
G --> H[Add milk or sugar, if desired]
H --> I[Enjoy your tea!]
I --> J[End]
\`\`\`
:::
This flow chart uses Mermaid syntax to visualize the steps involved in making a cup of tea. Here's a brief explanation of the process:
1. Start
2. Check if water is boiled
3. If not boiled, boil the water
4. Once water is boiled, add tea leaves to the cup
5. Pour boiling water into the cup
6. Steep the tea for the desired time
7. Remove the tea leaves
8. Optionally add milk or sugar
9. Enjoy your tea!
10. End
This chart provides a clear visual representation of the tea-making process. You can easily modify or expand this chart if you want to add more details or steps to the process. Let me know if you'd like any changes or have any questions!
</assistant_response>
</example>
<example>
<user_query>Create a simple React counter component</user_query>
<assistant_response>
Here's a simple React counter component:
:::artifact{identifier="react-counter" type="application/vnd.react" title="React Counter"}
\`\`\`
import { useState } from 'react';
export default function Counter() {
const [count, setCount] = useState(0);
return (
<div className="p-4">
<p className="mb-2">Count: {count}</p>
<button className="bg-blue-500 text-white px-4 py-2 rounded" onClick={() => setCount(count + 1)}>
Increment
</button>
</div>
);
}
\`\`\`
:::
This component creates a simple counter with an increment button.
</assistant_response>
</example>
<example>
<user_query>Create a basic HTML structure for a blog post</user_query>
<assistant_response>
Here's a basic HTML structure for a blog post:
:::artifact{identifier="blog-post-html" type="text/html" title="Blog Post HTML"}
\`\`\`
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>My Blog Post</title>
<style>
body { font-family: Arial, sans-serif; line-height: 1.6; max-width: 800px; margin: 0 auto; padding: 20px; }
h1 { color: #333; }
p { margin-bottom: 15px; }
</style>
</head>
<body>
<header>
<h1>My First Blog Post</h1>
</header>
<main>
<article>
<p>This is the content of my blog post. It's short and sweet!</p>
</article>
</main>
<footer>
<p>&copy; 2023 My Blog</p>
</footer>
</body>
</html>
\`\`\`
:::
This HTML structure provides a simple layout for a blog post.
</assistant_response>
</example>
</examples>`;
const artifactsOpenAIPrompt = dedent`The assistant can create and reference artifacts during conversations.
Artifacts are for substantial, self-contained content that users might modify or reuse, displayed in a separate UI window for clarity.
# Good artifacts are...
- Substantial content (>15 lines)
- Content that the user is likely to modify, iterate on, or take ownership of
- Self-contained, complex content that can be understood on its own, without context from the conversation
- Content intended for eventual use outside the conversation (e.g., reports, emails, presentations)
- Content likely to be referenced or reused multiple times
# Don't use artifacts for...
- Simple, informational, or short content, such as brief code snippets, mathematical equations, or small examples
- Primarily explanatory, instructional, or illustrative content, such as examples provided to clarify a concept
- Suggestions, commentary, or feedback on existing artifacts
- Conversational or explanatory content that doesn't represent a standalone piece of work
- Content that is dependent on the current conversational context to be useful
- Content that is unlikely to be modified or iterated upon by the user
- Requests from users that appear to be one-off questions
# Usage notes
- One artifact per message unless specifically requested
- Prefer in-line content (don't use artifacts) when possible. Unnecessary use of artifacts can be jarring for users.
- If a user asks the assistant to "draw an SVG" or "make a website," the assistant does not need to explain that it doesn't have these capabilities. Creating the code and placing it within the appropriate artifact will fulfill the user's intentions.
- If asked to generate an image, the assistant can offer an SVG instead. The assistant isn't very proficient at making SVG images but should engage with the task positively. Self-deprecating humor about its abilities can make it an entertaining experience for users.
- The assistant errs on the side of simplicity and avoids overusing artifacts for content that can be effectively presented within the conversation.
- Always provide complete, specific, and fully functional content for artifacts without any snippets, placeholders, ellipses, or 'remains the same' comments.
- If an artifact is not necessary or requested, the assistant should not mention artifacts at all, and respond to the user accordingly.
## Artifact Instructions
When collaborating with the user on creating content that falls into compatible categories, the assistant should follow these steps:
1. Create the artifact using the following remark-directive markdown format:
:::artifact{identifier="unique-identifier" type="mime-type" title="Artifact Title"}
\`\`\`
Your artifact content here
\`\`\`
:::
a. Example of correct format:
:::artifact{identifier="example-artifact" type="text/plain" title="Example Artifact"}
\`\`\`
This is the content of the artifact.
It can span multiple lines.
\`\`\`
:::
b. Common mistakes to avoid:
- Don't split the opening ::: line
- Don't add extra backticks outside the artifact structure
- Don't omit the closing :::
2. Assign an identifier to the \`identifier\` attribute. For updates, reuse the prior identifier. For new artifacts, the identifier should be descriptive and relevant to the content, using kebab-case (e.g., "example-code-snippet"). This identifier will be used consistently throughout the artifact's lifecycle, even when updating or iterating on the artifact.
3. Include a \`title\` attribute to provide a brief title or description of the content.
4. Add a \`type\` attribute to specify the type of content the artifact represents. Assign one of the following values to the \`type\` attribute:
- HTML: "text/html"
- The user interface can render single file HTML pages placed within the artifact tags. HTML, JS, and CSS should be in a single file when using the \`text/html\` type.
- Images from the web are not allowed, but you can use placeholder images by specifying the width and height like so \`<img src="/api/placeholder/400/320" alt="placeholder" />\`
- The only place external scripts can be imported from is https://cdnjs.cloudflare.com
- SVG: "image/svg+xml"
- The user interface will render the Scalable Vector Graphics (SVG) image within the artifact tags.
- The assistant should specify the viewBox of the SVG rather than defining a width/height
- Mermaid Diagrams: "application/vnd.mermaid"
- The user interface will render Mermaid diagrams placed within the artifact tags.
- React Components: "application/vnd.react"
- Use this for displaying either: React elements, e.g. \`<strong>Hello World!</strong>\`, React pure functional components, e.g. \`() => <strong>Hello World!</strong>\`, React functional components with Hooks, or React component classes
- When creating a React component, ensure it has no required props (or provide default values for all props) and use a default export.
- Use Tailwind classes for styling. DO NOT USE ARBITRARY VALUES (e.g. \`h-[600px]\`).
- Base React is available to be imported. To use hooks, first import it at the top of the artifact, e.g. \`import { useState } from "react"\`
- The lucide-react@0.394.0 library is available to be imported. e.g. \`import { Camera } from "lucide-react"\` & \`<Camera color="red" size={48} />\`
- The recharts charting library is available to be imported, e.g. \`import { LineChart, XAxis, ... } from "recharts"\` & \`<LineChart ...><XAxis dataKey="name"> ...\`
- The three.js library is available to be imported, e.g. \`import * as THREE from "three";\`
- The date-fns library is available to be imported, e.g. \`import { compareAsc, format } from "date-fns";\`
- The react-day-picker library is available to be imported, e.g. \`import { DayPicker } from "react-day-picker";\`
- The assistant can use prebuilt components from the \`shadcn/ui\` library after it is imported: \`import { Alert, AlertDescription, AlertTitle, AlertDialog, AlertDialogAction } from '/components/ui/alert';\`. If using components from the shadcn/ui library, the assistant mentions this to the user and offers to help them install the components if necessary.
- Components MUST be imported from \`/components/ui/name\` and NOT from \`/components/name\` or \`@/components/ui/name\`.
- NO OTHER LIBRARIES (e.g. zod, hookform) ARE INSTALLED OR ABLE TO BE IMPORTED.
- Images from the web are not allowed, but you can use placeholder images by specifying the width and height like so \`<img src="/api/placeholder/400/320" alt="placeholder" />\`
- When iterating on code, ensure that the code is complete and functional without any snippets, placeholders, or ellipses.
- If you are unable to follow the above requirements for any reason, don't use artifacts and use regular code blocks instead, which will not attempt to render the component.
5. Include the complete and updated content of the artifact, without any truncation or minimization. Don't use "// rest of the code remains the same...".
6. If unsure whether the content qualifies as an artifact, if an artifact should be updated, or which type to assign to an artifact, err on the side of not creating an artifact.
7. NEVER use triple backticks to enclose the artifact, ONLY the content within the artifact.
Here are some examples of correct usage of artifacts:
## Examples
### Example 1
This example demonstrates how to create a Mermaid artifact for a simple flow chart.
User: Can you create a simple flow chart showing the process of making tea using Mermaid?
Assistant: Sure! Here's a simple flow chart depicting the process of making tea using Mermaid syntax:
:::artifact{identifier="tea-making-flowchart" type="application/vnd.mermaid" title="Flow chart: Making Tea"}
\`\`\`mermaid
graph TD
A[Start] --> B{Water boiled?}
B -->|Yes| C[Add tea leaves to cup]
B -->|No| D[Boil water]
D --> B
C --> E[Pour boiling water into cup]
E --> F[Steep tea for desired time]
F --> G[Remove tea leaves]
G --> H[Add milk or sugar, if desired]
H --> I[Enjoy your tea!]
I --> J[End]
\`\`\`
:::
This flow chart uses Mermaid syntax to visualize the steps involved in making a cup of tea. Here's a brief explanation of the process:
1. Start
2. Check if water is boiled
3. If not boiled, boil the water
4. Once water is boiled, add tea leaves to the cup
5. Pour boiling water into the cup
6. Steep the tea for the desired time
7. Remove the tea leaves
8. Optionally add milk or sugar
9. Enjoy your tea!
10. End
This chart provides a clear visual representation of the tea-making process. You can easily modify or expand this chart if you want to add more details or steps to the process. Let me know if you'd like any changes or have any questions!
---
### Example 2
User: Create a simple React counter component
Assistant: Here's a simple React counter component:
:::artifact{identifier="react-counter" type="application/vnd.react" title="React Counter"}
\`\`\`
import { useState } from 'react';
export default function Counter() {
const [count, setCount] = useState(0);
return (
<div className="p-4">
<p className="mb-2">Count: {count}</p>
<button className="bg-blue-500 text-white px-4 py-2 rounded" onClick={() => setCount(count + 1)}>
Increment
</button>
</div>
);
}
\`\`\`
:::
This component creates a simple counter with an increment button.
---
### Example 3
User: Create a basic HTML structure for a blog post
Assistant: Here's a basic HTML structure for a blog post:
:::artifact{identifier="blog-post-html" type="text/html" title="Blog Post HTML"}
\`\`\`
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>My Blog Post</title>
<style>
body { font-family: Arial, sans-serif; line-height: 1.6; max-width: 800px; margin: 0 auto; padding: 20px; }
h1 { color: #333; }
p { margin-bottom: 15px; }
</style>
</head>
<body>
<header>
<h1>My First Blog Post</h1>
</header>
<main>
<article>
<p>This is the content of my blog post. It's short and sweet!</p>
</article>
</main>
<footer>
<p>&copy; 2023 My Blog</p>
</footer>
</body>
</html>
\`\`\`
:::
This HTML structure provides a simple layout for a blog post.
---`;
/**
*
* @param {Object} params
* @param {EModelEndpoint | string} params.endpoint - The current endpoint
* @param {ArtifactModes} params.artifacts - The current artifact mode
* @returns
*/
const generateArtifactsPrompt = ({ endpoint, artifacts }) => {
if (artifacts === ArtifactModes.CUSTOM) {
return null;
}
let prompt = artifactsPrompt;
if (endpoint !== EModelEndpoint.anthropic) {
prompt = artifactsOpenAIPrompt;
}
if (artifacts === ArtifactModes.SHADCNUI) {
prompt += generateShadcnPrompt({ components, useXML: endpoint === EModelEndpoint.anthropic });
}
return prompt;
};
module.exports = generateArtifactsPrompt;
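A minimal usage sketch of the helper exported above, assuming a hypothetical call site; the require path is illustrative, and `ArtifactModes.DEFAULT` / `EModelEndpoint.openAI` are assumed member names from `librechat-data-provider`.

// Sketch (assumed call site): choosing the artifacts system prompt per request.
const generateArtifactsPrompt = require('./artifacts'); // path assumed
const { EModelEndpoint, ArtifactModes } = require('librechat-data-provider');

// Anthropic endpoints keep the XML-example variant; all other endpoints fall back
// to `artifactsOpenAIPrompt`.
const anthropicPrompt = generateArtifactsPrompt({
  endpoint: EModelEndpoint.anthropic,
  artifacts: ArtifactModes.DEFAULT, // enum member assumed
});

// SHADCNUI mode appends the shadcn/ui component docs to the selected base prompt.
const shadcnPrompt = generateArtifactsPrompt({
  endpoint: EModelEndpoint.openAI, // enum member assumed
  artifacts: ArtifactModes.SHADCNUI,
});

// CUSTOM mode returns null so a user-supplied prompt can take over entirely.
const customPrompt = generateArtifactsPrompt({
  endpoint: EModelEndpoint.openAI,
  artifacts: ArtifactModes.CUSTOM,
}); // => null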

View File

@@ -1,4 +1,5 @@
const { EModelEndpoint } = require('librechat-data-provider');
const { ToolMessage } = require('@langchain/core/messages');
const { EModelEndpoint, ContentTypes } = require('librechat-data-provider');
const { HumanMessage, AIMessage, SystemMessage } = require('langchain/schema');
/**
@@ -14,11 +15,11 @@ const { HumanMessage, AIMessage, SystemMessage } = require('langchain/schema');
*/
const formatVisionMessage = ({ message, image_urls, endpoint }) => {
if (endpoint === EModelEndpoint.anthropic) {
message.content = [...image_urls, { type: 'text', text: message.content }];
message.content = [...image_urls, { type: ContentTypes.TEXT, text: message.content }];
return message;
}
message.content = [{ type: 'text', text: message.content }, ...image_urls];
message.content = [{ type: ContentTypes.TEXT, text: message.content }, ...image_urls];
return message;
};
@@ -51,7 +52,7 @@ const formatMessage = ({ message, userName, assistantName, endpoint, langChain =
_role = roleMapping[lc_id[2]];
}
const role = _role ?? (sender && sender?.toLowerCase() === 'user' ? 'user' : 'assistant');
const content = text ?? _content ?? '';
const content = _content ?? text ?? '';
const formattedMessage = {
role,
content,
@@ -131,4 +132,79 @@ const formatFromLangChain = (message) => {
};
};
module.exports = { formatMessage, formatLangChainMessages, formatFromLangChain };
/**
* Formats an array of messages for LangChain, handling tool calls and creating ToolMessage instances.
*
* @param {Array<Partial<TMessage>>} payload - The array of messages to format.
* @returns {Array<(HumanMessage|AIMessage|SystemMessage|ToolMessage)>} - The array of formatted LangChain messages, including ToolMessages for tool calls.
*/
const formatAgentMessages = (payload) => {
const messages = [];
for (const message of payload) {
if (message.role !== 'assistant') {
messages.push(formatMessage({ message, langChain: true }));
continue;
}
let currentContent = [];
let lastAIMessage = null;
for (const part of message.content) {
if (part.type === ContentTypes.TEXT && part.tool_call_ids) {
// If there's pending content, add it as an AIMessage
if (currentContent.length > 0) {
messages.push(new AIMessage({ content: currentContent }));
currentContent = [];
}
// Create a new AIMessage with this text and prepare for tool calls
lastAIMessage = new AIMessage({
content: part.text || '',
});
messages.push(lastAIMessage);
} else if (part.type === ContentTypes.TOOL_CALL) {
if (!lastAIMessage) {
throw new Error('Invalid tool call structure: No preceding AIMessage with tool_call_ids');
}
// Note: `tool_calls` list is defined when constructed by `AIMessage` class, and outputs should be excluded from it
const { output, args: _args, ...tool_call } = part.tool_call;
// TODO: investigate; args as dictionary may need to be provider-or-tool-specific
let args = _args;
try {
args = JSON.parse(args);
} catch (e) {
// failed to parse, leave as is
}
tool_call.args = args;
lastAIMessage.tool_calls.push(tool_call);
// Add the corresponding ToolMessage
messages.push(
new ToolMessage({
tool_call_id: tool_call.id,
name: tool_call.name,
content: output,
}),
);
} else {
currentContent.push(part);
}
}
if (currentContent.length > 0) {
messages.push(new AIMessage({ content: currentContent }));
}
}
return messages;
};
module.exports = {
formatMessage,
formatFromLangChain,
formatAgentMessages,
formatLangChainMessages,
};
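For orientation, a sketch of what `formatAgentMessages` produces for an assistant turn that contains a tool call; the payload fields mirror the branches in the loop above, but the concrete values (ids, text, weather output) are illustrative and the require path is assumed.

// Sketch: one assistant turn with a tool call expands into AIMessage + ToolMessage.
const { formatAgentMessages } = require('./formatMessages'); // path assumed

const payload = [
  { role: 'user', sender: 'User', text: 'What is the weather in Paris?', isCreatedByUser: true },
  {
    role: 'assistant',
    content: [
      // Text part that announces tool calls (has `tool_call_ids`).
      { type: 'text', text: 'Let me check.', tool_call_ids: ['call_1'] },
      // The tool call itself; `args` is a JSON string and `output` is the tool result.
      {
        type: 'tool_call',
        tool_call: { id: 'call_1', name: 'weather', args: '{"city":"Paris"}', output: '18°C, clear' },
      },
      // Trailing text is collected and emitted as a final AIMessage.
      { type: 'text', text: 'It is 18°C and clear in Paris.' },
    ],
  },
];

const messages = formatAgentMessages(payload);
// Expected order: HumanMessage, AIMessage (with tool_calls), ToolMessage, AIMessage.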

View File

@@ -1,3 +1,4 @@
const addCacheControl = require('./addCacheControl');
const formatMessages = require('./formatMessages');
const summaryPrompts = require('./summaryPrompts');
const handleInputs = require('./handleInputs');
@@ -8,6 +9,7 @@ const createVisionPrompt = require('./createVisionPrompt');
const createContextHandlers = require('./createContextHandlers');
module.exports = {
addCacheControl,
...formatMessages,
...summaryPrompts,
...handleInputs,

View File

@@ -0,0 +1,495 @@
// Essential Components
const essentialComponents = {
avatar: {
componentName: 'Avatar',
importDocs: 'import { Avatar, AvatarFallback, AvatarImage } from "/components/ui/avatar"',
usageDocs: `
<Avatar>
<AvatarImage src="https://github.com/shadcn.png" />
<AvatarFallback>CN</AvatarFallback>
</Avatar>`,
},
button: {
componentName: 'Button',
importDocs: 'import { Button } from "/components/ui/button"',
usageDocs: `
<Button variant="outline">Button</Button>`,
},
card: {
componentName: 'Card',
importDocs: `
import {
Card,
CardContent,
CardDescription,
CardFooter,
CardHeader,
CardTitle,
} from "/components/ui/card"`,
usageDocs: `
<Card>
<CardHeader>
<CardTitle>Card Title</CardTitle>
<CardDescription>Card Description</CardDescription>
</CardHeader>
<CardContent>
<p>Card Content</p>
</CardContent>
<CardFooter>
<p>Card Footer</p>
</CardFooter>
</Card>`,
},
checkbox: {
componentName: 'Checkbox',
importDocs: 'import { Checkbox } from "/components/ui/checkbox"',
usageDocs: '<Checkbox />',
},
input: {
componentName: 'Input',
importDocs: 'import { Input } from "/components/ui/input"',
usageDocs: '<Input />',
},
label: {
componentName: 'Label',
importDocs: 'import { Label } from "/components/ui/label"',
usageDocs: '<Label htmlFor="email">Your email address</Label>',
},
radioGroup: {
componentName: 'RadioGroup',
importDocs: `
import { Label } from "/components/ui/label"
import { RadioGroup, RadioGroupItem } from "/components/ui/radio-group"`,
usageDocs: `
<RadioGroup defaultValue="option-one">
<div className="flex items-center space-x-2">
<RadioGroupItem value="option-one" id="option-one" />
<Label htmlFor="option-one">Option One</Label>
</div>
<div className="flex items-center space-x-2">
<RadioGroupItem value="option-two" id="option-two" />
<Label htmlFor="option-two">Option Two</Label>
</div>
</RadioGroup>`,
},
select: {
componentName: 'Select',
importDocs: `
import {
Select,
SelectContent,
SelectItem,
SelectTrigger,
SelectValue,
} from "/components/ui/select"`,
usageDocs: `
<Select>
<SelectTrigger className="w-[180px]">
<SelectValue placeholder="Theme" />
</SelectTrigger>
<SelectContent>
<SelectItem value="light">Light</SelectItem>
<SelectItem value="dark">Dark</SelectItem>
<SelectItem value="system">System</SelectItem>
</SelectContent>
</Select>`,
},
textarea: {
componentName: 'Textarea',
importDocs: 'import { Textarea } from "/components/ui/textarea"',
usageDocs: '<Textarea />',
},
};
// Extra Components
const extraComponents = {
accordion: {
componentName: 'Accordion',
importDocs: `
import {
Accordion,
AccordionContent,
AccordionItem,
AccordionTrigger,
} from "/components/ui/accordion"`,
usageDocs: `
<Accordion type="single" collapsible>
<AccordionItem value="item-1">
<AccordionTrigger>Is it accessible?</AccordionTrigger>
<AccordionContent>
Yes. It adheres to the WAI-ARIA design pattern.
</AccordionContent>
</AccordionItem>
</Accordion>`,
},
alertDialog: {
componentName: 'AlertDialog',
importDocs: `
import {
AlertDialog,
AlertDialogAction,
AlertDialogCancel,
AlertDialogContent,
AlertDialogDescription,
AlertDialogFooter,
AlertDialogHeader,
AlertDialogTitle,
AlertDialogTrigger,
} from "/components/ui/alert-dialog"`,
usageDocs: `
<AlertDialog>
<AlertDialogTrigger>Open</AlertDialogTrigger>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle>Are you absolutely sure?</AlertDialogTitle>
<AlertDialogDescription>
This action cannot be undone.
</AlertDialogDescription>
</AlertDialogHeader>
<AlertDialogFooter>
<AlertDialogCancel>Cancel</AlertDialogCancel>
<AlertDialogAction>Continue</AlertDialogAction>
</AlertDialogFooter>
</AlertDialogContent>
</AlertDialog>`,
},
alert: {
componentName: 'Alert',
importDocs: `
import {
Alert,
AlertDescription,
AlertTitle,
} from "/components/ui/alert"`,
usageDocs: `
<Alert>
<AlertTitle>Heads up!</AlertTitle>
<AlertDescription>
You can add components to your app using the cli.
</AlertDescription>
</Alert>`,
},
aspectRatio: {
componentName: 'AspectRatio',
importDocs: 'import { AspectRatio } from "/components/ui/aspect-ratio"',
usageDocs: `
<AspectRatio ratio={16 / 9}>
<Image src="..." alt="Image" className="rounded-md object-cover" />
</AspectRatio>`,
},
badge: {
componentName: 'Badge',
importDocs: 'import { Badge } from "/components/ui/badge"',
usageDocs: '<Badge>Badge</Badge>',
},
calendar: {
componentName: 'Calendar',
importDocs: 'import { Calendar } from "/components/ui/calendar"',
usageDocs: '<Calendar />',
},
carousel: {
componentName: 'Carousel',
importDocs: `
import {
Carousel,
CarouselContent,
CarouselItem,
CarouselNext,
CarouselPrevious,
} from "/components/ui/carousel"`,
usageDocs: `
<Carousel>
<CarouselContent>
<CarouselItem>...</CarouselItem>
<CarouselItem>...</CarouselItem>
<CarouselItem>...</CarouselItem>
</CarouselContent>
<CarouselPrevious />
<CarouselNext />
</Carousel>`,
},
collapsible: {
componentName: 'Collapsible',
importDocs: `
import {
Collapsible,
CollapsibleContent,
CollapsibleTrigger,
} from "/components/ui/collapsible"`,
usageDocs: `
<Collapsible>
<CollapsibleTrigger>Can I use this in my project?</CollapsibleTrigger>
<CollapsibleContent>
Yes. Free to use for personal and commercial projects. No attribution required.
</CollapsibleContent>
</Collapsible>`,
},
dialog: {
componentName: 'Dialog',
importDocs: `
import {
Dialog,
DialogContent,
DialogDescription,
DialogHeader,
DialogTitle,
DialogTrigger,
} from "/components/ui/dialog"`,
usageDocs: `
<Dialog>
<DialogTrigger>Open</DialogTrigger>
<DialogContent>
<DialogHeader>
<DialogTitle>Are you absolutely sure?</DialogTitle>
<DialogDescription>
This action cannot be undone.
</DialogDescription>
</DialogHeader>
</DialogContent>
</Dialog>`,
},
dropdownMenu: {
componentName: 'DropdownMenu',
importDocs: `
import {
DropdownMenu,
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuLabel,
DropdownMenuSeparator,
DropdownMenuTrigger,
} from "/components/ui/dropdown-menu"`,
usageDocs: `
<DropdownMenu>
<DropdownMenuTrigger>Open</DropdownMenuTrigger>
<DropdownMenuContent>
<DropdownMenuLabel>My Account</DropdownMenuLabel>
<DropdownMenuSeparator />
<DropdownMenuItem>Profile</DropdownMenuItem>
<DropdownMenuItem>Billing</DropdownMenuItem>
<DropdownMenuItem>Team</DropdownMenuItem>
<DropdownMenuItem>Subscription</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>`,
},
menubar: {
componentName: 'Menubar',
importDocs: `
import {
Menubar,
MenubarContent,
MenubarItem,
MenubarMenu,
MenubarSeparator,
MenubarShortcut,
MenubarTrigger,
} from "/components/ui/menubar"`,
usageDocs: `
<Menubar>
<MenubarMenu>
<MenubarTrigger>File</MenubarTrigger>
<MenubarContent>
<MenubarItem>
New Tab <MenubarShortcut>⌘T</MenubarShortcut>
</MenubarItem>
<MenubarItem>New Window</MenubarItem>
<MenubarSeparator />
<MenubarItem>Share</MenubarItem>
<MenubarSeparator />
<MenubarItem>Print</MenubarItem>
</MenubarContent>
</MenubarMenu>
</Menubar>`,
},
navigationMenu: {
componentName: 'NavigationMenu',
importDocs: `
import {
NavigationMenu,
NavigationMenuContent,
NavigationMenuItem,
NavigationMenuLink,
NavigationMenuList,
NavigationMenuTrigger,
navigationMenuTriggerStyle,
} from "/components/ui/navigation-menu"`,
usageDocs: `
<NavigationMenu>
<NavigationMenuList>
<NavigationMenuItem>
<NavigationMenuTrigger>Item One</NavigationMenuTrigger>
<NavigationMenuContent>
<NavigationMenuLink>Link</NavigationMenuLink>
</NavigationMenuContent>
</NavigationMenuItem>
</NavigationMenuList>
</NavigationMenu>`,
},
popover: {
componentName: 'Popover',
importDocs: `
import {
Popover,
PopoverContent,
PopoverTrigger,
} from "/components/ui/popover"`,
usageDocs: `
<Popover>
<PopoverTrigger>Open</PopoverTrigger>
<PopoverContent>Place content for the popover here.</PopoverContent>
</Popover>`,
},
progress: {
componentName: 'Progress',
importDocs: 'import { Progress } from "/components/ui/progress"',
usageDocs: '<Progress value={33} />',
},
separator: {
componentName: 'Separator',
importDocs: 'import { Separator } from "/components/ui/separator"',
usageDocs: '<Separator />',
},
sheet: {
componentName: 'Sheet',
importDocs: `
import {
Sheet,
SheetContent,
SheetDescription,
SheetHeader,
SheetTitle,
SheetTrigger,
} from "/components/ui/sheet"`,
usageDocs: `
<Sheet>
<SheetTrigger>Open</SheetTrigger>
<SheetContent>
<SheetHeader>
<SheetTitle>Are you absolutely sure?</SheetTitle>
<SheetDescription>
This action cannot be undone.
</SheetDescription>
</SheetHeader>
</SheetContent>
</Sheet>`,
},
skeleton: {
componentName: 'Skeleton',
importDocs: 'import { Skeleton } from "/components/ui/skeleton"',
usageDocs: '<Skeleton className="w-[100px] h-[20px] rounded-full" />',
},
slider: {
componentName: 'Slider',
importDocs: 'import { Slider } from "/components/ui/slider"',
usageDocs: '<Slider defaultValue={[33]} max={100} step={1} />',
},
switch: {
componentName: 'Switch',
importDocs: 'import { Switch } from "/components/ui/switch"',
usageDocs: '<Switch />',
},
table: {
componentName: 'Table',
importDocs: `
import {
Table,
TableBody,
TableCaption,
TableCell,
TableHead,
TableHeader,
TableRow,
} from "/components/ui/table"`,
usageDocs: `
<Table>
<TableCaption>A list of your recent invoices.</TableCaption>
<TableHeader>
<TableRow>
<TableHead className="w-[100px]">Invoice</TableHead>
<TableHead>Status</TableHead>
<TableHead>Method</TableHead>
<TableHead className="text-right">Amount</TableHead>
</TableRow>
</TableHeader>
<TableBody>
<TableRow>
<TableCell className="font-medium">INV001</TableCell>
<TableCell>Paid</TableCell>
<TableCell>Credit Card</TableCell>
<TableCell className="text-right">$250.00</TableCell>
</TableRow>
</TableBody>
</Table>`,
},
tabs: {
componentName: 'Tabs',
importDocs: `
import {
Tabs,
TabsContent,
TabsList,
TabsTrigger,
} from "/components/ui/tabs"`,
usageDocs: `
<Tabs defaultValue="account" className="w-[400px]">
<TabsList>
<TabsTrigger value="account">Account</TabsTrigger>
<TabsTrigger value="password">Password</TabsTrigger>
</TabsList>
<TabsContent value="account">Make changes to your account here.</TabsContent>
<TabsContent value="password">Change your password here.</TabsContent>
</Tabs>`,
},
toast: {
componentName: 'Toast',
importDocs: `
import { useToast } from "/components/ui/use-toast"
import { Button } from "/components/ui/button"`,
usageDocs: `
export function ToastDemo() {
const { toast } = useToast()
return (
<Button
onClick={() => {
toast({
title: "Scheduled: Catch up",
description: "Friday, February 10, 2023 at 5:57 PM",
})
}}
>
Show Toast
</Button>
)
}`,
},
toggle: {
componentName: 'Toggle',
importDocs: 'import { Toggle } from "/components/ui/toggle"',
usageDocs: '<Toggle>Toggle</Toggle>',
},
tooltip: {
componentName: 'Tooltip',
importDocs: `
import {
Tooltip,
TooltipContent,
TooltipProvider,
TooltipTrigger,
} from "/components/ui/tooltip"`,
usageDocs: `
<TooltipProvider>
<Tooltip>
<TooltipTrigger>Hover</TooltipTrigger>
<TooltipContent>
<p>Add to library</p>
</TooltipContent>
</Tooltip>
</TooltipProvider>`,
},
};
const components = Object.assign({}, essentialComponents, extraComponents);
module.exports = {
components,
};

View File

@@ -0,0 +1,50 @@
const dedent = require('dedent');
/**
* Generate system prompt for AI-assisted React component creation
* @param {Object} options - Configuration options
* @param {Object} options.components - Documentation for shadcn components
* @param {boolean} [options.useXML=false] - Whether to use XML-style formatting for component instructions
* @returns {string} The generated system prompt
*/
function generateShadcnPrompt(options) {
const { components, useXML = false } = options;
let systemPrompt = dedent`
## Additional Artifact Instructions for React Components: "application/vnd.react"
There are some prestyled components (primitives) available for use. Please use your best judgement to use any of these components if the app calls for one.
Here are the components that are available, along with how to import them, and how to use them:
${Object.values(components)
.map((component) => {
if (useXML) {
return dedent`
<component>
<name>${component.componentName}</name>
<import-instructions>${component.importDocs}</import-instructions>
<usage-instructions>${component.usageDocs}</usage-instructions>
</component>
`;
} else {
return dedent`
# ${component.componentName}
## Import Instructions
${component.importDocs}
## Usage Instructions
${component.usageDocs}
`;
}
})
.join('\n\n')}
`;
return systemPrompt;
}
module.exports = {
generateShadcnPrompt,
};
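A short sketch of how the component docs and the generator above fit together; both require paths are assumed, the rest follows the signatures shown.

// Sketch: building the shadcn/ui addendum in both output styles (paths assumed).
const { components } = require('./components');
const { generateShadcnPrompt } = require('./generateShadcnPrompt');

// Markdown-style sections, used for non-Anthropic endpoints.
const markdownDocs = generateShadcnPrompt({ components });

// XML-style <component> blocks, used when the endpoint is Anthropic.
const xmlDocs = generateShadcnPrompt({ components, useXML: true });

console.log(xmlDocs.includes('<component>')); // true
console.log(markdownDocs.includes('# Button')); // true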

View File

@@ -206,12 +206,26 @@ describe('AnthropicClient', () => {
const modelOptions = {
model: 'claude-3-5-sonnet-20240307',
};
client.setOptions({ modelOptions });
client.setOptions({ modelOptions, promptCache: true });
const anthropicClient = client.getClient(modelOptions);
expect(anthropicClient._options.defaultHeaders).toBeDefined();
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
'max-tokens-3-5-sonnet-2024-07-15',
'max-tokens-3-5-sonnet-2024-07-15,prompt-caching-2024-07-31',
);
});
it('should add beta header for claude-3-haiku model', () => {
const client = new AnthropicClient('test-api-key');
const modelOptions = {
model: 'claude-3-haiku-2028',
};
client.setOptions({ modelOptions, promptCache: true });
const anthropicClient = client.getClient(modelOptions);
expect(anthropicClient._options.defaultHeaders).toBeDefined();
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
'prompt-caching-2024-07-31',
);
});
@@ -226,4 +240,145 @@ describe('AnthropicClient', () => {
expect(anthropicClient.defaultHeaders).not.toHaveProperty('anthropic-beta');
});
});
describe('calculateCurrentTokenCount', () => {
let client;
beforeEach(() => {
client = new AnthropicClient('test-api-key');
});
it('should calculate correct token count when usage is provided', () => {
const tokenCountMap = {
msg1: 10,
msg2: 20,
currentMsg: 30,
};
const currentMessageId = 'currentMsg';
const usage = {
input_tokens: 70,
output_tokens: 50,
};
const result = client.calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage });
expect(result).toBe(40); // 70 - (10 + 20) = 40
});
it('should return original estimate if calculation results in negative value', () => {
const tokenCountMap = {
msg1: 40,
msg2: 50,
currentMsg: 30,
};
const currentMessageId = 'currentMsg';
const usage = {
input_tokens: 80,
output_tokens: 50,
};
const result = client.calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage });
expect(result).toBe(30); // Original estimate
});
it('should handle cache creation and read input tokens', () => {
const tokenCountMap = {
msg1: 10,
msg2: 20,
currentMsg: 30,
};
const currentMessageId = 'currentMsg';
const usage = {
input_tokens: 50,
cache_creation_input_tokens: 10,
cache_read_input_tokens: 20,
output_tokens: 40,
};
const result = client.calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage });
expect(result).toBe(50); // (50 + 10 + 20) - (10 + 20) = 50
});
it('should handle missing usage properties', () => {
const tokenCountMap = {
msg1: 10,
msg2: 20,
currentMsg: 30,
};
const currentMessageId = 'currentMsg';
const usage = {
output_tokens: 40,
};
const result = client.calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage });
expect(result).toBe(30); // Original estimate
});
it('should handle empty tokenCountMap', () => {
const tokenCountMap = {};
const currentMessageId = 'currentMsg';
const usage = {
input_tokens: 50,
output_tokens: 40,
};
const result = client.calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage });
expect(result).toBe(50);
expect(Number.isNaN(result)).toBe(false);
});
it('should handle zero values in usage', () => {
const tokenCountMap = {
msg1: 10,
currentMsg: 20,
};
const currentMessageId = 'currentMsg';
const usage = {
input_tokens: 0,
cache_creation_input_tokens: 0,
cache_read_input_tokens: 0,
output_tokens: 0,
};
const result = client.calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage });
expect(result).toBe(20); // Should return original estimate
expect(Number.isNaN(result)).toBe(false);
});
it('should handle undefined usage', () => {
const tokenCountMap = {
msg1: 10,
currentMsg: 20,
};
const currentMessageId = 'currentMsg';
const usage = undefined;
const result = client.calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage });
expect(result).toBe(20); // Should return original estimate
expect(Number.isNaN(result)).toBe(false);
});
it('should handle non-numeric values in tokenCountMap', () => {
const tokenCountMap = {
msg1: 'ten',
currentMsg: 20,
};
const currentMessageId = 'currentMsg';
const usage = {
input_tokens: 30,
output_tokens: 10,
};
const result = client.calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage });
expect(result).toBe(30); // Should return 30 (input_tokens) - 0 (ignored 'ten') = 30
expect(Number.isNaN(result)).toBe(false);
});
});
});
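The `calculateCurrentTokenCount` implementation itself is not part of this diff, so the following is only a sketch consistent with the expectations above (reported input tokens plus any cache-creation/cache-read tokens, minus the estimates for the other messages, falling back to the original estimate for missing usage or non-positive results); it is an assumption, not the client's actual method.

// Sketch satisfying the test expectations above; not AnthropicClient's actual code.
function calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage }) {
  const originalEstimate = tokenCountMap[currentMessageId] || 0;
  if (!usage || typeof usage.input_tokens !== 'number') {
    return originalEstimate;
  }
  // Sum the estimates of every other message, ignoring non-numeric entries.
  const totalOtherTokens = Object.entries(tokenCountMap)
    .filter(([id]) => id !== currentMessageId)
    .reduce((sum, [, count]) => sum + (typeof count === 'number' ? count : 0), 0);
  const totalInputTokens =
    (usage.input_tokens ?? 0) +
    (usage.cache_creation_input_tokens ?? 0) +
    (usage.cache_read_input_tokens ?? 0);
  const currentMessageTokens = totalInputTokens - totalOtherTokens;
  // Fall back to the original estimate for zero, negative, or NaN results.
  return currentMessageTokens > 0 ? currentMessageTokens : originalEstimate;
}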

View File

@@ -565,11 +565,13 @@ describe('BaseClient', () => {
const getReqData = jest.fn();
const opts = { getReqData };
const response = await TestClient.sendMessage('Hello, world!', opts);
expect(getReqData).toHaveBeenCalledWith({
userMessage: expect.objectContaining({ text: 'Hello, world!' }),
conversationId: response.conversationId,
responseMessageId: response.messageId,
});
expect(getReqData).toHaveBeenCalledWith(
expect.objectContaining({
userMessage: expect.objectContaining({ text: 'Hello, world!' }),
conversationId: response.conversationId,
responseMessageId: response.messageId,
}),
);
});
test('onStart is called with the correct arguments', async () => {

View File

@@ -0,0 +1,78 @@
const { z } = require('zod');
const { tool } = require('@langchain/core/tools');
const { getEnvironmentVariable } = require('@langchain/core/utils/env');
function createTavilySearchTool(fields = {}) {
const envVar = 'TAVILY_API_KEY';
const override = fields.override ?? false;
const apiKey = fields.apiKey ?? getApiKey(envVar, override);
const kwargs = fields?.kwargs ?? {};
function getApiKey(envVar, override) {
const key = getEnvironmentVariable(envVar);
if (!key && !override) {
throw new Error(`Missing ${envVar} environment variable.`);
}
return key;
}
return tool(
async (input) => {
const { query, ...rest } = input;
const requestBody = {
api_key: apiKey,
query,
...rest,
...kwargs,
};
const response = await fetch('https://api.tavily.com/search', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(requestBody),
});
const json = await response.json();
if (!response.ok) {
throw new Error(`Request failed with status ${response.status}: ${json.error}`);
}
return JSON.stringify(json);
},
{
name: 'tavily_search_results_json',
description:
'A search engine optimized for comprehensive, accurate, and trusted results. Useful for when you need to answer questions about current events.',
schema: z.object({
query: z.string().min(1).describe('The search query string.'),
max_results: z
.number()
.min(1)
.max(10)
.optional()
.describe('The maximum number of search results to return. Defaults to 5.'),
search_depth: z
.enum(['basic', 'advanced'])
.optional()
.describe(
'The depth of the search, affecting result quality and response time (`basic` or `advanced`). Defaults to `basic` for quick results; `advanced` gives more in-depth, higher-quality results but takes longer and counts as 2 requests.',
),
include_images: z
.boolean()
.optional()
.describe(
'Whether to include a list of query-related images in the response. Default is False.',
),
include_answer: z
.boolean()
.optional()
.describe('Whether to include answers in the search results. Default is False.'),
}),
},
);
}
module.exports = createTavilySearchTool;
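A minimal invocation sketch for the factory above; `.invoke()` is the standard call interface for tools built with `tool()` from `@langchain/core/tools`, the require path is assumed, and the query values are illustrative (the `TAVILY_API_KEY` environment variable must be set, per `getApiKey`).

// Sketch: constructing and calling the Tavily search tool.
const createTavilySearchTool = require('./tavilySearch'); // path assumed

const tavilyTool = createTavilySearchTool({
  kwargs: { include_answer: true }, // merged into every request body
});

(async () => {
  const resultsJson = await tavilyTool.invoke({
    query: 'latest Node.js LTS release',
    max_results: 3,
    search_depth: 'basic',
  });
  console.log(JSON.parse(resultsJson));
})();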

View File

@@ -35,7 +35,7 @@ const clearPendingReq = async ({ userId, cache: _cache }) => {
return;
}
const key = `${USE_REDIS ? namespace : ''}:${userId ?? ''}`;
const key = `${isEnabled(USE_REDIS) ? namespace : ''}:${userId ?? ''}`;
const currentReq = +((await cache.get(key)) ?? 0);
if (currentReq && currentReq >= 1) {

View File

@@ -5,17 +5,16 @@ const Action = mongoose.model('action', actionSchema);
/**
* Update an action with new data without overwriting existing properties,
* or create a new action if it doesn't exist, within a transaction session if provided.
* or create a new action if it doesn't exist.
*
* @param {Object} searchParams - The search parameters to find the action to update.
* @param {string} searchParams.action_id - The ID of the action to update.
* @param {string} searchParams.user - The user ID of the action's author.
* @param {Object} updateData - An object containing the properties to update.
* @param {mongoose.ClientSession} [session] - The transaction session to use.
* @returns {Promise<Object>} The updated or newly created action document as a plain object.
* @returns {Promise<Action>} The updated or newly created action document as a plain object.
*/
const updateAction = async (searchParams, updateData, session = null) => {
const options = { new: true, upsert: true, session };
const updateAction = async (searchParams, updateData) => {
const options = { new: true, upsert: true };
return await Action.findOneAndUpdate(searchParams, updateData, options).lean();
};
@@ -24,7 +23,7 @@ const updateAction = async (searchParams, updateData, session = null) => {
*
* @param {Object} searchParams - The search parameters to find matching actions.
* @param {boolean} includeSensitive - Flag to include sensitive data in the metadata.
* @returns {Promise<Array<Object>>} A promise that resolves to an array of action documents as plain objects.
* @returns {Promise<Array<Action>>} A promise that resolves to an array of action documents as plain objects.
*/
const getActions = async (searchParams, includeSensitive = false) => {
const actions = await Action.find(searchParams).lean();
@@ -49,31 +48,27 @@ const getActions = async (searchParams, includeSensitive = false) => {
};
/**
* Deletes an action by params, within a transaction session if provided.
* Deletes an action by params.
*
* @param {Object} searchParams - The search parameters to find the action to delete.
* @param {string} searchParams.action_id - The ID of the action to delete.
* @param {string} searchParams.user - The user ID of the action's author.
* @param {mongoose.ClientSession} [session] - The transaction session to use (optional).
* @returns {Promise<Object>} A promise that resolves to the deleted action document as a plain object, or null if no document was found.
* @returns {Promise<Action>} A promise that resolves to the deleted action document as a plain object, or null if no document was found.
*/
const deleteAction = async (searchParams, session = null) => {
const options = session ? { session } : {};
return await Action.findOneAndDelete(searchParams, options).lean();
const deleteAction = async (searchParams) => {
return await Action.findOneAndDelete(searchParams).lean();
};
/**
* Deletes actions by params, within a transaction session if provided.
* Deletes actions by params.
*
* @param {Object} searchParams - The search parameters to find the actions to delete.
* @param {string} searchParams.action_id - The ID of the action(s) to delete.
* @param {string} searchParams.user - The user ID of the action's author.
* @param {mongoose.ClientSession} [session] - The transaction session to use (optional).
* @returns {Promise<Number>} A promise that resolves to the number of deleted action documents.
*/
const deleteActions = async (searchParams, session = null) => {
const options = session ? { session } : {};
const result = await Action.deleteMany(searchParams, options);
const deleteActions = async (searchParams) => {
const result = await Action.deleteMany(searchParams);
return result.deletedCount;
};
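For context, a brief sketch of the action helpers above now that the transaction-session parameters are gone; the require path and field values are illustrative.

// Sketch: upsert, list, and delete an action without a transaction session.
const { updateAction, getActions, deleteAction } = require('~/models/Action'); // path assumed

async function exampleActionLifecycle(userId) {
  const action = await updateAction(
    { action_id: 'action_abc123', user: userId },
    { metadata: { domain: 'example.com' } }, // illustrative update payload
  );
  // Sensitive metadata is excluded unless the second argument is true.
  const actions = await getActions({ user: userId });
  await deleteAction({ action_id: action.action_id, user: userId });
  return actions.length;
}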

142
api/models/Agent.js Normal file
View File

@@ -0,0 +1,142 @@
const mongoose = require('mongoose');
const { GLOBAL_PROJECT_NAME } = require('librechat-data-provider').Constants;
const {
getProjectByName,
addAgentIdsToProject,
removeAgentIdsFromProject,
removeAgentFromAllProjects,
} = require('./Project');
const agentSchema = require('./schema/agent');
const Agent = mongoose.model('agent', agentSchema);
/**
* Create an agent with the provided data.
* @param {Object} agentData - The agent data to create.
* @returns {Promise<Agent>} The created agent document as a plain object.
* @throws {Error} If the agent creation fails.
*/
const createAgent = async (agentData) => {
return await Agent.create(agentData);
};
/**
* Get an agent document based on the provided ID.
*
* @param {Object} searchParameter - The search parameters to find the agent to update.
* @param {string} searchParameter.id - The ID of the agent to update.
* @param {string} searchParameter.author - The user ID of the agent's author.
* @returns {Promise<Agent|null>} The agent document as a plain object, or null if not found.
*/
const getAgent = async (searchParameter) => await Agent.findOne(searchParameter).lean();
/**
* Update an agent with new data without overwriting existing
* properties, or create a new agent if it doesn't exist.
*
* @param {Object} searchParameter - The search parameters to find the agent to update.
* @param {string} searchParameter.id - The ID of the agent to update.
* @param {string} [searchParameter.author] - The user ID of the agent's author.
* @param {Object} updateData - An object containing the properties to update.
* @returns {Promise<Agent>} The updated or newly created agent document as a plain object.
*/
const updateAgent = async (searchParameter, updateData) => {
const options = { new: true, upsert: true };
return await Agent.findOneAndUpdate(searchParameter, updateData, options).lean();
};
/**
* Deletes an agent based on the provided ID.
*
* @param {Object} searchParameter - The search parameters to find the agent to delete.
* @param {string} searchParameter.id - The ID of the agent to delete.
* @param {string} [searchParameter.author] - The user ID of the agent's author.
* @returns {Promise<void>} Resolves when the agent has been successfully deleted.
*/
const deleteAgent = async (searchParameter) => {
const agent = await Agent.findOneAndDelete(searchParameter);
if (agent) {
await removeAgentFromAllProjects(agent.id);
}
return agent;
};
/**
* Get all agents.
* @param {Object} searchParameter - The search parameters to find matching agents.
* @param {string} searchParameter.author - The user ID of the agent's author.
* @returns {Promise<Object>} A promise that resolves to an object containing the agents data and pagination info.
*/
const getListAgents = async (searchParameter) => {
const { author, ...otherParams } = searchParameter;
let query = Object.assign({ author }, otherParams);
const globalProject = await getProjectByName(GLOBAL_PROJECT_NAME, ['agentIds']);
if (globalProject && (globalProject.agentIds?.length ?? 0) > 0) {
const globalQuery = { id: { $in: globalProject.agentIds }, ...otherParams };
delete globalQuery.author;
query = { $or: [globalQuery, query] };
}
const agents = await Agent.find(query, {
id: 1,
name: 1,
avatar: 1,
projectIds: 1,
}).lean();
const hasMore = agents.length > 0;
const firstId = agents.length > 0 ? agents[0].id : null;
const lastId = agents.length > 0 ? agents[agents.length - 1].id : null;
return {
data: agents,
has_more: hasMore,
first_id: firstId,
last_id: lastId,
};
};
/**
* Updates the projects associated with an agent, adding and removing project IDs as specified.
* This function also updates the corresponding projects to include or exclude the agent ID.
*
* @param {string} agentId - The ID of the agent to update.
* @param {string[]} [projectIds] - Array of project IDs to add to the agent.
* @param {string[]} [removeProjectIds] - Array of project IDs to remove from the agent.
* @returns {Promise<MongoAgent>} The updated agent document.
* @throws {Error} If there's an error updating the agent or projects.
*/
const updateAgentProjects = async (agentId, projectIds, removeProjectIds) => {
const updateOps = {};
if (removeProjectIds && removeProjectIds.length > 0) {
for (const projectId of removeProjectIds) {
await removeAgentIdsFromProject(projectId, [agentId]);
}
updateOps.$pull = { projectIds: { $in: removeProjectIds } };
}
if (projectIds && projectIds.length > 0) {
for (const projectId of projectIds) {
await addAgentIdsToProject(projectId, [agentId]);
}
updateOps.$addToSet = { projectIds: { $each: projectIds } };
}
if (Object.keys(updateOps).length === 0) {
return await getAgent({ id: agentId });
}
return await updateAgent({ id: agentId }, updateOps);
};
module.exports = {
createAgent,
getAgent,
updateAgent,
deleteAgent,
getListAgents,
updateAgentProjects,
};
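A usage sketch for the new agent/project helpers; `GLOBAL_PROJECT_NAME` and the exported functions come from the files shown in this diff, while the require paths, the agent fields, and the selected projection are illustrative.

// Sketch: sharing an agent through the global project, then listing visible agents.
const { Constants } = require('librechat-data-provider');
const { getProjectByName } = require('~/models/Project'); // path assumed
const { createAgent, getListAgents, updateAgentProjects } = require('~/models/Agent'); // path assumed

async function shareAgentGlobally(authorId) {
  // Agent schema fields are assumed; `id` and `author` match the JSDoc above.
  const agent = await createAgent({ id: 'agent_abc123', name: 'Docs Helper', author: authorId });

  // Adding the global project's _id makes the agent visible to every author's getListAgents call.
  const globalProject = await getProjectByName(Constants.GLOBAL_PROJECT_NAME, ['_id']);
  await updateAgentProjects(agent.id, [globalProject._id]);

  const { data } = await getListAgents({ author: authorId });
  return data; // each entry carries id, name, avatar, projectIds per the projection above
}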

View File

@@ -5,17 +5,16 @@ const Assistant = mongoose.model('assistant', assistantSchema);
/**
* Update an assistant with new data without overwriting existing properties,
* or create a new assistant if it doesn't exist, within a transaction session if provided.
* or create a new assistant if it doesn't exist.
*
* @param {Object} searchParams - The search parameters to find the assistant to update.
* @param {string} searchParams.assistant_id - The ID of the assistant to update.
* @param {string} searchParams.user - The user ID of the assistant's author.
* @param {Object} updateData - An object containing the properties to update.
* @param {mongoose.ClientSession} [session] - The transaction session to use (optional).
* @returns {Promise<Object>} The updated or newly created assistant document as a plain object.
* @returns {Promise<AssistantDocument>} The updated or newly created assistant document as a plain object.
*/
const updateAssistantDoc = async (searchParams, updateData, session = null) => {
const options = { new: true, upsert: true, session };
const updateAssistantDoc = async (searchParams, updateData) => {
const options = { new: true, upsert: true };
return await Assistant.findOneAndUpdate(searchParams, updateData, options).lean();
};
@@ -25,7 +24,7 @@ const updateAssistantDoc = async (searchParams, updateData, session = null) => {
* @param {Object} searchParams - The search parameters to find the assistant to update.
* @param {string} searchParams.assistant_id - The ID of the assistant to update.
* @param {string} searchParams.user - The user ID of the assistant's author.
* @returns {Promise<Object|null>} The assistant document as a plain object, or null if not found.
* @returns {Promise<AssistantDocument|null>} The assistant document as a plain object, or null if not found.
*/
const getAssistant = async (searchParams) => await Assistant.findOne(searchParams).lean();
@@ -33,10 +32,17 @@ const getAssistant = async (searchParams) => await Assistant.findOne(searchParam
* Retrieves all assistants that match the given search parameters.
*
* @param {Object} searchParams - The search parameters to find matching assistants.
* @returns {Promise<Array<Object>>} A promise that resolves to an array of action documents as plain objects.
* @param {Object} [select] - Optional. Specifies which document fields to include or exclude.
* @returns {Promise<Array<AssistantDocument>>} A promise that resolves to an array of assistant documents as plain objects.
*/
const getAssistants = async (searchParams) => {
return await Assistant.find(searchParams).lean();
const getAssistants = async (searchParams, select = null) => {
let query = Assistant.find(searchParams);
if (select) {
query = query.select(select);
}
return await query.lean();
};
/**

View File

@@ -133,14 +133,21 @@ const adjustPositions = async (user, oldPosition, newPosition) => {
}
const update = oldPosition < newPosition ? { $inc: { position: -1 } } : { $inc: { position: 1 } };
const position =
oldPosition < newPosition
? {
$gt: Math.min(oldPosition, newPosition),
$lte: Math.max(oldPosition, newPosition),
}
: {
$gte: Math.min(oldPosition, newPosition),
$lt: Math.max(oldPosition, newPosition),
};
await ConversationTag.updateMany(
{
user,
position: {
$gt: Math.min(oldPosition, newPosition),
$lte: Math.max(oldPosition, newPosition),
},
position,
},
update,
);
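A small worked example of the boundary logic introduced above, with illustrative positions; it only restates what the conditional `position` filter selects.

// Sketch: moving a tag from position 2 to position 5.
// oldPosition < newPosition, so tags at positions 3, 4, and 5 fall in (2, 5]
// and are decremented by 1, freeing position 5 for the moved tag.
const oldPosition = 2;
const newPosition = 5;
const position =
  oldPosition < newPosition
    ? { $gt: Math.min(oldPosition, newPosition), $lte: Math.max(oldPosition, newPosition) } // (2, 5]
    : { $gte: Math.min(oldPosition, newPosition), $lt: Math.max(oldPosition, newPosition) }; // [new, old)
// => { $gt: 2, $lte: 5 }, applied together with { $inc: { position: -1 } }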

View File

@@ -35,82 +35,34 @@ const idSchema = z.string().uuid();
* @throws {Error} If there is an error in saving the message.
*/
async function saveMessage(req, params, metadata) {
if (!req?.user?.id) {
throw new Error('User not authenticated');
}
const validConvoId = idSchema.safeParse(params.conversationId);
if (!validConvoId.success) {
logger.warn(`Invalid conversation ID: ${params.conversationId}`);
logger.info(`---\`saveMessage\` context: ${metadata?.context}`);
logger.info(`---Invalid conversation ID Params: ${JSON.stringify(params, null, 2)}`);
return;
}
try {
if (!req || !req.user || !req.user.id) {
throw new Error('User not authenticated');
}
const {
text,
error,
model,
files,
plugin,
sender,
plugins,
iconURL,
endpoint,
isEdited,
messageId,
unfinished,
tokenCount,
newMessageId,
finish_reason,
conversationId,
parentMessageId,
isCreatedByUser,
} = params;
const validConvoId = idSchema.safeParse(conversationId);
if (!validConvoId.success) {
logger.warn(`Invalid conversation ID: ${conversationId}`);
if (metadata && metadata?.context) {
logger.info(`---\`saveMessage\` context: ${metadata.context}`);
}
logger.info(`---Invalid conversation ID Params:
${JSON.stringify(params, null, 2)}
`);
return;
}
const update = {
...params,
user: req.user.id,
iconURL,
endpoint,
messageId: newMessageId || messageId,
conversationId,
parentMessageId,
sender,
text,
isCreatedByUser,
isEdited,
finish_reason,
error,
unfinished,
tokenCount,
plugin,
plugins,
model,
messageId: params.newMessageId || params.messageId,
};
if (files) {
update.files = files;
}
const message = await Message.findOneAndUpdate({ messageId, user: req.user.id }, update, {
upsert: true,
new: true,
});
const message = await Message.findOneAndUpdate(
{ messageId: params.messageId, user: req.user.id },
update,
{ upsert: true, new: true },
);
return message.toObject();
} catch (err) {
logger.error('Error saving message:', err);
if (metadata && metadata?.context) {
logger.info(`---\`saveMessage\` context: ${metadata.context}`);
}
logger.info(`---\`saveMessage\` context: ${metadata?.context}`);
throw err;
}
}
@@ -212,8 +164,8 @@ async function updateMessageText(req, { messageId, text }) {
*
* @async
* @function updateMessage
* @param {Object} message - The message object containing update data.
* @param {Object} req - The request object.
* @param {Object} message - The message object containing update data.
* @param {string} message.messageId - The unique identifier for the message.
* @param {string} [message.text] - The new text content of the message.
* @param {Object[]} [message.files] - The files associated with the message.

View File

@@ -1,4 +1,5 @@
const { model } = require('mongoose');
const { GLOBAL_PROJECT_NAME } = require('librechat-data-provider').Constants;
const projectSchema = require('~/models/schema/projectSchema');
const Project = model('Project', projectSchema);
@@ -33,7 +34,7 @@ const getProjectByName = async function (projectName, fieldsToSelect = null) {
const update = { $setOnInsert: { name: projectName } };
const options = {
new: true,
upsert: projectName === 'instance',
upsert: projectName === GLOBAL_PROJECT_NAME,
lean: true,
select: fieldsToSelect,
};
@@ -81,10 +82,55 @@ const removeGroupFromAllProjects = async (promptGroupId) => {
await Project.updateMany({}, { $pull: { promptGroupIds: promptGroupId } });
};
/**
* Add an array of agent IDs to a project's agentIds array, ensuring uniqueness.
*
* @param {string} projectId - The ID of the project to update.
* @param {string[]} agentIds - The array of agent IDs to add to the project.
* @returns {Promise<MongoProject>} The updated project document.
*/
const addAgentIdsToProject = async function (projectId, agentIds) {
return await Project.findByIdAndUpdate(
projectId,
{ $addToSet: { agentIds: { $each: agentIds } } },
{ new: true },
);
};
/**
* Remove an array of agent IDs from a project's agentIds array.
*
* @param {string} projectId - The ID of the project to update.
* @param {string[]} agentIds - The array of agent IDs to remove from the project.
* @returns {Promise<MongoProject>} The updated project document.
*/
const removeAgentIdsFromProject = async function (projectId, agentIds) {
return await Project.findByIdAndUpdate(
projectId,
{ $pull: { agentIds: { $in: agentIds } } },
{ new: true },
);
};
/**
* Remove an agent ID from all projects.
*
* @param {string} agentId - The ID of the agent to remove from projects.
* @returns {Promise<void>}
*/
const removeAgentFromAllProjects = async (agentId) => {
await Project.updateMany({}, { $pull: { agentIds: agentId } });
};
module.exports = {
getProjectById,
getProjectByName,
/* prompts */
addGroupIdsToProject,
removeGroupIdsFromProject,
removeGroupFromAllProjects,
/* agents */
addAgentIdsToProject,
removeAgentIdsFromProject,
removeAgentFromAllProjects,
};

View File

@@ -1,5 +1,5 @@
const { ObjectId } = require('mongodb');
const { SystemRoles, SystemCategories } = require('librechat-data-provider');
const { SystemRoles, SystemCategories, Constants } = require('librechat-data-provider');
const {
getProjectByName,
addGroupIdsToProject,
@@ -123,7 +123,7 @@ const getAllPromptGroups = async (req, filter) => {
let combinedQuery = query;
if (searchShared) {
const project = await getProjectByName('instance', 'promptGroupIds');
const project = await getProjectByName(Constants.GLOBAL_PROJECT_NAME, 'promptGroupIds');
if (project && project.promptGroupIds.length > 0) {
const projectQuery = { _id: { $in: project.promptGroupIds }, ...query };
delete projectQuery.author;
@@ -177,7 +177,7 @@ const getPromptGroups = async (req, filter) => {
if (searchShared) {
// const projects = req.user.projects || []; // TODO: handle multiple projects
const project = await getProjectByName('instance', 'promptGroupIds');
const project = await getProjectByName(Constants.GLOBAL_PROJECT_NAME, 'promptGroupIds');
if (project && project.promptGroupIds.length > 0) {
const projectQuery = { _id: { $in: project.promptGroupIds }, ...query };
delete projectQuery.author;

View File

@@ -1,6 +1,17 @@
const { SystemRoles, CacheKeys, roleDefaults } = require('librechat-data-provider');
const {
CacheKeys,
SystemRoles,
roleDefaults,
PermissionTypes,
removeNullishValues,
agentPermissionsSchema,
promptPermissionsSchema,
bookmarkPermissionsSchema,
multiConvoPermissionsSchema,
} = require('librechat-data-provider');
const getLogStores = require('~/cache/getLogStores');
const Role = require('~/models/schema/roleSchema');
const { logger } = require('~/config');
/**
* Retrieve a role by name and convert the found role document to a plain object.
@@ -61,9 +72,69 @@ const updateRoleByName = async function (roleName, updates) {
}
};
const permissionSchemas = {
[PermissionTypes.AGENTS]: agentPermissionsSchema,
[PermissionTypes.PROMPTS]: promptPermissionsSchema,
[PermissionTypes.BOOKMARKS]: bookmarkPermissionsSchema,
[PermissionTypes.MULTI_CONVO]: multiConvoPermissionsSchema,
};
/**
* Updates access permissions for a specific role and multiple permission types.
* @param {SystemRoles} roleName - The role to update.
* @param {Object.<PermissionTypes, Object.<Permissions, boolean>>} permissionsUpdate - Permissions to update and their values.
*/
async function updateAccessPermissions(roleName, permissionsUpdate) {
const updates = {};
for (const [permissionType, permissions] of Object.entries(permissionsUpdate)) {
if (permissionSchemas[permissionType]) {
updates[permissionType] = removeNullishValues(permissions);
}
}
if (Object.keys(updates).length === 0) {
return;
}
try {
const role = await getRoleByName(roleName);
if (!role) {
return;
}
const updatedPermissions = {};
let hasChanges = false;
for (const [permissionType, permissions] of Object.entries(updates)) {
const currentPermissions = role[permissionType] || {};
updatedPermissions[permissionType] = { ...currentPermissions };
for (const [permission, value] of Object.entries(permissions)) {
if (currentPermissions[permission] !== value) {
updatedPermissions[permissionType][permission] = value;
hasChanges = true;
logger.info(
`Updating '${roleName}' role ${permissionType} '${permission}' permission from ${currentPermissions[permission]} to: ${value}`,
);
}
}
}
if (hasChanges) {
await updateRoleByName(roleName, updatedPermissions);
logger.info(`Updated '${roleName}' role permissions`);
} else {
logger.info(`No changes needed for '${roleName}' role permissions`);
}
} catch (error) {
logger.error(`Failed to update ${roleName} role permissions:`, error);
}
}
/**
* Initialize default roles in the system.
* Creates the default roles (ADMIN, USER) if they don't exist in the database.
* Updates existing roles with new permission types if they're missing.
*
* @returns {Promise<void>}
*/
@@ -71,16 +142,30 @@ const initializeRoles = async function () {
const defaultRoles = [SystemRoles.ADMIN, SystemRoles.USER];
for (const roleName of defaultRoles) {
let role = await Role.findOne({ name: roleName }).select('name').lean();
let role = await Role.findOne({ name: roleName });
if (!role) {
// Create new role if it doesn't exist
role = new Role(roleDefaults[roleName]);
await role.save();
} else {
// Add missing permission types
let isUpdated = false;
for (const permType of Object.values(PermissionTypes)) {
if (!role[permType]) {
role[permType] = roleDefaults[roleName][permType];
isUpdated = true;
}
}
if (isUpdated) {
await role.save();
}
}
await role.save();
}
};
module.exports = {
getRoleByName,
initializeRoles,
updateRoleByName,
updateAccessPermissions,
};
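A brief sketch of calling `updateAccessPermissions` as wired above; the permission keys mirror the spec file that follows, and the require path is assumed.

// Sketch: enabling multi-convo and global prompt sharing for the USER role at startup.
const { SystemRoles, PermissionTypes, Permissions } = require('librechat-data-provider');
const { initializeRoles, updateAccessPermissions } = require('~/models/Role'); // path assumed

async function applyInterfacePermissions() {
  await initializeRoles(); // ensures ADMIN/USER roles exist with every permission type

  await updateAccessPermissions(SystemRoles.USER, {
    [PermissionTypes.MULTI_CONVO]: { [Permissions.USE]: true },
    [PermissionTypes.PROMPTS]: { [Permissions.SHARED_GLOBAL]: true },
    // Unknown permission types are skipped; nullish values are stripped before comparison.
  });
}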

420
api/models/Role.spec.js Normal file
View File

@@ -0,0 +1,420 @@
const mongoose = require('mongoose');
const { MongoMemoryServer } = require('mongodb-memory-server');
const {
SystemRoles,
PermissionTypes,
roleDefaults,
Permissions,
} = require('librechat-data-provider');
const { updateAccessPermissions, initializeRoles } = require('~/models/Role');
const getLogStores = require('~/cache/getLogStores');
const Role = require('~/models/schema/roleSchema');
// Mock the cache
jest.mock('~/cache/getLogStores', () => {
return jest.fn().mockReturnValue({
get: jest.fn(),
set: jest.fn(),
del: jest.fn(),
});
});
let mongoServer;
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
await mongoose.connect(mongoUri);
});
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
await Role.deleteMany({});
getLogStores.mockClear();
});
describe('updateAccessPermissions', () => {
it('should update permissions when changes are needed', async () => {
await new Role({
name: SystemRoles.USER,
[PermissionTypes.PROMPTS]: {
CREATE: true,
USE: true,
SHARED_GLOBAL: false,
},
}).save();
await updateAccessPermissions(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: {
CREATE: true,
USE: true,
SHARED_GLOBAL: true,
},
});
const updatedRole = await Role.findOne({ name: SystemRoles.USER }).lean();
expect(updatedRole[PermissionTypes.PROMPTS]).toEqual({
CREATE: true,
USE: true,
SHARED_GLOBAL: true,
});
});
it('should not update permissions when no changes are needed', async () => {
await new Role({
name: SystemRoles.USER,
[PermissionTypes.PROMPTS]: {
CREATE: true,
USE: true,
SHARED_GLOBAL: false,
},
}).save();
await updateAccessPermissions(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: {
CREATE: true,
USE: true,
SHARED_GLOBAL: false,
},
});
const updatedRole = await Role.findOne({ name: SystemRoles.USER }).lean();
expect(updatedRole[PermissionTypes.PROMPTS]).toEqual({
CREATE: true,
USE: true,
SHARED_GLOBAL: false,
});
});
it('should handle non-existent roles', async () => {
await updateAccessPermissions('NON_EXISTENT_ROLE', {
[PermissionTypes.PROMPTS]: {
CREATE: true,
},
});
const role = await Role.findOne({ name: 'NON_EXISTENT_ROLE' });
expect(role).toBeNull();
});
it('should update only specified permissions', async () => {
await new Role({
name: SystemRoles.USER,
[PermissionTypes.PROMPTS]: {
CREATE: true,
USE: true,
SHARED_GLOBAL: false,
},
}).save();
await updateAccessPermissions(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: {
SHARED_GLOBAL: true,
},
});
const updatedRole = await Role.findOne({ name: SystemRoles.USER }).lean();
expect(updatedRole[PermissionTypes.PROMPTS]).toEqual({
CREATE: true,
USE: true,
SHARED_GLOBAL: true,
});
});
it('should handle partial updates', async () => {
await new Role({
name: SystemRoles.USER,
[PermissionTypes.PROMPTS]: {
CREATE: true,
USE: true,
SHARED_GLOBAL: false,
},
}).save();
await updateAccessPermissions(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: {
USE: false,
},
});
const updatedRole = await Role.findOne({ name: SystemRoles.USER }).lean();
expect(updatedRole[PermissionTypes.PROMPTS]).toEqual({
CREATE: true,
USE: false,
SHARED_GLOBAL: false,
});
});
it('should update multiple permission types at once', async () => {
await new Role({
name: SystemRoles.USER,
[PermissionTypes.PROMPTS]: {
CREATE: true,
USE: true,
SHARED_GLOBAL: false,
},
[PermissionTypes.BOOKMARKS]: {
USE: true,
},
}).save();
await updateAccessPermissions(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { USE: false, SHARED_GLOBAL: true },
[PermissionTypes.BOOKMARKS]: { USE: false },
});
const updatedRole = await Role.findOne({ name: SystemRoles.USER }).lean();
expect(updatedRole[PermissionTypes.PROMPTS]).toEqual({
CREATE: true,
USE: false,
SHARED_GLOBAL: true,
});
expect(updatedRole[PermissionTypes.BOOKMARKS]).toEqual({
USE: false,
});
});
it('should handle updates for a single permission type', async () => {
await new Role({
name: SystemRoles.USER,
[PermissionTypes.PROMPTS]: {
CREATE: true,
USE: true,
SHARED_GLOBAL: false,
},
}).save();
await updateAccessPermissions(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { USE: false, SHARED_GLOBAL: true },
});
const updatedRole = await Role.findOne({ name: SystemRoles.USER }).lean();
expect(updatedRole[PermissionTypes.PROMPTS]).toEqual({
CREATE: true,
USE: false,
SHARED_GLOBAL: true,
});
});
it('should update MULTI_CONVO permissions', async () => {
await new Role({
name: SystemRoles.USER,
[PermissionTypes.MULTI_CONVO]: {
USE: false,
},
}).save();
await updateAccessPermissions(SystemRoles.USER, {
[PermissionTypes.MULTI_CONVO]: {
USE: true,
},
});
const updatedRole = await Role.findOne({ name: SystemRoles.USER }).lean();
expect(updatedRole[PermissionTypes.MULTI_CONVO]).toEqual({
USE: true,
});
});
it('should update MULTI_CONVO permissions along with other permission types', async () => {
await new Role({
name: SystemRoles.USER,
[PermissionTypes.PROMPTS]: {
CREATE: true,
USE: true,
SHARED_GLOBAL: false,
},
[PermissionTypes.MULTI_CONVO]: {
USE: false,
},
}).save();
await updateAccessPermissions(SystemRoles.USER, {
[PermissionTypes.PROMPTS]: { SHARED_GLOBAL: true },
[PermissionTypes.MULTI_CONVO]: { USE: true },
});
const updatedRole = await Role.findOne({ name: SystemRoles.USER }).lean();
expect(updatedRole[PermissionTypes.PROMPTS]).toEqual({
CREATE: true,
USE: true,
SHARED_GLOBAL: true,
});
expect(updatedRole[PermissionTypes.MULTI_CONVO]).toEqual({
USE: true,
});
});
it('should not update MULTI_CONVO permissions when no changes are needed', async () => {
await new Role({
name: SystemRoles.USER,
[PermissionTypes.MULTI_CONVO]: {
USE: true,
},
}).save();
await updateAccessPermissions(SystemRoles.USER, {
[PermissionTypes.MULTI_CONVO]: {
USE: true,
},
});
const updatedRole = await Role.findOne({ name: SystemRoles.USER }).lean();
expect(updatedRole[PermissionTypes.MULTI_CONVO]).toEqual({
USE: true,
});
});
});
describe('initializeRoles', () => {
beforeEach(async () => {
await Role.deleteMany({});
});
it('should create default roles if they do not exist', async () => {
await initializeRoles();
const adminRole = await Role.findOne({ name: SystemRoles.ADMIN }).lean();
const userRole = await Role.findOne({ name: SystemRoles.USER }).lean();
expect(adminRole).toBeTruthy();
expect(userRole).toBeTruthy();
// Check if all permission types exist
Object.values(PermissionTypes).forEach((permType) => {
expect(adminRole[permType]).toBeDefined();
expect(userRole[permType]).toBeDefined();
});
// Check if permissions match defaults (example for ADMIN role)
expect(adminRole[PermissionTypes.PROMPTS].SHARED_GLOBAL).toBe(true);
expect(adminRole[PermissionTypes.BOOKMARKS].USE).toBe(true);
expect(adminRole[PermissionTypes.AGENTS].CREATE).toBe(true);
});
it('should not modify existing permissions for existing roles', async () => {
const customUserRole = {
name: SystemRoles.USER,
[PermissionTypes.PROMPTS]: {
[Permissions.USE]: false,
[Permissions.CREATE]: true,
[Permissions.SHARED_GLOBAL]: true,
},
[PermissionTypes.BOOKMARKS]: {
[Permissions.USE]: false,
},
};
await new Role(customUserRole).save();
await initializeRoles();
const userRole = await Role.findOne({ name: SystemRoles.USER }).lean();
expect(userRole[PermissionTypes.PROMPTS]).toEqual(customUserRole[PermissionTypes.PROMPTS]);
expect(userRole[PermissionTypes.BOOKMARKS]).toEqual(customUserRole[PermissionTypes.BOOKMARKS]);
expect(userRole[PermissionTypes.AGENTS]).toBeDefined();
});
it('should add new permission types to existing roles', async () => {
const partialUserRole = {
name: SystemRoles.USER,
[PermissionTypes.PROMPTS]: roleDefaults[SystemRoles.USER][PermissionTypes.PROMPTS],
[PermissionTypes.BOOKMARKS]: roleDefaults[SystemRoles.USER][PermissionTypes.BOOKMARKS],
};
await new Role(partialUserRole).save();
await initializeRoles();
const userRole = await Role.findOne({ name: SystemRoles.USER }).lean();
expect(userRole[PermissionTypes.AGENTS]).toBeDefined();
expect(userRole[PermissionTypes.AGENTS].CREATE).toBeDefined();
expect(userRole[PermissionTypes.AGENTS].USE).toBeDefined();
expect(userRole[PermissionTypes.AGENTS].SHARED_GLOBAL).toBeDefined();
});
it('should handle multiple runs without duplicating or modifying data', async () => {
await initializeRoles();
await initializeRoles();
const adminRoles = await Role.find({ name: SystemRoles.ADMIN });
const userRoles = await Role.find({ name: SystemRoles.USER });
expect(adminRoles).toHaveLength(1);
expect(userRoles).toHaveLength(1);
const adminRole = adminRoles[0].toObject();
const userRole = userRoles[0].toObject();
// Check if all permission types exist
Object.values(PermissionTypes).forEach((permType) => {
expect(adminRole[permType]).toBeDefined();
expect(userRole[permType]).toBeDefined();
});
});
it('should update roles with missing permission types from roleDefaults', async () => {
const partialAdminRole = {
name: SystemRoles.ADMIN,
[PermissionTypes.PROMPTS]: {
[Permissions.USE]: false,
[Permissions.CREATE]: false,
[Permissions.SHARED_GLOBAL]: false,
},
[PermissionTypes.BOOKMARKS]: roleDefaults[SystemRoles.ADMIN][PermissionTypes.BOOKMARKS],
};
await new Role(partialAdminRole).save();
await initializeRoles();
const adminRole = await Role.findOne({ name: SystemRoles.ADMIN }).lean();
expect(adminRole[PermissionTypes.PROMPTS]).toEqual(partialAdminRole[PermissionTypes.PROMPTS]);
expect(adminRole[PermissionTypes.AGENTS]).toBeDefined();
expect(adminRole[PermissionTypes.AGENTS].CREATE).toBeDefined();
expect(adminRole[PermissionTypes.AGENTS].USE).toBeDefined();
expect(adminRole[PermissionTypes.AGENTS].SHARED_GLOBAL).toBeDefined();
});
it('should include MULTI_CONVO permissions when creating default roles', async () => {
await initializeRoles();
const adminRole = await Role.findOne({ name: SystemRoles.ADMIN }).lean();
const userRole = await Role.findOne({ name: SystemRoles.USER }).lean();
expect(adminRole[PermissionTypes.MULTI_CONVO]).toBeDefined();
expect(userRole[PermissionTypes.MULTI_CONVO]).toBeDefined();
// Check if MULTI_CONVO permissions match defaults
expect(adminRole[PermissionTypes.MULTI_CONVO].USE).toBe(
roleDefaults[SystemRoles.ADMIN][PermissionTypes.MULTI_CONVO].USE,
);
expect(userRole[PermissionTypes.MULTI_CONVO].USE).toBe(
roleDefaults[SystemRoles.USER][PermissionTypes.MULTI_CONVO].USE,
);
});
it('should add MULTI_CONVO permissions to existing roles without them', async () => {
const partialUserRole = {
name: SystemRoles.USER,
[PermissionTypes.PROMPTS]: roleDefaults[SystemRoles.USER][PermissionTypes.PROMPTS],
[PermissionTypes.BOOKMARKS]: roleDefaults[SystemRoles.USER][PermissionTypes.BOOKMARKS],
};
await new Role(partialUserRole).save();
await initializeRoles();
const userRole = await Role.findOne({ name: SystemRoles.USER }).lean();
expect(userRole[PermissionTypes.MULTI_CONVO]).toBeDefined();
expect(userRole[PermissionTypes.MULTI_CONVO].USE).toBeDefined();
});
});

api/models/Token.js (new file, 117 lines)

@@ -0,0 +1,117 @@
const tokenSchema = require('./schema/tokenSchema');
const mongoose = require('mongoose');
const { logger } = require('~/config');
/**
* Token model.
* @type {mongoose.Model}
*/
const Token = mongoose.model('Token', tokenSchema);
/**
* Creates a new Token instance.
* @param {Object} tokenData - The data for the new Token.
* @param {mongoose.Types.ObjectId} tokenData.userId - The user's ID. It is required.
* @param {String} tokenData.email - The user's email.
* @param {String} tokenData.token - The token. It is required.
* @param {Number} tokenData.expiresIn - The number of seconds until the token expires.
* @returns {Promise<mongoose.Document>} The new Token instance.
* @throws Will throw an error if token creation fails.
*/
async function createToken(tokenData) {
try {
const currentTime = new Date();
const expiresAt = new Date(currentTime.getTime() + tokenData.expiresIn * 1000);
const newTokenData = {
...tokenData,
createdAt: currentTime,
expiresAt,
};
const newToken = new Token(newTokenData);
return await newToken.save();
} catch (error) {
logger.debug('An error occurred while creating token:', error);
throw error;
}
}
/**
* Finds a Token document that matches the provided query.
* @param {Object} query - The query to match against.
* @param {mongoose.Types.ObjectId|String} query.userId - The ID of the user.
* @param {String} query.token - The token value.
* @param {String} query.email - The email of the user.
* @returns {Promise<Object|null>} The matched Token document, or null if not found.
* @throws Will throw an error if the find operation fails.
*/
async function findToken(query) {
try {
const conditions = [];
if (query.userId) {
conditions.push({ userId: query.userId });
}
if (query.token) {
conditions.push({ token: query.token });
}
if (query.email) {
conditions.push({ email: query.email });
}
const token = await Token.findOne({
$and: conditions,
}).lean();
return token;
} catch (error) {
logger.debug('An error occurred while finding token:', error);
throw error;
}
}
/**
* Updates a Token document that matches the provided query.
* @param {Object} query - The query to match against.
* @param {mongoose.Types.ObjectId|String} query.userId - The ID of the user.
* @param {String} query.token - The token value.
* @param {Object} updateData - The data to update the Token with.
* @returns {Promise<mongoose.Document|null>} The updated Token document, or null if not found.
* @throws Will throw an error if the update operation fails.
*/
async function updateToken(query, updateData) {
try {
return await Token.findOneAndUpdate(query, updateData, { new: true });
} catch (error) {
logger.debug('An error occurred while updating token:', error);
throw error;
}
}
/**
* Deletes all Token documents that match the provided token, user ID, or email.
* @param {Object} query - The query to match against.
* @param {mongoose.Types.ObjectId|String} query.userId - The ID of the user.
* @param {String} query.token - The token value.
* @param {String} query.email - The email of the user.
* @returns {Promise<Object>} The result of the delete operation.
* @throws Will throw an error if the delete operation fails.
*/
async function deleteTokens(query) {
try {
return await Token.deleteMany({
$or: [{ userId: query.userId }, { token: query.token }, { email: query.email }],
});
} catch (error) {
logger.debug('An error occurred while deleting tokens:', error);
throw error;
}
}
module.exports = {
createToken,
findToken,
updateToken,
deleteTokens,
};
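As a usage sketch (the reset-token framing is illustrative only): createToken derives expiresAt from expiresIn, and findToken matches on whichever of userId, token, or email is supplied.
const { createToken, findToken } = require('~/models/Token');
async function issueAndVerify(userId, email, token) {
  // expires 15 minutes from now; the TTL index on expiresAt removes it afterwards
  await createToken({ userId, email, token, expiresIn: 900 });
  // all provided fields are AND-ed together in the query
  return await findToken({ userId, token });
}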


@@ -1,12 +1,12 @@
const mongoose = require('mongoose');
const { isEnabled } = require('../server/utils/handleText');
const { isEnabled } = require('~/server/utils/handleText');
const transactionSchema = require('./schema/transaction');
const { getMultiplier } = require('./tx');
const { getMultiplier, getCacheMultiplier } = require('./tx');
const { logger } = require('~/config');
const Balance = require('./Balance');
const cancelRate = 1.15;
// Method to calculate and set the tokenValue for a transaction
/** Method to calculate and set the tokenValue for a transaction */
transactionSchema.methods.calculateTokenValue = function () {
if (!this.valueKey || !this.tokenType) {
this.tokenValue = this.rawAmount;
@@ -21,15 +21,17 @@ transactionSchema.methods.calculateTokenValue = function () {
}
};
// Static method to create a transaction and update the balance
transactionSchema.statics.create = async function (transactionData) {
/**
* Static method to create a transaction and update the balance
* @param {txData} txData - Transaction data.
*/
transactionSchema.statics.create = async function (txData) {
const Transaction = this;
const transaction = new Transaction(transactionData);
transaction.endpointTokenConfig = transactionData.endpointTokenConfig;
const transaction = new Transaction(txData);
transaction.endpointTokenConfig = txData.endpointTokenConfig;
transaction.calculateTokenValue();
// Save the transaction
await transaction.save();
if (!isEnabled(process.env.CHECK_BALANCE)) {
@@ -57,6 +59,109 @@ transactionSchema.statics.create = async function (transactionData) {
};
};
/**
* Static method to create a structured transaction and update the balance
* @param {txData} txData - Transaction data.
*/
transactionSchema.statics.createStructured = async function (txData) {
const Transaction = this;
const transaction = new Transaction({
...txData,
endpointTokenConfig: txData.endpointTokenConfig,
});
transaction.calculateStructuredTokenValue();
await transaction.save();
if (!isEnabled(process.env.CHECK_BALANCE)) {
return;
}
let balance = await Balance.findOne({ user: transaction.user }).lean();
let incrementValue = transaction.tokenValue;
if (balance && balance?.tokenCredits + incrementValue < 0) {
incrementValue = -balance.tokenCredits;
}
balance = await Balance.findOneAndUpdate(
{ user: transaction.user },
{ $inc: { tokenCredits: incrementValue } },
{ upsert: true, new: true },
).lean();
return {
rate: transaction.rate,
user: transaction.user.toString(),
balance: balance.tokenCredits,
[transaction.tokenType]: incrementValue,
};
};
/** Method to calculate token value for structured tokens */
transactionSchema.methods.calculateStructuredTokenValue = function () {
if (!this.tokenType) {
this.tokenValue = this.rawAmount;
return;
}
const { model, endpointTokenConfig } = this;
if (this.tokenType === 'prompt') {
const inputMultiplier = getMultiplier({ tokenType: 'prompt', model, endpointTokenConfig });
const writeMultiplier =
getCacheMultiplier({ cacheType: 'write', model, endpointTokenConfig }) ?? inputMultiplier;
const readMultiplier =
getCacheMultiplier({ cacheType: 'read', model, endpointTokenConfig }) ?? inputMultiplier;
this.rateDetail = {
input: inputMultiplier,
write: writeMultiplier,
read: readMultiplier,
};
const totalPromptTokens =
Math.abs(this.inputTokens || 0) +
Math.abs(this.writeTokens || 0) +
Math.abs(this.readTokens || 0);
if (totalPromptTokens > 0) {
this.rate =
(Math.abs(inputMultiplier * (this.inputTokens || 0)) +
Math.abs(writeMultiplier * (this.writeTokens || 0)) +
Math.abs(readMultiplier * (this.readTokens || 0))) /
totalPromptTokens;
} else {
this.rate = Math.abs(inputMultiplier); // Default to input rate if no tokens
}
this.tokenValue = -(
Math.abs(this.inputTokens || 0) * inputMultiplier +
Math.abs(this.writeTokens || 0) * writeMultiplier +
Math.abs(this.readTokens || 0) * readMultiplier
);
this.rawAmount = -totalPromptTokens;
} else if (this.tokenType === 'completion') {
const multiplier = getMultiplier({ tokenType: this.tokenType, model, endpointTokenConfig });
this.rate = Math.abs(multiplier);
this.tokenValue = -Math.abs(this.rawAmount) * multiplier;
this.rawAmount = -Math.abs(this.rawAmount);
}
if (this.context && this.tokenType === 'completion' && this.context === 'incomplete') {
this.tokenValue = Math.ceil(this.tokenValue * cancelRate);
this.rate *= cancelRate;
if (this.rateDetail) {
this.rateDetail = Object.fromEntries(
Object.entries(this.rateDetail).map(([k, v]) => [k, v * cancelRate]),
);
}
}
};
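To make the structured arithmetic concrete, take a claude-3-5-sonnet prompt with 11 input, 140,522 write, and 0 read tokens at the rates defined in tx.js (3, 3.75, and 0.3 per million tokens):
// tokenValue = -(11 * 3 + 140522 * 3.75 + 0 * 0.3) = -526990.5 token credits (about $0.53)
// rate       = 526990.5 / (11 + 140522 + 0)        ≈ 3.75
// rawAmount  = -(11 + 140522 + 0)                  = -140533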
const Transaction = mongoose.model('Transaction', transactionSchema);
/**


@@ -0,0 +1,348 @@
const mongoose = require('mongoose');
const { MongoMemoryServer } = require('mongodb-memory-server');
const Balance = require('./Balance');
const { spendTokens, spendStructuredTokens } = require('./spendTokens');
const { getMultiplier, getCacheMultiplier } = require('./tx');
let mongoServer;
beforeAll(async () => {
mongoServer = await MongoMemoryServer.create();
const mongoUri = mongoServer.getUri();
await mongoose.connect(mongoUri);
});
afterAll(async () => {
await mongoose.disconnect();
await mongoServer.stop();
});
beforeEach(async () => {
await mongoose.connection.dropDatabase();
});
describe('Regular Token Spending Tests', () => {
test('Balance should decrease when spending tokens with spendTokens', async () => {
// Arrange
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000; // $10.00
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'gpt-3.5-turbo';
const txData = {
user: userId,
conversationId: 'test-conversation-id',
model,
context: 'test',
endpointTokenConfig: null,
};
const tokenUsage = {
promptTokens: 100,
completionTokens: 50,
};
// Act
process.env.CHECK_BALANCE = 'true';
await spendTokens(txData, tokenUsage);
// Assert
console.log('Initial Balance:', initialBalance);
const updatedBalance = await Balance.findOne({ user: userId });
console.log('Updated Balance:', updatedBalance.tokenCredits);
const promptMultiplier = getMultiplier({ model, tokenType: 'prompt' });
const completionMultiplier = getMultiplier({ model, tokenType: 'completion' });
const expectedPromptCost = tokenUsage.promptTokens * promptMultiplier;
const expectedCompletionCost = tokenUsage.completionTokens * completionMultiplier;
const expectedTotalCost = expectedPromptCost + expectedCompletionCost;
const expectedBalance = initialBalance - expectedTotalCost;
expect(updatedBalance.tokenCredits).toBeLessThan(initialBalance);
expect(updatedBalance.tokenCredits).toBeCloseTo(expectedBalance, 0);
console.log('Expected Total Cost:', expectedTotalCost);
console.log('Actual Balance Decrease:', initialBalance - updatedBalance.tokenCredits);
});
test('spendTokens should handle zero completion tokens', async () => {
// Arrange
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000; // $10.00
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'gpt-3.5-turbo';
const txData = {
user: userId,
conversationId: 'test-conversation-id',
model,
context: 'test',
endpointTokenConfig: null,
};
const tokenUsage = {
promptTokens: 100,
completionTokens: 0,
};
// Act
process.env.CHECK_BALANCE = 'true';
await spendTokens(txData, tokenUsage);
// Assert
const updatedBalance = await Balance.findOne({ user: userId });
const promptMultiplier = getMultiplier({ model, tokenType: 'prompt' });
const expectedCost = tokenUsage.promptTokens * promptMultiplier;
expect(updatedBalance.tokenCredits).toBeCloseTo(initialBalance - expectedCost, 0);
console.log('Initial Balance:', initialBalance);
console.log('Updated Balance:', updatedBalance.tokenCredits);
console.log('Expected Cost:', expectedCost);
});
test('spendTokens should handle undefined token counts', async () => {
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000; // $10.00
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'gpt-3.5-turbo';
const txData = {
user: userId,
conversationId: 'test-conversation-id',
model,
context: 'test',
endpointTokenConfig: null,
};
const tokenUsage = {};
const result = await spendTokens(txData, tokenUsage);
expect(result).toBeUndefined();
});
test('spendTokens should handle only prompt tokens', async () => {
const userId = new mongoose.Types.ObjectId();
const initialBalance = 10000000; // $10.00
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'gpt-3.5-turbo';
const txData = {
user: userId,
conversationId: 'test-conversation-id',
model,
context: 'test',
endpointTokenConfig: null,
};
const tokenUsage = { promptTokens: 100 };
await spendTokens(txData, tokenUsage);
const updatedBalance = await Balance.findOne({ user: userId });
const promptMultiplier = getMultiplier({ model, tokenType: 'prompt' });
const expectedCost = 100 * promptMultiplier;
expect(updatedBalance.tokenCredits).toBeCloseTo(initialBalance - expectedCost, 0);
});
});
describe('Structured Token Spending Tests', () => {
test('Balance should decrease and rawAmount should be set when spending a large number of structured tokens', async () => {
// Arrange
const userId = new mongoose.Types.ObjectId();
const initialBalance = 17613154.55; // $17.61
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'claude-3-5-sonnet';
const txData = {
user: userId,
conversationId: 'c23a18da-706c-470a-ac28-ec87ed065199',
model,
context: 'message',
endpointTokenConfig: null, // We'll use the default rates
};
const tokenUsage = {
promptTokens: {
input: 11,
write: 140522,
read: 0,
},
completionTokens: 5,
};
// Get the actual multipliers
const promptMultiplier = getMultiplier({ model, tokenType: 'prompt' });
const completionMultiplier = getMultiplier({ model, tokenType: 'completion' });
const writeMultiplier = getCacheMultiplier({ model, cacheType: 'write' });
const readMultiplier = getCacheMultiplier({ model, cacheType: 'read' });
console.log('Multipliers:', {
promptMultiplier,
completionMultiplier,
writeMultiplier,
readMultiplier,
});
// Act
process.env.CHECK_BALANCE = 'true';
const result = await spendStructuredTokens(txData, tokenUsage);
// Assert
console.log('Initial Balance:', initialBalance);
console.log('Updated Balance:', result.completion.balance);
console.log('Transaction Result:', result);
const expectedPromptCost =
tokenUsage.promptTokens.input * promptMultiplier +
tokenUsage.promptTokens.write * writeMultiplier +
tokenUsage.promptTokens.read * readMultiplier;
const expectedCompletionCost = tokenUsage.completionTokens * completionMultiplier;
const expectedTotalCost = expectedPromptCost + expectedCompletionCost;
const expectedBalance = initialBalance - expectedTotalCost;
console.log('Expected Cost:', expectedTotalCost);
console.log('Expected Balance:', expectedBalance);
expect(result.completion.balance).toBeLessThan(initialBalance);
// Allow for a small difference (e.g., 100 token credits, which is $0.0001)
const allowedDifference = 100;
expect(Math.abs(result.completion.balance - expectedBalance)).toBeLessThan(allowedDifference);
// Check if the decrease is approximately as expected
const balanceDecrease = initialBalance - result.completion.balance;
expect(balanceDecrease).toBeCloseTo(expectedTotalCost, 0);
// Check token values
const expectedPromptTokenValue = -(
tokenUsage.promptTokens.input * promptMultiplier +
tokenUsage.promptTokens.write * writeMultiplier +
tokenUsage.promptTokens.read * readMultiplier
);
const expectedCompletionTokenValue = -tokenUsage.completionTokens * completionMultiplier;
expect(result.prompt.prompt).toBeCloseTo(expectedPromptTokenValue, 1);
expect(result.completion.completion).toBe(expectedCompletionTokenValue);
console.log('Expected prompt tokenValue:', expectedPromptTokenValue);
console.log('Actual prompt tokenValue:', result.prompt.prompt);
console.log('Expected completion tokenValue:', expectedCompletionTokenValue);
console.log('Actual completion tokenValue:', result.completion.completion);
});
test('should handle zero completion tokens in structured spending', async () => {
const userId = new mongoose.Types.ObjectId();
const initialBalance = 17613154.55;
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'claude-3-5-sonnet';
const txData = {
user: userId,
conversationId: 'test-convo',
model,
context: 'message',
};
const tokenUsage = {
promptTokens: {
input: 10,
write: 100,
read: 5,
},
completionTokens: 0,
};
process.env.CHECK_BALANCE = 'true';
const result = await spendStructuredTokens(txData, tokenUsage);
expect(result.prompt).toBeDefined();
expect(result.completion).toBeUndefined();
expect(result.prompt.prompt).toBeLessThan(0);
});
test('should handle only prompt tokens in structured spending', async () => {
const userId = new mongoose.Types.ObjectId();
const initialBalance = 17613154.55;
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'claude-3-5-sonnet';
const txData = {
user: userId,
conversationId: 'test-convo',
model,
context: 'message',
};
const tokenUsage = {
promptTokens: {
input: 10,
write: 100,
read: 5,
},
};
process.env.CHECK_BALANCE = 'true';
const result = await spendStructuredTokens(txData, tokenUsage);
expect(result.prompt).toBeDefined();
expect(result.completion).toBeUndefined();
expect(result.prompt.prompt).toBeLessThan(0);
});
test('should handle undefined token counts in structured spending', async () => {
const userId = new mongoose.Types.ObjectId();
const initialBalance = 17613154.55;
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'claude-3-5-sonnet';
const txData = {
user: userId,
conversationId: 'test-convo',
model,
context: 'message',
};
const tokenUsage = {};
process.env.CHECK_BALANCE = 'true';
const result = await spendStructuredTokens(txData, tokenUsage);
expect(result).toEqual({
prompt: undefined,
completion: undefined,
});
});
test('should handle incomplete context for completion tokens', async () => {
const userId = new mongoose.Types.ObjectId();
const initialBalance = 17613154.55;
await Balance.create({ user: userId, tokenCredits: initialBalance });
const model = 'claude-3-5-sonnet';
const txData = {
user: userId,
conversationId: 'test-convo',
model,
context: 'incomplete',
};
const tokenUsage = {
promptTokens: {
input: 10,
write: 100,
read: 5,
},
completionTokens: 50,
};
process.env.CHECK_BALANCE = 'true';
const result = await spendStructuredTokens(txData, tokenUsage);
expect(result.completion.completion).toBeCloseTo(-50 * 15 * 1.15, 0); // Assuming multiplier is 15 and cancelRate is 1.15
});
});


@@ -1,11 +1,3 @@
const {
getMessages,
saveMessage,
recordMessage,
updateMessage,
deleteMessagesSince,
deleteMessages,
} = require('./Message');
const {
comparePassword,
deleteUserById,
@@ -16,8 +8,6 @@ const {
countUsers,
findUser,
} = require('./userMethods');
const { getConvoTitle, getConvo, saveConvo, deleteConvos } = require('./Conversation');
const { getPreset, getPresets, savePreset, deletePresets } = require('./Preset');
const {
findFileById,
createFile,
@@ -27,26 +17,40 @@ const {
getFiles,
updateFileUsage,
} = require('./File');
const Key = require('./Key');
const User = require('./User');
const {
getMessages,
saveMessage,
recordMessage,
updateMessage,
deleteMessagesSince,
deleteMessages,
} = require('./Message');
const { getConvoTitle, getConvo, saveConvo, deleteConvos } = require('./Conversation');
const { getPreset, getPresets, savePreset, deletePresets } = require('./Preset');
const { createToken, findToken, updateToken, deleteTokens } = require('./Token');
const Session = require('./Session');
const Balance = require('./Balance');
const User = require('./User');
const Key = require('./Key');
module.exports = {
User,
Key,
Session,
Balance,
comparePassword,
deleteUserById,
generateToken,
getUserById,
countUsers,
createUser,
updateUser,
createUser,
countUsers,
findUser,
findFileById,
createFile,
updateFile,
deleteFile,
deleteFiles,
getFiles,
updateFileUsage,
getMessages,
saveMessage,
recordMessage,
@@ -64,11 +68,13 @@ module.exports = {
savePreset,
deletePresets,
findFileById,
createFile,
updateFile,
deleteFile,
deleteFiles,
getFiles,
updateFileUsage,
createToken,
findToken,
updateToken,
deleteTokens,
User,
Key,
Session,
Balance,
};

api/models/inviteUser.js (new file, 70 lines)

@@ -0,0 +1,70 @@
const crypto = require('crypto');
const bcrypt = require('bcryptjs');
const mongoose = require('mongoose');
const { createToken, findToken } = require('./Token');
const logger = require('~/config/winston');
/**
* @module inviteUser
* @description This module provides functions to create and get user invites
*/
/**
* @function createInvite
* @description This function creates a new user invite
* @param {string} email - The email of the user to invite
* @returns {Promise<String|Object>} A promise that resolves to the URL-encoded invite token, or an error object if creation fails
* @throws {Error} If there is an error creating the invite
*/
const createInvite = async (email) => {
try {
let token = crypto.randomBytes(32).toString('hex');
const hash = bcrypt.hashSync(token, 10);
const encodedToken = encodeURIComponent(token);
const fakeUserId = new mongoose.Types.ObjectId();
await createToken({
userId: fakeUserId,
email,
token: hash,
createdAt: Date.now(),
expiresIn: 604800,
});
return encodedToken;
} catch (error) {
logger.error('[createInvite] Error creating invite', error);
return { message: 'Error creating invite' };
}
};
/**
* @function getInvite
* @description This function retrieves a user invite
* @param {string} encodedToken - The token of the invite to retrieve
* @param {string} email - The email of the user to validate
* @returns {Promise<Object>} A promise that resolves to the retrieved invite document, or an object with error and message set if retrieval fails
* @throws {Error} If there is an error retrieving the invite, if the invite does not exist, or if the email does not match
*/
const getInvite = async (encodedToken, email) => {
try {
const token = decodeURIComponent(encodedToken);
const hash = bcrypt.hashSync(token, 10);
const invite = await findToken({ token: hash, email });
if (!invite) {
throw new Error('Invite not found or email does not match');
}
return invite;
} catch (error) {
logger.error('[getInvite] Error getting invite', error);
return { error: true, message: error.message };
}
};
module.exports = {
createInvite,
getInvite,
};
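A possible end-to-end flow (the wrapper name is illustrative; only createInvite and getInvite come from this file):
const { createInvite, getInvite } = require('~/models/inviteUser');
async function inviteAndRedeem(email) {
  // the encoded token is typically sent to the invitee as a registration link parameter
  const encodedToken = await createInvite(email);
  // later, when the invitee follows the link, resolve the stored invite
  return await getInvite(encodedToken, email);
}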


@@ -39,6 +39,7 @@ const actionSchema = new Schema({
default: 'action_prototype',
},
settings: Schema.Types.Mixed,
agent_id: String,
assistant_id: String,
metadata: {
api_key: String, // private, encrypted


@@ -0,0 +1,71 @@
const mongoose = require('mongoose');
const agentSchema = mongoose.Schema(
{
id: {
type: String,
index: true,
required: true,
},
name: {
type: String,
},
description: {
type: String,
},
instructions: {
type: String,
},
avatar: {
type: {
filepath: String,
source: String,
},
default: undefined,
},
provider: {
type: String,
required: true,
},
model: {
type: String,
required: true,
},
model_parameters: {
type: Object,
},
access_level: {
type: Number,
},
tools: {
type: [String],
default: undefined,
},
tool_kwargs: {
type: [{ type: mongoose.Schema.Types.Mixed }],
},
file_ids: {
type: [String],
default: undefined,
},
actions: {
type: [String],
default: undefined,
},
author: {
type: mongoose.Schema.Types.ObjectId,
ref: 'User',
required: true,
},
projectIds: {
type: [mongoose.Schema.Types.ObjectId],
ref: 'Project',
index: true,
},
},
{
timestamps: true,
},
);
module.exports = agentSchema;
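Only the schema is exported here, so a consumer is expected to register the model itself, roughly as follows (the require path is an assumption, since the file path is not shown in this hunk):
const mongoose = require('mongoose');
const agentSchema = require('./schema/agent'); // assumed path
// reuse an existing registration if the model was already compiled
const Agent = mongoose.models.Agent || mongoose.model('Agent', agentSchema);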


@@ -19,6 +19,10 @@ const assistantSchema = mongoose.Schema(
},
default: undefined,
},
conversation_starters: {
type: [String],
default: [],
},
access_level: {
type: Number,
},


@@ -13,6 +13,11 @@ const conversationPreset = {
type: String,
required: false,
},
// for bedrock only
region: {
type: String,
required: false,
},
// for azureOpenAI, openAI only
chatGptLabel: {
type: String,
@@ -74,6 +79,13 @@ const conversationPreset = {
resendImages: {
type: Boolean,
},
/* Anthropic only */
promptCache: {
type: Boolean,
},
system: {
type: String,
},
// files
resendFiles: {
type: Boolean,


@@ -21,6 +21,11 @@ const projectSchema = new Schema(
ref: 'PromptGroup',
default: [],
},
agentIds: {
type: [String],
ref: 'Agent',
default: [],
},
},
{
timestamps: true,


@@ -8,6 +8,12 @@ const roleSchema = new mongoose.Schema({
unique: true,
index: true,
},
[PermissionTypes.BOOKMARKS]: {
[Permissions.USE]: {
type: Boolean,
default: true,
},
},
[PermissionTypes.PROMPTS]: {
[Permissions.SHARED_GLOBAL]: {
type: Boolean,
@@ -22,6 +28,26 @@ const roleSchema = new mongoose.Schema({
default: true,
},
},
[PermissionTypes.AGENTS]: {
[Permissions.SHARED_GLOBAL]: {
type: Boolean,
default: false,
},
[Permissions.USE]: {
type: Boolean,
default: true,
},
[Permissions.CREATE]: {
type: Boolean,
default: true,
},
},
[PermissionTypes.MULTI_CONVO]: {
[Permissions.USE]: {
type: Boolean,
default: true,
},
},
});
const Role = mongoose.model('Role', roleSchema);
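With the new permission types in place, a UI gate can read straight off a role document; for example (a sketch, not code from this changeset):
const { PermissionTypes, Permissions } = require('librechat-data-provider');
// true unless an admin has turned the permission off for the role
const canUseMultiConvo = (role) =>
  Boolean(role?.[PermissionTypes.MULTI_CONVO]?.[Permissions.USE]);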


@@ -18,8 +18,13 @@ const tokenSchema = new Schema({
type: Date,
required: true,
default: Date.now,
expires: 900,
},
expiresAt: {
type: Date,
required: true,
},
});
module.exports = mongoose.model('Token', tokenSchema);
tokenSchema.index({ expiresAt: 1 }, { expireAfterSeconds: 0 });
module.exports = tokenSchema;
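Model registration now lives with the consumer rather than the schema; api/models/Token.js, shown earlier in this changeset, picks the schema up like so:
const tokenSchema = require('./schema/tokenSchema');
const mongoose = require('mongoose');
const Token = mongoose.model('Token', tokenSchema);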


@@ -30,6 +30,9 @@ const transactionSchema = mongoose.Schema(
rate: Number,
rawAmount: Number,
tokenValue: Number,
inputTokens: { type: Number },
writeTokens: { type: Number },
readTokens: { type: Number },
},
{
timestamps: true,


@@ -122,7 +122,12 @@ const userSchema = mongoose.Schema(
type: Date,
expires: 604800, // 7 days in seconds
},
termsAccepted: {
type: Boolean,
default: false,
},
},
{ timestamps: true },
);


@@ -11,7 +11,7 @@ const { logger } = require('~/config');
* @param {String} txData.conversationId - The ID of the conversation.
* @param {String} txData.model - The model name.
* @param {String} txData.context - The context in which the transaction is made.
* @param {String} [txData.endpointTokenConfig] - The current endpoint token config.
* @param {EndpointTokenConfig} [txData.endpointTokenConfig] - The current endpoint token config.
* @param {String} [txData.valueKey] - The value key (optional).
* @param {Object} tokenUsage - The number of tokens used.
* @param {Number} tokenUsage.promptTokens - The number of prompt tokens used.
@@ -32,38 +32,109 @@ const spendTokens = async (txData, tokenUsage) => {
);
let prompt, completion;
try {
if (promptTokens >= 0) {
if (promptTokens !== undefined) {
prompt = await Transaction.create({
...txData,
tokenType: 'prompt',
rawAmount: -promptTokens,
rawAmount: -Math.max(promptTokens, 0),
});
}
if (!completionTokens && isNaN(completionTokens)) {
logger.debug('[spendTokens] !completionTokens', { prompt, completion });
return;
if (completionTokens !== undefined) {
completion = await Transaction.create({
...txData,
tokenType: 'completion',
rawAmount: -Math.max(completionTokens, 0),
});
}
completion = await Transaction.create({
...txData,
tokenType: 'completion',
rawAmount: -completionTokens,
});
prompt &&
completion &&
if (prompt || completion) {
logger.debug('[spendTokens] Transaction data record against balance:', {
user: txData.user,
prompt: prompt.prompt,
promptRate: prompt.rate,
completion: completion.completion,
completionRate: completion.rate,
balance: completion.balance,
prompt: prompt?.prompt,
promptRate: prompt?.rate,
completion: completion?.completion,
completionRate: completion?.rate,
balance: completion?.balance ?? prompt?.balance,
});
} else {
logger.debug('[spendTokens] No transactions incurred against balance');
}
} catch (err) {
logger.error('[spendTokens]', err);
}
};
module.exports = spendTokens;
/**
* Creates transactions to record the spending of structured tokens.
*
* @function
* @async
* @param {Object} txData - Transaction data.
* @param {mongoose.Schema.Types.ObjectId} txData.user - The user ID.
* @param {String} txData.conversationId - The ID of the conversation.
* @param {String} txData.model - The model name.
* @param {String} txData.context - The context in which the transaction is made.
* @param {EndpointTokenConfig} [txData.endpointTokenConfig] - The current endpoint token config.
* @param {String} [txData.valueKey] - The value key (optional).
* @param {Object} tokenUsage - The number of tokens used.
* @param {Object} tokenUsage.promptTokens - The number of prompt tokens used.
* @param {Number} tokenUsage.promptTokens.input - The number of input tokens.
* @param {Number} tokenUsage.promptTokens.write - The number of write tokens.
* @param {Number} tokenUsage.promptTokens.read - The number of read tokens.
* @param {Number} tokenUsage.completionTokens - The number of completion tokens used.
* @returns {Promise<void>} - Returns nothing.
* @throws {Error} - Throws an error if there's an issue creating the transactions.
*/
const spendStructuredTokens = async (txData, tokenUsage) => {
const { promptTokens, completionTokens } = tokenUsage;
logger.debug(
`[spendStructuredTokens] conversationId: ${txData.conversationId}${
txData?.context ? ` | Context: ${txData?.context}` : ''
} | Token usage: `,
{
promptTokens,
completionTokens,
},
);
let prompt, completion;
try {
if (promptTokens) {
const { input = 0, write = 0, read = 0 } = promptTokens;
prompt = await Transaction.createStructured({
...txData,
tokenType: 'prompt',
inputTokens: -input,
writeTokens: -write,
readTokens: -read,
});
}
if (completionTokens) {
completion = await Transaction.create({
...txData,
tokenType: 'completion',
rawAmount: -completionTokens,
});
}
if (prompt || completion) {
logger.debug('[spendStructuredTokens] Transaction data record against balance:', {
user: txData.user,
prompt: prompt?.prompt,
promptRate: prompt?.rate,
completion: completion?.completion,
completionRate: completion?.rate,
balance: completion?.balance ?? prompt?.balance,
});
} else {
logger.debug('[spendStructuredTokens] No transactions incurred against balance');
}
} catch (err) {
logger.error('[spendStructuredTokens]', err);
}
return { prompt, completion };
};
module.exports = { spendTokens, spendStructuredTokens };
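A hypothetical call site covering both spenders (the flat token counts are made up; the structured figures mirror the spec above):
const { spendTokens, spendStructuredTokens } = require('~/models/spendTokens');
async function recordUsage(user, conversationId) {
  // flat prompt/completion counts
  await spendTokens(
    { user, conversationId, model: 'gpt-4o', context: 'message' },
    { promptTokens: 1200, completionTokens: 300 },
  );
  // structured counts for prompt caching (input/write/read)
  await spendStructuredTokens(
    { user, conversationId, model: 'claude-3-5-sonnet-20240620', context: 'message' },
    { promptTokens: { input: 11, write: 140522, read: 0 }, completionTokens: 5 },
  );
}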


@@ -0,0 +1,197 @@
const mongoose = require('mongoose');
jest.mock('./Transaction', () => ({
Transaction: {
create: jest.fn(),
createStructured: jest.fn(),
},
}));
jest.mock('./Balance', () => ({
findOne: jest.fn(),
findOneAndUpdate: jest.fn(),
}));
jest.mock('~/config', () => ({
logger: {
debug: jest.fn(),
error: jest.fn(),
},
}));
// Import after mocking
const { spendTokens, spendStructuredTokens } = require('./spendTokens');
const { Transaction } = require('./Transaction');
const Balance = require('./Balance');
describe('spendTokens', () => {
beforeEach(() => {
jest.clearAllMocks();
process.env.CHECK_BALANCE = 'true';
});
it('should create transactions for both prompt and completion tokens', async () => {
const txData = {
user: new mongoose.Types.ObjectId(),
conversationId: 'test-convo',
model: 'gpt-3.5-turbo',
context: 'test',
};
const tokenUsage = {
promptTokens: 100,
completionTokens: 50,
};
Transaction.create.mockResolvedValueOnce({ tokenType: 'prompt', rawAmount: -100 });
Transaction.create.mockResolvedValueOnce({ tokenType: 'completion', rawAmount: -50 });
Balance.findOne.mockResolvedValue({ tokenCredits: 10000 });
Balance.findOneAndUpdate.mockResolvedValue({ tokenCredits: 9850 });
await spendTokens(txData, tokenUsage);
expect(Transaction.create).toHaveBeenCalledTimes(2);
expect(Transaction.create).toHaveBeenCalledWith(
expect.objectContaining({
tokenType: 'prompt',
rawAmount: -100,
}),
);
expect(Transaction.create).toHaveBeenCalledWith(
expect.objectContaining({
tokenType: 'completion',
rawAmount: -50,
}),
);
});
it('should handle zero completion tokens', async () => {
const txData = {
user: new mongoose.Types.ObjectId(),
conversationId: 'test-convo',
model: 'gpt-3.5-turbo',
context: 'test',
};
const tokenUsage = {
promptTokens: 100,
completionTokens: 0,
};
Transaction.create.mockResolvedValueOnce({ tokenType: 'prompt', rawAmount: -100 });
Transaction.create.mockResolvedValueOnce({ tokenType: 'completion', rawAmount: -0 });
Balance.findOne.mockResolvedValue({ tokenCredits: 10000 });
Balance.findOneAndUpdate.mockResolvedValue({ tokenCredits: 9850 });
await spendTokens(txData, tokenUsage);
expect(Transaction.create).toHaveBeenCalledTimes(2);
expect(Transaction.create).toHaveBeenCalledWith(
expect.objectContaining({
tokenType: 'prompt',
rawAmount: -100,
}),
);
expect(Transaction.create).toHaveBeenCalledWith(
expect.objectContaining({
tokenType: 'completion',
rawAmount: -0, // Changed from 0 to -0
}),
);
});
it('should handle undefined token counts', async () => {
const txData = {
user: new mongoose.Types.ObjectId(),
conversationId: 'test-convo',
model: 'gpt-3.5-turbo',
context: 'test',
};
const tokenUsage = {};
await spendTokens(txData, tokenUsage);
expect(Transaction.create).not.toHaveBeenCalled();
});
it('should not update balance when CHECK_BALANCE is false', async () => {
process.env.CHECK_BALANCE = 'false';
const txData = {
user: new mongoose.Types.ObjectId(),
conversationId: 'test-convo',
model: 'gpt-3.5-turbo',
context: 'test',
};
const tokenUsage = {
promptTokens: 100,
completionTokens: 50,
};
Transaction.create.mockResolvedValueOnce({ tokenType: 'prompt', rawAmount: -100 });
Transaction.create.mockResolvedValueOnce({ tokenType: 'completion', rawAmount: -50 });
await spendTokens(txData, tokenUsage);
expect(Transaction.create).toHaveBeenCalledTimes(2);
expect(Balance.findOne).not.toHaveBeenCalled();
expect(Balance.findOneAndUpdate).not.toHaveBeenCalled();
});
it('should create structured transactions for both prompt and completion tokens', async () => {
const txData = {
user: new mongoose.Types.ObjectId(),
conversationId: 'test-convo',
model: 'claude-3-5-sonnet',
context: 'test',
};
const tokenUsage = {
promptTokens: {
input: 10,
write: 100,
read: 5,
},
completionTokens: 50,
};
Transaction.createStructured.mockResolvedValueOnce({
rate: 3.75,
user: txData.user.toString(),
balance: 9570,
prompt: -430,
});
Transaction.create.mockResolvedValueOnce({
rate: 15,
user: txData.user.toString(),
balance: 8820,
completion: -750,
});
const result = await spendStructuredTokens(txData, tokenUsage);
expect(Transaction.createStructured).toHaveBeenCalledWith(
expect.objectContaining({
tokenType: 'prompt',
inputTokens: -10,
writeTokens: -100,
readTokens: -5,
}),
);
expect(Transaction.create).toHaveBeenCalledWith(
expect.objectContaining({
tokenType: 'completion',
rawAmount: -50,
}),
);
expect(result).toEqual({
prompt: expect.objectContaining({
rate: 3.75,
user: txData.user.toString(),
balance: 9570,
prompt: -430,
}),
completion: expect.objectContaining({
rate: 15,
user: txData.user.toString(),
balance: 8820,
completion: -750,
}),
});
});
});


@@ -3,38 +3,28 @@ const defaultRate = 6;
/** AWS Bedrock pricing */
const bedrockValues = {
'anthropic.claude-3-haiku-20240307-v1:0': { prompt: 0.25, completion: 1.25 },
'anthropic.claude-3-sonnet-20240229-v1:0': { prompt: 3.0, completion: 15.0 },
'anthropic.claude-3-opus-20240229-v1:0': { prompt: 15.0, completion: 75.0 },
'anthropic.claude-3-5-sonnet-20240620-v1:0': { prompt: 3.0, completion: 15.0 },
'anthropic.claude-v2:1': { prompt: 8.0, completion: 24.0 },
'anthropic.claude-instant-v1': { prompt: 0.8, completion: 2.4 },
'meta.llama2-13b-chat-v1': { prompt: 0.75, completion: 1.0 },
'meta.llama2-70b-chat-v1': { prompt: 1.95, completion: 2.56 },
'meta.llama3-8b-instruct-v1:0': { prompt: 0.3, completion: 0.6 },
'meta.llama3-70b-instruct-v1:0': { prompt: 2.65, completion: 3.5 },
'meta.llama3-1-8b-instruct-v1:0': { prompt: 0.3, completion: 0.6 },
'meta.llama3-1-70b-instruct-v1:0': { prompt: 2.65, completion: 3.5 },
'meta.llama3-1-405b-instruct-v1:0': { prompt: 5.32, completion: 16.0 },
'mistral.mistral-7b-instruct-v0:2': { prompt: 0.15, completion: 0.2 },
'mistral.mistral-small-2402-v1:0': { prompt: 0.15, completion: 0.2 },
'mistral.mixtral-8x7b-instruct-v0:1': { prompt: 0.45, completion: 0.7 },
'mistral.mistral-large-2402-v1:0': { prompt: 4.0, completion: 12.0 },
'mistral.mistral-large-2407-v1:0': { prompt: 3.0, completion: 9.0 },
'cohere.command-text-v14': { prompt: 1.5, completion: 2.0 },
'cohere.command-light-text-v14': { prompt: 0.3, completion: 0.6 },
'cohere.command-r-v1:0': { prompt: 0.5, completion: 1.5 },
'cohere.command-r-plus-v1:0': { prompt: 3.0, completion: 15.0 },
'llama2-13b': { prompt: 0.75, completion: 1.0 },
'llama2-70b': { prompt: 1.95, completion: 2.56 },
'llama3-8b': { prompt: 0.3, completion: 0.6 },
'llama3-70b': { prompt: 2.65, completion: 3.5 },
'llama3-1-8b': { prompt: 0.3, completion: 0.6 },
'llama3-1-70b': { prompt: 2.65, completion: 3.5 },
'llama3-1-405b': { prompt: 5.32, completion: 16.0 },
'mistral-7b': { prompt: 0.15, completion: 0.2 },
'mistral-small': { prompt: 0.15, completion: 0.2 },
'mixtral-8x7b': { prompt: 0.45, completion: 0.7 },
'mistral-large-2402': { prompt: 4.0, completion: 12.0 },
'mistral-large-2407': { prompt: 3.0, completion: 9.0 },
'command-text': { prompt: 1.5, completion: 2.0 },
'command-light': { prompt: 0.3, completion: 0.6 },
'ai21.j2-mid-v1': { prompt: 12.5, completion: 12.5 },
'ai21.j2-ultra-v1': { prompt: 18.8, completion: 18.8 },
'ai21.jamba-instruct-v1:0': { prompt: 0.5, completion: 0.7 },
'amazon.titan-text-lite-v1': { prompt: 0.15, completion: 0.2 },
'amazon.titan-text-express-v1': { prompt: 0.2, completion: 0.6 },
'amazon.titan-text-premier-v1:0': { prompt: 0.5, completion: 1.5 },
};
for (const [key, value] of Object.entries(bedrockValues)) {
bedrockValues[`bedrock/${key}`] = value;
}
/**
* Mapping of model token sizes to their respective multipliers for prompt and completion.
* The rates are 1 USD per 1M tokens.
@@ -55,9 +45,11 @@ const tokenValues = Object.assign(
'claude-3-opus': { prompt: 15, completion: 75 },
'claude-3-sonnet': { prompt: 3, completion: 15 },
'claude-3-5-sonnet': { prompt: 3, completion: 15 },
'claude-3.5-sonnet': { prompt: 3, completion: 15 },
'claude-3-haiku': { prompt: 0.25, completion: 1.25 },
'claude-2.1': { prompt: 8, completion: 24 },
'claude-2': { prompt: 8, completion: 24 },
'claude-instant': { prompt: 0.8, completion: 2.4 },
'claude-': { prompt: 0.8, completion: 2.4 },
'command-r-plus': { prompt: 3, completion: 15 },
'command-r': { prompt: 0.5, completion: 1.5 },
@@ -70,6 +62,18 @@ const tokenValues = Object.assign(
bedrockValues,
);
/**
* Mapping of model token sizes to their respective multipliers for cached input, read and write.
* See Anthropic's documentation on this: https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching#pricing
* The rates are 1 USD per 1M tokens.
* @type {Object.<string, {write: number, read: number }>}
*/
const cacheTokenValues = {
'claude-3.5-sonnet': { write: 3.75, read: 0.3 },
'claude-3-5-sonnet': { write: 3.75, read: 0.3 },
'claude-3-haiku': { write: 0.3, read: 0.03 },
};
/**
* Retrieves the key associated with a given model name.
*
@@ -122,7 +126,7 @@ const getValueKey = (model, endpoint) => {
*
* @param {Object} params - The parameters for the function.
* @param {string} [params.valueKey] - The key corresponding to the model name.
* @param {string} [params.tokenType] - The type of token (e.g., 'prompt' or 'completion').
* @param {'prompt' | 'completion'} [params.tokenType] - The type of token (e.g., 'prompt' or 'completion').
* @param {string} [params.model] - The model name to derive the value key from if not provided.
* @param {string} [params.endpoint] - The endpoint name to derive the value key from if not provided.
* @param {EndpointTokenConfig} [params.endpointTokenConfig] - The token configuration for the endpoint.
@@ -147,7 +151,41 @@ const getMultiplier = ({ valueKey, tokenType, model, endpoint, endpointTokenConf
}
// If we got this far, and values[tokenType] is undefined somehow, return a rough average of default multipliers
return tokenValues[valueKey][tokenType] ?? defaultRate;
return tokenValues[valueKey]?.[tokenType] ?? defaultRate;
};
module.exports = { tokenValues, getValueKey, getMultiplier, defaultRate };
/**
* Retrieves the cache multiplier for a given value key and token type. If no value key is provided,
* it attempts to derive it from the model name.
*
* @param {Object} params - The parameters for the function.
* @param {string} [params.valueKey] - The key corresponding to the model name.
* @param {'write' | 'read'} [params.cacheType] - The type of token (e.g., 'write' or 'read').
* @param {string} [params.model] - The model name to derive the value key from if not provided.
* @param {string} [params.endpoint] - The endpoint name to derive the value key from if not provided.
* @param {EndpointTokenConfig} [params.endpointTokenConfig] - The token configuration for the endpoint.
* @returns {number | null} The multiplier for the given parameters, or `null` if not found.
*/
const getCacheMultiplier = ({ valueKey, cacheType, model, endpoint, endpointTokenConfig }) => {
if (endpointTokenConfig) {
return endpointTokenConfig?.[model]?.[cacheType] ?? null;
}
if (valueKey && cacheType) {
return cacheTokenValues[valueKey]?.[cacheType] ?? null;
}
if (!cacheType || !model) {
return null;
}
valueKey = getValueKey(model, endpoint);
if (!valueKey) {
return null;
}
// If we got this far and the resolved value key has no rate for this cache type, return null
return cacheTokenValues[valueKey]?.[cacheType] ?? null;
};
module.exports = { tokenValues, getValueKey, getMultiplier, getCacheMultiplier, defaultRate };
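As a quick sanity check of the pricing helpers, with expected values taken from the tables and tests in this changeset:
const { getMultiplier, getCacheMultiplier } = require('./tx');
getMultiplier({ model: 'claude-3-5-sonnet-20240620', tokenType: 'prompt' }); // 3
getMultiplier({ model: 'claude-3-5-sonnet-20240620', tokenType: 'completion' }); // 15
getCacheMultiplier({ model: 'claude-3-5-sonnet-20240620', cacheType: 'write' }); // 3.75
getCacheMultiplier({ model: 'bedrock/anthropic.claude-3-haiku-20240307-v1:0', cacheType: 'read' }); // 0.03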


@@ -1,4 +1,11 @@
const { getValueKey, getMultiplier, defaultRate, tokenValues } = require('./tx');
const { EModelEndpoint } = require('librechat-data-provider');
const {
defaultRate,
tokenValues,
getValueKey,
getMultiplier,
getCacheMultiplier,
} = require('./tx');
describe('getValueKey', () => {
it('should return "16k" for model name containing "gpt-3.5-turbo-16k"', () => {
@@ -63,12 +70,26 @@ describe('getValueKey', () => {
expect(getValueKey('gpt-4o-2024-08-06-0718')).not.toBe('gpt-4o');
});
it('should return "gpt-4o" for model type of "chatgpt-4o"', () => {
expect(getValueKey('chatgpt-4o-latest')).toBe('gpt-4o');
expect(getValueKey('openai/chatgpt-4o-latest')).toBe('gpt-4o');
expect(getValueKey('chatgpt-4o-latest-0916')).toBe('gpt-4o');
expect(getValueKey('chatgpt-4o-latest-0718')).toBe('gpt-4o');
});
it('should return "claude-3-5-sonnet" for model type of "claude-3-5-sonnet-"', () => {
expect(getValueKey('claude-3-5-sonnet-20240620')).toBe('claude-3-5-sonnet');
expect(getValueKey('anthropic/claude-3-5-sonnet')).toBe('claude-3-5-sonnet');
expect(getValueKey('claude-3-5-sonnet-turbo')).toBe('claude-3-5-sonnet');
expect(getValueKey('claude-3-5-sonnet-0125')).toBe('claude-3-5-sonnet');
});
it('should return "claude-3.5-sonnet" for model type of "claude-3.5-sonnet-"', () => {
expect(getValueKey('claude-3.5-sonnet-20240620')).toBe('claude-3.5-sonnet');
expect(getValueKey('anthropic/claude-3.5-sonnet')).toBe('claude-3.5-sonnet');
expect(getValueKey('claude-3.5-sonnet-turbo')).toBe('claude-3.5-sonnet');
expect(getValueKey('claude-3.5-sonnet-0125')).toBe('claude-3.5-sonnet');
});
});
describe('getMultiplier', () => {
@@ -136,6 +157,17 @@ describe('getMultiplier', () => {
);
});
it('should return the correct multiplier for chatgpt-4o-latest', () => {
const valueKey = getValueKey('chatgpt-4o-latest');
expect(getMultiplier({ valueKey, tokenType: 'prompt' })).toBe(tokenValues['gpt-4o'].prompt);
expect(getMultiplier({ valueKey, tokenType: 'completion' })).toBe(
tokenValues['gpt-4o'].completion,
);
expect(getMultiplier({ valueKey, tokenType: 'completion' })).not.toBe(
tokenValues['gpt-4o-mini'].completion,
);
});
it('should derive the valueKey from the model if not provided for new models', () => {
expect(
getMultiplier({ tokenType: 'prompt', model: 'gpt-3.5-turbo-1106-some-other-info' }),
@@ -193,35 +225,92 @@ describe('AWS Bedrock Model Tests', () => {
it('should return the correct prompt multipliers for all models', () => {
const results = awsModels.map((model) => {
const multiplier = getMultiplier({ valueKey: model, tokenType: 'prompt' });
return multiplier === tokenValues[model].prompt;
const valueKey = getValueKey(model, EModelEndpoint.bedrock);
const multiplier = getMultiplier({ valueKey, tokenType: 'prompt' });
return tokenValues[valueKey].prompt && multiplier === tokenValues[valueKey].prompt;
});
expect(results.every(Boolean)).toBe(true);
});
it('should return the correct completion multipliers for all models', () => {
const results = awsModels.map((model) => {
const multiplier = getMultiplier({ valueKey: model, tokenType: 'completion' });
return multiplier === tokenValues[model].completion;
});
expect(results.every(Boolean)).toBe(true);
});
it('should return the correct prompt multipliers for all models with Bedrock prefix', () => {
const results = awsModels.map((model) => {
const modelName = `bedrock/${model}`;
const multiplier = getMultiplier({ valueKey: modelName, tokenType: 'prompt' });
return multiplier === tokenValues[model].prompt;
});
expect(results.every(Boolean)).toBe(true);
});
it('should return the correct completion multipliers for all models with Bedrock prefix', () => {
const results = awsModels.map((model) => {
const modelName = `bedrock/${model}`;
const multiplier = getMultiplier({ valueKey: modelName, tokenType: 'completion' });
return multiplier === tokenValues[model].completion;
const valueKey = getValueKey(model, EModelEndpoint.bedrock);
const multiplier = getMultiplier({ valueKey, tokenType: 'completion' });
return tokenValues[valueKey].completion && multiplier === tokenValues[valueKey].completion;
});
expect(results.every(Boolean)).toBe(true);
});
});
describe('getCacheMultiplier', () => {
it('should return the correct cache multiplier for a given valueKey and cacheType', () => {
expect(getCacheMultiplier({ valueKey: 'claude-3-5-sonnet', cacheType: 'write' })).toBe(3.75);
expect(getCacheMultiplier({ valueKey: 'claude-3-5-sonnet', cacheType: 'read' })).toBe(0.3);
expect(getCacheMultiplier({ valueKey: 'claude-3-haiku', cacheType: 'write' })).toBe(0.3);
expect(getCacheMultiplier({ valueKey: 'claude-3-haiku', cacheType: 'read' })).toBe(0.03);
});
it('should return null if cacheType is provided but not found in cacheTokenValues', () => {
expect(
getCacheMultiplier({ valueKey: 'claude-3-5-sonnet', cacheType: 'unknownType' }),
).toBeNull();
});
it('should derive the valueKey from the model if not provided', () => {
expect(getCacheMultiplier({ cacheType: 'write', model: 'claude-3-5-sonnet-20240620' })).toBe(
3.75,
);
expect(getCacheMultiplier({ cacheType: 'read', model: 'claude-3-haiku-20240307' })).toBe(0.03);
});
it('should return null if only model or cacheType is missing', () => {
expect(getCacheMultiplier({ cacheType: 'write' })).toBeNull();
expect(getCacheMultiplier({ model: 'claude-3-5-sonnet' })).toBeNull();
});
it('should return null if derived valueKey does not match any known patterns', () => {
expect(getCacheMultiplier({ cacheType: 'write', model: 'gpt-4-some-other-info' })).toBeNull();
});
it('should handle endpointTokenConfig if provided', () => {
const endpointTokenConfig = {
'custom-model': {
write: 5,
read: 1,
},
};
expect(
getCacheMultiplier({ model: 'custom-model', cacheType: 'write', endpointTokenConfig }),
).toBe(5);
expect(
getCacheMultiplier({ model: 'custom-model', cacheType: 'read', endpointTokenConfig }),
).toBe(1);
});
it('should return null if model is not found in endpointTokenConfig', () => {
const endpointTokenConfig = {
'custom-model': {
write: 5,
read: 1,
},
};
expect(
getCacheMultiplier({ model: 'unknown-model', cacheType: 'write', endpointTokenConfig }),
).toBeNull();
});
it('should handle models with "bedrock/" prefix', () => {
expect(
getCacheMultiplier({
model: 'bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0',
cacheType: 'write',
}),
).toBe(3.75);
expect(
getCacheMultiplier({
model: 'bedrock/anthropic.claude-3-haiku-20240307-v1:0',
cacheType: 'read',
}),
).toBe(0.03);
});
});


@@ -1,6 +1,6 @@
{
"name": "@librechat/backend",
"version": "0.7.4",
"version": "v0.7.5-rc2",
"description": "",
"scripts": {
"start": "echo 'please run this from the root directory'",
@@ -12,6 +12,7 @@
"list-balances": "node ./list-balances.js",
"user-stats": "node ./user-stats.js",
"create-user": "node ./create-user.js",
"invite-user": "node ./invite-user.js",
"ban-user": "node ./ban-user.js",
"delete-user": "node ./delete-user.js"
},
@@ -39,8 +40,10 @@
"@keyv/mongo": "^2.1.8",
"@keyv/redis": "^2.8.1",
"@langchain/community": "^0.0.46",
"@langchain/core": "^0.2.18",
"@langchain/google-genai": "^0.0.11",
"@langchain/google-vertexai": "^0.0.17",
"@librechat/agents": "^1.5.2",
"axios": "^1.3.4",
"bcryptjs": "^2.4.3",
"cheerio": "^1.0.0-rc.12",
@@ -48,7 +51,9 @@
"compression": "^1.7.4",
"connect-redis": "^7.1.0",
"cookie": "^0.5.0",
"cookie-parser": "^1.4.6",
"cors": "^2.8.5",
"dedent": "^1.5.3",
"dotenv": "^16.0.3",
"express": "^4.18.2",
"express-mongo-sanitize": "^2.2.0",
@@ -100,7 +105,8 @@
"zod": "^3.22.4"
},
"devDependencies": {
"jest": "^29.5.0",
"jest": "^29.7.0",
"mongodb-memory-server": "^10.0.0",
"nodemon": "^3.0.1",
"supertest": "^6.3.3"
}

View File

@@ -123,11 +123,6 @@ const AskController = async (req, res, next, initializeClient, addTitle) => {
};
let response = await client.sendMessage(text, messageOptions);
if (overrideParentMessageId) {
response.parentMessageId = overrideParentMessageId;
}
response.endpoint = endpointOption.endpoint;
const { conversation = {} } = await client.responsePromise;

View File

@@ -1,4 +1,4 @@
const Balance = require('../../models/Balance');
const Balance = require('~/models/Balance');
async function balanceController(req, res) {
const { tokenCredits: balance = '' } =

View File

@@ -44,6 +44,14 @@ async function endpointController(req, res) {
};
}
if (mergedConfig[EModelEndpoint.bedrock] && req.app.locals?.[EModelEndpoint.bedrock]) {
const { availableRegions } = req.app.locals[EModelEndpoint.bedrock];
mergedConfig[EModelEndpoint.bedrock] = {
...mergedConfig[EModelEndpoint.bedrock],
availableRegions,
};
}
const endpointsConfig = orderEndpointsConfig(mergedConfig);
await cache.set(CacheKeys.ENDPOINT_CONFIG, endpointsConfig);

View File

@@ -2,6 +2,9 @@ const { CacheKeys } = require('librechat-data-provider');
const { loadDefaultModels, loadConfigModels } = require('~/server/services/Config');
const { getLogStores } = require('~/cache');
/**
* @param {ServerRequest} req
*/
const getModelsConfig = async (req) => {
const cache = getLogStores(CacheKeys.CONFIG_STORE);
let modelsConfig = await cache.get(CacheKeys.MODELS_CONFIG);
@@ -14,7 +17,7 @@ const getModelsConfig = async (req) => {
/**
* Loads the models from the config.
* @param {Express.Request} req - The Express request object.
* @param {ServerRequest} req - The Express request object.
* @returns {Promise<TModelsConfig>} The models config.
*/
async function loadModels(req) {

View File

@@ -8,6 +8,7 @@ const {
deleteMessages,
deleteUserById,
} = require('~/models');
const User = require('~/models/User');
const { updateUserPluginAuth, deleteUserPluginAuth } = require('~/server/services/PluginService');
const { updateUserPluginsService, deleteUserKey } = require('~/server/services/UserService');
const { verifyEmail, resendVerificationEmail } = require('~/server/services/AuthService');
@@ -20,6 +21,32 @@ const getUserController = async (req, res) => {
res.status(200).send(req.user);
};
const getTermsStatusController = async (req, res) => {
try {
const user = await User.findById(req.user.id);
if (!user) {
return res.status(404).json({ message: 'User not found' });
}
res.status(200).json({ termsAccepted: !!user.termsAccepted });
} catch (error) {
logger.error('Error fetching terms acceptance status:', error);
res.status(500).json({ message: 'Error fetching terms acceptance status' });
}
};
const acceptTermsController = async (req, res) => {
try {
const user = await User.findByIdAndUpdate(req.user.id, { termsAccepted: true }, { new: true });
if (!user) {
return res.status(404).json({ message: 'User not found' });
}
res.status(200).json({ message: 'Terms accepted successfully' });
} catch (error) {
logger.error('Error accepting terms:', error);
res.status(500).json({ message: 'Error accepting terms' });
}
};
const deleteUserFiles = async (req) => {
try {
const userFiles = await getFiles({ user: req.user.id });
@@ -135,6 +162,8 @@ const resendVerificationController = async (req, res) => {
module.exports = {
getUserController,
getTermsStatusController,
acceptTermsController,
deleteUserController,
verifyEmailController,
updateUserPluginsController,

View File

@@ -0,0 +1,127 @@
const { GraphEvents, ToolEndHandler, ChatModelStreamHandler } = require('@librechat/agents');
/** @typedef {import('@librechat/agents').Graph} Graph */
/** @typedef {import('@librechat/agents').EventHandler} EventHandler */
/** @typedef {import('@librechat/agents').ModelEndData} ModelEndData */
/** @typedef {import('@librechat/agents').ChatModelStreamHandler} ChatModelStreamHandler */
/** @typedef {import('@librechat/agents').ContentAggregatorResult['aggregateContent']} ContentAggregator */
/** @typedef {import('@librechat/agents').GraphEvents} GraphEvents */
/**
* Sends message data in Server Sent Events format.
* @param {ServerResponse} res - The server response.
* @param {{ data: string | Record<string, unknown>, event?: string }} event - The message event.
* @param {string} event.event - The type of event.
* @param {string} event.data - The message to be sent.
*/
const sendEvent = (res, event) => {
if (typeof event.data === 'string' && event.data.length === 0) {
return;
}
res.write(`event: message\ndata: ${JSON.stringify(event)}\n\n`);
};
class ModelEndHandler {
/**
* @param {Array<UsageMetadata>} collectedUsage
*/
constructor(collectedUsage) {
if (!Array.isArray(collectedUsage)) {
throw new Error('collectedUsage must be an array');
}
this.collectedUsage = collectedUsage;
}
/**
* @param {string} event
* @param {ModelEndData | undefined} data
* @param {Record<string, unknown> | undefined} metadata
* @param {Graph} graph
* @returns
*/
handle(event, data, metadata, graph) {
if (!graph || !metadata) {
console.warn(`Graph or metadata not found in ${event} event`);
return;
}
const usage = data?.output?.usage_metadata;
if (usage) {
this.collectedUsage.push(usage);
}
}
}
/**
* Get default handlers for stream events.
* @param {Object} options - The options object.
* @param {ServerResponse} options.res - The server response.
* @param {ContentAggregator} options.aggregateContent - The content aggregation function.
* @param {Array<UsageMetadata>} options.collectedUsage - The list of collected usage metadata.
* @returns {Record<string, t.EventHandler>} The default handlers.
* @throws {Error} If `res` or `aggregateContent` is missing.
*/
function getDefaultHandlers({ res, aggregateContent, collectedUsage }) {
if (!res || !aggregateContent) {
throw new Error(
`[getDefaultHandlers] Missing required options: res: ${!res}, aggregateContent: ${!aggregateContent}`,
);
}
const handlers = {
[GraphEvents.CHAT_MODEL_END]: new ModelEndHandler(collectedUsage),
[GraphEvents.TOOL_END]: new ToolEndHandler(),
[GraphEvents.CHAT_MODEL_STREAM]: new ChatModelStreamHandler(),
[GraphEvents.ON_RUN_STEP]: {
/**
* Handle ON_RUN_STEP event.
* @param {string} event - The event name.
* @param {StreamEventData} data - The event data.
*/
handle: (event, data) => {
sendEvent(res, { event, data });
aggregateContent({ event, data });
},
},
[GraphEvents.ON_RUN_STEP_DELTA]: {
/**
* Handle ON_RUN_STEP_DELTA event.
* @param {string} event - The event name.
* @param {StreamEventData} data - The event data.
*/
handle: (event, data) => {
sendEvent(res, { event, data });
aggregateContent({ event, data });
},
},
[GraphEvents.ON_RUN_STEP_COMPLETED]: {
/**
* Handle ON_RUN_STEP_COMPLETED event.
* @param {string} event - The event name.
* @param {StreamEventData & { result: ToolEndData }} data - The event data.
*/
handle: (event, data) => {
sendEvent(res, { event, data });
aggregateContent({ event, data });
},
},
[GraphEvents.ON_MESSAGE_DELTA]: {
/**
* Handle ON_MESSAGE_DELTA event.
* @param {string} event - The event name.
* @param {StreamEventData} data - The event data.
*/
handle: (event, data) => {
sendEvent(res, { event, data });
aggregateContent({ event, data });
},
},
};
return handlers;
}
module.exports = {
sendEvent,
getDefaultHandlers,
};
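A hedged sketch of how these exports might be wired together. `createContentAggregator` is assumed to be the aggregator exported by @librechat/agents (its return shape is also assumed), and the require path for this module is illustrative.

// Hedged sketch (not part of the diff): building handlers for an agent run.
const { createContentAggregator } = require('@librechat/agents'); // assumed export
const { getDefaultHandlers } = require('./callbacks'); // path assumed

function buildAgentEventHandlers(res) {
  const collectedUsage = [];
  // The aggregator supplies `aggregateContent` (see the ContentAggregator typedef above)
  // and accumulates the streamed content parts; destructured shape is assumed.
  const { aggregateContent, contentParts } = createContentAggregator();
  const eventHandlers = getDefaultHandlers({ res, aggregateContent, collectedUsage });
  return { eventHandlers, contentParts, collectedUsage };
}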

View File

@@ -0,0 +1,608 @@
// const { HttpsProxyAgent } = require('https-proxy-agent');
// const {
// Constants,
// ImageDetail,
// EModelEndpoint,
// resolveHeaders,
// validateVisionModel,
// mapModelToAzureConfig,
// } = require('librechat-data-provider');
const { Callback, createMetadataAggregator } = require('@librechat/agents');
const {
Constants,
EModelEndpoint,
bedrockOutputParser,
providerEndpointMap,
removeNullishValues,
} = require('librechat-data-provider');
const {
extractBaseURL,
// constructAzureURL,
// genAzureChatCompletion,
} = require('~/utils');
const {
formatMessage,
formatAgentMessages,
createContextHandlers,
} = require('~/app/clients/prompts');
const { encodeAndFormat } = require('~/server/services/Files/images/encode');
const Tokenizer = require('~/server/services/Tokenizer');
const { spendTokens } = require('~/models/spendTokens');
const BaseClient = require('~/app/clients/BaseClient');
// const { sleep } = require('~/server/utils');
const { createRun } = require('./run');
const { logger } = require('~/config');
/** @typedef {import('@librechat/agents').MessageContentComplex} MessageContentComplex */
// const providerSchemas = {
// [EModelEndpoint.bedrock]: true,
// };
const providerParsers = {
[EModelEndpoint.bedrock]: bedrockOutputParser,
};
class AgentClient extends BaseClient {
constructor(options = {}) {
super(null, options);
/** @type {'discard' | 'summarize'} */
this.contextStrategy = 'discard';
/** @deprecated @type {true} - Is a Chat Completion Request */
this.isChatCompletion = true;
/** @type {AgentRun} */
this.run;
const {
maxContextTokens,
modelOptions = {},
contentParts,
collectedUsage,
...clientOptions
} = options;
this.modelOptions = modelOptions;
this.maxContextTokens = maxContextTokens;
/** @type {MessageContentComplex[]} */
this.contentParts = contentParts;
/** @type {Array<UsageMetadata>} */
this.collectedUsage = collectedUsage;
this.options = Object.assign({ endpoint: options.endpoint }, clientOptions);
}
/**
* Returns the aggregated content parts for the current run.
* @returns {MessageContentComplex[]} */
getContentParts() {
return this.contentParts;
}
setOptions(options) {
logger.info('[api/server/controllers/agents/client.js] setOptions', options);
}
/**
*
* Checks if the model is a vision model based on request attachments and sets the appropriate options:
* - Sets `this.modelOptions.model` to `gpt-4-vision-preview` if the request is a vision request.
* - Sets `this.isVisionModel` to `true` if vision request.
* - Deletes `this.modelOptions.stop` if vision request.
* @param {MongoFile[]} attachments
*/
checkVisionRequest(attachments) {
logger.info(
'[api/server/controllers/agents/client.js #checkVisionRequest] not implemented',
attachments,
);
// if (!attachments) {
// return;
// }
// const availableModels = this.options.modelsConfig?.[this.options.endpoint];
// if (!availableModels) {
// return;
// }
// let visionRequestDetected = false;
// for (const file of attachments) {
// if (file?.type?.includes('image')) {
// visionRequestDetected = true;
// break;
// }
// }
// if (!visionRequestDetected) {
// return;
// }
// this.isVisionModel = validateVisionModel({ model: this.modelOptions.model, availableModels });
// if (this.isVisionModel) {
// delete this.modelOptions.stop;
// return;
// }
// for (const model of availableModels) {
// if (!validateVisionModel({ model, availableModels })) {
// continue;
// }
// this.modelOptions.model = model;
// this.isVisionModel = true;
// delete this.modelOptions.stop;
// return;
// }
// if (!availableModels.includes(this.defaultVisionModel)) {
// return;
// }
// if (!validateVisionModel({ model: this.defaultVisionModel, availableModels })) {
// return;
// }
// this.modelOptions.model = this.defaultVisionModel;
// this.isVisionModel = true;
// delete this.modelOptions.stop;
}
getSaveOptions() {
const parseOptions = providerParsers[this.options.endpoint];
let runOptions =
this.options.endpoint === EModelEndpoint.agents
? {
model: undefined,
// TODO:
// would need to be override settings; otherwise, model needs to be undefined
// model: this.override.model,
// instructions: this.override.instructions,
// additional_instructions: this.override.additional_instructions,
}
: {};
if (parseOptions) {
runOptions = parseOptions(this.modelOptions);
}
return removeNullishValues(
Object.assign(
{
endpoint: this.options.endpoint,
agent_id: this.options.agent.id,
modelLabel: this.options.modelLabel,
maxContextTokens: this.options.maxContextTokens,
resendFiles: this.options.resendFiles,
imageDetail: this.options.imageDetail,
spec: this.options.spec,
},
// TODO: PARSE OPTIONS BY PROVIDER, MAY CONTAIN SENSITIVE DATA
runOptions,
),
);
}
getBuildMessagesOptions(opts) {
return {
instructions: opts.instructions,
additional_instructions: opts.additional_instructions,
};
}
async addImageURLs(message, attachments) {
const { files, image_urls } = await encodeAndFormat(
this.options.req,
attachments,
this.options.agent.provider,
);
message.image_urls = image_urls.length ? image_urls : undefined;
return files;
}
async buildMessages(
messages,
parentMessageId,
{ instructions = null, additional_instructions = null },
opts,
) {
let orderedMessages = this.constructor.getMessagesForConversation({
messages,
parentMessageId,
summary: this.shouldSummarize,
});
let payload;
/** @type {{ role: string; name: string; content: string } | undefined} */
let systemMessage;
/** @type {number | undefined} */
let promptTokens;
/** @type {string} */
let systemContent = `${instructions ?? ''}${additional_instructions ?? ''}`;
if (this.options.attachments) {
const attachments = await this.options.attachments;
if (this.message_file_map) {
this.message_file_map[orderedMessages[orderedMessages.length - 1].messageId] = attachments;
} else {
this.message_file_map = {
[orderedMessages[orderedMessages.length - 1].messageId]: attachments,
};
}
const files = await this.addImageURLs(
orderedMessages[orderedMessages.length - 1],
attachments,
);
this.options.attachments = files;
}
if (this.message_file_map) {
this.contextHandlers = createContextHandlers(
this.options.req,
orderedMessages[orderedMessages.length - 1].text,
);
}
const formattedMessages = orderedMessages.map((message, i) => {
const formattedMessage = formatMessage({
message,
userName: this.options?.name,
assistantName: this.options?.modelLabel,
});
const needsTokenCount = this.contextStrategy && !orderedMessages[i].tokenCount;
/* If tokens were never counted, or this is a vision request and the message has files, count again */
if (needsTokenCount || (this.isVisionModel && (message.image_urls || message.files))) {
orderedMessages[i].tokenCount = this.getTokenCountForMessage(formattedMessage);
}
/* If message has files, calculate image token cost */
// if (this.message_file_map && this.message_file_map[message.messageId]) {
// const attachments = this.message_file_map[message.messageId];
// for (const file of attachments) {
// if (file.embedded) {
// this.contextHandlers?.processFile(file);
// continue;
// }
// orderedMessages[i].tokenCount += this.calculateImageTokenCost({
// width: file.width,
// height: file.height,
// detail: this.options.imageDetail ?? ImageDetail.auto,
// });
// }
// }
return formattedMessage;
});
if (this.contextHandlers) {
this.augmentedPrompt = await this.contextHandlers.createContext();
systemContent = this.augmentedPrompt + systemContent;
}
if (systemContent) {
systemContent = `${systemContent.trim()}`;
systemMessage = {
role: 'system',
name: 'instructions',
content: systemContent,
};
if (this.contextStrategy) {
const instructionTokens = this.getTokenCountForMessage(systemMessage);
if (instructionTokens >= 0) {
const firstMessageTokens = orderedMessages[0].tokenCount ?? 0;
orderedMessages[0].tokenCount = firstMessageTokens + instructionTokens;
}
}
}
if (this.contextStrategy) {
({ payload, promptTokens, messages } = await this.handleContextStrategy({
orderedMessages,
formattedMessages,
/* prefer usage_metadata from final message */
buildTokenMap: false,
}));
}
const result = {
prompt: payload,
promptTokens,
messages,
};
if (promptTokens >= 0 && typeof opts?.getReqData === 'function') {
opts.getReqData({ promptTokens });
}
return result;
}
/** @type {sendCompletion} */
async sendCompletion(payload, opts = {}) {
this.modelOptions.user = this.user;
await this.chatCompletion({
payload,
onProgress: opts.onProgress,
abortController: opts.abortController,
});
return this.contentParts;
}
/**
* @param {Object} params
* @param {string} [params.model]
* @param {string} [params.context='message']
* @param {UsageMetadata[]} [params.collectedUsage=this.collectedUsage]
*/
async recordCollectedUsage({ model, context = 'message', collectedUsage = this.collectedUsage }) {
for (const usage of collectedUsage) {
await spendTokens(
{
context,
model: model ?? this.modelOptions.model,
conversationId: this.conversationId,
user: this.user ?? this.options.req.user?.id,
endpointTokenConfig: this.options.endpointTokenConfig,
},
{ promptTokens: usage.input_tokens, completionTokens: usage.output_tokens },
);
}
}
async chatCompletion({ payload, abortController = null }) {
try {
if (!abortController) {
abortController = new AbortController();
}
const baseURL = extractBaseURL(this.completionsUrl);
logger.debug('[api/server/controllers/agents/client.js] chatCompletion', {
baseURL,
payload,
});
// if (this.useOpenRouter) {
// opts.defaultHeaders = {
// 'HTTP-Referer': 'https://librechat.ai',
// 'X-Title': 'LibreChat',
// };
// }
// if (this.options.headers) {
// opts.defaultHeaders = { ...opts.defaultHeaders, ...this.options.headers };
// }
// if (this.options.proxy) {
// opts.httpAgent = new HttpsProxyAgent(this.options.proxy);
// }
// if (this.isVisionModel) {
// modelOptions.max_tokens = 4000;
// }
// /** @type {TAzureConfig | undefined} */
// const azureConfig = this.options?.req?.app?.locals?.[EModelEndpoint.azureOpenAI];
// if (
// (this.azure && this.isVisionModel && azureConfig) ||
// (azureConfig && this.isVisionModel && this.options.endpoint === EModelEndpoint.azureOpenAI)
// ) {
// const { modelGroupMap, groupMap } = azureConfig;
// const {
// azureOptions,
// baseURL,
// headers = {},
// serverless,
// } = mapModelToAzureConfig({
// modelName: modelOptions.model,
// modelGroupMap,
// groupMap,
// });
// opts.defaultHeaders = resolveHeaders(headers);
// this.langchainProxy = extractBaseURL(baseURL);
// this.apiKey = azureOptions.azureOpenAIApiKey;
// const groupName = modelGroupMap[modelOptions.model].group;
// this.options.addParams = azureConfig.groupMap[groupName].addParams;
// this.options.dropParams = azureConfig.groupMap[groupName].dropParams;
// // Note: `forcePrompt` not re-assigned as only chat models are vision models
// this.azure = !serverless && azureOptions;
// this.azureEndpoint =
// !serverless && genAzureChatCompletion(this.azure, modelOptions.model, this);
// }
// if (this.azure || this.options.azure) {
// /* Azure Bug, extremely short default `max_tokens` response */
// if (!modelOptions.max_tokens && modelOptions.model === 'gpt-4-vision-preview') {
// modelOptions.max_tokens = 4000;
// }
// /* Azure does not accept `model` in the body, so we need to remove it. */
// delete modelOptions.model;
// opts.baseURL = this.langchainProxy
// ? constructAzureURL({
// baseURL: this.langchainProxy,
// azureOptions: this.azure,
// })
// : this.azureEndpoint.split(/(?<!\/)\/(chat|completion)\//)[0];
// opts.defaultQuery = { 'api-version': this.azure.azureOpenAIApiVersion };
// opts.defaultHeaders = { ...opts.defaultHeaders, 'api-key': this.apiKey };
// }
// if (process.env.OPENAI_ORGANIZATION) {
// opts.organization = process.env.OPENAI_ORGANIZATION;
// }
// if (this.options.addParams && typeof this.options.addParams === 'object') {
// modelOptions = {
// ...modelOptions,
// ...this.options.addParams,
// };
// logger.debug('[api/server/controllers/agents/client.js #chatCompletion] added params', {
// addParams: this.options.addParams,
// modelOptions,
// });
// }
// if (this.options.dropParams && Array.isArray(this.options.dropParams)) {
// this.options.dropParams.forEach((param) => {
// delete modelOptions[param];
// });
// logger.debug('[api/server/controllers/agents/client.js #chatCompletion] dropped params', {
// dropParams: this.options.dropParams,
// modelOptions,
// });
// }
const run = await createRun({
req: this.options.req,
agent: this.options.agent,
tools: this.options.tools,
toolMap: this.options.toolMap,
runId: this.responseMessageId,
modelOptions: this.modelOptions,
customHandlers: this.options.eventHandlers,
});
const config = {
configurable: {
provider: providerEndpointMap[this.options.agent.provider],
thread_id: this.conversationId,
},
run_id: this.responseMessageId,
signal: abortController.signal,
streamMode: 'values',
version: 'v2',
};
if (!run) {
throw new Error('Failed to create run');
}
this.run = run;
const messages = formatAgentMessages(payload);
await run.processStream({ messages }, config, {
[Callback.TOOL_ERROR]: (graph, error, toolId) => {
logger.error(
'[api/server/controllers/agents/client.js #chatCompletion] Tool Error',
error,
toolId,
);
},
});
this.recordCollectedUsage({ context: 'message' }).catch((err) => {
logger.error(
'[api/server/controllers/agents/client.js #chatCompletion] Error recording collected usage',
err,
);
});
} catch (err) {
if (!abortController.signal.aborted) {
logger.error(
'[api/server/controllers/agents/client.js #sendCompletion] Unhandled error type',
err,
);
throw err;
}
logger.warn(
'[api/server/controllers/agents/client.js #sendCompletion] Operation aborted',
err,
);
}
}
/**
*
* @param {Object} params
* @param {string} params.text
* @param {string} params.conversationId
*/
async titleConvo({ text }) {
if (!this.run) {
throw new Error('Run not initialized');
}
const { handleLLMEnd, collected: collectedMetadata } = createMetadataAggregator();
const clientOptions = {};
const providerConfig = this.options.req.app.locals[this.options.agent.provider];
if (
providerConfig &&
providerConfig.titleModel &&
providerConfig.titleModel !== Constants.CURRENT_MODEL
) {
clientOptions.model = providerConfig.titleModel;
}
try {
const titleResult = await this.run.generateTitle({
inputText: text,
contentParts: this.contentParts,
clientOptions,
chainOptions: {
callbacks: [
{
handleLLMEnd,
},
],
},
});
const collectedUsage = collectedMetadata.map((item) => {
let input_tokens, output_tokens;
if (item.usage) {
input_tokens = item.usage.input_tokens || item.usage.inputTokens;
output_tokens = item.usage.output_tokens || item.usage.outputTokens;
} else if (item.tokenUsage) {
input_tokens = item.tokenUsage.promptTokens;
output_tokens = item.tokenUsage.completionTokens;
}
return {
input_tokens: input_tokens,
output_tokens: output_tokens,
};
});
this.recordCollectedUsage({
model: clientOptions.model,
context: 'title',
collectedUsage,
}).catch((err) => {
logger.error(
'[api/server/controllers/agents/client.js #titleConvo] Error recording collected usage',
err,
);
});
return titleResult.title;
} catch (err) {
logger.error('[api/server/controllers/agents/client.js #titleConvo] Error', err);
return;
}
}
getEncoding() {
return this.modelOptions.model?.includes('gpt-4o') ? 'o200k_base' : 'cl100k_base';
}
/**
* Returns the token count of a given text. It also checks and resets the tokenizers if necessary.
* @param {string} text - The text to get the token count for.
* @returns {number} The token count of the given text.
*/
getTokenCount(text) {
const encoding = this.getEncoding();
return Tokenizer.getTokenCount(text, encoding);
}
}
module.exports = AgentClient;
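A hedged sketch of constructing the client with the options consumed in the constructor above; the option names come from this file, while the concrete values and helper wiring are illustrative.

// Hedged sketch (not part of the diff): constructing AgentClient.
const { EModelEndpoint } = require('librechat-data-provider');
const AgentClient = require('./client'); // path assumed

function createAgentClient({ req, res, agent, eventHandlers, contentParts, collectedUsage }) {
  return new AgentClient({
    req,
    res,
    agent,
    endpoint: EModelEndpoint.agents,
    eventHandlers,
    contentParts,
    collectedUsage,
    maxContextTokens: 8192, // illustrative
    modelOptions: { model: agent.model }, // illustrative
  });
}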

View File

@@ -0,0 +1,153 @@
// errorHandler.js
const { logger } = require('~/config');
const getLogStores = require('~/cache/getLogStores');
const { CacheKeys, ViolationTypes } = require('librechat-data-provider');
const { recordUsage } = require('~/server/services/Threads');
const { getConvo } = require('~/models/Conversation');
const { sendResponse } = require('~/server/utils');
/**
* @typedef {Object} ErrorHandlerContext
* @property {OpenAIClient} openai - The OpenAI client
* @property {string} run_id - The run ID
* @property {boolean} completedRun - Whether the run has completed
* @property {string} assistant_id - The assistant ID
* @property {string} conversationId - The conversation ID
* @property {string} parentMessageId - The parent message ID
* @property {string} responseMessageId - The response message ID
* @property {string} endpoint - The endpoint being used
* @property {string} cacheKey - The cache key for the current request
*/
/**
* @typedef {Object} ErrorHandlerDependencies
* @property {Express.Request} req - The Express request object
* @property {Express.Response} res - The Express response object
* @property {() => ErrorHandlerContext} getContext - Function to get the current context
* @property {string} [originPath] - The origin path for the error handler
*/
/**
* Creates an error handler function with the given dependencies
* @param {ErrorHandlerDependencies} dependencies - The dependencies for the error handler
* @returns {(error: Error) => Promise<void>} The error handler function
*/
const createErrorHandler = ({ req, res, getContext, originPath = '/assistants/chat/' }) => {
const cache = getLogStores(CacheKeys.ABORT_KEYS);
/**
* Handles errors that occur during the chat process
* @param {Error} error - The error that occurred
* @returns {Promise<void>}
*/
return async (error) => {
const {
openai,
run_id,
endpoint,
cacheKey,
completedRun,
assistant_id,
conversationId,
parentMessageId,
responseMessageId,
} = getContext();
const defaultErrorMessage =
'The Assistant run failed to initialize. Try sending a message in a new conversation.';
const messageData = {
assistant_id,
conversationId,
parentMessageId,
sender: 'System',
user: req.user.id,
shouldSaveMessage: false,
messageId: responseMessageId,
endpoint,
};
if (error.message === 'Run cancelled') {
return res.end();
} else if (error.message === 'Request closed' && completedRun) {
return;
} else if (error.message === 'Request closed') {
logger.debug(`[${originPath}] Request aborted on close`);
} else if (/Files.*are invalid/.test(error.message)) {
const errorMessage = `Files are invalid, or may not have uploaded yet.${
endpoint === 'azureAssistants'
? ' If using Azure OpenAI, files are only available in the region of the assistant\'s model at the time of upload.'
: ''
}`;
return sendResponse(req, res, messageData, errorMessage);
} else if (error?.message?.includes('string too long')) {
return sendResponse(
req,
res,
messageData,
'Message too long. The Assistants API has a limit of 32,768 characters per message. Please shorten it and try again.',
);
} else if (error?.message?.includes(ViolationTypes.TOKEN_BALANCE)) {
return sendResponse(req, res, messageData, error.message);
} else {
logger.error(`[${originPath}]`, error);
}
if (!openai || !run_id) {
return sendResponse(req, res, messageData, defaultErrorMessage);
}
await new Promise((resolve) => setTimeout(resolve, 2000));
try {
const status = await cache.get(cacheKey);
if (status === 'cancelled') {
logger.debug(`[${originPath}] Run already cancelled`);
return res.end();
}
await cache.delete(cacheKey);
// const cancelledRun = await openai.beta.threads.runs.cancel(thread_id, run_id);
// logger.debug(`[${originPath}] Cancelled run:`, cancelledRun);
} catch (error) {
logger.error(`[${originPath}] Error cancelling run`, error);
}
await new Promise((resolve) => setTimeout(resolve, 2000));
let run;
try {
// run = await openai.beta.threads.runs.retrieve(thread_id, run_id);
if (run?.usage) {
await recordUsage({
...run.usage,
model: run.model,
user: req.user.id,
conversationId,
});
}
} catch (error) {
logger.error(`[${originPath}] Error fetching or processing run`, error);
}
let finalEvent;
try {
// const errorContentPart = {
// text: {
// value:
// error?.message ?? 'There was an error processing your request. Please try again later.',
// },
// type: ContentTypes.ERROR,
// };
finalEvent = {
final: true,
conversation: await getConvo(req.user.id, conversationId),
// runMessages,
};
} catch (error) {
logger.error(`[${originPath}] Error finalizing error process`, error);
return sendResponse(req, res, messageData, 'The Assistant run failed');
}
return sendResponse(req, res, finalEvent);
};
};
module.exports = { createErrorHandler };
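A hedged usage sketch for createErrorHandler: the handler is built once per request with a closure over mutable run state and invoked from the catch block. The context field names mirror the ErrorHandlerContext typedef above; everything else is illustrative.

// Hedged sketch (not part of the diff): using createErrorHandler in a chat route.
const { createErrorHandler } = require('./errors'); // path assumed

async function chatWithAssistant(req, res) {
  let openai;
  let run_id;
  let completedRun = false;

  const handleError = createErrorHandler({
    req,
    res,
    getContext: () => ({
      openai,
      run_id,
      completedRun,
      endpoint: req.body.endpoint,
      cacheKey: `${req.user.id}:${req.body.conversationId}`, // illustrative
      assistant_id: req.body.assistant_id,
      conversationId: req.body.conversationId,
      parentMessageId: req.body.parentMessageId,
      responseMessageId: req.body.responseMessageId, // illustrative field name
    }),
  });

  try {
    // ...initialize the client, create the run, stream the response...
  } catch (error) {
    await handleError(error);
  }
}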

View File

@@ -0,0 +1,106 @@
const { HttpsProxyAgent } = require('https-proxy-agent');
const { resolveHeaders } = require('librechat-data-provider');
const { createLLM } = require('~/app/clients/llm');
/**
* Initializes and returns a Language Learning Model (LLM) instance.
*
* @param {Object} options - Configuration options for the LLM.
* @param {string} options.model - The model identifier.
* @param {string} options.modelName - The specific name of the model.
* @param {number} options.temperature - The temperature setting for the model.
* @param {number} options.presence_penalty - The presence penalty for the model.
* @param {number} options.frequency_penalty - The frequency penalty for the model.
* @param {number} options.max_tokens - The maximum number of tokens for the model output.
* @param {boolean} options.streaming - Whether to use streaming for the model output.
* @param {Object} options.context - The context for the conversation.
* @param {number} options.tokenBuffer - The token buffer size.
* @param {number} options.initialMessageCount - The initial message count.
* @param {string} options.conversationId - The ID of the conversation.
* @param {string} options.user - The user identifier.
* @param {string} options.langchainProxy - The langchain proxy URL.
* @param {boolean} options.useOpenRouter - Whether to use OpenRouter.
* @param {Object} options.options - Additional options.
* @param {Object} options.options.headers - Custom headers for the request.
* @param {string} options.options.proxy - Proxy URL.
* @param {Object} options.options.req - The request object.
* @param {Object} options.options.res - The response object.
* @param {boolean} options.options.debug - Whether to enable debug mode.
* @param {string} options.apiKey - The API key for authentication.
* @param {Object} options.azure - Azure-specific configuration.
* @param {Object} options.abortController - The AbortController instance.
* @returns {Object} The initialized LLM instance.
*/
function initializeLLM(options) {
const {
model,
modelName,
temperature,
presence_penalty,
frequency_penalty,
max_tokens,
streaming,
user,
langchainProxy,
useOpenRouter,
options: { headers, proxy },
apiKey,
azure,
} = options;
const modelOptions = {
modelName: modelName || model,
temperature,
presence_penalty,
frequency_penalty,
user,
};
if (max_tokens) {
modelOptions.max_tokens = max_tokens;
}
const configOptions = {};
if (langchainProxy) {
configOptions.basePath = langchainProxy;
}
if (useOpenRouter) {
configOptions.basePath = 'https://openrouter.ai/api/v1';
configOptions.baseOptions = {
headers: {
'HTTP-Referer': 'https://librechat.ai',
'X-Title': 'LibreChat',
},
};
}
if (headers && typeof headers === 'object' && !Array.isArray(headers)) {
configOptions.baseOptions = {
headers: resolveHeaders({
...headers,
...configOptions?.baseOptions?.headers,
}),
};
}
if (proxy) {
configOptions.httpAgent = new HttpsProxyAgent(proxy);
configOptions.httpsAgent = new HttpsProxyAgent(proxy);
}
const llm = createLLM({
modelOptions,
configOptions,
openAIApiKey: apiKey,
azure,
streaming,
});
return llm;
}
module.exports = {
initializeLLM,
};
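A hedged example of a call site; the values are illustrative, and the nested `options` object is required because the function destructures `headers` and `proxy` from it.

// Hedged sketch (not part of the diff): calling initializeLLM.
const { initializeLLM } = require('./llm'); // path assumed

const llm = initializeLLM({
  model: 'gpt-4o-mini', // illustrative model
  modelName: 'gpt-4o-mini',
  temperature: 0.2,
  presence_penalty: 0,
  frequency_penalty: 0,
  max_tokens: 1024,
  streaming: true,
  user: 'user-123',
  apiKey: process.env.OPENAI_API_KEY,
  // Must be an object: the function destructures `headers` and `proxy` from it.
  options: { headers: { 'X-Example': 'true' }, proxy: undefined },
});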

View File

@@ -0,0 +1,142 @@
const { Constants } = require('librechat-data-provider');
const { createAbortController, handleAbortError } = require('~/server/middleware');
const { sendMessage } = require('~/server/utils');
const { saveMessage } = require('~/models');
const { logger } = require('~/config');
const AgentController = async (req, res, next, initializeClient, addTitle) => {
let {
text,
endpointOption,
conversationId,
parentMessageId = null,
overrideParentMessageId = null,
} = req.body;
let sender;
let userMessage;
let promptTokens;
let userMessageId;
let responseMessageId;
let userMessagePromise;
const newConvo = !conversationId;
const user = req.user.id;
const getReqData = (data = {}) => {
for (let key in data) {
if (key === 'userMessage') {
userMessage = data[key];
userMessageId = data[key].messageId;
} else if (key === 'userMessagePromise') {
userMessagePromise = data[key];
} else if (key === 'responseMessageId') {
responseMessageId = data[key];
} else if (key === 'promptTokens') {
promptTokens = data[key];
} else if (key === 'sender') {
sender = data[key];
} else if (!conversationId && key === 'conversationId') {
conversationId = data[key];
}
}
};
try {
/** @type {{ client: TAgentClient }} */
const { client } = await initializeClient({ req, res, endpointOption });
const getAbortData = () => ({
sender,
userMessage,
promptTokens,
conversationId,
userMessagePromise,
messageId: responseMessageId,
content: client.getContentParts(),
parentMessageId: overrideParentMessageId ?? userMessageId,
});
const { abortController, onStart } = createAbortController(req, res, getAbortData, getReqData);
res.on('close', () => {
logger.debug('[AgentController] Request closed');
if (!abortController) {
return;
} else if (abortController.signal.aborted) {
return;
} else if (abortController.requestCompleted) {
return;
}
abortController.abort();
logger.debug('[AgentController] Request aborted on close');
});
const messageOptions = {
user,
onStart,
getReqData,
conversationId,
parentMessageId,
abortController,
overrideParentMessageId,
progressOptions: {
res,
// parentMessageId: overrideParentMessageId || userMessageId,
},
};
let response = await client.sendMessage(text, messageOptions);
response.endpoint = endpointOption.endpoint;
const { conversation = {} } = await client.responsePromise;
conversation.title =
conversation && !conversation.title ? null : conversation?.title || 'New Chat';
if (client.options.attachments) {
userMessage.files = client.options.attachments;
delete userMessage.image_urls;
}
if (!abortController.signal.aborted) {
sendMessage(res, {
final: true,
conversation,
title: conversation.title,
requestMessage: userMessage,
responseMessage: response,
});
res.end();
await saveMessage(
req,
{ ...response, user },
{ context: 'api/server/controllers/agents/request.js - response end' },
);
}
if (!client.skipSaveUserMessage) {
await saveMessage(req, userMessage, {
context: 'api/server/controllers/agents/request.js - don\'t skip saving user message',
});
}
if (addTitle && parentMessageId === Constants.NO_PARENT && newConvo) {
addTitle(req, {
text,
response,
client,
});
}
} catch (error) {
handleAbortError(res, req, error, {
conversationId,
sender,
messageId: responseMessageId,
parentMessageId: userMessageId ?? parentMessageId,
});
}
};
module.exports = AgentController;

View File

@@ -0,0 +1,67 @@
const { Run, Providers } = require('@librechat/agents');
const { providerEndpointMap } = require('librechat-data-provider');
/**
* @typedef {import('@librechat/agents').t} t
* @typedef {import('@librechat/agents').StreamEventData} StreamEventData
* @typedef {import('@librechat/agents').ClientOptions} ClientOptions
* @typedef {import('@librechat/agents').EventHandler} EventHandler
* @typedef {import('@librechat/agents').GraphEvents} GraphEvents
* @typedef {import('@librechat/agents').IState} IState
*/
/**
* Creates a new Run instance with custom handlers and configuration.
*
* @param {Object} options - The options for creating the Run instance.
* @param {ServerRequest} [options.req] - The server request.
* @param {string | undefined} [options.runId] - Optional run ID; otherwise, a new run ID will be generated.
* @param {Agent} options.agent - The agent for this run.
* @param {StructuredTool[] | undefined} [options.tools] - The tools to use in the run.
* @param {Record<string, StructuredTool[]> | undefined} [options.toolMap] - The tool map for the run.
* @param {Record<GraphEvents, EventHandler> | undefined} [options.customHandlers] - Custom event handlers.
* @param {ClientOptions} [options.modelOptions] - Optional model to use; if not provided, it will use the default from modelMap.
* @param {boolean} [options.streaming=true] - Whether to use streaming.
* @param {boolean} [options.streamUsage=true] - Whether to stream usage information.
* @returns {Promise<Run<IState>>} A promise that resolves to a new Run instance.
*/
async function createRun({
runId,
tools,
agent,
toolMap,
modelOptions,
customHandlers,
streaming = true,
streamUsage = true,
}) {
const llmConfig = Object.assign(
{
provider: providerEndpointMap[agent.provider],
streaming,
streamUsage,
},
modelOptions,
);
const graphConfig = {
runId,
llmConfig,
tools,
toolMap,
instructions: agent.instructions,
additional_instructions: agent.additional_instructions,
};
// TEMPORARY FOR TESTING
if (agent.provider === Providers.ANTHROPIC) {
graphConfig.streamBuffer = 2000;
}
return Run.create({
graphConfig,
customHandlers,
});
}
module.exports = { createRun };
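A hedged sketch of creating and streaming a run with handlers from callbacks.js; it assumes an async request context and loosely mirrors the config shape used by AgentClient.chatCompletion above.

// Hedged sketch (not part of the diff): creating and streaming a run.
const { createRun } = require('./run'); // path assumed

async function streamAgentRun({ req, agent, tools, toolMap, runId, modelOptions, eventHandlers, messages, signal }) {
  const run = await createRun({
    req,
    agent,
    tools,
    toolMap,
    runId,
    modelOptions,
    customHandlers: eventHandlers,
  });
  const config = {
    configurable: { thread_id: 'conversation-id' }, // illustrative; AgentClient also sets `provider` here
    run_id: runId,
    signal,
    streamMode: 'values',
    version: 'v2',
  };
  await run.processStream({ messages }, config);
  return run;
}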

View File

@@ -0,0 +1,239 @@
const { nanoid } = require('nanoid');
const { FileContext, Constants } = require('librechat-data-provider');
const {
getAgent,
createAgent,
updateAgent,
deleteAgent,
getListAgents,
} = require('~/models/Agent');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { uploadImageBuffer } = require('~/server/services/Files/process');
const { getProjectByName } = require('~/models/Project');
const { updateAgentProjects } = require('~/models/Agent');
const { deleteFileByFilter } = require('~/models/File');
const { logger } = require('~/config');
/**
* Creates an Agent.
* @route POST /Agents
* @param {ServerRequest} req - The request object.
* @param {AgentCreateParams} req.body - The request body.
* @param {ServerResponse} res - The response object.
* @returns {Agent} 201 - success response - application/json
*/
const createAgentHandler = async (req, res) => {
try {
const { tools = [], provider, name, description, instructions, model, ...agentData } = req.body;
const { id: userId } = req.user;
agentData.tools = tools
.map((tool) => (typeof tool === 'string' ? req.app.locals.availableTools[tool] : tool))
.filter(Boolean);
Object.assign(agentData, {
author: userId,
name,
description,
instructions,
provider,
model,
});
agentData.id = `agent_${nanoid()}`;
const agent = await createAgent(agentData);
res.status(201).json(agent);
} catch (error) {
logger.error('[/Agents] Error creating agent', error);
res.status(500).json({ error: error.message });
}
};
/**
* Retrieves an Agent by ID.
* @route GET /Agents/:id
* @param {object} req - Express Request
* @param {object} req.params - Request params
* @param {string} req.params.id - Agent identifier.
* @param {object} req.user - Authenticated user information
* @param {string} req.user.id - User ID
* @returns {Promise<Agent>} 200 - success response - application/json
* @returns {Error} 404 - Agent not found
*/
const getAgentHandler = async (req, res) => {
try {
const id = req.params.id;
const author = req.user.id;
let query = { id, author };
const globalProject = await getProjectByName(Constants.GLOBAL_PROJECT_NAME, ['agentIds']);
if (globalProject && (globalProject.agentIds?.length ?? 0) > 0) {
query = {
$or: [{ id, $in: globalProject.agentIds }, query],
};
}
const agent = await getAgent(query);
if (!agent) {
return res.status(404).json({ error: 'Agent not found' });
}
if (agent.author !== author) {
delete agent.author;
}
return res.status(200).json(agent);
} catch (error) {
logger.error('[/Agents/:id] Error retrieving agent', error);
res.status(500).json({ error: error.message });
}
};
/**
* Updates an Agent.
* @route PATCH /Agents/:id
* @param {object} req - Express Request
* @param {object} req.params - Request params
* @param {string} req.params.id - Agent identifier.
* @param {AgentUpdateParams} req.body - The Agent update parameters.
* @returns {Agent} 200 - success response - application/json
*/
const updateAgentHandler = async (req, res) => {
try {
const id = req.params.id;
const { projectIds, removeProjectIds, ...updateData } = req.body;
let updatedAgent;
if (Object.keys(updateData).length > 0) {
updatedAgent = await updateAgent({ id, author: req.user.id }, updateData);
}
if (projectIds || removeProjectIds) {
updatedAgent = await updateAgentProjects(id, projectIds, removeProjectIds);
}
return res.json(updatedAgent);
} catch (error) {
logger.error('[/Agents/:id] Error updating Agent', error);
res.status(500).json({ error: error.message });
}
};
/**
* Deletes an Agent based on the provided ID.
* @route DELETE /Agents/:id
* @param {object} req - Express Request
* @param {object} req.params - Request params
* @param {string} req.params.id - Agent identifier.
* @returns {Agent} 200 - success response - application/json
*/
const deleteAgentHandler = async (req, res) => {
try {
const id = req.params.id;
const agent = await getAgent({ id });
if (!agent) {
return res.status(404).json({ error: 'Agent not found' });
}
await deleteAgent({ id, author: req.user.id });
return res.json({ message: 'Agent deleted' });
} catch (error) {
logger.error('[/Agents/:id] Error deleting Agent', error);
res.status(500).json({ error: error.message });
}
};
/**
*
* @route GET /Agents
* @param {object} req - Express Request
* @param {object} req.query - Request query
* @param {string} [req.query.user] - The user ID of the agent's author.
* @returns {Promise<AgentListResponse>} 200 - success response - application/json
*/
const getListAgentsHandler = async (req, res) => {
try {
const data = await getListAgents({
author: req.user.id,
});
return res.json(data);
} catch (error) {
logger.error('[/Agents] Error listing Agents', error);
res.status(500).json({ error: error.message });
}
};
/**
* Uploads and updates an avatar for a specific agent.
* @route POST /avatar/:agent_id
* @param {object} req - Express Request
* @param {object} req.params - Request params
* @param {string} req.params.agent_id - The ID of the agent.
* @param {Express.Multer.File} req.file - The avatar image file.
* @param {object} req.body - Request body
* @param {string} [req.body.avatar] - Optional existing avatar metadata for the agent, as a JSON string (used to delete the previous file).
* @returns {Object} 200 - success response - application/json
*/
const uploadAgentAvatarHandler = async (req, res) => {
try {
const { agent_id } = req.params;
if (!agent_id) {
return res.status(400).json({ message: 'Agent ID is required' });
}
let { avatar: _avatar = '{}' } = req.body;
const image = await uploadImageBuffer({
req,
context: FileContext.avatar,
metadata: {
buffer: req.file.buffer,
},
});
try {
_avatar = JSON.parse(_avatar);
} catch (error) {
logger.error('[/avatar/:agent_id] Error parsing avatar', error);
_avatar = {};
}
if (_avatar && _avatar.source) {
const { deleteFile } = getStrategyFunctions(_avatar.source);
try {
await deleteFile(req, { filepath: _avatar.filepath });
await deleteFileByFilter({ filepath: _avatar.filepath });
} catch (error) {
logger.error('[/avatar/:agent_id] Error deleting old avatar', error);
}
}
const promises = [];
const data = {
avatar: {
filepath: image.filepath,
source: req.app.locals.fileStrategy,
},
};
promises.push(await updateAgent({ id: agent_id, author: req.user.id }, data));
const resolved = await Promise.all(promises);
res.status(201).json(resolved[0]);
} catch (error) {
const message = 'An error occurred while updating the Agent Avatar';
logger.error(message, error);
res.status(500).json({ message });
}
};
module.exports = {
createAgent: createAgentHandler,
getAgent: getAgentHandler,
updateAgent: updateAgentHandler,
deleteAgent: deleteAgentHandler,
getListAgents: getListAgentsHandler,
uploadAgentAvatar: uploadAgentAvatarHandler,
};
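A hedged sketch of mounting these handlers on an Express router; the paths mirror the @route annotations above, while the file layout and any auth or upload middleware are assumptions.

// Hedged sketch (not part of the diff): wiring the agent handlers to routes.
const express = require('express');
const v1 = require('./v1'); // path assumed

const router = express.Router();
router.post('/', v1.createAgent);
router.get('/', v1.getListAgents);
router.get('/:id', v1.getAgent);
router.patch('/:id', v1.updateAgent);
router.delete('/:id', v1.deleteAgent);
// The avatar route would also need a multipart middleware (e.g. multer) to populate req.file.
router.post('/avatar/:agent_id', v1.uploadAgentAvatar);

module.exports = router;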

View File

@@ -54,6 +54,7 @@ const chatV1 = async (req, res) => {
promptPrefix,
assistant_id,
instructions,
endpointOption,
thread_id: _thread_id,
messageId: _messageId,
conversationId: convoId,
@@ -283,7 +284,7 @@ const chatV1 = async (req, res) => {
const { openai: _openai, client } = await getOpenAIClient({
req,
res,
endpointOption: req.body.endpointOption,
endpointOption,
initAppClient: true,
});
@@ -312,6 +313,10 @@ const chatV1 = async (req, res) => {
body.additional_instructions = promptPrefix;
}
if (typeof endpointOption.artifactsPrompt === 'string' && endpointOption.artifactsPrompt) {
body.additional_instructions = `${body.additional_instructions ?? ''}\n${endpointOption.artifactsPrompt}`.trim();
}
if (instructions) {
body.instructions = instructions;
}
@@ -333,12 +338,12 @@ const chatV1 = async (req, res) => {
};
const addVisionPrompt = async () => {
if (!req.body.endpointOption.attachments) {
if (!endpointOption.attachments) {
return;
}
/** @type {MongoFile[]} */
const attachments = await req.body.endpointOption.attachments;
const attachments = await endpointOption.attachments;
if (attachments && attachments.every((attachment) => checkOpenAIStorage(attachment.source))) {
return;
}

View File

@@ -53,6 +53,7 @@ const chatV2 = async (req, res) => {
promptPrefix,
assistant_id,
instructions,
endpointOption,
thread_id: _thread_id,
messageId: _messageId,
conversationId: convoId,
@@ -160,7 +161,7 @@ const chatV2 = async (req, res) => {
const { openai: _openai, client } = await getOpenAIClient({
req,
res,
endpointOption: req.body.endpointOption,
endpointOption,
initAppClient: true,
});
@@ -194,6 +195,10 @@ const chatV2 = async (req, res) => {
body.additional_instructions = promptPrefix;
}
if (typeof endpointOption.artifactsPrompt === 'string' && endpointOption.artifactsPrompt) {
body.additional_instructions = `${body.additional_instructions ?? ''}\n${endpointOption.artifactsPrompt}`.trim();
}
if (instructions) {
body.instructions = instructions;
}

View File

@@ -64,7 +64,7 @@ const _listAssistants = async ({ req, res, version, query }) => {
* @param {object} params.res - The response object, used for initializing the client.
* @param {string} params.version - The API version to use.
* @param {Omit<AssistantListParams, 'endpoint'>} params.query - The query parameters to list assistants (e.g., limit, order).
* @returns {Promise<object>} A promise that resolves to the response from the `openai.beta.assistants.list` method call.
* @returns {Promise<Array<Assistant>>} A promise that resolves to the response from the `openai.beta.assistants.list` method call.
*/
const listAllAssistants = async ({ req, res, version, query }) => {
/** @type {{ openai: OpenAIClient }} */

View File

@@ -18,7 +18,9 @@ const createAssistant = async (req, res) => {
try {
const { openai } = await getOpenAIClient({ req, res });
const { tools = [], endpoint, ...assistantData } = req.body;
const { tools = [], endpoint, conversation_starters, ...assistantData } = req.body;
delete assistantData.conversation_starters;
assistantData.tools = tools
.map((tool) => {
if (typeof tool !== 'string') {
@@ -41,11 +43,22 @@ const createAssistant = async (req, res) => {
};
const assistant = await openai.beta.assistants.create(assistantData);
const promise = updateAssistantDoc({ assistant_id: assistant.id }, { user: req.user.id });
const createData = { user: req.user.id };
if (conversation_starters) {
createData.conversation_starters = conversation_starters;
}
const document = await updateAssistantDoc({ assistant_id: assistant.id }, createData);
if (azureModelIdentifier) {
assistant.model = azureModelIdentifier;
}
await promise;
if (document.conversation_starters) {
assistant.conversation_starters = document.conversation_starters;
}
logger.debug('/assistants/', assistant);
res.status(201).json(assistant);
} catch (error) {
@@ -88,7 +101,7 @@ const patchAssistant = async (req, res) => {
await validateAuthor({ req, openai });
const assistant_id = req.params.id;
const { endpoint: _e, ...updateData } = req.body;
const { endpoint: _e, conversation_starters, ...updateData } = req.body;
updateData.tools = (updateData.tools ?? [])
.map((tool) => {
if (typeof tool !== 'string') {
@@ -104,6 +117,15 @@ const patchAssistant = async (req, res) => {
}
const updatedAssistant = await openai.beta.assistants.update(assistant_id, updateData);
if (conversation_starters !== undefined) {
const conversationStartersUpdate = await updateAssistantDoc(
{ assistant_id },
{ conversation_starters },
);
updatedAssistant.conversation_starters = conversationStartersUpdate.conversation_starters;
}
res.json(updatedAssistant);
} catch (error) {
logger.error('[/assistants/:id] Error updating assistant', error);
@@ -153,6 +175,32 @@ const listAssistants = async (req, res) => {
}
};
/**
* Filter assistants based on configuration.
*
* @param {object} params - The parameters object.
* @param {string} params.userId - The user ID to filter private assistants.
* @param {AssistantDocument[]} params.documents - The list of assistant documents to filter.
* @param {Partial<TAssistantEndpoint>} [params.assistantsConfig] - The assistant configuration.
* @returns {AssistantDocument[]} - The filtered list of assistants.
*/
function filterAssistantDocs({ documents, userId, assistantsConfig = {} }) {
const { supportedIds, excludedIds, privateAssistants } = assistantsConfig;
const removeUserId = (doc) => {
const { user: _u, ...document } = doc;
return document;
};
if (privateAssistants) {
return documents.filter((doc) => userId === doc.user.toString()).map(removeUserId);
} else if (supportedIds?.length) {
return documents.filter((doc) => supportedIds.includes(doc.assistant_id)).map(removeUserId);
} else if (excludedIds?.length) {
return documents.filter((doc) => !excludedIds.includes(doc.assistant_id)).map(removeUserId);
}
return documents.map(removeUserId);
}
/**
* Returns a list of the user's assistant documents (metadata saved to database).
* @route GET /assistants/documents
@@ -160,7 +208,25 @@ const listAssistants = async (req, res) => {
*/
const getAssistantDocuments = async (req, res) => {
try {
res.json(await getAssistants({ user: req.user.id }));
const { endpoint } = req.query;
const assistantsConfig = req.app.locals[endpoint];
const documents = await getAssistants(
{},
{
user: 1,
assistant_id: 1,
conversation_starters: 1,
createdAt: 1,
updatedAt: 1,
},
);
const docs = filterAssistantDocs({
documents,
userId: req.user.id,
assistantsConfig,
});
res.json(docs);
} catch (error) {
logger.error('[/assistants/documents] Error listing assistant documents', error);
res.status(500).json({ error: error.message });

View File

@@ -16,7 +16,9 @@ const createAssistant = async (req, res) => {
/** @type {{ openai: OpenAIClient }} */
const { openai } = await getOpenAIClient({ req, res });
const { tools = [], endpoint, ...assistantData } = req.body;
const { tools = [], endpoint, conversation_starters, ...assistantData } = req.body;
delete assistantData.conversation_starters;
assistantData.tools = tools
.map((tool) => {
if (typeof tool !== 'string') {
@@ -39,11 +41,22 @@ const createAssistant = async (req, res) => {
};
const assistant = await openai.beta.assistants.create(assistantData);
const promise = updateAssistantDoc({ assistant_id: assistant.id }, { user: req.user.id });
const createData = { user: req.user.id };
if (conversation_starters) {
createData.conversation_starters = conversation_starters;
}
const document = await updateAssistantDoc({ assistant_id: assistant.id }, createData);
if (azureModelIdentifier) {
assistant.model = azureModelIdentifier;
}
await promise;
if (document.conversation_starters) {
assistant.conversation_starters = document.conversation_starters;
}
logger.debug('/assistants/', assistant);
res.status(201).json(assistant);
} catch (error) {
@@ -64,6 +77,17 @@ const createAssistant = async (req, res) => {
const updateAssistant = async ({ req, openai, assistant_id, updateData }) => {
await validateAuthor({ req, openai });
const tools = [];
let conversation_starters = null;
if (updateData?.conversation_starters) {
const conversationStartersUpdate = await updateAssistantDoc(
{ assistant_id: assistant_id },
{ conversation_starters: updateData.conversation_starters },
);
conversation_starters = conversationStartersUpdate.conversation_starters;
delete updateData.conversation_starters;
}
let hasFileSearch = false;
for (const tool of updateData.tools ?? []) {
@@ -108,7 +132,13 @@ const updateAssistant = async ({ req, openai, assistant_id, updateData }) => {
updateData.model = openai.locals.azureOptions.azureOpenAIApiDeploymentName;
}
return await openai.beta.assistants.update(assistant_id, updateData);
const assistant = await openai.beta.assistants.update(assistant_id, updateData);
if (conversation_starters) {
assistant.conversation_starters = conversation_starters;
}
return assistant;
};
/**

View File

@@ -7,6 +7,8 @@ const express = require('express');
const compression = require('compression');
const passport = require('passport');
const mongoSanitize = require('express-mongo-sanitize');
const fs = require('fs');
const cookieParser = require('cookie-parser');
const { jwtLogin, passportLogin } = require('~/strategies');
const { connectDb, indexSync } = require('~/lib/db');
const { isEnabled } = require('~/server/utils');
@@ -16,9 +18,9 @@ const validateImageRequest = require('./middleware/validateImageRequest');
const errorController = require('./controllers/ErrorController');
const configureSocialLogins = require('./socialLogins');
const AppService = require('./services/AppService');
const staticCache = require('./utils/staticCache');
const noIndex = require('./middleware/noIndex');
const routes = require('./routes');
const staticCache = require('./utils/staticCache');
const { PORT, HOST, ALLOW_SOCIAL_LOGIN, DISABLE_COMPRESSION } = process.env ?? {};
@@ -37,9 +39,12 @@ const startServer = async () => {
app.disable('x-powered-by');
await AppService(app);
const indexPath = path.join(app.locals.paths.dist, 'index.html');
const indexHTML = fs.readFileSync(indexPath, 'utf8');
app.get('/health', (_req, res) => res.status(200).send('OK'));
// Middleware
/* Middleware */
app.use(noIndex);
app.use(errorController);
app.use(express.json({ limit: '3mb' }));
@@ -48,10 +53,11 @@ const startServer = async () => {
app.use(staticCache(app.locals.paths.dist));
app.use(staticCache(app.locals.paths.fonts));
app.use(staticCache(app.locals.paths.assets));
app.set('trust proxy', 1); // trust first proxy
app.set('trust proxy', 1); /* trust first proxy */
app.use(cors());
app.use(cookieParser());
if (DISABLE_COMPRESSION !== 'true') {
if (!isEnabled(DISABLE_COMPRESSION)) {
app.use(compression());
}
@@ -61,12 +67,12 @@ const startServer = async () => {
);
}
// OAUTH
/* OAUTH */
app.use(passport.initialize());
passport.use(await jwtLogin());
passport.use(passportLogin());
// LDAP Auth
/* LDAP Auth */
if (process.env.LDAP_URL && process.env.LDAP_USER_SEARCH_BASE) {
passport.use(ldapLogin);
}
@@ -76,7 +82,7 @@ const startServer = async () => {
}
app.use('/oauth', routes.oauth);
// API Endpoints
/* API Endpoints */
app.use('/api/auth', routes.auth);
app.use('/api/keys', routes.keys);
app.use('/api/user', routes.user);
@@ -99,10 +105,16 @@ const startServer = async () => {
app.use('/images/', validateImageRequest, routes.staticRoute);
app.use('/api/share', routes.share);
app.use('/api/roles', routes.roles);
app.use('/api/agents', routes.agents);
app.use('/api/bedrock', routes.bedrock);
app.use('/api/tags', routes.tags);
app.use((req, res) => {
res.sendFile(path.join(app.locals.paths.dist, 'index.html'));
// Replace lang attribute in index.html with lang from cookies or accept-language header
const lang = req.cookies.lang || req.headers['accept-language']?.split(',')[0] || 'en-US';
const updatedIndexHtml = indexHTML.replace(/lang="en-US"/g, `lang="${lang}"`);
res.send(updatedIndexHtml);
});
app.listen(port, host, () => {

View File

@@ -1,10 +1,10 @@
const { isAssistantsEndpoint } = require('librechat-data-provider');
const { isAssistantsEndpoint, ErrorTypes } = require('librechat-data-provider');
const { sendMessage, sendError, countTokens, isEnabled } = require('~/server/utils');
const { truncateText, smartTruncateText } = require('~/app/clients/prompts');
const clearPendingReq = require('~/cache/clearPendingReq');
const { spendTokens } = require('~/models/spendTokens');
const abortControllers = require('./abortControllers');
const { saveMessage, getConvo } = require('~/models');
const spendTokens = require('~/models/spendTokens');
const { abortRun } = require('./abortRun');
const { logger } = require('~/config');
@@ -107,7 +107,7 @@ const createAbortController = (req, res, getAbortData, getReqData) => {
finish_reason: 'incomplete',
endpoint: endpointOption.endpoint,
iconURL: endpointOption.iconURL,
model: endpointOption.modelOptions.model,
model: endpointOption.modelOptions?.model ?? endpointOption.model_parameters?.model,
unfinished: false,
error: false,
isCreatedByUser: false,
@@ -165,10 +165,14 @@ const handleAbortError = async (res, req, error, data) => {
);
}
const errorText = error?.message?.includes('"type"')
let errorText = error?.message?.includes('"type"')
? error.message
: 'An error occurred while processing your request. Please contact the Admin.';
if (error?.type === ErrorTypes.INVALID_REQUEST) {
errorText = `{"type":"${ErrorTypes.INVALID_REQUEST}"}`;
}
const respondWithError = async (partialText) => {
let options = {
sender,

View File

@@ -1,11 +1,13 @@
const { parseConvo, EModelEndpoint } = require('librechat-data-provider');
const { parseCompactConvo, EModelEndpoint, isAgentsEndpoint } = require('librechat-data-provider');
const { getModelsConfig } = require('~/server/controllers/ModelController');
const azureAssistants = require('~/server/services/Endpoints/azureAssistants');
const assistants = require('~/server/services/Endpoints/assistants');
const gptPlugins = require('~/server/services/Endpoints/gptPlugins');
const { processFiles } = require('~/server/services/Files/process');
const anthropic = require('~/server/services/Endpoints/anthropic');
const bedrock = require('~/server/services/Endpoints/bedrock');
const openAI = require('~/server/services/Endpoints/openAI');
const agents = require('~/server/services/Endpoints/agents');
const custom = require('~/server/services/Endpoints/custom');
const google = require('~/server/services/Endpoints/google');
const enforceModelSpec = require('./enforceModelSpec');
@@ -15,6 +17,8 @@ const buildFunction = {
[EModelEndpoint.openAI]: openAI.buildOptions,
[EModelEndpoint.google]: google.buildOptions,
[EModelEndpoint.custom]: custom.buildOptions,
[EModelEndpoint.agents]: agents.buildOptions,
[EModelEndpoint.bedrock]: bedrock.buildOptions,
[EModelEndpoint.azureOpenAI]: openAI.buildOptions,
[EModelEndpoint.anthropic]: anthropic.buildOptions,
[EModelEndpoint.gptPlugins]: gptPlugins.buildOptions,
@@ -24,7 +28,7 @@ const buildFunction = {
async function buildEndpointOption(req, res, next) {
const { endpoint, endpointType } = req.body;
const parsedBody = parseConvo({ endpoint, endpointType, conversation: req.body });
const parsedBody = parseCompactConvo({ endpoint, endpointType, conversation: req.body });
if (req.app.locals.modelSpecs?.list && req.app.locals.modelSpecs?.enforce) {
/** @type {{ list: TModelSpec[] }}*/
@@ -59,12 +63,13 @@ async function buildEndpointOption(req, res, next) {
}
}
req.body.endpointOption = buildFunction[endpointType ?? endpoint](
endpoint,
parsedBody,
endpointType,
);
const endpointFn = buildFunction[endpointType ?? endpoint];
const builder = isAgentsEndpoint(endpoint) ? (...args) => endpointFn(req, ...args) : endpointFn;
// TODO: use object params
req.body.endpointOption = builder(endpoint, parsedBody, endpointType);
// TODO: use `getModelsConfig` only when necessary
const modelsConfig = await getModelsConfig(req);
req.body.endpointOption.modelsConfig = modelsConfig;
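To make the builder indirection above concrete, a small sketch with a hypothetical buildOptions function; the agents builder presumably needs the request (for example req.user) to resolve the agent, which is why it is curried with req:

// Hypothetical builder using the agents-style signature (request first).
const exampleBuildOptions = (req, endpoint, parsedBody, endpointType) => ({
  user: req.user?.id,
  endpoint,
  endpointType,
  ...parsedBody,
});

const req = { user: { id: 'user_1' } }; // stand-in for the Express request
const builder = (...args) => exampleBuildOptions(req, ...args);
builder('agents', { agent_id: 'agent_abc123' }, undefined);
// -> { user: 'user_1', endpoint: 'agents', endpointType: undefined, agent_id: 'agent_abc123' }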

View File

@@ -0,0 +1,27 @@
const { getInvite } = require('~/models/inviteUser');
const { deleteTokens } = require('~/models/Token');
async function checkInviteUser(req, res, next) {
const token = req.body.token;
if (!token || token === 'undefined') {
next();
return;
}
try {
const invite = await getInvite(token, req.body.email);
if (!invite || invite.error === true) {
return res.status(400).json({ message: 'Invalid invite token' });
}
await deleteTokens({ token: invite.token });
req.invite = invite;
next();
} catch (error) {
return res.status(429).json({ message: error.message });
}
}
module.exports = checkInviteUser;
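A hedged usage example of the invite flow (field names and token value are hypothetical, and the call is assumed to run in an async context): the /register route shown later mounts checkInviteUser before validateRegistration, so a valid invite token allows signup even when ALLOW_REGISTRATION is off.

// Hypothetical client call carrying an invite token in the body:
await fetch('/api/auth/register', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    name: 'Invitee',
    email: 'invitee@example.com',
    password: 'example-password',
    token: 'abc123-invite-token', // looked up via getInvite, then consumed via deleteTokens
  }),
});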

View File

@@ -10,6 +10,7 @@ const requireLocalAuth = require('./requireLocalAuth');
const canDeleteAccount = require('./canDeleteAccount');
const requireLdapAuth = require('./requireLdapAuth');
const abortMiddleware = require('./abortMiddleware');
const checkInviteUser = require('./checkInviteUser');
const requireJwtAuth = require('./requireJwtAuth');
const validateModel = require('./validateModel');
const moderateText = require('./moderateText');
@@ -33,6 +34,7 @@ module.exports = {
moderateText,
validateModel,
requireJwtAuth,
checkInviteUser,
requireLdapAuth,
requireLocalAuth,
canDeleteAccount,

View File

@@ -1,4 +1,3 @@
const { SystemRoles } = require('librechat-data-provider');
const { getRoleByName } = require('~/models/Role');
/**
@@ -17,10 +16,6 @@ const generateCheckAccess = (permissionType, permissions, bodyProps = {}) => {
return res.status(401).json({ message: 'Authorization required' });
}
if (user.role === SystemRoles.ADMIN) {
return next();
}
const role = await getRoleByName(user.role);
if (role && role[permissionType]) {
const hasAnyPermission = permissions.some((permission) => {

View File

@@ -1,10 +1,16 @@
const { isEnabled } = require('~/server/utils');
function validateRegistration(req, res, next) {
if (req.invite) {
return next();
}
if (isEnabled(process.env.ALLOW_REGISTRATION)) {
next();
} else {
res.status(403).send('Registration is not allowed.');
return res.status(403).json({
message: 'Registration is not allowed.',
});
}
}

View File

@@ -0,0 +1,166 @@
const express = require('express');
const { nanoid } = require('nanoid');
const { actionDelimiter } = require('librechat-data-provider');
const { encryptMetadata, domainParser } = require('~/server/services/ActionService');
const { updateAction, getActions, deleteAction } = require('~/models/Action');
const { getAgent, updateAgent } = require('~/models/Agent');
const { logger } = require('~/config');
const router = express.Router();
/**
* Retrieves all user's actions
* @route GET /actions/
* @param {string} req.params.id - Assistant identifier.
* @returns {Action[]} 200 - success response - application/json
*/
router.get('/', async (req, res) => {
try {
res.json(await getActions({ user: req.user.id }));
} catch (error) {
res.status(500).json({ error: error.message });
}
});
/**
* Adds or updates actions for a specific agent.
* @route POST /actions/:agent_id
* @param {string} req.params.agent_id - The ID of the agent.
* @param {FunctionTool[]} req.body.functions - The functions to be added or updated.
* @param {string} [req.body.action_id] - Optional ID for the action.
* @param {ActionMetadata} req.body.metadata - Metadata for the action.
* @returns {Object} 200 - success response - application/json
*/
router.post('/:agent_id', async (req, res) => {
try {
const { agent_id } = req.params;
/** @type {{ functions: FunctionTool[], action_id: string, metadata: ActionMetadata }} */
const { functions, action_id: _action_id, metadata: _metadata } = req.body;
if (!functions.length) {
return res.status(400).json({ message: 'No functions provided' });
}
let metadata = await encryptMetadata(_metadata);
let { domain } = metadata;
domain = await domainParser(req, domain, true);
if (!domain) {
return res.status(400).json({ message: 'No domain provided' });
}
const action_id = _action_id ?? nanoid();
const initialPromises = [];
// TODO: share agents
initialPromises.push(getAgent({ id: agent_id, author: req.user.id }));
if (_action_id) {
initialPromises.push(getActions({ action_id }, true));
}
/** @type {[Agent, [Action|undefined]]} */
const [agent, actions_result] = await Promise.all(initialPromises);
if (!agent) {
return res.status(404).json({ message: 'Agent not found for adding action' });
}
if (actions_result && actions_result.length) {
const action = actions_result[0];
metadata = { ...action.metadata, ...metadata };
}
const { actions: _actions = [] } = agent ?? {};
const actions = [];
for (const action of _actions) {
const [_action_domain, current_action_id] = action.split(actionDelimiter);
if (current_action_id === action_id) {
continue;
}
actions.push(action);
}
actions.push(`${domain}${actionDelimiter}${action_id}`);
/** @type {string[]} */
const { tools: _tools = [] } = agent;
const tools = _tools
.filter((tool) => !(tool && (tool.includes(domain) || tool.includes(action_id))))
.concat(functions.map((tool) => `${tool.function.name}${actionDelimiter}${domain}`));
const updatedAgent = await updateAgent(
{ id: agent_id, author: req.user.id },
{ tools, actions },
);
/** @type {[Action]} */
const updatedAction = await updateAction(
{ action_id },
{ metadata, agent_id, user: req.user.id },
);
const sensitiveFields = ['api_key', 'oauth_client_id', 'oauth_client_secret'];
for (let field of sensitiveFields) {
if (updatedAction.metadata[field]) {
delete updatedAction.metadata[field];
}
}
res.json([updatedAgent, updatedAction]);
} catch (error) {
const message = 'Trouble updating the Agent Action';
logger.error(message, error);
res.status(500).json({ message });
}
});
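To make the string bookkeeping in the handler above concrete, a hedged example; the delimiter value shown is a placeholder for the real actionDelimiter constant from librechat-data-provider, and the domain, id, and function name are hypothetical:

// Placeholder delimiter; the real constant comes from librechat-data-provider.
const actionDelimiter = '_action_';
const domain = 'api.example.com';
const action_id = 'V1StGXR8_Z5jdHi6B';

// Entry stored on agent.actions (one per action set):
const actionEntry = `${domain}${actionDelimiter}${action_id}`;
// -> 'api.example.com_action_V1StGXR8_Z5jdHi6B'

// Entry stored on agent.tools (one per function exposed by the action):
const toolEntry = `getWeather${actionDelimiter}${domain}`;
// -> 'getWeather_action_api.example.com'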
/**
* Deletes an action for a specific agent.
* @route DELETE /actions/:agent_id/:action_id
* @param {string} req.params.agent_id - The ID of the agent.
* @param {string} req.params.action_id - The ID of the action to delete.
* @returns {Object} 200 - success response - application/json
*/
router.delete('/:agent_id/:action_id', async (req, res) => {
try {
const { agent_id, action_id } = req.params;
const agent = await getAgent({ id: agent_id, author: req.user.id });
if (!agent) {
return res.status(404).json({ message: 'Agent not found for deleting action' });
}
const { tools = [], actions = [] } = agent;
let domain = '';
const updatedActions = actions.filter((action) => {
if (action.includes(action_id)) {
[domain] = action.split(actionDelimiter);
return false;
}
return true;
});
domain = await domainParser(req, domain, true);
if (!domain) {
return res.status(400).json({ message: 'No domain provided' });
}
const updatedTools = tools.filter((tool) => !(tool && tool.includes(domain)));
await updateAgent(
{ id: agent_id, author: req.user.id },
{ tools: updatedTools, actions: updatedActions },
);
await deleteAction({ action_id });
res.status(200).json({ message: 'Action deleted successfully' });
} catch (error) {
const message = 'Trouble deleting the Agent Action';
logger.error(message, error);
res.status(500).json({ message });
}
});
module.exports = router;

View File

@@ -0,0 +1,35 @@
const express = require('express');
const router = express.Router();
const {
setHeaders,
handleAbort,
// validateModel,
// validateEndpoint,
buildEndpointOption,
} = require('~/server/middleware');
const { initializeClient } = require('~/server/services/Endpoints/agents');
const AgentController = require('~/server/controllers/agents/request');
router.post('/abort', handleAbort());
/**
* @route POST /
* @desc Chat with an agent
* @access Public
* @param {express.Request} req - The request object, containing the request data.
* @param {express.Response} res - The response object, used to send back a response.
* @returns {void}
*/
router.post(
'/',
// validateModel,
// validateEndpoint,
buildEndpointOption,
setHeaders,
async (req, res, next) => {
await AgentController(req, res, next, initializeClient);
},
);
module.exports = router;

View File

@@ -0,0 +1,21 @@
const express = require('express');
const router = express.Router();
const {
uaParser,
checkBan,
requireJwtAuth,
// concurrentLimiter,
// messageIpLimiter,
// messageUserLimiter,
} = require('~/server/middleware');
const v1 = require('./v1');
const chat = require('./chat');
router.use(requireJwtAuth);
router.use(checkBan);
router.use(uaParser);
router.use('/', v1);
router.use('/chat', chat);
module.exports = router;

View File

@@ -0,0 +1,94 @@
const multer = require('multer');
const express = require('express');
const { PermissionTypes, Permissions } = require('librechat-data-provider');
const { requireJwtAuth, generateCheckAccess } = require('~/server/middleware');
const v1 = require('~/server/controllers/agents/v1');
const actions = require('./actions');
const upload = multer();
const router = express.Router();
const checkAgentAccess = generateCheckAccess(PermissionTypes.AGENTS, [Permissions.USE]);
const checkAgentCreate = generateCheckAccess(PermissionTypes.AGENTS, [
Permissions.USE,
Permissions.CREATE,
]);
const checkGlobalAgentShare = generateCheckAccess(
PermissionTypes.AGENTS,
[Permissions.USE, Permissions.CREATE],
{
[Permissions.SHARED_GLOBAL]: ['projectIds', 'removeProjectIds'],
},
);
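A hedged reading of the bodyProps argument above: when an update body touches projectIds or removeProjectIds (global sharing), the role must also grant SHARED_GLOBAL on top of USE and CREATE. Roughly, as a simplified sketch rather than the actual middleware body:

// Simplified sketch (not the actual implementation): given the permissions a role
// grants for AGENTS and a request body, decide whether global-share fields are allowed.
const canTouchGlobalShare = (agentPermissions, body) => {
  const touchesGlobalShare = ['projectIds', 'removeProjectIds'].some((key) => key in body);
  return !touchesGlobalShare || Boolean(agentPermissions.SHARED_GLOBAL);
};

canTouchGlobalShare({ USE: true, CREATE: true }, { projectIds: ['global'] });                      // false
canTouchGlobalShare({ USE: true, CREATE: true, SHARED_GLOBAL: true }, { projectIds: ['global'] }); // true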
router.use(requireJwtAuth);
router.use(checkAgentAccess);
/**
* Agent actions route.
* @route GET|POST /agents/actions
*/
router.use('/actions', actions);
/**
* Get a list of available tools for agents.
* @route GET /agents/tools
* @returns {TPlugin[]} 200 - application/json
*/
router.use('/tools', (req, res) => {
res.json([]);
});
/**
* Creates an agent.
* @route POST /agents
* @param {AgentCreateParams} req.body - The agent creation parameters.
* @returns {Agent} 201 - Success response - application/json
*/
router.post('/', checkAgentCreate, v1.createAgent);
/**
* Retrieves an agent.
* @route GET /agents/:id
* @param {string} req.params.id - Agent identifier.
* @returns {Agent} 200 - Success response - application/json
*/
router.get('/:id', checkAgentAccess, v1.getAgent);
/**
* Updates an agent.
* @route PATCH /agents/:id
* @param {string} req.params.id - Agent identifier.
* @param {AgentUpdateParams} req.body - The agent update parameters.
* @returns {Agent} 200 - Success response - application/json
*/
router.patch('/:id', checkGlobalAgentShare, v1.updateAgent);
/**
* Deletes an agent.
* @route DELETE /agents/:id
* @param {string} req.params.id - Agent identifier.
* @returns {Agent} 200 - success response - application/json
*/
router.delete('/:id', checkAgentCreate, v1.deleteAgent);
/**
* Returns a list of agents.
* @route GET /agents
* @param {AgentListParams} req.query - The agent list parameters for pagination and sorting.
* @returns {AgentListResponse} 200 - success response - application/json
*/
router.get('/', checkAgentAccess, v1.getListAgents);
/**
* Uploads and updates an avatar for a specific agent.
* @route POST /avatar/:agent_id
* @param {string} req.params.agent_id - The ID of the agent.
* @param {Express.Multer.File} req.file - The avatar image file.
* @param {string} [req.body.metadata] - Optional metadata for the agent's avatar.
* @returns {Object} 200 - success response - application/json
*/
router.post('/avatar/:agent_id', checkAgentAccess, upload.single('file'), v1.uploadAgentAvatar);
module.exports = router;

View File

@@ -1,5 +1,5 @@
const { v4 } = require('uuid');
const express = require('express');
const { nanoid } = require('nanoid');
const { encryptMetadata, domainParser } = require('~/server/services/ActionService');
const { actionDelimiter, EModelEndpoint } = require('librechat-data-provider');
const { getOpenAIClient } = require('~/server/controllers/assistants/helpers');
@@ -9,20 +9,6 @@ const { logger } = require('~/config');
const router = express.Router();
/**
* Retrieves all user's actions
* @route GET /actions/
* @param {string} req.params.id - Assistant identifier.
* @returns {Action[]} 200 - success response - application/json
*/
router.get('/', async (req, res) => {
try {
res.json(await getActions());
} catch (error) {
res.status(500).json({ error: error.message });
}
});
/**
* Adds or updates actions for a specific assistant.
* @route POST /actions/:assistant_id
@@ -51,7 +37,7 @@ router.post('/:assistant_id', async (req, res) => {
return res.status(400).json({ message: 'No domain provided' });
}
const action_id = _action_id ?? v4();
const action_id = _action_id ?? nanoid();
const initialPromises = [];
const { openai } = await getOpenAIClient({ req, res });
@@ -178,6 +164,10 @@ router.delete('/:assistant_id/:action_id/:model', async (req, res) => {
domain = await domainParser(req, domain, true);
if (!domain) {
return res.status(400).json({ message: 'No domain provided' });
}
const updatedTools = tools.filter(
(tool) => !(tool.function && tool.function.name.includes(domain)),
);

View File

@@ -0,0 +1,13 @@
const express = require('express');
const controllers = require('~/server/controllers/assistants/v1');
const router = express.Router();
/**
* Returns a list of the user's assistant documents (metadata saved to database).
* @route GET /assistants/documents
* @returns {AssistantDocument[]} 200 - success response - application/json
*/
router.get('/', controllers.getAssistantDocuments);
module.exports = router;

View File

@@ -1,6 +1,7 @@
const multer = require('multer');
const express = require('express');
const controllers = require('~/server/controllers/assistants/v1');
const documents = require('./documents');
const actions = require('./actions');
const tools = require('./tools');
@@ -20,6 +21,13 @@ router.use('/actions', actions);
*/
router.use('/tools', tools);
/**
* Returns a list of the user's assistant documents (metadata saved to database).
* @route GET /assistants/documents
* @returns {AssistantDocument[]} 200 - application/json
*/
router.use('/documents', documents);
/**
* Create an assistant.
* @route POST /assistants
@@ -61,13 +69,6 @@ router.delete('/:id', controllers.deleteAssistant);
*/
router.get('/', controllers.listAssistants);
/**
* Returns a list of the user's assistant documents (metadata saved to database).
* @route GET /assistants/documents
* @returns {AssistantDocument[]} 200 - success response - application/json
*/
router.get('/documents', controllers.getAssistantDocuments);
/**
* Uploads and updates an avatar for a specific assistant.
* @route POST /avatar/:assistant_id

View File

@@ -2,6 +2,7 @@ const multer = require('multer');
const express = require('express');
const v1 = require('~/server/controllers/assistants/v1');
const v2 = require('~/server/controllers/assistants/v2');
const documents = require('./documents');
const actions = require('./actions');
const tools = require('./tools');
@@ -21,6 +22,13 @@ router.use('/actions', actions);
*/
router.use('/tools', tools);
/**
* Returns a list of the user's assistant documents (metadata saved to database).
* @route GET /assistants/documents
* @returns {AssistantDocument[]} 200 - application/json
*/
router.use('/documents', documents);
/**
* Create an assistant.
* @route POST /assistants
@@ -62,13 +70,6 @@ router.delete('/:id', v1.deleteAssistant);
*/
router.get('/', v1.listAssistants);
/**
* Returns a list of the user's assistant documents (metadata saved to database).
* @route GET /assistants/documents
* @returns {AssistantDocument[]} 200 - success response - application/json
*/
router.get('/documents', v1.getAssistantDocuments);
/**
* Uploads and updates an avatar for a specific assistant.
* @route POST /avatar/:assistant_id

View File

@@ -11,6 +11,7 @@ const {
checkBan,
loginLimiter,
requireJwtAuth,
checkInviteUser,
registerLimiter,
requireLdapAuth,
requireLocalAuth,
@@ -32,7 +33,14 @@ router.post(
loginController,
);
router.post('/refresh', refreshController);
router.post('/register', registerLimiter, checkBan, validateRegistration, registrationController);
router.post(
'/register',
registerLimiter,
checkBan,
checkInviteUser,
validateRegistration,
registrationController,
);
router.post(
'/requestPasswordReset',
resetPasswordLimiter,

View File

@@ -0,0 +1,36 @@
const express = require('express');
const router = express.Router();
const {
setHeaders,
handleAbort,
// validateModel,
// validateEndpoint,
buildEndpointOption,
} = require('~/server/middleware');
const { initializeClient } = require('~/server/services/Endpoints/bedrock');
const AgentController = require('~/server/controllers/agents/request');
const addTitle = require('~/server/services/Endpoints/bedrock/title');
router.post('/abort', handleAbort());
/**
* @route POST /
* @desc Chat via the AWS Bedrock endpoint
* @access Public
* @param {express.Request} req - The request object, containing the request data.
* @param {express.Response} res - The response object, used to send back a response.
* @returns {void}
*/
router.post(
'/',
// validateModel,
// validateEndpoint,
buildEndpointOption,
setHeaders,
async (req, res, next) => {
await AgentController(req, res, next, initializeClient, addTitle);
},
);
module.exports = router;

View File

@@ -0,0 +1,19 @@
const express = require('express');
const router = express.Router();
const {
uaParser,
checkBan,
requireJwtAuth,
// concurrentLimiter,
// messageIpLimiter,
// messageUserLimiter,
} = require('~/server/middleware');
const chat = require('./chat');
router.use(requireJwtAuth);
router.use(checkBan);
router.use(uaParser);
router.use('/chat', chat);
module.exports = router;

View File

@@ -1,5 +1,5 @@
const express = require('express');
const { CacheKeys, defaultSocialLogins } = require('librechat-data-provider');
const { CacheKeys, defaultSocialLogins, Constants } = require('librechat-data-provider');
const { getLdapConfig } = require('~/server/services/Config/ldap');
const { getProjectByName } = require('~/models/Project');
const { isEnabled } = require('~/server/utils');
@@ -32,7 +32,7 @@ router.get('/', async function (req, res) {
return today.getMonth() === 1 && today.getDate() === 11;
};
const instanceProject = await getProjectByName('instance', '_id');
const instanceProject = await getProjectByName(Constants.GLOBAL_PROJECT_NAME, '_id');
const ldap = getLdapConfig();

View File

@@ -8,7 +8,6 @@ const requireJwtAuth = require('~/server/middleware/requireJwtAuth');
const { forkConversation } = require('~/server/utils/import/fork');
const { importConversations } = require('~/server/utils/import');
const { createImportLimiters } = require('~/server/middleware');
const { updateTagsForConversation } = require('~/models/ConversationTag');
const getLogStores = require('~/cache/getLogStores');
const { sleep } = require('~/server/utils');
const { logger } = require('~/config');
@@ -174,18 +173,4 @@ router.post('/fork', async (req, res) => {
}
});
router.put('/tags/:conversationId', async (req, res) => {
try {
const conversationTags = await updateTagsForConversation(
req.user.id,
req.params.conversationId,
req.body.tags,
);
res.status(200).json(conversationTags);
} catch (error) {
logger.error('Error updating conversation tags', error);
res.status(500).send('Error updating conversation tags');
}
});
module.exports = router;

View File

@@ -1,13 +1,18 @@
const fs = require('fs').promises;
const express = require('express');
const { isUUID, checkOpenAIStorage } = require('librechat-data-provider');
const {
isUUID,
checkOpenAIStorage,
FileSources,
EModelEndpoint,
} = require('librechat-data-provider');
const {
filterFile,
processFileUpload,
processDeleteRequest,
} = require('~/server/services/Files/process');
const { initializeClient } = require('~/server/services/Endpoints/assistants');
const { getStrategyFunctions } = require('~/server/services/Files/strategies');
const { getOpenAIClient } = require('~/server/controllers/assistants/helpers');
const { getFiles } = require('~/models/File');
const { logger } = require('~/config');
@@ -113,7 +118,15 @@ router.get('/download/:userId/:file_id', async (req, res) => {
if (checkOpenAIStorage(file.source)) {
req.body = { model: file.model };
const { openai } = await initializeClient({ req, res });
const endpointMap = {
[FileSources.openai]: EModelEndpoint.assistants,
[FileSources.azure]: EModelEndpoint.azureAssistants,
};
const { openai } = await getOpenAIClient({
req,
res,
overrideEndpoint: endpointMap[file.source],
});
logger.debug(`Downloading file ${file_id} from OpenAI`);
passThrough = await getDownloadStream(file_id, openai);
setHeaders();

View File

@@ -1,51 +1,55 @@
const ask = require('./ask');
const edit = require('./edit');
const assistants = require('./assistants');
const categories = require('./categories');
const tokenizer = require('./tokenizer');
const endpoints = require('./endpoints');
const staticRoute = require('./static');
const messages = require('./messages');
const convos = require('./convos');
const presets = require('./presets');
const prompts = require('./prompts');
const search = require('./search');
const tokenizer = require('./tokenizer');
const auth = require('./auth');
const keys = require('./keys');
const oauth = require('./oauth');
const endpoints = require('./endpoints');
const balance = require('./balance');
const models = require('./models');
const plugins = require('./plugins');
const user = require('./user');
const bedrock = require('./bedrock');
const search = require('./search');
const models = require('./models');
const convos = require('./convos');
const config = require('./config');
const assistants = require('./assistants');
const files = require('./files');
const staticRoute = require('./static');
const share = require('./share');
const categories = require('./categories');
const agents = require('./agents');
const roles = require('./roles');
const oauth = require('./oauth');
const files = require('./files');
const share = require('./share');
const tags = require('./tags');
const auth = require('./auth');
const edit = require('./edit');
const keys = require('./keys');
const user = require('./user');
const ask = require('./ask');
module.exports = {
search,
ask,
edit,
messages,
convos,
presets,
prompts,
auth,
keys,
oauth,
user,
tokenizer,
endpoints,
balance,
tags,
roles,
oauth,
files,
share,
agents,
bedrock,
convos,
search,
prompts,
config,
models,
plugins,
config,
presets,
balance,
messages,
endpoints,
tokenizer,
assistants,
files,
staticRoute,
share,
categories,
roles,
tags,
staticRoute,
};

View File

@@ -1,4 +1,5 @@
const express = require('express');
const { ContentTypes } = require('librechat-data-provider');
const { saveConvo, saveMessage, getMessages, updateMessage, deleteMessages } = require('~/models');
const { requireJwtAuth, validateMessageReq } = require('~/server/middleware');
const { countTokens } = require('~/server/utils');
@@ -54,11 +55,50 @@ router.get('/:conversationId/:messageId', validateMessageReq, async (req, res) =
router.put('/:conversationId/:messageId', validateMessageReq, async (req, res) => {
try {
const { messageId, model } = req.params;
const { text } = req.body;
const tokenCount = await countTokens(text, model);
const result = await updateMessage(req, { messageId, text, tokenCount });
res.status(200).json(result);
const { conversationId, messageId } = req.params;
const { text, index, model } = req.body;
if (index === undefined) {
const tokenCount = await countTokens(text, model);
const result = await updateMessage(req, { messageId, text, tokenCount });
return res.status(200).json(result);
}
if (typeof index !== 'number' || index < 0) {
return res.status(400).json({ error: 'Invalid index' });
}
const message = (await getMessages({ conversationId, messageId }, 'content tokenCount'))?.[0];
if (!message) {
return res.status(404).json({ error: 'Message not found' });
}
const existingContent = message.content;
if (!Array.isArray(existingContent) || index >= existingContent.length) {
return res.status(400).json({ error: 'Invalid index' });
}
const updatedContent = [...existingContent];
if (!updatedContent[index]) {
return res.status(400).json({ error: 'Content part not found' });
}
if (updatedContent[index].type !== ContentTypes.TEXT) {
return res.status(400).json({ error: 'Cannot update non-text content' });
}
const oldText = updatedContent[index].text;
updatedContent[index] = { type: ContentTypes.TEXT, text };
let tokenCount = message.tokenCount;
if (tokenCount !== undefined) {
const oldTokenCount = await countTokens(oldText, model);
const newTokenCount = await countTokens(text, model);
tokenCount = Math.max(0, tokenCount - oldTokenCount) + newTokenCount;
}
const result = await updateMessage(req, { messageId, content: updatedContent, tokenCount });
return res.status(200).json(result);
} catch (error) {
logger.error('Error updating message:', error);
res.status(500).json({ error: 'Internal server error' });
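A small worked example of the token-count adjustment above, with hypothetical counts:

// Stored message: tokenCount = 120; the edited text part previously counted 30
// tokens and its replacement counts 45.
const tokenCount = Math.max(0, 120 - 30) + 45; // 135
// The Math.max(0, ...) guard keeps the running total from going negative when the
// stored count is already smaller than the count of the part being replaced.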

View File

@@ -24,6 +24,7 @@ const checkPromptCreate = generateCheckAccess(PermissionTypes.PROMPTS, [
Permissions.USE,
Permissions.CREATE,
]);
const checkGlobalPromptShare = generateCheckAccess(
PermissionTypes.PROMPTS,
[Permissions.USE, Permissions.CREATE],

View File

@@ -20,7 +20,10 @@ router.get('/:roleName', async (req, res) => {
// TODO: TEMP, use a better parsing for roleName
const roleName = _r.toUpperCase();
if (req.user.role !== SystemRoles.ADMIN && !roleDefaults[roleName]) {
if (
(req.user.role !== SystemRoles.ADMIN && roleName === SystemRoles.ADMIN) ||
(req.user.role !== SystemRoles.ADMIN && !roleDefaults[roleName])
) {
return res.status(403).send({ message: 'Unauthorized' });
}

View File

@@ -1,13 +1,21 @@
const express = require('express');
const { PermissionTypes, Permissions } = require('librechat-data-provider');
const {
getConversationTags,
updateConversationTag,
createConversationTag,
deleteConversationTag,
updateTagsForConversation,
} = require('~/models/ConversationTag');
const requireJwtAuth = require('~/server/middleware/requireJwtAuth');
const { requireJwtAuth, generateCheckAccess } = require('~/server/middleware');
const { logger } = require('~/config');
const router = express.Router();
const checkBookmarkAccess = generateCheckAccess(PermissionTypes.BOOKMARKS, [Permissions.USE]);
router.use(requireJwtAuth);
router.use(checkBookmarkAccess);
/**
* GET /
@@ -24,7 +32,7 @@ router.get('/', async (req, res) => {
res.status(404).end();
}
} catch (error) {
console.error('Error getting conversation tags:', error);
logger.error('Error getting conversation tags:', error);
res.status(500).json({ error: 'Internal server error' });
}
});
@@ -40,7 +48,7 @@ router.post('/', async (req, res) => {
const tag = await createConversationTag(req.user.id, req.body);
res.status(200).json(tag);
} catch (error) {
console.error('Error creating conversation tag:', error);
logger.error('Error creating conversation tag:', error);
res.status(500).json({ error: 'Internal server error' });
}
});
@@ -60,7 +68,7 @@ router.put('/:tag', async (req, res) => {
res.status(404).json({ error: 'Tag not found' });
}
} catch (error) {
console.error('Error updating conversation tag:', error);
logger.error('Error updating conversation tag:', error);
res.status(500).json({ error: 'Internal server error' });
}
});
@@ -80,9 +88,29 @@ router.delete('/:tag', async (req, res) => {
res.status(404).json({ error: 'Tag not found' });
}
} catch (error) {
console.error('Error deleting conversation tag:', error);
logger.error('Error deleting conversation tag:', error);
res.status(500).json({ error: 'Internal server error' });
}
});
/**
* PUT /convo/:conversationId
* Updates the tags for a conversation.
* @param {Object} req - Express request object
* @param {Object} res - Express response object
*/
router.put('/convo/:conversationId', async (req, res) => {
try {
const conversationTags = await updateTagsForConversation(
req.user.id,
req.params.conversationId,
req.body.tags,
);
res.status(200).json(conversationTags);
} catch (error) {
logger.error('Error updating conversation tags', error);
res.status(500).send('Error updating conversation tags');
}
});
module.exports = router;
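A hedged usage example for the relocated endpoint (the router is mounted at /api/tags in server/index.js above; the conversation id and tag names are hypothetical, and the call is assumed to run in an async context):

// Replaces the old PUT /api/convos/tags/:conversationId route.
const conversationId = 'uuid-of-conversation'; // hypothetical
await fetch(`/api/tags/convo/${conversationId}`, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ tags: ['research', 'follow-up'] }),
});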

View File

@@ -6,11 +6,15 @@ const {
verifyEmailController,
updateUserPluginsController,
resendVerificationController,
getTermsStatusController,
acceptTermsController,
} = require('~/server/controllers/UserController');
const router = express.Router();
router.get('/', requireJwtAuth, getUserController);
router.get('/terms', requireJwtAuth, getTermsStatusController);
router.post('/terms/accept', requireJwtAuth, acceptTermsController);
router.post('/plugins', requireJwtAuth, updateUserPluginsController);
router.delete('/delete', requireJwtAuth, canDeleteAccount, deleteUserController);
router.post('/verify', verifyEmailController);

View File

@@ -6,6 +6,7 @@ const {
isImageVisionTool,
actionDomainSeparator,
} = require('librechat-data-provider');
const { tool } = require('@langchain/core/tools');
const { encryptV2, decryptV2 } = require('~/server/utils/crypto');
const { getActions, deleteActions } = require('~/models/Action');
const { deleteAssistant } = require('~/models/Assistant');
@@ -101,7 +102,8 @@ async function domainParser(req, domain, inverse = false) {
*
* @param {Object} searchParams - The parameters for loading action sets.
* @param {string} searchParams.user - The user identifier.
* @param {string} searchParams.assistant_id - The assistant identifier.
* @param {string} [searchParams.agent_id] - The agent identifier.
* @param {string} [searchParams.assistant_id] - The assistant identifier.
* @returns {Promise<Action[] | null>} A promise that resolves to an array of actions or `null` if no match.
*/
async function loadActionSets(searchParams) {
@@ -114,10 +116,14 @@ async function loadActionSets(searchParams) {
* @param {Object} params - The parameters for loading action sets.
* @param {Action} params.action - The action set. Necessary for decrypting authentication values.
* @param {ActionRequest} params.requestBuilder - The ActionRequest builder class to execute the API call.
* @returns { { _call: (toolInput: Object) => unknown} } An object with `_call` method to execute the tool input.
* @param {string | undefined} [params.name] - The name of the tool.
* @param {string | undefined} [params.description] - The description for the tool.
* @param {import('zod').ZodTypeAny | undefined} [params.zodSchema] - The Zod schema for tool input validation/definition.
* @returns {Promise<typeof tool | { _call: (toolInput: Object | string) => unknown }>} A LangChain tool when a name is provided, otherwise an object with a `_call` method to execute the tool input.
*/
async function createActionTool({ action, requestBuilder }) {
async function createActionTool({ action, requestBuilder, zodSchema, name, description }) {
action.metadata = await decryptMetadata(action.metadata);
/** @type {(toolInput: Object | string) => Promise<unknown>} */
const _call = async (toolInput) => {
try {
requestBuilder.setParams(toolInput);
@@ -142,6 +148,14 @@ async function createActionTool({ action, requestBuilder }) {
}
};
if (name) {
return tool(_call, {
name,
description: description || '',
schema: zodSchema,
});
}
return {
_call,
};
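A hedged sketch of calling the new structured-tool branch; the action and requestBuilder values are assumed to come from loadActionSets and an ActionRequest as elsewhere in this service, and the name, description, and schema are hypothetical:

const { z } = require('zod'); // zod is assumed available alongside @langchain/core

// When `name` is provided, createActionTool returns a LangChain tool built with
// tool(_call, { name, description, schema }); otherwise it returns the legacy { _call } object.
const weatherTool = await createActionTool({
  action,          // an Action set with (encrypted) metadata, assumed in scope
  requestBuilder,  // ActionRequest instance targeting the action's API, assumed in scope
  name: 'getWeather',
  description: 'Look up current weather for a city',
  zodSchema: z.object({ city: z.string() }),
});
// weatherTool.invoke({ city: 'Berlin' }) would execute the underlying API request.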
@@ -151,7 +165,7 @@ async function createActionTool({ action, requestBuilder }) {
* Encrypts sensitive metadata values for an action.
*
* @param {ActionMetadata} metadata - The action metadata to encrypt.
* @returns {ActionMetadata} The updated action metadata with encrypted values.
* @returns {Promise<ActionMetadata>} The updated action metadata with encrypted values.
*/
async function encryptMetadata(metadata) {
const encryptedMetadata = { ...metadata };
@@ -180,7 +194,7 @@ async function encryptMetadata(metadata) {
* Decrypts sensitive metadata values for an action.
*
* @param {ActionMetadata} metadata - The action metadata to decrypt.
* @returns {ActionMetadata} The updated action metadata with decrypted values.
* @returns {Promise<ActionMetadata>} The updated action metadata with decrypted values.
*/
async function decryptMetadata(metadata) {
const decryptedMetadata = { ...metadata };

View File

@@ -0,0 +1,87 @@
jest.mock('~/models/Role', () => ({
initializeRoles: jest.fn(),
updateAccessPermissions: jest.fn(),
getRoleByName: jest.fn(),
updateRoleByName: jest.fn(),
}));
jest.mock('~/config', () => ({
logger: {
info: jest.fn(),
warn: jest.fn(),
error: jest.fn(),
},
}));
jest.mock('./Config/loadCustomConfig', () => jest.fn());
jest.mock('./start/interface', () => ({
loadDefaultInterface: jest.fn(),
}));
jest.mock('./ToolService', () => ({
loadAndFormatTools: jest.fn().mockReturnValue({}),
}));
jest.mock('./start/checks', () => ({
checkVariables: jest.fn(),
checkHealth: jest.fn(),
checkConfig: jest.fn(),
checkAzureVariables: jest.fn(),
}));
const AppService = require('./AppService');
const { loadDefaultInterface } = require('./start/interface');
describe('AppService interface configuration', () => {
let app;
let mockLoadCustomConfig;
beforeEach(() => {
app = { locals: {} };
jest.resetModules();
jest.clearAllMocks();
mockLoadCustomConfig = require('./Config/loadCustomConfig');
});
it('should set prompts and bookmarks to true when loadDefaultInterface returns true for both', async () => {
mockLoadCustomConfig.mockResolvedValue({});
loadDefaultInterface.mockResolvedValue({ prompts: true, bookmarks: true });
await AppService(app);
expect(app.locals.interfaceConfig.prompts).toBe(true);
expect(app.locals.interfaceConfig.bookmarks).toBe(true);
expect(loadDefaultInterface).toHaveBeenCalled();
});
it('should set prompts and bookmarks to false when loadDefaultInterface returns false for both', async () => {
mockLoadCustomConfig.mockResolvedValue({ interface: { prompts: false, bookmarks: false } });
loadDefaultInterface.mockResolvedValue({ prompts: false, bookmarks: false });
await AppService(app);
expect(app.locals.interfaceConfig.prompts).toBe(false);
expect(app.locals.interfaceConfig.bookmarks).toBe(false);
expect(loadDefaultInterface).toHaveBeenCalled();
});
it('should not set prompts and bookmarks when loadDefaultInterface returns undefined for both', async () => {
mockLoadCustomConfig.mockResolvedValue({});
loadDefaultInterface.mockResolvedValue({});
await AppService(app);
expect(app.locals.interfaceConfig.prompts).toBeUndefined();
expect(app.locals.interfaceConfig.bookmarks).toBeUndefined();
expect(loadDefaultInterface).toHaveBeenCalled();
});
it('should set prompts and bookmarks to different values when loadDefaultInterface returns different values', async () => {
mockLoadCustomConfig.mockResolvedValue({ interface: { prompts: true, bookmarks: false } });
loadDefaultInterface.mockResolvedValue({ prompts: true, bookmarks: false });
await AppService(app);
expect(app.locals.interfaceConfig.prompts).toBe(true);
expect(app.locals.interfaceConfig.bookmarks).toBe(false);
expect(loadDefaultInterface).toHaveBeenCalled();
});
});

View File

@@ -45,7 +45,7 @@ const AppService = async (app) => {
const socialLogins =
config?.registration?.socialLogins ?? configDefaults?.registration?.socialLogins;
const interfaceConfig = loadDefaultInterface(config, configDefaults);
const interfaceConfig = await loadDefaultInterface(config, configDefaults);
const defaultLocals = {
paths,
@@ -94,18 +94,19 @@ const AppService = async (app) => {
);
}
if (endpoints?.[EModelEndpoint.openAI]) {
endpointLocals[EModelEndpoint.openAI] = endpoints[EModelEndpoint.openAI];
}
if (endpoints?.[EModelEndpoint.google]) {
endpointLocals[EModelEndpoint.google] = endpoints[EModelEndpoint.google];
}
if (endpoints?.[EModelEndpoint.anthropic]) {
endpointLocals[EModelEndpoint.anthropic] = endpoints[EModelEndpoint.anthropic];
}
if (endpoints?.[EModelEndpoint.gptPlugins]) {
endpointLocals[EModelEndpoint.gptPlugins] = endpoints[EModelEndpoint.gptPlugins];
}
const endpointKeys = [
EModelEndpoint.openAI,
EModelEndpoint.google,
EModelEndpoint.bedrock,
EModelEndpoint.anthropic,
EModelEndpoint.gptPlugins,
];
endpointKeys.forEach((key) => {
if (endpoints?.[key]) {
endpointLocals[key] = endpoints[key];
}
});
app.locals = {
...defaultLocals,

Some files were not shown because too many files have changed in this diff