Compare commits


86 Commits

Author SHA1 Message Date
Danny Avila
2607f157d3 Release: 0.5.3 (#599)
* release: 0.5.3, add extra linting rule and minor fix for EndpointDialog

* chore: revert to deprecated Message Classes as weird behavior was seen in Linux

* chore(api): remove unused test scripts
chore(package.json): remove unused langchain dependency

* chore(.gitignore): add /images directory to the ignore list
2023-07-06 21:03:31 -04:00
Fuegovic
69d192bac3 Docs: assets clean up (#598)
* Update ngrok.md

* Update linode.md

* Update cloudflare.md

* Update testing.md

* Update google_search.md

* Update introduction.md

* Update stable_diffusion.md

* Update wolfram.md

* docs: assets clean up
2023-07-06 17:41:22 -04:00
Danny Avila
fabd85ff40 hotfix(Plugins): retain message history, improve completion prompt handling (#597) 2023-07-06 16:45:39 -04:00
Danny Avila
12e2826d39 Update azure.md 2023-07-06 14:29:03 -04:00
Danny Avila
206fc50bc9 docs(azure.md): update instructions for using Azure with the Plugins endpoint 2023-07-06 14:29:03 -04:00
Danny Avila
b6a2634a0a refactor(PluginsClient.js): remove unnecessary console.debug statement 2023-07-06 14:29:03 -04:00
Danny Avila
5ad0ef331f feat: user_provided token for plugins, temp remove user_provided azure creds for Plugins until solution is made 2023-07-06 14:29:03 -04:00
Danny Avila
5b30ab5d43 chore(api): update langchain dependency to version 0.0.103 2023-07-06 14:29:03 -04:00
Danny Avila
8b91145953 refactor(SetTokenDialog): refactor to TS, modularize config logic into separate components 2023-07-06 14:29:03 -04:00
Danny Avila
b6f21af69b chore(eslint): update max-len rule to limit code length to 120 characters
chore(eslint): update quotes rule to enforce the use of single quotes
2023-07-06 14:29:03 -04:00
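These two rule changes also appear in the .eslintrc.js diff further down this page; as a minimal sketch, assuming the rest of the config is unchanged:

```js
// Minimal sketch of the two updated ESLint rules (see the .eslintrc.js diff below).
module.exports = {
  rules: {
    'max-len': ['error', { code: 120, ignoreStrings: true, ignoreTemplateLiterals: true, ignoreComments: true }],
    quotes: ['error', 'single'],
  },
};
```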
David
684ffd8e66 Update styles to match to openai's panel button (#595) 2023-07-06 14:25:17 -04:00
John Cao
da17649557 Update apis_and_tokens.md (#594)
`accessToken` rather than `access_token`
2023-07-06 14:23:51 -04:00
Marco Beretta
c343ae16c6 Cloudflare tunnels doc (#592)
* Update cloudflare.md

* Update cloudflare.md
2023-07-06 14:22:35 -04:00
Danny Avila
77d5fb0c58 refactor(BaseClient, GoogleClient): make sendCompletion required, refactor Google to use Base sendMessage (#591) 2023-07-05 14:00:12 -04:00
Dan Orlando
4e317c85fd fix: Remove script for migrateDataToFirstUser (#590) 2023-07-05 12:45:25 -04:00
Marco Beretta
3c1aeab340 Ngrok Documentation (#586)
* Ngrok doc

* Add files via upload

* Update README.md

* Update mkdocs.yml
2023-07-05 09:20:23 -04:00
Danny Avila
75250f3a5f Fix: Azure "user_provided" Frontend Credentials and Save Last Selected Bing Settings (#587)
* fix(Messages.jsx): fix <body> tag warning

* fix(NewConversationMenu): update localStorage with lastBingSettings when endpoint is 'bingAI'
fix(getDefaultConversation): retrieve lastBingSettings from localStorage and use it to set default values for jailbreak and toneStyle when endpoint is 'bingAI'
feat(settings.spec.js): add test to check if the active class is set on the selected endpoint in the settings menu

* fix(BingAIOptions): add data-testid to BingAIOptions SelectDropDown component
fix(settings.spec.js): update test to include additional steps for testing settings persistence

* fix(azure): support user_provided credentials from client
2023-07-05 09:18:45 -04:00
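A hedged sketch of the Bing settings persistence described above; the 'lastBingSettings' key comes from the commit message, while field names and defaults are assumptions:

```js
// Save the last-used Bing settings when the endpoint is 'bingAI'.
function saveBingSettings({ jailbreak, toneStyle }) {
  localStorage.setItem('lastBingSettings', JSON.stringify({ jailbreak, toneStyle }));
}

// Read them back when building the default conversation for 'bingAI'.
function getDefaultBingSettings() {
  const saved = JSON.parse(localStorage.getItem('lastBingSettings') ?? '{}');
  return {
    jailbreak: saved.jailbreak ?? false, // defaults are assumptions
    toneStyle: saved.toneStyle ?? 'creative',
  };
}
```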
Dan Orlando
04e4259005 Move data provider to shared package (#582)
* create data-provider package and move code from data-provider folder to be shared between apps

* fix type issues

* add packages to ignore

* add new data-provider package to apps

* refactor: change client imports to use @librechat/data-provider package

* include data-provider build script in frontend build

* fix type issue after rebasing

* delete admin/package.json from this branch

* update test ci script to include building of data-provider package

* Try using regular build for test action

* Switch frontend-review back to build:ci

* Remove loginRedirect from Login.tsx

* Add ChatGPT back to EModelEndpoint
2023-07-04 15:47:41 -04:00
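The client-side import change amounts to swapping a relative path for the workspace package; the named export below is illustrative, not a documented API:

```js
// Before the move (relative import inside the client; path is illustrative):
// import { getConversations } from '~/data-provider';

// After the move, importing from the shared workspace package:
import { getConversations } from '@librechat/data-provider';
```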
Marco Beretta
d0078d478d GitHub Login (#578)
* Add files via upload

* Create linode-setup.md

* Create cloudflare-setup.md

* Update cloudflare-setup.md

* Delete 4-linode.png

* Delete 3-linode.png

* Add files via upload

* Add files via upload

* Update cloudflare-setup.md

* Update linode-setup.md

* Rename cloudflare-setup.md to cloudflare.md

* Rename linode-setup.md to linode.md

* Update mkdocs.yml

* Update cloudflare.md

* Update linode.md

* Update README.md

* Update README.md

* Update linode.md

sentence in Italian

* v1

The frontend has been completed, along with the .env variables.

However, there is an issue of infinite loading thereafter.

* Fix email and remove deprecated GitHub passport

* Update user_auth_system.md

add How to Set Up GitHub Authentication

* Update .env.example

Improved the comment above the GitHub client ID and secret.

* Update user_auth_system.md

* Update package.json

* Remove unnecessary passport GitHub package

* fixed conflicts

 fixed conflicts between Berry-13:main and danny-avila:main

in api/server/index.js 45:54

* Delete e -i HEAD~2
2023-07-04 15:23:42 -04:00
Danny Avila
8819e83d2c refactor: Client Classes & Azure OpenAI as a separate Endpoint (#532)
* refactor: start new client classes, test localAi support

* feat: create base class, extend chatgpt from base

* refactor(BaseClient.js): change userId parameter to user
refactor(BaseClient.js): change userId parameter to user
feat(OpenAIClient.js): add sendMessage method
refactor(OpenAIClient.js): change getConversation method to use user parameter instead of userId
refactor(OpenAIClient.js): change saveMessageToDatabase method to use user parameter instead of userId
refactor(OpenAIClient.js): change buildPrompt method to use messages parameter instead of orderedMessages
feat(index.js): export client classes
refactor(askGPTPlugins.js): use req.body.token or process.env.OPENAI_API_KEY as OpenAI API key
refactor(index.js): comment out askOpenAI route
feat(index.js): add openAI route

feat(openAI.js): add new route for OpenAI API requests with support for progress updates and aborting requests.

* refactor(BaseClient.js): use optional chaining operator to access messageId property
refactor(OpenAIClient.js): use orderedMessages instead of messages to build prompt
refactor(OpenAIClient.js): use optional chaining operator to access messageId property
refactor(fetch-polyfill.js): remove fetch polyfill
refactor(openAI.js): comment out debug option in clientOptions

* refactor: update import statements and remove unused imports in several files
feat: add getAzureCredentials function to azureUtils module
docs: update comments in azureUtils module

* refactor(utils): rename migrateConversations to migrateDataToFirstUser for clarity and consistency

* feat(chatgpt-client.js): add getAzureCredentials function to retrieve Azure credentials
feat(chatgpt-client.js): use getAzureCredentials function to generate reverseProxyUrl
feat(OpenAIClient.js): add isChatCompletion property to determine if chat completion model is used
feat(OpenAIClient.js): add saveOptions parameter to sendMessage and buildPrompt methods
feat(OpenAIClient.js): modify buildPrompt method to handle chat completion model
feat(openAI.js): modify endpointOption to include modelOptions instead of individual options
refactor(OpenAIClient.js): modify getDelta property to use isChatCompletion property instead of isChatGptModel property
refactor(OpenAIClient.js): modify sendMessage method to use saveOptions parameter instead of modelOptions parameter
refactor(OpenAIClient.js): modify buildPrompt method to use saveOptions parameter instead of modelOptions parameter
refactor(OpenAIClient.js): modify ask method to include endpointOption parameter

* chore: delete draft file

* refactor(OpenAIClient.js): extract sendCompletion method from sendMessage method for reusability

* refactor(BaseClient.js): move sendMessage method to BaseClient class
feat(OpenAIClient.js): inherit from BaseClient class and implement necessary methods and properties for OpenAIClient class.

* refactor(BaseClient.js): rename getBuildPromptOptions to getBuildMessagesOptions
feat(BaseClient.js): add buildMessages method to BaseClient class
fix(ChatGPTClient.js): use message.text instead of message.message
refactor(ChatGPTClient.js): rename buildPromptBody to buildMessagesBody
refactor(ChatGPTClient.js): remove console.debug statement and add debug log for prompt variable

refactor(OpenAIClient.js): move setOptions method to the bottom of the class
feat(OpenAIClient.js): add support for cl100k_base encoding
feat(OpenAIClient.js): add support for unofficial chat GPT models
feat(OpenAIClient.js): add support for custom modelOptions
feat(OpenAIClient.js): add caching for tokenizers
feat(OpenAIClient.js): add freeAndInitializeEncoder method to free and reinitialize tokenizers
refactor(OpenAIClient.js): rename getBuildPromptOptions to getBuildMessagesOptions
refactor(OpenAIClient.js): rename buildPrompt to buildMessages
refactor(OpenAIClient.js): remove endpointOption from ask function arguments in openAI.js

* refactor(ChatGPTClient.js, OpenAIClient.js): improve code readability and consistency

- In ChatGPTClient.js, update the roleLabel and messageString variables to handle cases where the message object does not have an isCreatedByUser property or a role property with a value of 'user'.
- In OpenAIClient.js, rename the freeAndInitializeEncoder method to freeAndResetEncoder to better reflect its functionality. Also, update the method calls to reflect the new name. Additionally, update the getTokenCount method to handle errors by calling the freeAndResetEncoder method instead of the now-renamed freeAndInitializeEncoder method.

* refactor(OpenAIClient.js): extract instructions object to a separate variable and add it to payload after formatted messages
fix(OpenAIClient.js): handle cases where progressMessage.choices is undefined or empty

* refactor(BaseClient.js): extract addInstructions method from sendMessage method
feat(OpenAIClient.js): add maxTokensMap object to map maximum tokens for each model
refactor(OpenAIClient.js): use addInstructions method in buildMessages method instead of manually building the payload list

* refactor(OpenAIClient.js): remove unnecessary condition for modelOptions.model property in buildMessages method

* feat(BaseClient.js): add support for token count tracking and context strategy
feat(OpenAIClient.js): add support for token count tracking and context strategy
feat(Message.js): add tokenCount field to Message schema and updateMessage function

* refactor(BaseClient.js): add support for refining messages based on token limit
feat(OpenAIClient.js): add support for context refinement strategy
refactor(OpenAIClient.js): use context refinement strategy in message sending
refactor(server/index.js): improve code readability by breaking long lines

* refactor(BaseClient.js): change `remainingContext` to `remainingContextTokens` for clarity
feat(BaseClient.js): add `refinePrompt` and `refinePromptTemplate` to handle message refinement
feat(BaseClient.js): add `refineMessages` method to refine messages
feat(BaseClient.js): add `handleContextStrategy` method to handle context strategy
feat(OpenAIClient.js): add `abortController` to `buildPrompt` method options
refactor(OpenAIClient.js): change `payload` and `tokenCountMap` to let variables in `handleContextStrategy` method
refactor(BaseClient.js): change `remainingContext` to `remainingContextTokens` in `handleContextStrategy` method for consistency
refactor(BaseClient.js): change `remainingContext` to `remainingContextTokens` in `getMessagesWithinTokenLimit` method for consistency
refactor(BaseClient.js): change `remainingContext` to `remainingContext

* chore(openAI.js): comment out contextStrategy option in clientOptions

* chore(openAI.js): comment out debug option in clientOptions object

* test: BaseClient tests in progress

* test: Complete OpenAIClient & BaseClient tests

* fix(OpenAIClient.js): remove unnecessary whitespace
fix(OpenAIClient.js): remove unused variables and comments
fix(OpenAIClient.test.js): combine getTokenCount and freeAndResetEncoder tests

* chore(.eslintrc.js): add rule for maximum of 1 empty line
feat(ask/openAI.js): add abortMessage utility function
fix(ask/openAI.js): handle error and abort message if partial text is less than 2 characters
feat(utils/index.js): export abortMessage utility function

* test: complete additional tests

* feat: Azure OpenAI as a separate endpoint

* chore: remove extraneous console logs

* fix(azureOpenAI): use chatCompletion endpoint

* chore(initializeClient.js): delete initializeClient.js file

chore(askOpenAI.js): delete old OpenAI route handler

chore(handlers.js): remove trailing whitespace in thought variable assignment

* chore(chatgpt-client.js): remove unused chatgpt-client.js file
refactor(index.js): remove askClient import and export from index.js

* chore(chatgpt-client.tokens.js): update test script for memory usage and encoding performance

The test script in `chatgpt-client.tokens.js` has been updated to measure the memory usage and encoding performance of the client. The script now includes information about the initial memory usage, peak memory usage, final memory usage, and memory usage after a timeout. It also provides insights into the number of encoding requests that can be processed per second.

The script has been modified to use the `OpenAIClient` class instead of the `ChatGPTClient` class. Additionally, the number of iterations for the encoding loop has been reduced to 10,000.

A timeout function has been added to simulate a delay of 15 seconds. After the timeout, the memory usage is measured again.

The script now handles uncaught exceptions and logs any errors that occur, except for errors related to failed fetch requests.

Note: This is a test script and should not be used in production

* feat(FakeClient.js): add a new class `FakeClient` that extends `BaseClient` and implements methods for a fake client
feat(FakeClient.js): implement the `setOptions` method to handle options for the fake client
feat(FakeClient.js): implement the `initializeFakeClient` function to initialize a fake client with options and fake messages
fix(OpenAIClient.js): remove duplicate `maxTokensMap` import and use the one from utils
feat(BaseClient): return promptTokens and completionTokens

* refactor(gptPlugins): refactor ChatAgent to PluginsClient, which extends OpenAIClient

* refactor: client paths

* chore(jest.config.js): remove jest.config.js file

* fix(PluginController.js): update file path to manifest.json
feat(gptPlugins.js): add support for aborting messages

refactor(ask/index.js): rename askGPTPlugins to gptPlugins for consistency

* fix(BaseClient.js): fix spacing in generateTextStream function signature
refactor(BaseClient.js): remove unnecessary push to currentMessages in generateUserMessage function
refactor(BaseClient.js): remove unnecessary push to currentMessages in handleStartMethods function
refactor(PluginsClient.js): remove unused variables and date formatting in constructor
refactor(PluginsClient.js): simplify mapping of pastMessages in getCompletionPayload function

* refactor(GoogleClient): GoogleClient now extends BaseClient

* chore(.env.example): add AZURE_OPENAI_MODELS variable
fix(api/routes/ask/gptPlugins.js): enable Azure integration if PLUGINS_USE_AZURE is true
fix(api/routes/endpoints.js): getOpenAIModels function now accepts options, use AZURE_OPENAI_MODELS if PLUGINS_USE_AZURE is true
fix(client/components/Endpoints/OpenAI/Settings.jsx): remove console.log statement
docs(features/azure.md): add documentation for Azure OpenAI integration and environment variables

* fix(e2e:popup): includes the icon + endpoint names in role, name property
2023-07-03 16:51:12 -04:00
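The getAzureCredentials helper mentioned above presumably just collects the Azure variables listed in the .env.example diff further down this page; a hedged sketch, with the returned shape assumed:

```js
// Sketch of getAzureCredentials from azureUtils; the env var names match the
// .env.example diff below, but the returned object shape is an assumption.
function getAzureCredentials() {
  return {
    azureOpenAIApiKey: process.env.AZURE_API_KEY,
    azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
    azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
    azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION,
  };
}
```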
Danny Avila
10c772c9f2 fix(e2e): add support for setting localStorage before navigating to the page due to new Nav behavior (#579) 2023-07-03 16:37:23 -04:00
David
8e1473c3d8 feature: ChatGPT style show/hide panel button, panel state saving in local storage (#575)
* Add nav bar state saving to local storage

* Add ChatGPT style hide side panel button
2023-07-03 16:26:00 -04:00
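A minimal sketch of the panel-state persistence, assuming a React hook and a 'navVisible' storage key (both assumptions):

```js
import { useEffect, useState } from 'react';

// Persist the nav panel's visibility across reloads; the key name is assumed.
export default function useNavVisible() {
  const [navVisible, setNavVisible] = useState(() =>
    JSON.parse(localStorage.getItem('navVisible') ?? 'true'),
  );
  useEffect(() => {
    localStorage.setItem('navVisible', JSON.stringify(navVisible));
  }, [navVisible]);
  return [navVisible, setNavVisible];
}
```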
Danny Avila
88683b9cc5 fix(Chat.jsx): Improve Message Creation UX by Eliminating Screen Flicker (#577)
* fix(Chat.jsx): conversation no longer navigates upon message creation, which would cause re-render/flicker

* chore(.gitignore): ignore storageState.json in all directories
chore(storageState.json): delete e2e/storageState.json file

* test(e2e): fix old tests with new playwright setup & add helper script for codegen

* fix(Conversation.jsx): add data-testid attribute to <a> element
test(messages.spec.js): add test for expected navigation after receiving message
test(messages.spec.js): add test for page navigations

* chore(Plugin.jsx): import Spinner from '~/components' instead of '../svg/Spinner'
chore(index.jsx): import Spinner from '~/components' instead of '../svg/Spinner'
chore(Spinner.jsx): change classProp prop to className prop in Spinner component
feat(index.ts): export Spinner component from './Spinner'
2023-07-03 16:00:04 -04:00
Danny Avila
6b843429c5 fix(Content.jsx): enable single dollar text math in remarkMath plugin (#574) 2023-07-03 08:57:43 -04:00
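remark-math exposes a singleDollarTextMath option (it defaults to true, so the fix presumably re-enables it after it had been turned off); a sketch, with the component wiring assumed:

```js
import ReactMarkdown from 'react-markdown';
import remarkMath from 'remark-math';

// Pass the option explicitly so $...$ renders as inline math.
const remarkPlugins = [[remarkMath, { singleDollarTextMath: true }]];

export default function Content({ text }) {
  return <ReactMarkdown remarkPlugins={remarkPlugins}>{text}</ReactMarkdown>;
}
```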
Fuegovic
b89d46dac0 Update mkdocs.yml (#569)
fix the "breaking changes" page 404 error
2023-07-03 01:08:29 -04:00
bsu3338
e61eb7b5bc Create CNAME (#568)
Needed to avoid breaking the custom domain on MkDocs updates
2023-07-02 09:48:43 -04:00
Fuegovic
d2ce2ef2cd Fix: Increase password max length and accept '-' for username in regex (#564)
* fix: increase username max length and accept '-' in regex

* fix: increase username max length and accept '-' in regex

* fix: increase username max length and accept '-' in regex
2023-07-01 20:12:45 -04:00
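Illustratively, a username pattern that accepts hyphens could look like the following; the actual regex and length bounds in the validator are not shown in this log, so these are assumptions:

```js
// Illustrative only: letters, digits, underscores, dots, and hyphens.
const usernameRegex = /^[a-zA-Z0-9_.-]{2,80}$/;

console.log(usernameRegex.test('john-doe')); // true
console.log(usernameRegex.test('john doe')); // false (spaces rejected)
```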
Fuegovic
df2a68e1e7 Docs: updates & enhancements for MKDocs (#555)
* Update documents for mkdocs compatibility

* documents update

* documents update

* Update README.md

* Update README.md

add link to "https://docs.librechat.ai" on the logo

* document updates

* docs - badge updates

* docs - badge updates

* docs - badge updates

* Update docker_install.md

* Update .env.example

update default MONGO_URI to port 27018 so local install can communicate with the docker db

* Update windows_install.md

fix typo
2023-07-01 20:11:37 -04:00
bsu3338
d7270a1676 Update .env.example (#563)
changed to match api/server/routes/config.js
2023-07-01 20:10:53 -04:00
Danny Avila
8f6855116d feat: update compose file to automatically tagged gh container image (#559) 2023-06-27 09:44:51 -04:00
Danny Avila
83eea4825b Release: 0.5.2 (#558) 2023-06-27 09:21:22 -04:00
bsu3338
4a0cf11c90 Add social sites to MkDocs (#554) 2023-06-27 08:57:40 -04:00
bsu3338
5f3266c1eb Update user_auth_system.md (#553)
Changed fqdn to localhost:3080
2023-06-27 08:56:42 -04:00
Marco Beretta
abd1b10b46 Enhanced Documentation: Added Cloudflare and Linode Setup (#549)
* Add files via upload

* Create linode-setup.md

* Create cloudflare-setup.md

* Update cloudflare-setup.md

* Delete 4-linode.png

* Delete 3-linode.png

* Add files via upload

* Add files via upload

* Update cloudflare-setup.md

* Update linode-setup.md

* Rename cloudflare-setup.md to cloudflare.md

* Rename linode-setup.md to linode.md

* Update mkdocs.yml

* Update cloudflare.md

* Update linode.md

* Update README.md

* Update README.md

* Update linode.md

sentence in Italian
2023-06-26 09:23:50 -04:00
Dan Orlando
25211d6f23 fix: #546 issue with closing registration (#547)
* fix: #546 issue with closing registration

* refactor: change casing of controller files for consistency

* fix: ensure registrationEnabled is sending a boolean value

* refactor: modifications to openId code
2023-06-25 15:40:31 -04:00
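The boolean fix likely reduces to coercing the env string before serializing it, so the client receives true/false rather than the string "false"; the names below are assumptions:

```js
// Sketch of the config handler; ALLOW_REGISTRATION arrives as a string.
function getConfig(req, res) {
  const registrationEnabled = process.env.ALLOW_REGISTRATION === 'true';
  res.json({ registrationEnabled });
}
```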
bsu3338
fdc5265f48 MkDocs for Material (#545)
* Create mkdocs.yaml

* Create mkdocs.yml

* Update mkdocs.yml

* Update mkdocs.yml

* Update mkdocs.yml

* Update mkdocs.yml

* Update mkdocs.yml

* Update README.md

* Update coding_conventions.md

* Update documentation_guidelines.md

* Update testing.md

* Update heroku.md

* Update hetzner_ubuntu.md

* Update google_search.md

* Update introduction.md

* Update make_your_own.md

* Update stable_diffusion.md

* Update wolfram.md

* Update proxy.md

* Update user_auth_system.md

* Update bing_jailbreak_info.md

* Update breaking_changes.md

* Update multilingual_information.md

* Update project_origin.md

* Update tech_stack.md

* Update apis_and_tokens.md

* Update docker_install.md

* Update linux_install.md

* Update mac_install.md

* Update windows_install.md

* Update mkdocs.yml

* Update mkdocs.yml

* Update documentation_guidelines.md

* Add files via upload

* Create temp.txt

* Add files via upload

* Delete logo.png

* Create index.md

* Update mkdocs.yml

* Update mkdocs.yml

* Delete temp.txt

* Update README.md

* Update README.md

---------

Co-authored-by: Danny Avila <110412045+danny-avila@users.noreply.github.com>
2023-06-24 23:00:10 -04:00
bsu3338
eceba36f54 OpenID Authentication (#495)
* Squashed commit of the following:

commit 26ab03fb36fcc7fcee63fdf3ae8c2dfb29027eff
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Tue Jun 13 00:23:23 2023 -0500

    Update Registration.spec.tsx

commit e908dd82fe9ef1b43c75ee64c183d2f654bdac1c
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Tue Jun 13 00:23:01 2023 -0500

    Update Login.spec.tsx

commit 223734820fb77d7fb5af4802af642d1c1fd7c1f5
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Tue Jun 13 00:22:39 2023 -0500

    Update Registration.tsx

commit 7036d3dd0538979ee397d958ebc113bb0ea32411
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Tue Jun 13 00:21:55 2023 -0500

    Update Login.tsx

commit 76bb78221db3195fd930fe9cfd6a5da7194fa759
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Tue Jun 13 00:21:03 2023 -0500

    Update envConstants.js

commit ee2f69f33d75fbb57022afbcd9564bca38a46bee
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Tue Jun 13 00:20:08 2023 -0500

    Update docker-compose.yml

commit 5ac72d789b3446884c6e2f4f595cbf67d731d43c
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Tue Jun 13 00:18:41 2023 -0500

    Update Dockerfile

commit d24341db2bd5b17eb89ab01e171a5f51f3beab0a
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Tue Jun 13 00:16:38 2023 -0500

    Update .env.example

commit 22154f4a09c5fcdfee95d43609fb01a5a883b7a9
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Tue Jun 13 00:07:48 2023 -0500

    Update Registration.spec.tsx

commit 5163f7d372a6a03c94f4357b358211a03369456e
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Tue Jun 13 00:07:30 2023 -0500

    Update Login.spec.tsx

commit 61da49e330a9376e130b24dc944854f97ab58d80
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Tue Jun 13 00:07:00 2023 -0500

    Update Registration.tsx

commit 0e45d3f0dbde34388ff2f0b2dc51b983b472eb05
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Tue Jun 13 00:06:18 2023 -0500

    Update Login.tsx

commit dca1e5367e5f3b468c7964218cc5914ca53095af
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Tue Jun 13 00:05:07 2023 -0500

    Update envConstants.js

commit f48c058465d82b03716ba85224e9f97007e014d2
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Tue Jun 13 00:04:05 2023 -0500

    Update .env.example

commit 818226c9cb079acae4fcbfe5997e4aa9e3c6d2cc
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:59:08 2023 -0500

    Update .env.example

commit 9a805439189b352a38ac7654d7a31bb28f0f58dd
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:58:31 2023 -0500

    Update env.d.ts

commit 3f37ce54758b017c9281b7fad9b040a47630ec66
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:57:04 2023 -0500

    Update .env.example

commit 1026036f4dd529e9531c53084450ce768cfca4c1
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:50:36 2023 -0500

    Update docker-compose.yml

commit a61cf7b8c51d4a9bd73a20bd67abc29891c11463
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:50:00 2023 -0500

    Update Dockerfile

commit 79610d6648755cd5ec45215b9fdbe04ba8242fcf
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:35:34 2023 -0500

    Update package-lock.json

commit e40853fd2b77f2db5be1c3dfd8b170d650e23271
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:30:17 2023 -0500

    Update envConstants.js

commit 5529bc61b43f279fb4418c3851be2f9011b6454d
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:25:58 2023 -0500

    Update docker-compose.yml

commit 07848cc464a64f7cad484e24a1310dc61aa03b18
Merge: ec628a3 72e9828
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:24:03 2023 -0500

    Merge branch 'danny-avila:main' into openid-client

commit ec628a3044ba963b4e733c72229400074e7c2bc4
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:23:16 2023 -0500

    Update envConstants.js

commit 21272221db0f58c244f08335482d45b177d338ab
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:21:59 2023 -0500

    Update Registration.spec.tsx

commit d3f2949c0484d5760e7b689501852f86209992a3
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:21:12 2023 -0500

    Update Login.spec.tsx

commit f2cf23ddd6708a3bb8d032dde5f1ce300dbe8cad
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:20:15 2023 -0500

    Update Registration.tsx

commit 482c346b2a7baf958665c9474223d2557504dee5
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:17:53 2023 -0500

    Update Login.tsx

commit 2f017aa5bf4ef91b73fe027fb346132e1a5d8b87
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:14:17 2023 -0500

    Update env.d.ts

commit addfd95cf93ef19cae05bab652d634af64313e6a
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:13:16 2023 -0500

    Create openidStrategy.js

commit 84c3b5c2f078494d8380f3a02e3ba2d935d8d79f
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:09:02 2023 -0500

    Update oauth.js

commit 63225cdf33b7f42005b4a446797acbd91b7ee4a7
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:07:35 2023 -0500

    Update index.js

commit 6efe4dafd4359ed1c3139468bf9d43f70bbaf6aa
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:04:55 2023 -0500

    Update package.json

commit 201badbbb5a5c8d48f5c4cba3a1349d4cfc7a070
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:03:37 2023 -0500

    Update User.js

commit 7d13d5c303465be9b1268e5f6d9bdf7bb8dfb2e4
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:02:29 2023 -0500

    Update Dockerfile

commit 2ef7f84ea77f281c3dce61211d9fd841a6424e65
Author: bsu3338 <bsu3338@users.noreply.github.com>
Date:   Mon Jun 12 23:00:42 2023 -0500

    Update .env.example

* Update openidStrategy.js

* Update .env.example

* Update .env.example

* Update docker-compose.yml

* Update env.d.ts

* Update .env.example

* Update .env.example

* Update config.js

* Update Login.tsx

* Update config.js

* Update Login.tsx

* Update Registration.tsx

* Update docker-compose.yml

* Update openidStrategy.js

* Update docker-compose.yml

* Update config.spec.js

* Update Login.spec.tsx

* Update Registration.spec.tsx

* Update types.ts

* Update .env.example

* Update package-lock.json

* Update openidStrategy.js

* Update openidStrategy.js

* Update config.js

* Update config.js

* Update Login.tsx

* Update Registration.tsx

* Update oauth.js

* Update openidStrategy.js

* Update openidStrategy.js

* Update Registration.tsx

* Update Login.tsx

* Update Login.tsx

* Update Registration.tsx

* Update Registration.tsx

* Update index.js

* Update index.js

* Update .env.example

* Update user_auth_system.md

updated instruction that includes OpenID set up

* Update package.json

* Update package-lock.json

* Update package-lock.json

* Update package-lock.json

* Update package-lock.json

* Update package-lock.json

* Update package-lock.json

* Update package-lock.json

* Update package-lock.json

* Update openidStrategy.js

* Update openidStrategy.js

Look up the user by OpenID instead of email, because not all Azure AD users have an email tied to their account

* Update openidStrategy.js

First try to match by email, then by OpenID ID

* Update openidStrategy.js

* Update openidStrategy.js

Handle the case where a family name or given name is not provided

---------

Co-authored-by: Fuegovic <32828263+fuegovic@users.noreply.github.com>
2023-06-24 22:45:52 -04:00
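The lookup order described in the commit notes above (match by email first, then by the OpenID subject, since not every Azure AD account carries an email) sketches out as follows; model and field names are assumptions:

```js
// Sketch of the user lookup in openidStrategy.js.
async function findOpenIdUser(User, profile) {
  let user = profile.email ? await User.findOne({ email: profile.email }) : null;
  if (!user) {
    user = await User.findOne({ openidId: profile.sub }); // field name assumed
  }
  return user;
}
```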
Danny Avila
7efb90366f refactor(initializeFunctionsAgent.js): remove unused code and comments (#544)
feat(initializeFunctionsAgent.js): add support for openai-functions agent type
feat(askGPTPlugins.js): change default agent to functions and skip completion
feat(cleanupPreset.js): change default agent to functions and skip completion
feat(getDefaultConversation.js): change default agent to functions and skip completion
feat(handleSubmit.js): change default agent to functions and skip completion
2023-06-22 20:40:23 -04:00
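langchain's initializeAgentExecutorWithOptions (named in PR #538 further down this page) accepts an 'openai-functions' agent type; a hedged sketch of the initializer:

```js
const { initializeAgentExecutorWithOptions } = require('langchain/agents');

// Sketch of initializeFunctionsAgent; options beyond agentType are assumptions.
async function initializeFunctionsAgent({ tools, model, ...options }) {
  return initializeAgentExecutorWithOptions(tools, model, {
    agentType: 'openai-functions',
    ...options,
  });
}

module.exports = initializeFunctionsAgent;
```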
Fuegovic
731304f96a refactor: update references from chatgpt-clone to LibreChat (#541)
* refactor: update references from chatgpt-clone to LibreChat

* refactor: update references from chatgpt-clone to LibreChat
2023-06-22 20:12:25 -04:00
Shi Jin
3d40dce76a feat: update Dockerfile to include curl (#539)
* Update Dockerfile to include curl

* missing RUN
2023-06-22 12:43:12 -04:00
Danny Avila
d1d7f61fe1 style(SubmitButton.jsx): fix formatting and indentation of code blocks (#540)
feat(SubmitButton.jsx): add z-index to button to ensure it is on top of other elements
2023-06-21 11:13:31 -04:00
Danny Avila
f84da37c9c feat(Functions Agent): use official langchain function executor/agent for better output handling (#538)
* style(FunctionsAgent.js): remove unnecessary comments and update PREFIX variable
refactor(initializeFunctionsAgent.js): update to use initializeAgentExecutorWithOptions
deps(package.json): update langchain to v0.0.95
refactor(askGPTPlugins.js): pass endpointOption to onStart function

* fix(ChatAgent.js): handle undefined delta content in progressMessage.choices array
2023-06-19 14:15:56 -04:00
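The delta fix is essentially a guard on the streamed chunk shape; the surrounding names are assumptions, but the chunk layout follows the OpenAI streaming API:

```js
// Return '' instead of throwing when a streamed chunk has no usable delta.
function getDeltaText(progressMessage) {
  return progressMessage?.choices?.[0]?.delta?.content ?? '';
}
```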
Shi Jin
49e2cdf76c Create container.yml: build and push docker image upon tagging (#536) 2023-06-18 20:19:08 -04:00
Fuegovic
76e51b8ac5 Update[logo] README.md (#535) 2023-06-18 15:48:35 -04:00
Danny Avila
ac537b96f6 style: mobile optimizations, use fixed dialogs, and prevent auto-scroll for presets (#534)
* style(client): adjust height and add overflow to EditPresetDialog, AgentSettings, and Settings components
style(client): adjust size and add overflow to NewConversationMenu and PresetItems components
style(ui): add overflow to DialogContent component

* style(Settings.jsx): change height of settings component to md:h-[350px] h-[490px]
2023-06-18 11:24:22 -04:00
Danny Avila
4f47da8f0d build(docker-compose.yml): change the image name to librechat (#530)
fix(docker-compose.yml): add target node to build command
2023-06-17 16:41:22 -04:00
Danny Avila
f1f33de4db chore: Update docker, Minor Styling fix (#528)
* chore(docker): add .env and **/.env to .dockerignore
refactor(docker): remove unnecessary .env file copy and removal in Dockerfile

* style(AgentSettings): adjust Switch placement
fix(EditPresetDialog): correctly show functions setting in preset
2023-06-17 11:38:48 -04:00
Daniel Avila
9778e73087 style(OptionHover.jsx): add descriptions for new types
feat(AgentSettings.jsx): change OptionHover type from 'temp' to 'func' and from 'temp' to 'skip'
2023-06-16 00:04:35 -04:00
Daniel Avila
4353d42035 feat(tools): add structured Wolfram tool 2023-06-16 00:04:35 -04:00
Daniel Avila
d0be2e6f4a feat(ChatAgent.js): add support for skipping completion mode in ChatAgent
feat(ChatAgent.js): add a check for images when completion is skipped to add to response
feat(askGPTPlugins.js): add skipCompletion option to agentOptions
feat(client): add Switch component to ui components and use for new Agent Settings
chore(package.json): ignore client directory in nodemonConfig
2023-06-16 00:04:35 -04:00
Daniel Avila
5b1efc48d1 refactor(FunctionsAgent.js): remove unnecessary text from PREFIX constant 2023-06-16 00:04:35 -04:00
Daniel Avila
7053d76f48 feat(FunctionsAgent): improve LLM instructions for more reliable results 2023-06-16 00:04:35 -04:00
Daniel Avila
9f930ecf7d fix: remove public/images directory and image created during testing 2023-06-16 00:04:35 -04:00
Daniel Avila
dfec4bfe3a refactor(ChatAgent.js, handlers.js): stringify toolInput object in logs
The `toolInput` object was not being properly logged in the `ChatAgent.js` and `handlers.js` files. The `JSON.stringify()` method was added to properly log the object.
2023-06-16 00:04:35 -04:00
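The logging fix is essentially serialization: interpolating an object into a template string yields '[object Object]', so the input is stringified first. The log text below is illustrative:

```js
// Before: `${action.toolInput}` logs "[object Object]" for object inputs.
console.log(`Using ${action.tool} with input: ${JSON.stringify(action.toolInput)}`);
```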
Daniel Avila
7541e9b3d3 refactor(StableDiffusion.js): update prompt and negative_prompt schema descriptions to include minimum number of keywords
fix(StableDiffusion.js): fix output path for generated image
feat(askGPTPlugins.js): update import path for validateTools function
2023-06-16 00:04:35 -04:00
Daniel Avila
bffa9ad016 refactor(handleTools.js): change loadTools function signature to include functions parameter
feat(handleTools.test.js): add test for loading StructuredSD tool with functions parameter
2023-06-16 00:04:35 -04:00
Daniel Avila
d339c291fa refactor(langchain/tools): move availableTools import to tools/index.js 2023-06-16 00:04:35 -04:00
Daniel Avila
a42ef2944c feat(StableDiffusion.js): add StableDiffusionAPI as a StructuredTool for Functions Agent 2023-06-16 00:04:35 -04:00
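A StructuredTool is langchain's zod-schema tool class; a hedged sketch of the Stable Diffusion variant, where the schema fields echo the prompt/negative_prompt commit above and everything else is an assumption:

```js
const { StructuredTool } = require('langchain/tools');
const { z } = require('zod');

class StableDiffusionAPI extends StructuredTool {
  name = 'stable-diffusion';
  description = 'Generates an image from a detailed keyword prompt.'; // text assumed
  schema = z.object({
    prompt: z.string().describe('Detailed keywords describing the desired image'),
    negative_prompt: z.string().describe('Keywords for elements to avoid'),
  });

  async _call({ prompt, negative_prompt }) {
    // Would POST { prompt, negative_prompt } to the SD WebUI txt2img API at
    // SD_WEBUI_URL and save the returned image; omitted in this sketch.
    return 'image generated';
  }
}

module.exports = StableDiffusionAPI;
```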
Daniel Avila
1b3215c55d refactor(tools): restructure tool dir 2023-06-16 00:04:35 -04:00
Daniel Avila
71d812403e refactor(ChatAgent.js, handlers.js): improve logging format and add support for functionsAgent
- Improve logging format in ChatAgent.js by adding more details to the log
- Add support for functionsAgent in ChatAgent.js to format the log differently
- Improve formatAction function in handlers.js to handle empty thoughts and add support for functionsAgent
2023-06-16 00:04:35 -04:00
Daniel Avila
6e183b91e1 refactor(FunctionsAgent.js): change var to const in plan function 2023-06-16 00:04:35 -04:00
Daniel Avila
198f60c536 feat(frontend): add support for agent selection in GPT plugins, adding the functions agent
Add support for selecting an agent in the GPT plugins endpoint. The agent can be selected from a dropdown menu in the AgentSettings component. The default agent is set to 'classic' in the cleanupPreset, getDefaultConversation, and handleSubmit functions.
2023-06-16 00:04:35 -04:00
Daniel Avila
3caddd6854 feat(experimental): FunctionsAgent, uses new function payload for tooling 2023-06-16 00:04:35 -04:00
Fuegovic
550e566097 docs: fix/update (#525)
* Update README.md

* Update Hetzner doc

* Update heroku.md

* Update README.md

* Create breaking_changes.md

* Update README.md

* Update breaking_changes.md
2023-06-16 00:02:29 -04:00
Dan Orlando
3634d8691a Feat/startup config api (#518)
* feat: add api for config

* feat: add data service to client

* feat: update client pages with values from config endpoint

* test: update tests

* Update configurations and documentation to remove VITE_SHOW_GOOGLE_LOGIN_OPTION and change VITE_APP_TITLE to APP_TITLE

* include APP_TITLE with startup config

* Add test for new route

* update backend-review pipeline

* comment out test until we can figure out testing routes in CI

* update: .env.example

---------

Co-authored-by: fuegovic <32828263+fuegovic@users.noreply.github.com>
2023-06-15 12:36:34 -04:00
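A minimal sketch of the startup-config endpoint; the route path and payload fields are assumptions consistent with the commit notes (APP_TITLE served with the config, Google login toggled server-side):

```js
const express = require('express');
const router = express.Router();

router.get('/', (req, res) => {
  res.json({
    appTitle: process.env.APP_TITLE || 'LibreChat',
    googleLoginEnabled: !!(process.env.GOOGLE_CLIENT_ID && process.env.GOOGLE_CLIENT_SECRET),
  });
});

module.exports = router;
```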
Alex
2da81db440 Update stable_diffusion.md (#523)
* Update stable_diffusion.md

added description for dockerized stable diffusion deployment.

* Update stable_diffusion.md

minor changes to documentation
2023-06-14 10:17:11 -04:00
Alex
3e98486190 fully dockerized development with VS-code devcontainers (#524)
Co-authored-by: bll <bll@tgw-group.com>
2023-06-14 10:14:24 -04:00
Danny Avila
ff2c8e6614 style(Input): remove unnecessary z-index class from input field (#522) 2023-06-13 23:52:17 -04:00
Danny Avila
36a524a630 feat(OpenAI, PaLM): Add model support for new OpenAI models and codechat-bison (#516)
* feat(OpenAI, PaLM): add new models
refactor(chatgpt-client.js): use object to map max tokens for each model
refactor(askChatGPTBrowser.js, askGPTPlugins.js, askOpenAI.js): comment out unused function calls and error handling
feat(askGoogle.js): add support for codechat-bison model
refactor(endpoints.js): add gpt-4-0613 and gpt-3.5-turbo-16k to available models for OpenAI and GPT plugins
refactor(EditPresetDialog.jsx): hide examples for codechat-bison model in google endpoint

style(EndpointOptionsPopover.jsx): add cn utility function import and use it to set additionalButton className

refactor(Google/Settings.jsx): conditionally render custom name and prompt prefix fields based on model type

The code has been refactored to conditionally render the custom name and prompt prefix fields based on the type of model selected. If the model starts with 'codechat-', the fields will not be rendered.

refactor(Settings.jsx): remove duplicated code and wrap a section in a conditional statement based on a variable

style(Input): add z-index to Input component to fix overlapping issue
feat(GoogleOptions): disable Examples button when model starts with 'codechat-' prefix

* feat(.env.example, endpoints.js): add PLUGIN_MODELS environment variable and use it to get plugin models in endpoints.js
2023-06-13 16:42:01 -04:00
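The "object to map max tokens" presumably resembles the sketch below; the figures are the commonly published context sizes for these models at the time, not necessarily the repo's exact values:

```js
const maxTokensMap = {
  'gpt-4': 8192,
  'gpt-4-0314': 8192,
  'gpt-4-0613': 8192,
  'gpt-3.5-turbo': 4096,
  'gpt-3.5-turbo-16k': 16384,
  'text-davinci-003': 4097,
};

// Fall back to a default when a model is unlisted (fallback value assumed).
const getMaxTokens = (model) => maxTokensMap[model] ?? 4096;
```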
heathriel
42583e7344 Create HetznerUbuntuSetup.md (#492)
* Create HetznerUbuntuSetup.md

Step-by-step guide for someone who is starting from scratch on this project with a bare server.

* Updated Readme & Heroku

I submitted the original Heroku.md (to the Discord) and it was way out of date. Just corrected it, moved the Hetzner file to the cloud deploy section, and updated the readme to point to the file.

* Update HetznerUbuntuSetup.md

* Update README.md
2023-06-13 14:36:13 -04:00
Danny Avila
bccd0cb3dd fix(nodemon): will now follow nodemonConfig in package file (#514) 2023-06-13 14:31:45 -04:00
Alex
07fec3b958 Update .dockerignore (#510)
fixes bug: #505

as described here: https://devpress.csdn.net/cloudnative/63054374c67703293080f1eb.html

Co-authored-by: Danny Avila <110412045+danny-avila@users.noreply.github.com>
2023-06-13 14:29:25 -04:00
Fuegovic
4dc3c31df8 docs update (#508)
* doc update: stable_diffusion.md

update docker specific instruction for Stable Diffusion

* doc update: .env.example

* Update README.md

update sponsors list

* Update stable_diffusion.md

* Update stable_diffusion.md

* Update stable_diffusion.md

* Update .env.example

* Update docker-compose.yml

* Update .env.example
2023-06-13 14:27:57 -04:00
Danny Avila
821b507e0e fix(docker): handle .env file to read frontend vars during build easily (#513)
- Remove api/.env and .env from .dockerignore file
- Add COPY .env .env to Dockerfile to copy .env file to docker build
- Add RUN rm .env to Dockerfile to remove .env file after build
- Remove build args from docker-compose.yml file
2023-06-13 12:12:27 -04:00
Danny Avila
9d3e749104 feat(prepare.js): add script to install husky only if NODE_ENV is not 'CI' (#512)
refactor(package.json): change prepare script to run prepare.js script instead of husky install directly
2023-06-13 11:36:54 -04:00
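The prepare.js guard is small; a sketch matching the commit description, with the exact check and command assumed:

```js
// prepare.js: skip husky installation when running in CI.
const { execSync } = require('child_process');

if (process.env.NODE_ENV !== 'CI') {
  execSync('npx husky install', { stdio: 'inherit' });
}
```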
Dan Orlando
2003480fed Change prepare script to not run in CI mode and remove --ignore-scripts flag from workflow (#491)
* Change prepare script to not run in CI mode and remove --ignore-scripts flag from workflow

* add release/* to branches for workflows
2023-06-13 09:20:47 -04:00
Danny Avila
ee52533339 refactor(DALL-E.js): remove unused genAzureEndpoint import and commented code (#506)
feat(DALL-E.js): add getApiKey method to retrieve DALLE_API_KEY from environment variable
2023-06-13 08:58:09 -04:00
Danny Avila
72e9828b76 tests(api): refactor to mock database and network operations (#494) 2023-06-13 00:04:01 -04:00
LaraClara
f92e4f28be bugfix(install): Handle missing .env.example 2023-06-12 20:59:46 -04:00
LaraClara
60bcd7ae49 chore: Add more comments and user messages to create-user cmd 2023-06-12 20:59:46 -04:00
LaraClara
87fa9f9ab0 feature: Create a user from the command line 2023-06-12 20:59:46 -04:00
LaraClara
6c5bea0096 chore: Rename color-console.js to helper.js and add askQuestion as exported fn 2023-06-12 20:59:46 -04:00
LaraClara
c5325cee3a chore: Change package names to be meaningful
- Makes it easier to call them from the outside or even include the package in a different project (i.e., frontend only)
2023-06-12 20:59:46 -04:00
LaraClara
e726e42cb5 chore: Add color to the install script
- This will only work on terminals set up to use color
2023-06-12 20:59:46 -04:00
LaraClara
4c340fd0ba chore: Add warning if mongodb url looks incorrect in install 2023-06-12 20:59:46 -04:00
LaraClara
cefdd1fb88 feature: Ask for mongodb url during the npm install 2023-06-12 20:59:46 -04:00
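The askQuestion helper exported from helper.js (per the rename commit above) is presumably a thin readline wrapper; a sketch:

```js
const readline = require('readline');

// Prompt on stdin and resolve with the answer; details are assumptions.
function askQuestion(query) {
  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  return new Promise((resolve) =>
    rl.question(query, (answer) => {
      rl.close();
      resolve(answer);
    }),
  );
}

module.exports = { askQuestion };
```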
266 changed files with 10992 additions and 12083 deletions

.devcontainer/devcontainer.json (new file)

@@ -0,0 +1,57 @@
// {
// "name": "LibreChat_dev",
// // Update the 'dockerComposeFile' list if you have more compose files or use different names.
// "dockerComposeFile": "docker-compose.yml",
// // The 'service' property is the name of the service for the container that VS Code should
// // use. Update this value and .devcontainer/docker-compose.yml to the real service name.
// "service": "librechat",
// // The 'workspaceFolder' property is the path VS Code should open by default when
// // connected. Corresponds to a volume mount in .devcontainer/docker-compose.yml
// "workspaceFolder": "/workspace"
// //,
// // // Set *default* container specific settings.json values on container create.
// // "settings": {},
// // // Add the IDs of extensions you want installed when the container is created.
// // "extensions": [],
// // Uncomment the next line if you want to keep your containers running after VS Code shuts down.
// // "shutdownAction": "none",
// // Uncomment the next line to use 'postCreateCommand' to run commands after the container is created.
// // "postCreateCommand": "uname -a",
// // Comment out to connect as root instead. To add a non-root user, see: https://aka.ms/vscode-remote/containers/non-root.
// // "remoteUser": "vscode"
// }
{
// "name": "LibreChat_dev",
"dockerComposeFile": "docker-compose.yml",
"service": "app",
// "image": "node:19-alpine",
// "workspaceFolder": "/workspaces",
"workspaceFolder": "/workspace",
// Set *default* container specific settings.json values on container create.
// "overrideCommand": true,
"customizations": {
"vscode": {
"extensions": [],
"settings": {
"terminal.integrated.profiles.linux": {
"bash": null
}
}
}
},
"postCreateCommand": ""
// "workspaceMount": "src=${localWorkspaceFolder},dst=/code,type=bind,consistency=cached"
// "runArgs": [
// "--cap-add=SYS_PTRACE", "--security-opt", "seccomp=unconfined",
// "-v", "/tmp/.X11-unix:/tmp/.X11-unix",
// "-v", "${env:XAUTHORITY}:/root/.Xauthority:rw",
// "-v", "/home/${env:USER}/.cdh:/root/.cdh",
// "-e", "DISPLAY=${env:DISPLAY}",
// "--name=tgw_assistant_backend_dev",
// "--network=host"
// ],
// "settings": {
// "terminal.integrated.shell.linux": "/bin/bash"
// },
}

.devcontainer/docker-compose.yml (new file)

@@ -0,0 +1,76 @@
version: '3.4'
services:
app:
# container_name: LibreChat_dev
image: node:19-alpine
# Using a Dockerfile is optional, but included for completeness.
# build:
# context: .
# dockerfile: Dockerfile
# # [Optional] You can use build args to set options. e.g. 'VARIANT' below affects the image in the Dockerfile
# args:
# VARIANT: buster
network_mode: "host"
# ports:
# - 3080:3080 # Change it to 9000:3080 to use nginx
extra_hosts: # if you are running APIs on docker you need access to, you will need to uncomment this line and next
- "host.docker.internal:host-gateway"
volumes:
# # This is where VS Code should expect to find your project's source code and the value of "workspaceFolder" in .devcontainer/devcontainer.json
- ..:/workspace:cached
# # - /app/client/node_modules
# # - ./api:/app/api
# # - ./.env:/app/.env
# # - ./.env.development:/app/.env.development
# # - ./.env.production:/app/.env.production
# # - /app/api/node_modules
# # Uncomment the next line to use Docker from inside the container. See https://aka.ms/vscode-remote/samples/docker-from-docker-compose for details.
# # - /var/run/docker.sock:/var/run/docker.sock
# Runs app on the same network as the service container, allows "forwardPorts" in devcontainer.json function.
# network_mode: service:another-service
# Use "forwardPorts" in **devcontainer.json** to forward an app port locally.
# (Adding the "ports" property to this file will not forward from a Codespace.)
# Uncomment the next line to use a non-root user for all processes - See https://aka.ms/vscode-remote/containers/non-root for details.
# user: vscode
# Uncomment the next four lines if you will use a ptrace-based debugger like C++, Go, and Rust.
# cap_add:
# - SYS_PTRACE
# security_opt:
# - seccomp:unconfined
# Overrides default command so things don't shut down after the process ends.
command: /bin/sh -c "while sleep 1000; do :; done"
mongodb:
container_name: chat-mongodb
network_mode: "host"
# ports:
# - 27018:27017
image: mongo
# restart: always
volumes:
- ./data-node:/data/db
command: mongod --noauth
meilisearch:
container_name: chat-meilisearch
image: getmeili/meilisearch:v1.0
network_mode: "host"
# ports:
# - 7700:7700
# env_file:
# - .env
environment:
- SEARCH=false
- MEILI_HOST=http://0.0.0.0:7700
- MEILI_HTTP_ADDR=0.0.0.0:7700
- MEILI_MASTER_KEY=5c71cf56d672d009e36070b5bc5e47b743535ae55c818ae3b735bb6ebfb4ba63
volumes:
- ./meili_data:/meili_data

.dockerignore

@@ -1,4 +1,5 @@
**/node_modules
api/.env
client/dist/images
data-node
.env
client/dist/images
**/.env

.env.example

@@ -2,6 +2,8 @@
# Server configuration:
##########################
APP_TITLE=LibreChat
# The server will listen to localhost:3080 by default. You can change the target IP as you want.
# If you want to make this server available externally, for example to share the server with others
# or expose this from a Docker container, set host to 0.0.0.0 or your external IP interface.
@@ -16,7 +18,7 @@ PORT=3080
# PROXY=
# Change this to your MongoDB URI if different. I recommend appending LibreChat.
MONGO_URI=mongodb://127.0.0.1:27017/LibreChat
MONGO_URI=mongodb://127.0.0.1:27018/LibreChat
##########################
# OpenAI Endpoint:
@@ -25,12 +27,12 @@ MONGO_URI=mongodb://127.0.0.1:27017/LibreChat
# Access key from OpenAI platform.
# Leave it blank to disable this feature.
# Set to "user_provided" to allow the user to provide their API key from the UI.
OPENAI_API_KEY=user_provided
OPENAI_API_KEY="user_provided"
# Identify the available models, separated by commas *without spaces*.
# The first will be default.
# Leave it blank to use internal settings.
OPENAI_MODELS=gpt-3.5-turbo,gpt-3.5-turbo-0301,text-davinci-003,gpt-4,gpt-4-0314
OPENAI_MODELS=gpt-3.5-turbo,gpt-3.5-turbo-16k,gpt-3.5-turbo-0301,text-davinci-003,gpt-4,gpt-4-0314,gpt-4-0613
# Reverse proxy settings for OpenAI:
# https://github.com/waylaidwanderer/node-chatgpt-api#using-a-reverse-proxy
@@ -47,13 +49,24 @@ OPENAI_MODELS=gpt-3.5-turbo,gpt-3.5-turbo-0301,text-davinci-003,gpt-4,gpt-4-0314
# Note: I've noticed that the Azure API is much faster than the OpenAI API, so the streaming looks almost instantaneous.
# Note "AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME" and "AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME" are optional but might be used in the future
# AZURE_OPENAI_API_KEY=
# AZURE_API_KEY=
# AZURE_OPENAI_API_INSTANCE_NAME=
# AZURE_OPENAI_API_DEPLOYMENT_NAME=
# AZURE_OPENAI_API_VERSION=
# AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME=
# AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME=
# Identify the available models, separated by commas *without spaces*.
# The first will be default.
# Leave it blank to use internal settings.
AZURE_OPENAI_MODELS=gpt-3.5-turbo,gpt-4
# To use Azure with the Plugins endpoint, you need the variables above, and uncomment the following variable:
# NOTE: This may not work as expected and Azure OpenAI may not support OpenAI Functions yet
# Omit/leave it commented to use the default OpenAI API
# PLUGINS_USE_AZURE="true"
##########################
# BingAI Endpoint:
##########################
@@ -98,6 +111,11 @@ CHATGPT_MODELS=text-davinci-002-render-sha,gpt-4
# Plugins:
#############################
# Identify the available models, separated by commas *without spaces*.
# The first will be default.
# Leave it blank to use internal settings.
PLUGIN_MODELS=gpt-3.5-turbo,gpt-3.5-turbo-16k,gpt-3.5-turbo-0301,gpt-4,gpt-4-0314,gpt-4-0613
# For securely storing credentials, you need a fixed key and IV. You can set them here for prod and dev environments
# If you don't set them, the app will crash on startup.
# You need a 32-byte key (64 characters in hex) and 16-byte IV (32 characters in hex)
@@ -109,13 +127,15 @@ CREDS_IV=e2341419ec3dd3d19b13a1a87fafcbfb
# AI-Assisted Google Search
# This bot supports searching google for answers to your questions with assistance from GPT!
# See detailed instructions here: https://github.com/danny-avila/chatgpt-clone/blob/main/docs/features/plugins/google_search.md
# See detailed instructions here: https://github.com/danny-avila/LibreChat/blob/main/docs/features/plugins/google_search.md
GOOGLE_API_KEY=
GOOGLE_CSE_ID=
# StableDiffusion WebUI
# This bot supports StableDiffusion WebUI, using it's API to generated requested images.
SD_WEBUI_URL=http://0.0.0.0:7860
# See detailed instructions here: https://github.com/danny-avila/LibreChat/blob/main/docs/features/plugins/stable_diffusion.md
# Use "http://127.0.0.1:7860" with local install and "http://host.docker.internal:7860" for docker
SD_WEBUI_URL=http://host.docker.internal:7860
##########################
# PaLM (Google) Endpoint:
@@ -142,7 +162,7 @@ PROXY=
# ENABLING SEARCH MESSAGES/CONVOS
# Requires the installation of the free self-hosted Meilisearch or a paid Remote Plan (Remote not tested)
# The easiest setup for this is through docker-compose, which takes care of it for you.
SEARCH=false
SEARCH=true
# REQUIRED FOR SEARCH: MeiliSearch Host, mainly for the API server to connect to the search server.
# Replace '0.0.0.0' with 'meilisearch' if serving MeiliSearch with docker-compose.
@@ -164,6 +184,9 @@ MEILI_MASTER_KEY=DrhYf7zENyR6AlUCKmnz0eYASOQdl6zxH7s7MKFSfFCt
# User System:
##########################
# Allow Public Registration
ALLOW_REGISTRATION=true
# JWT Secrets
JWT_SECRET=secret
JWT_REFRESH_SECRET=secret
@@ -175,32 +198,42 @@ GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
GOOGLE_CALLBACK_URL=/oauth/google/callback
# OpenID:
# See OpenID provider to get the below values
# Create random string for OPENID_SESSION_SECRET
# For Azure AD
# ISSUER: https://login.microsoftonline.com/(tenant id)/v2.0/
# SCOPE: openid profile email
OPENID_CLIENT_ID=
OPENID_CLIENT_SECRET=
OPENID_ISSUER=
OPENID_SESSION_SECRET=
OPENID_SCOPE="openid profile email"
OPENID_CALLBACK_URL=/oauth/openid/callback
# If LABEL and URL are left empty, then the default OpenID label and logo are used.
OPENID_BUTTON_LABEL=
OPENID_IMAGE_URL=
# Set the expiration delay for the secure cookie with the JWT token
# Delay is in millisecond e.g. 7 days is 1000*60*60*24*7
SESSION_EXPIRY=(1000 * 60 * 60 * 24) * 7
# Github:
# Get the Client ID and Secret from your Github Application
# Add your Github Client ID and Client Secret here:
GITHUB_CLIENT_ID=
GITHUB_CLIENT_SECRET=
GITHUB_CALLBACK_URL=/oauth/github/callback
###########################
# Application Domains
###########################
# Note: server = backend, client = public (the client is the url you visit)
# For the google login to work in dev mode, you will likely need to change DOMAIN_SERVER to localhost:3090 or place it in .env.development
# Note:
# Server = Backend
# Client = Public (the client is the url you visit)
# For the Google login to work in dev mode, you will need to change DOMAIN_SERVER to localhost:3090 or place it in .env.development
DOMAIN_CLIENT=http://localhost:3080
DOMAIN_SERVER=http://localhost:3080
###########################
# Frontend Configuration (Vite):
###########################
# Custom app name, this text will be displayed in the landing page and the footer.
VITE_APP_TITLE="LibreChat"
# Enable Social Login
# This enables/disables the Login with Google button on the login page.
# Set to true if you have registered the app with google cloud services
# and have set the GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET above
VITE_SHOW_GOOGLE_LOGIN_OPTION=false
# Allow Public Registration
ALLOW_REGISTRATION=true

.eslintrc.js

@@ -10,7 +10,7 @@ module.exports = {
'eslint:recommended',
'plugin:react/recommended',
'plugin:react-hooks/recommended',
"plugin:jest/recommended",
'plugin:jest/recommended',
'prettier'
],
parser: '@typescript-eslint/parser',
@@ -24,34 +24,32 @@ module.exports = {
plugins: ['react', 'react-hooks', '@typescript-eslint'],
rules: {
'react/react-in-jsx-scope': 'off',
'@typescript-eslint/ban-ts-comment': ['error', { 'ts-ignore': 'allow-with-description' }],
'@typescript-eslint/ban-ts-comment': ['error', { 'ts-ignore': 'allow' }],
indent: ['error', 2, { SwitchCase: 1 }],
'max-len': [
'error',
{
code: 150,
code: 120,
ignoreStrings: true,
ignoreTemplateLiterals: true,
ignoreComments: true
}
],
'linebreak-style': 0,
'object-curly-spacing': ['error', 'always'],
'no-trailing-spaces': 'error',
'no-multiple-empty-lines': ['error', { 'max': 1 }],
// "arrow-parens": [2, "as-needed", { requireForBlockBody: true }],
// 'no-plusplus': ['error', { allowForLoopAfterthoughts: true }],
'no-console': 'off',
'import/extensions': 'off',
'no-use-before-define': [
'error',
{
functions: false
}
],
'no-promise-executor-return': 'off',
'no-param-reassign': 'off',
'no-continue': 'off',
'no-restricted-syntax': 'off',
'react/prop-types': ['off'],
'react/display-name': ['off']
'react/display-name': ['off'],
'quotes': ['error', 'single'],
},
overrides: [
{
@@ -101,6 +99,18 @@ module.exports = {
'plugin:@typescript-eslint/eslint-recommended',
'plugin:@typescript-eslint/recommended'
]
},
{
files: './packages/data-provider/**/*.ts',
overrides: [
{
files: '**/*.ts',
parser: '@typescript-eslint/parser',
parserOptions: {
project: './packages/data-provider/tsconfig.json'
}
}
]
}
],
settings: {


@@ -58,7 +58,7 @@ body:
id: terms
attributes:
label: Code of Conduct
description: By submitting this issue, you agree to follow our [Code of Conduct](https://github.com/danny-avila/chatgpt-clone/blob/main/documents/contributions/code_of_conduct.md)
description: By submitting this issue, you agree to follow our [Code of Conduct](https://github.com/danny-avila/LibreChat/blob/main/CODE_OF_CONDUCT.md)
options:
- label: I agree to follow this project's Code of Conduct
required: true


@@ -51,7 +51,7 @@ body:
id: terms
attributes:
label: Code of Conduct
description: By submitting this issue, you agree to follow our [Code of Conduct](https://github.com/danny-avila/chatgpt-clone/blob/main/documents/contributions/code_of_conduct.md)
description: By submitting this issue, you agree to follow our [Code of Conduct](https://github.com/danny-avila/LibreChat/blob/main/CODE_OF_CONDUCT.md)
options:
- label: I agree to follow this project's Code of Conduct
required: true


@@ -52,7 +52,7 @@ body:
id: terms
attributes:
label: Code of Conduct
description: By submitting this issue, you agree to follow our [Code of Conduct](https://github.com/danny-avila/chatgpt-clone/blob/main/documents/contributions/code_of_conduct.md)
description: By submitting this issue, you agree to follow our [Code of Conduct](https://github.com/danny-avila/LibreChat/blob/main/CODE_OF_CONDUCT.md)
options:
- label: I agree to follow this project's Code of Conduct
required: true

.github/workflows/backend-review.yml

@@ -1,10 +1,15 @@
name: Backend Unit Tests
on:
push:
branches: [feat/playwright-jest-cicd]
branches:
- main
- dev
- release/*
pull_request:
branches: [ feat/playwright-jest-cicd ]
branches:
- main
- dev
- release/*
jobs:
tests_Backend:
name: Run Backend unit tests
@@ -25,10 +30,10 @@ jobs:
cache: 'npm'
- name: Install dependencies
run: npm ci --ignore-scripts
run: npm ci
# - name: Install Linux X64 Sharp
# run: npm install --platform=linux --arch=x64 --verbose sharp
# run: npm install --platform=linux --arch=x64 --verbose sharp
- name: Run unit tests
run: cd api && npm run test:ci
run: cd api && npm run test:ci

.github/workflows/container.yml (new file, +47)

@@ -0,0 +1,47 @@
name: Docker Compose Build on Tag
# The workflow is triggered when a tag is pushed
on:
push:
tags:
- "*"
jobs:
build:
runs-on: ubuntu-latest
steps:
# Check out the repository
- name: Checkout
uses: actions/checkout@v2
# Set up Docker
- name: Set up Docker
uses: docker/setup-buildx-action@v1
# Log in to GitHub Container Registry
- name: Log in to GitHub Container Registry
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# Run docker-compose build
- name: Build Docker images
run: |
cp .env.example .env
docker-compose build
# Get Tag Name
- name: Get Tag Name
id: tag_name
run: echo "TAG_NAME=${GITHUB_REF/refs\/tags\//}" >> $GITHUB_ENV
# Tag it properly before push to github
- name: tag image and push
run: |
docker tag librechat:latest ghcr.io/${{ github.repository_owner }}/librechat:${{ env.TAG_NAME }}
docker push ghcr.io/${{ github.repository_owner }}/librechat:${{ env.TAG_NAME }}
docker tag librechat:latest ghcr.io/${{ github.repository_owner }}/librechat:latest
docker push ghcr.io/${{ github.repository_owner }}/librechat:latest


@@ -2,9 +2,15 @@
name: Frontend Unit Tests
on:
push:
- branches: [main, dev]
+ branches:
+   - main
+   - dev
+   - release/*
pull_request:
- branches: [main, dev]
+ branches:
+   - main
+   - dev
+   - release/*
jobs:
tests_frontend:
name: Run frontend unit tests
@@ -19,10 +25,10 @@ jobs:
cache: 'npm'
- name: Install dependencies
- run: npm ci --ignore-scripts
+ run: npm ci
- name: Build Client
- run: cd client && npm run build:ci
+ run: npm run frontend:ci
- name: Run unit tests
run: cd client && npm run test:ci

.github/workflows/mkdocs.yaml

@@ -0,0 +1,24 @@
name: mkdocs
on:
push:
branches:
- main
permissions:
contents: write
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: 3.x
- run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
- uses: actions/cache@v3
with:
key: mkdocs-material-${{ env.cache_id }}
path: .cache
restore-keys: |
mkdocs-material-
- run: pip install mkdocs-material
- run: mkdocs gh-deploy --force

.gitignore

@@ -39,7 +39,7 @@ meili_data/
api/node_modules/
client/node_modules/
bower_components/
.turbo
types/
# Floobits
.floo
@@ -49,10 +49,9 @@ bower_components/
# Environment
.npmrc
.env
- !.env.example
- !.env.test.example
+ .env*
+ !**/.env.example
+ !**/.env.test.example
cache.json
api/data/
owner.yml
@@ -67,10 +66,13 @@ src/style - official.css
.idea
*.pem
config.local.ts
- storageState.json
+ **/storageState.json
junit.xml
# meilisearch
meilisearch
data.ms/*
auth.json
/packages/ux-shared/
/images


@@ -1,18 +1,14 @@
# Base node image
FROM node:19-alpine AS node
# Install curl for health check
RUN apk --no-cache add curl
COPY . /app
# Install dependencies
WORKDIR /app
RUN npm ci
# Frontend variables as build args
ARG VITE_APP_TITLE
ARG VITE_SHOW_GOOGLE_LOGIN_OPTION
# You will need to add your VITE variables to the docker-compose file
ENV VITE_APP_TITLE=$VITE_APP_TITLE
ENV VITE_SHOW_GOOGLE_LOGIN_OPTION=$VITE_SHOW_GOOGLE_LOGIN_OPTION
# React client build
ENV NODE_OPTIONS="--max-old-space-size=2048"
RUN npm run frontend


@@ -1,19 +1,28 @@
<p align="center">
<a href="https://discord.gg/NGaa9RPCft">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://user-images.githubusercontent.com/110412045/228325485-9d3e618f-a980-44fe-89e9-d6d39164680e.png">
<img src="https://user-images.githubusercontent.com/110412045/228325485-9d3e618f-a980-44fe-89e9-d6d39164680e.png" height="128">
</picture>
<a href="https://docs.librechat.ai">
<img src="docs/assets/LibreChat.svg" height="256">
</a>
<a href="https://docs.librechat.ai">
<h1 align="center">LibreChat</h1>
</a>
</p>
<p align="center">
<a href="https://discord.gg/NGaa9RPCft">
<img src="https://img.shields.io/discord/1086345563026489514?label=&logo=discord&style=for-the-badge&logoWidth=20&labelColor=000000&color=blueviolet">
<img
src="https://img.shields.io/discord/1086345563026489514?label=&logo=discord&style=for-the-badge&logoWidth=20&logoColor=white&labelColor=000000&color=blueviolet">
</a>
<a href="https://www.youtube.com/@LibreChat">
<img
src="https://img.shields.io/badge/YOUTUBE-red.svg?style=for-the-badge&logo=youtube&logoColor=white&labelColor=000000&logoWidth=20">
</a>
<a href="https://docs.librechat.ai">
<img
src="https://img.shields.io/badge/DOCS-blue.svg?style=for-the-badge&logo=read-the-docs&logoColor=white&labelColor=000000&logoWidth=20">
</a>
<a aria-label="Sponsors" href="#sponsors">
<img alt="" src="https://img.shields.io/badge/SPONSORS-brightgreen.svg?style=for-the-badge&labelColor=000000&logoWidth=20">
<img
src="https://img.shields.io/badge/SPONSORS-brightgreen.svg?style=for-the-badge&logo=github-sponsors&logoColor=white&labelColor=000000&logoWidth=20">
</a>
</p>
@@ -22,47 +31,26 @@ LibreChat brings together the future of assistant AIs with the revolutionary tec
With LibreChat, you no longer need to opt for ChatGPT Plus and can instead use free or pay-per-call APIs. We welcome contributions, cloning, and forking to enhance the capabilities of this advanced chatbot platform.
<!-- ![clone3](https://user-images.githubusercontent.com/110412045/230538752-9b99dc6e-cd02-483a-bff0-6c6e780fa7ae.gif) -->
https://github.com/danny-avila/LibreChat/assets/110412045/c1eb0c0f-41f6-4335-b982-84b278b53d59
# Features
- Response streaming identical to ChatGPT through server-sent events
- UI from original ChatGPT, including Dark mode
- AI model selection (through 5 endpoints: OpenAI API, BingAI, ChatGPT Browser, PaLM2, Plugins)
- - Create, Save, & Share custom presets - [More info on prompt presets here](https://github.com/danny-avila/chatgpt-clone/releases/tag/v0.3.0)
+ - Create, Save, & Share custom presets - [More info on prompt presets here](https://github.com/danny-avila/LibreChat/releases/tag/v0.3.0)
- Edit and Resubmit messages with conversation branching
- - Search all messages/conversations - [More info here](https://github.com/danny-avila/chatgpt-clone/releases/tag/v0.1.0)
+ - Search all messages/conversations - [More info here](https://github.com/danny-avila/LibreChat/releases/tag/v0.1.0)
- Plugins now available (including web access, image generation and more)
---
# ⚠️ **Breaking Changes** ⚠️
Note: These changes only apply to users who are updating from a previous version of the app.
- We have simplified the configuration process by using a single `.env` file in the root folder instead of separate `/api/.env` and `/client/.env` files.
- If you had installed a previous version, you can run `npm run upgrade` to automatically copy the content of both files to the new `.env` file and back up the old ones in the root dir.
- If you are installing the project for the first time, it's recommended you run the installation script `npm run install` to guide your local setup (otherwise continue to use docker)
- The docker-compose file has some changes. Review the [new docker instructions](docs/install/docker_install.md) to make sure you are set up properly. This is still the simplest and most effective method.
- The upgrade script requires both `/api/.env` and `/client/.env` files to run properly. If you get an error about a missing client env file, just rename the `/client/.env.example` file to `/client/.env` and run the script again.
- We have renamed the `OPENAI_KEY` variable to `OPENAI_API_KEY` to match the official documentation. The upgrade script should do this automatically for you, but please double-check that your key is correct in the new `.env` file.
- After running the upgrade script, the `OPENAI_API_KEY` variable might be placed in a different section in the new `.env` file than before. This does not affect the functionality of the app, but if you want to keep it organized, you can look for it near the bottom of the file and move it to its usual section.
##
- For enhanced security, we are now asking for crypto keys for securely storing credentials in the `.env` file. Crypto keys are used to encrypt and decrypt sensitive data such as passwords and access keys. If you don't set them, the app will crash on startup.
- You need to fill the following variables in the `.env` file with 32-byte (64 characters in hex) or 16-byte (32 characters in hex) values:
- `CREDS_KEY` (32-byte)
- `CREDS_IV` (16-byte)
- `JWT_SECRET` (32-byte, optional but recommended)
- You can use this replit to generate some crypto keys quickly: https://replit.com/@daavila/crypto#index.js
- Make sure you keep your crypto keys safe and don't share them with anyone.
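- As a convenience, the same values can also be generated locally with Node's built-in `crypto` module; the sketch below is illustrative (the script name is arbitrary) and equivalent to what the replit does:

```js
// generate-keys.js: run with `node generate-keys.js`
const crypto = require('crypto');

// 32 bytes -> 64 hex characters; 16 bytes -> 32 hex characters
console.log(`CREDS_KEY=${crypto.randomBytes(32).toString('hex')}`);
console.log(`CREDS_IV=${crypto.randomBytes(16).toString('hex')}`);
console.log(`JWT_SECRET=${crypto.randomBytes(32).toString('hex')}`);
```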
We apologize for any inconvenience caused by these changes. We hope you enjoy the new and improved version of our app!
## ⚠️ [Breaking Changes as of v0.5.0](docs/general_info/breaking_changes.md#v050) ⚠️
**Please read this before updating from a previous version**
---
## Changelog
- - Keep up with the latest updates by visiting the releases page - [Releases](https://github.com/danny-avila/LibreChat/releases)
+ Keep up with the latest updates by visiting the releases page - [Releases](https://github.com/danny-avila/LibreChat/releases)
---
@@ -71,11 +59,12 @@ We apologize for any inconvenience caused by these changes. We hope you enjoy th
<details open>
<summary><strong>Getting Started</strong></summary>
- * [Docker Install](/docs/install/docker_install.md)
+ * [Docker Install](docs/install/docker_install.md)
* [Linux Install](docs/install/linux_install.md)
* [Mac Install](docs/install/mac_install.md)
* [Windows Install](docs/install/windows_install.md)
* [APIs and Tokens](docs/install/apis_and_tokens.md)
* [User Auth System](docs/install/user_auth_system.md)
</details>
<details>
@@ -84,8 +73,7 @@ We apologize for any inconvenience caused by these changes. We hope you enjoy th
* [Code of Conduct](CODE_OF_CONDUCT.md)
* [Project Origin](docs/general_info/project_origin.md)
* [Multilingual Information](docs/general_info/multilingual_information.md)
- * [Tech Stack](docs/general_info/tech_stack.md)
- * [Bing Jailbreak Info](docs/general_info/bing_jailbreak_info.md)
+ * [Tech Stack](docs/general_info/tech_stack.md)
</details>
<details>
@@ -98,14 +86,18 @@ We apologize for any inconvenience caused by these changes. We hope you enjoy th
* [Wolfram](docs/features/plugins/wolfram.md)
* [Make Your Own Plugin](docs/features/plugins/make_your_own.md)
* [User Auth System](docs/features/user_auth_system.md)
* [Proxy](docs/features/proxy.md)
* [Bing Jailbreak](docs/features/bing_jailbreak.md)
</details>
<details>
<summary><strong>Cloud Deployment</strong></summary>
* [Hetzner](docs/deployment/hetzner_ubuntu.md)
* [Heroku](docs/deployment/heroku.md)
* [Linode](docs/deployment/linode.md)
* [Cloudflare](docs/deployment/cloudflare.md)
* [Ngrok](docs/deployment/ngrok.md)
</details>
<details>
@@ -116,7 +108,7 @@ We apologize for any inconvenience caused by these changes. We hope you enjoy th
* [Code Standards and Conventions](docs/contributions/coding_conventions.md)
* [Testing](docs/contributions/testing.md)
* [Security](SECURITY.md)
- * [Trello Board](https://trello.com/b/17z094kq/chatgpt-clone)
+ * [Trello Board](https://trello.com/b/17z094kq/LibreChat)
</details>
@@ -124,14 +116,14 @@ We apologize for any inconvenience caused by these changes. We hope you enjoy th
## Star History
- [![Star History Chart](https://api.star-history.com/svg?repos=danny-avila/chatgpt-clone&type=Date)](https://star-history.com/#danny-avila/chatgpt-clone&Date)
+ [![Star History Chart](https://api.star-history.com/svg?repos=danny-avila/LibreChat&type=Date)](https://star-history.com/#danny-avila/LibreChat&Date)
---
## Sponsors
- Sponsored by <a href="https://github.com/DavidDev1334"><b>@DavidDev1334</b></a>, <a href="https://github.com/mjtechguy"><b>@mjtechguy</b></a>, <a href="https://github.com/Pharrcyde"><b>@Pharrcyde</b></a>, & <a href="https://github.com/fuegovic"><b>@fuegovic</b></a>
+ Sponsored by <a href="https://github.com/mjtechguy"><b>@mjtechguy</b></a>, <a href="https://github.com/SphaeroX"><b>@SphaeroX</b></a>, <a href="https://github.com/DavidDev1334"><b>@DavidDev1334</b></a>, <a href="https://github.com/fuegovic"><b>@fuegovic</b></a>, <a href="https://github.com/Pharrcyde"><b>@Pharrcyde</b></a>
---
## Contributors
@@ -146,6 +138,6 @@ For new features, components, or extensions, please open an issue and discuss be
This project exists in its current state thanks to all the people who contribute
---
- <a href="https://github.com/danny-avila/chatgpt-clone/graphs/contributors">
-   <img src="https://contrib.rocks/image?repo=danny-avila/chatgpt-clone" />
+ <a href="https://github.com/danny-avila/LibreChat/graphs/contributors">
+   <img src="https://contrib.rocks/image?repo=danny-avila/LibreChat" />
</a>


@@ -0,0 +1,550 @@
const crypto = require('crypto');
const TextStream = require('./TextStream');
const { RecursiveCharacterTextSplitter } = require('langchain/text_splitter');
const { ChatOpenAI } = require('langchain/chat_models/openai');
const { loadSummarizationChain } = require('langchain/chains');
const { refinePrompt } = require('./prompts/refinePrompt');
const { getConvo, getMessages, saveMessage, updateMessage, saveConvo } = require('../../models');
class BaseClient {
constructor(apiKey, options = {}) {
this.apiKey = apiKey;
this.sender = options.sender || 'AI';
this.contextStrategy = null;
this.currentDateString = new Date().toLocaleDateString('en-us', {
year: 'numeric',
month: 'long',
day: 'numeric'
});
}
setOptions() {
throw new Error('Method \'setOptions\' must be implemented.');
}
getCompletion() {
throw new Error('Method \'getCompletion\' must be implemented.');
}
async sendCompletion() {
throw new Error('Method \'sendCompletion\' must be implemented.');
}
getSaveOptions() {
throw new Error('Subclasses must implement getSaveOptions');
}
async buildMessages() {
throw new Error('Subclasses must implement buildMessages');
}
getBuildMessagesOptions() {
throw new Error('Subclasses must implement getBuildMessagesOptions');
}
async generateTextStream(text, onProgress, options = {}) {
const stream = new TextStream(text, options);
await stream.processTextStream(onProgress);
}
async setMessageOptions(opts = {}) {
if (opts && typeof opts === 'object') {
this.setOptions(opts);
}
const user = opts.user || null;
const conversationId = opts.conversationId || crypto.randomUUID();
const parentMessageId = opts.parentMessageId || '00000000-0000-0000-0000-000000000000';
const userMessageId = opts.overrideParentMessageId || crypto.randomUUID();
const responseMessageId = crypto.randomUUID();
const saveOptions = this.getSaveOptions();
this.abortController = opts.abortController || new AbortController();
this.currentMessages = await this.loadHistory(conversationId, parentMessageId) ?? [];
return {
...opts,
user,
conversationId,
parentMessageId,
userMessageId,
responseMessageId,
saveOptions,
};
}
createUserMessage({ messageId, parentMessageId, conversationId, text}) {
const userMessage = {
messageId,
parentMessageId,
conversationId,
sender: 'User',
text,
isCreatedByUser: true
};
return userMessage;
}
async handleStartMethods(message, opts) {
const {
user,
conversationId,
parentMessageId,
userMessageId,
responseMessageId,
saveOptions,
} = await this.setMessageOptions(opts);
const userMessage = this.createUserMessage({
messageId: userMessageId,
parentMessageId,
conversationId,
text: message,
});
if (typeof opts?.getIds === 'function') {
opts.getIds({
userMessage,
conversationId,
responseMessageId
});
}
if (typeof opts?.onStart === 'function') {
opts.onStart(userMessage);
}
return {
...opts,
user,
conversationId,
responseMessageId,
saveOptions,
userMessage,
};
}
addInstructions(messages, instructions) {
const payload = [];
if (!instructions) {
return messages;
}
if (messages.length > 1) {
payload.push(...messages.slice(0, -1));
}
payload.push(instructions);
if (messages.length > 0) {
payload.push(messages[messages.length - 1]);
}
return payload;
}
async handleTokenCountMap(tokenCountMap) {
if (this.currentMessages.length === 0) {
return;
}
for (let i = 0; i < this.currentMessages.length; i++) {
// Skip the last message, which is the user message.
if (i === this.currentMessages.length - 1) {
break;
}
const message = this.currentMessages[i];
const { messageId } = message;
const update = {};
if (messageId === tokenCountMap.refined?.messageId) {
if (this.options.debug) {
console.debug(`Adding refined props to ${messageId}.`);
}
update.refinedMessageText = tokenCountMap.refined.content;
update.refinedTokenCount = tokenCountMap.refined.tokenCount;
}
if (message.tokenCount && !update.refinedTokenCount) {
if (this.options.debug) {
console.debug(`Skipping ${messageId}: already had a token count.`);
}
continue;
}
const tokenCount = tokenCountMap[messageId];
if (tokenCount) {
message.tokenCount = tokenCount;
update.tokenCount = tokenCount;
await this.updateMessageInDatabase({ messageId, ...update });
}
}
}
concatenateMessages(messages) {
return messages.reduce((acc, message) => {
const nameOrRole = message.name ?? message.role;
return acc + `${nameOrRole}:\n${message.content}\n\n`;
}, '');
}
async refineMessages(messagesToRefine, remainingContextTokens) {
const model = new ChatOpenAI({ temperature: 0 });
const chain = loadSummarizationChain(model, { type: 'refine', verbose: this.options.debug, refinePrompt });
const splitter = new RecursiveCharacterTextSplitter({
chunkSize: 1500,
chunkOverlap: 100,
});
const userMessages = this.concatenateMessages(messagesToRefine.filter(m => m.role === 'user'));
const assistantMessages = this.concatenateMessages(messagesToRefine.filter(m => m.role !== 'user'));
const userDocs = await splitter.createDocuments([userMessages],[],{
chunkHeader: 'DOCUMENT NAME: User Message\n\n---\n\n',
appendChunkOverlapHeader: true,
});
const assistantDocs = await splitter.createDocuments([assistantMessages],[],{
chunkHeader: 'DOCUMENT NAME: Assistant Message\n\n---\n\n',
appendChunkOverlapHeader: true,
});
// const chunkSize = Math.round(concatenatedMessages.length / 512);
const input_documents = userDocs.concat(assistantDocs);
if (this.options.debug ) {
console.debug('Refining messages...');
}
try {
const res = await chain.call({
input_documents,
signal: this.abortController.signal,
});
const refinedMessage = {
role: 'assistant',
content: res.output_text,
tokenCount: this.getTokenCount(res.output_text),
}
if (this.options.debug ) {
console.debug('Refined messages', refinedMessage);
console.debug(`remainingContextTokens: ${remainingContextTokens}, after refining: ${remainingContextTokens - refinedMessage.tokenCount}`);
}
return refinedMessage;
} catch (e) {
console.error('Error refining messages');
console.error(e);
return null;
}
}
/**
* This method processes an array of messages and returns a context of messages that fit within a token limit.
* It iterates over the messages from newest to oldest, adding them to the context until the token limit is reached.
* If the token limit would be exceeded by adding a message, that message and possibly the previous one are added to a separate array of messages to refine.
* The method uses `push` and `pop` operations for efficient array manipulation, and reverses the arrays at the end to maintain the original order of the messages.
* The method also includes a mechanism to avoid blocking the event loop by waiting for the next tick after each iteration.
*
* @param {Array} messages - An array of messages, each with a `tokenCount` property. The messages should be ordered from oldest to newest.
* @returns {Object} An object with three properties: `context`, `remainingContextTokens`, and `messagesToRefine`. `context` is an array of messages that fit within the token limit. `remainingContextTokens` is the number of tokens remaining within the limit after adding the messages to the context. `messagesToRefine` is an array of messages that were not added to the context because they would have exceeded the token limit.
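*
* Illustrative example (hypothetical numbers, assuming maxContextTokens = 10 and shouldRefineContext = false):
* given messages [{ tokenCount: 6 }, { tokenCount: 3 }, { tokenCount: 4 }] (oldest to newest),
* context becomes the last two messages (3 + 4 = 7 tokens), remainingContextTokens is 3,
* and the oldest message is dropped, so messagesToRefine stays empty.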
*/
async getMessagesWithinTokenLimit(messages) {
let currentTokenCount = 0;
let context = [];
let messagesToRefine = [];
let refineIndex = -1;
let remainingContextTokens = this.maxContextTokens;
for (let i = messages.length - 1; i >= 0; i--) {
const message = messages[i];
const newTokenCount = currentTokenCount + message.tokenCount;
const exceededLimit = newTokenCount > this.maxContextTokens;
let shouldRefine = exceededLimit && this.shouldRefineContext;
let refineNextMessage = i !== 0 && i !== 1 && context.length > 0;
if (shouldRefine) {
messagesToRefine.push(message);
if (refineIndex === -1) {
refineIndex = i;
}
if (refineNextMessage) {
refineIndex = i + 1;
const removedMessage = context.pop();
messagesToRefine.push(removedMessage);
currentTokenCount -= removedMessage.tokenCount;
remainingContextTokens = this.maxContextTokens - currentTokenCount;
refineNextMessage = false;
}
continue;
} else if (exceededLimit) {
break;
}
context.push(message);
currentTokenCount = newTokenCount;
remainingContextTokens = this.maxContextTokens - currentTokenCount;
await new Promise(resolve => setImmediate(resolve));
}
return {
context: context.reverse(),
remainingContextTokens,
messagesToRefine: messagesToRefine.reverse(),
refineIndex
};
}
async handleContextStrategy({instructions, orderedMessages, formattedMessages}) {
let payload = this.addInstructions(formattedMessages, instructions);
let orderedWithInstructions = this.addInstructions(orderedMessages, instructions);
let {
context,
remainingContextTokens,
messagesToRefine,
refineIndex
} = await this.getMessagesWithinTokenLimit(payload);
payload = context;
let refinedMessage;
// if (messagesToRefine.length > 0) {
// refinedMessage = await this.refineMessages(messagesToRefine, remainingContextTokens);
// payload.unshift(refinedMessage);
// remainingContextTokens -= refinedMessage.tokenCount;
// }
// if (remainingContextTokens <= instructions?.tokenCount) {
// if (this.options.debug) {
// console.debug(`Remaining context (${remainingContextTokens}) is less than instructions token count: ${instructions.tokenCount}`);
// }
// ({ context, remainingContextTokens, messagesToRefine, refineIndex } = await this.getMessagesWithinTokenLimit(payload));
// payload = context;
// }
// Calculate the difference in length to determine how many messages were discarded if any
let diff = orderedWithInstructions.length - payload.length;
if (this.options.debug) {
console.debug('<---------------------------------DIFF--------------------------------->');
console.debug(`Difference between payload (${payload.length}) and orderedWithInstructions (${orderedWithInstructions.length}): ${diff}`);
console.debug('remainingContextTokens, this.maxContextTokens (1/2)', remainingContextTokens, this.maxContextTokens);
}
// If the difference is positive, slice the orderedWithInstructions array
if (diff > 0) {
orderedWithInstructions = orderedWithInstructions.slice(diff);
}
if (messagesToRefine.length > 0) {
refinedMessage = await this.refineMessages(messagesToRefine, remainingContextTokens);
payload.unshift(refinedMessage);
remainingContextTokens -= refinedMessage.tokenCount;
}
if (this.options.debug) {
console.debug('remainingContextTokens, this.maxContextTokens (2/2)', remainingContextTokens, this.maxContextTokens);
}
let tokenCountMap = orderedWithInstructions.reduce((map, message, index) => {
if (!message.messageId) {
return map;
}
if (index === refineIndex) {
map.refined = { ...refinedMessage, messageId: message.messageId};
}
map[message.messageId] = payload[index].tokenCount;
return map;
}, {});
const promptTokens = this.maxContextTokens - remainingContextTokens;
if (this.options.debug) {
console.debug('<-------------------------PAYLOAD/TOKEN COUNT MAP------------------------->');
console.debug('Payload:', payload);
console.debug('Token Count Map:', tokenCountMap);
console.debug('Prompt Tokens', promptTokens, remainingContextTokens, this.maxContextTokens);
}
return { payload, tokenCountMap, promptTokens, messages: orderedWithInstructions };
}
async sendMessage(message, opts = {}) {
console.log('BaseClient: sendMessage', message, opts);
const {
user,
conversationId,
responseMessageId,
saveOptions,
userMessage,
} = await this.handleStartMethods(message, opts);
// It's not necessary to push to currentMessages
// depending on subclass implementation of handling messages
this.currentMessages.push(userMessage);
let { prompt: payload, tokenCountMap, promptTokens } = await this.buildMessages(
this.currentMessages,
// When the userMessage is pushed to currentMessages, the parentMessage is the userMessageId.
// this only matters when buildMessages is utilizing the parentMessageId, and may vary on implementation
userMessage.messageId,
this.getBuildMessagesOptions(opts),
);
if (this.options.debug) {
console.debug('payload');
console.debug(payload);
}
if (tokenCountMap) {
console.dir(tokenCountMap, { depth: null })
if (tokenCountMap[userMessage.messageId]) {
userMessage.tokenCount = tokenCountMap[userMessage.messageId];
console.log('userMessage.tokenCount', userMessage.tokenCount);
console.log('userMessage', userMessage);
}
payload = payload.map((message) => {
const messageWithoutTokenCount = message;
delete messageWithoutTokenCount.tokenCount;
return messageWithoutTokenCount;
});
this.handleTokenCountMap(tokenCountMap);
}
await this.saveMessageToDatabase(userMessage, saveOptions, user);
const responseMessage = {
messageId: responseMessageId,
conversationId,
parentMessageId: userMessage.messageId,
isCreatedByUser: false,
model: this.modelOptions.model,
sender: this.sender,
text: await this.sendCompletion(payload, opts),
promptTokens,
};
if (tokenCountMap && this.getTokenCountForResponse) {
responseMessage.tokenCount = this.getTokenCountForResponse(responseMessage);
responseMessage.completionTokens = responseMessage.tokenCount;
}
await this.saveMessageToDatabase(responseMessage, saveOptions, user);
delete responseMessage.tokenCount;
return responseMessage;
}
async getConversation(conversationId, user = null) {
return await getConvo(user, conversationId);
}
async loadHistory(conversationId, parentMessageId = null) {
if (this.options.debug) {
console.debug('Loading history for conversation', conversationId, parentMessageId);
}
const messages = (await getMessages({ conversationId })) || [];
if (messages.length === 0) {
return [];
}
let mapMethod = null;
if (this.getMessageMapMethod) {
mapMethod = this.getMessageMapMethod();
}
return this.constructor.getMessagesForConversation(messages, parentMessageId, mapMethod);
}
async saveMessageToDatabase(message, endpointOptions, user = null) {
await saveMessage({ ...message, unfinished: false });
await saveConvo(user, {
conversationId: message.conversationId,
endpoint: this.options.endpoint,
...endpointOptions
});
}
async updateMessageInDatabase(message) {
await updateMessage(message);
}
/**
* Iterate through messages, building an array based on the parentMessageId.
* Each message has an id and a parentMessageId. The parentMessageId is the id of the message that this message is a reply to.
* @param messages
* @param parentMessageId
* @returns {*[]} An array containing the messages in the order they should be displayed, starting with the root message.
*/
static getMessagesForConversation(messages, parentMessageId, mapMethod = null) {
if (!messages || messages.length === 0) {
return [];
}
const orderedMessages = [];
let currentMessageId = parentMessageId;
while (currentMessageId) {
const message = messages.find(msg => {
const messageId = msg.messageId ?? msg.id;
return messageId === currentMessageId;
});
if (!message) {
break;
}
orderedMessages.unshift(message);
currentMessageId = message.parentMessageId;
}
if (mapMethod) {
return orderedMessages.map(mapMethod);
}
return orderedMessages;
}
/**
* Algorithm adapted from "6. Counting tokens for chat API calls" of
* https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
*
* An additional 2 tokens need to be added for metadata after all messages have been counted.
*
* @param {*} message
*/
getTokenCountForMessage(message) {
let tokensPerMessage;
let nameAdjustment;
if (this.modelOptions.model.startsWith('gpt-4')) {
tokensPerMessage = 3;
nameAdjustment = 1;
} else {
tokensPerMessage = 4;
nameAdjustment = -1;
}
if (this.options.debug) {
console.debug('getTokenCountForMessage', message);
}
// Map each property of the message to the number of tokens it contains
const propertyTokenCounts = Object.entries(message).map(([key, value]) => {
if (key === 'tokenCount' || typeof value !== 'string') {
return 0;
}
// Count the number of tokens in the property value
const numTokens = this.getTokenCount(value);
// Adjust by `nameAdjustment` tokens if the property key is 'name'
const adjustment = (key === 'name') ? nameAdjustment : 0;
return numTokens + adjustment;
});
if (this.options.debug) {
console.debug('propertyTokenCounts', propertyTokenCounts);
}
// Sum the number of tokens in all properties and add `tokensPerMessage` for metadata
return propertyTokenCounts.reduce((a, b) => a + b, tokensPerMessage);
}
}
module.exports = BaseClient;
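For orientation, the required-method contract above can be satisfied by a minimal subclass along these lines; the class name, model name, and echo logic are invented for illustration and are not part of the codebase:

```js
const BaseClient = require('./BaseClient');

// A hypothetical minimal client: echoes the prompt back as the completion.
class EchoClient extends BaseClient {
  setOptions(options) {
    this.options = { ...this.options, ...options };
    this.modelOptions = this.options.modelOptions || { model: 'echo-1' };
    return this;
  }
  getSaveOptions() {
    return { ...this.modelOptions };
  }
  getBuildMessagesOptions() {
    return {};
  }
  // sendMessage destructures { prompt } from this return value
  async buildMessages(messages) {
    return { prompt: messages.map((m) => m.text).join('\n') };
  }
  // receives the built prompt and must resolve to the reply text
  async sendCompletion(payload) {
    return `Echo:\n${payload}`;
  }
}
```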


@@ -0,0 +1,570 @@
const crypto = require('crypto');
const Keyv = require('keyv');
const { encoding_for_model: encodingForModel, get_encoding: getEncoding } = require('@dqbd/tiktoken');
const { fetchEventSource } = require('@waylaidwanderer/fetch-event-source');
const { Agent, ProxyAgent } = require('undici');
const BaseClient = require('./BaseClient');
const CHATGPT_MODEL = 'gpt-3.5-turbo';
const tokenizersCache = {};
class ChatGPTClient extends BaseClient {
constructor(
apiKey,
options = {},
cacheOptions = {},
) {
super(apiKey, options, cacheOptions);
cacheOptions.namespace = cacheOptions.namespace || 'chatgpt';
this.conversationsCache = new Keyv(cacheOptions);
this.setOptions(options);
}
setOptions(options) {
if (this.options && !this.options.replaceOptions) {
// nested options aren't spread properly, so we need to do this manually
this.options.modelOptions = {
...this.options.modelOptions,
...options.modelOptions,
};
delete options.modelOptions;
// now we can merge options
this.options = {
...this.options,
...options,
};
} else {
this.options = options;
}
if (this.options.openaiApiKey) {
this.apiKey = this.options.openaiApiKey;
}
const modelOptions = this.options.modelOptions || {};
this.modelOptions = {
...modelOptions,
// set some good defaults (check for undefined in some cases because they may be 0)
model: modelOptions.model || CHATGPT_MODEL,
temperature: typeof modelOptions.temperature === 'undefined' ? 0.8 : modelOptions.temperature,
top_p: typeof modelOptions.top_p === 'undefined' ? 1 : modelOptions.top_p,
presence_penalty: typeof modelOptions.presence_penalty === 'undefined' ? 1 : modelOptions.presence_penalty,
stop: modelOptions.stop,
};
this.isChatGptModel = this.modelOptions.model.startsWith('gpt-');
const { isChatGptModel } = this;
this.isUnofficialChatGptModel = this.modelOptions.model.startsWith('text-chat') || this.modelOptions.model.startsWith('text-davinci-002-render');
const { isUnofficialChatGptModel } = this;
// Davinci models have a max context length of 4097 tokens.
this.maxContextTokens = this.options.maxContextTokens || (isChatGptModel ? 4095 : 4097);
// I decided to reserve 1024 tokens for the response.
// The max prompt tokens is determined by the max context tokens minus the max response tokens.
// Earlier messages will be dropped until the prompt is within the limit.
this.maxResponseTokens = this.modelOptions.max_tokens || 1024;
this.maxPromptTokens = this.options.maxPromptTokens || (this.maxContextTokens - this.maxResponseTokens);
if (this.maxPromptTokens + this.maxResponseTokens > this.maxContextTokens) {
throw new Error(`maxPromptTokens + max_tokens (${this.maxPromptTokens} + ${this.maxResponseTokens} = ${this.maxPromptTokens + this.maxResponseTokens}) must be less than or equal to maxContextTokens (${this.maxContextTokens})`);
}
this.userLabel = this.options.userLabel || 'User';
this.chatGptLabel = this.options.chatGptLabel || 'ChatGPT';
if (isChatGptModel) {
// Use these faux tokens to help the AI understand the context since we are building the chat log ourselves.
// Trying to use "<|im_start|>" causes the AI to still generate "<" or "<|" at the end sometimes for some reason,
// without tripping the stop sequences, so I'm using "||>" instead.
this.startToken = '||>';
this.endToken = '';
this.gptEncoder = this.constructor.getTokenizer('cl100k_base');
} else if (isUnofficialChatGptModel) {
this.startToken = '<|im_start|>';
this.endToken = '<|im_end|>';
this.gptEncoder = this.constructor.getTokenizer('text-davinci-003', true, {
'<|im_start|>': 100264,
'<|im_end|>': 100265,
});
} else {
// Previously I was trying to use "<|endoftext|>" but there seems to be some bug with OpenAI's token counting
// system that causes only the first "<|endoftext|>" to be counted as 1 token, and the rest are not treated
// as a single token. So we're using this instead.
this.startToken = '||>';
this.endToken = '';
try {
this.gptEncoder = this.constructor.getTokenizer(this.modelOptions.model, true);
} catch {
this.gptEncoder = this.constructor.getTokenizer('text-davinci-003', true);
}
}
if (!this.modelOptions.stop) {
const stopTokens = [this.startToken];
if (this.endToken && this.endToken !== this.startToken) {
stopTokens.push(this.endToken);
}
stopTokens.push(`\n${this.userLabel}:`);
stopTokens.push('<|diff_marker|>');
// I chose not to do one for `chatGptLabel` because I've never seen it happen
this.modelOptions.stop = stopTokens;
}
if (this.options.reverseProxyUrl) {
this.completionsUrl = this.options.reverseProxyUrl;
} else if (isChatGptModel) {
this.completionsUrl = 'https://api.openai.com/v1/chat/completions';
} else {
this.completionsUrl = 'https://api.openai.com/v1/completions';
}
return this;
}
static getTokenizer(encoding, isModelName = false, extendSpecialTokens = {}) {
if (tokenizersCache[encoding]) {
return tokenizersCache[encoding];
}
let tokenizer;
if (isModelName) {
tokenizer = encodingForModel(encoding, extendSpecialTokens);
} else {
tokenizer = getEncoding(encoding, extendSpecialTokens);
}
tokenizersCache[encoding] = tokenizer;
return tokenizer;
}
async getCompletion(input, onProgress, abortController = null) {
if (!abortController) {
abortController = new AbortController();
}
const modelOptions = { ...this.modelOptions };
if (typeof onProgress === 'function') {
modelOptions.stream = true;
}
if (this.isChatGptModel) {
modelOptions.messages = input;
} else {
modelOptions.prompt = input;
}
const { debug } = this.options;
const url = this.completionsUrl;
if (debug) {
console.debug();
console.debug(url);
console.debug(modelOptions);
console.debug();
}
const opts = {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(modelOptions),
dispatcher: new Agent({
bodyTimeout: 0,
headersTimeout: 0,
}),
};
if (this.apiKey && this.options.azure) {
opts.headers['api-key'] = this.apiKey;
} else if (this.apiKey) {
opts.headers.Authorization = `Bearer ${this.apiKey}`;
}
if (this.options.headers) {
opts.headers = { ...opts.headers, ...this.options.headers };
}
if (this.options.proxy) {
opts.dispatcher = new ProxyAgent(this.options.proxy);
}
if (modelOptions.stream) {
// eslint-disable-next-line no-async-promise-executor
return new Promise(async (resolve, reject) => {
try {
let done = false;
await fetchEventSource(url, {
...opts,
signal: abortController.signal,
async onopen(response) {
if (response.status === 200) {
return;
}
if (debug) {
console.debug(response);
}
let error;
try {
const body = await response.text();
error = new Error(`Failed to send message. HTTP ${response.status} - ${body}`);
error.status = response.status;
error.json = JSON.parse(body);
} catch {
error = error || new Error(`Failed to send message. HTTP ${response.status}`);
}
throw error;
},
onclose() {
if (debug) {
console.debug('Server closed the connection unexpectedly, returning...');
}
// workaround for private API not sending [DONE] event
if (!done) {
onProgress('[DONE]');
abortController.abort();
resolve();
}
},
onerror(err) {
if (debug) {
console.debug(err);
}
// rethrow to stop the operation
throw err;
},
onmessage(message) {
if (debug) {
// console.debug(message);
}
if (!message.data || message.event === 'ping') {
return;
}
if (message.data === '[DONE]') {
onProgress('[DONE]');
abortController.abort();
resolve();
done = true;
return;
}
onProgress(JSON.parse(message.data));
},
});
} catch (err) {
reject(err);
}
});
}
const response = await fetch(
url,
{
...opts,
signal: abortController.signal,
},
);
if (response.status !== 200) {
const body = await response.text();
const error = new Error(`Failed to send message. HTTP ${response.status} - ${body}`);
error.status = response.status;
try {
error.json = JSON.parse(body);
} catch {
error.body = body;
}
throw error;
}
return response.json();
}
async generateTitle(userMessage, botMessage) {
const instructionsPayload = {
role: 'system',
content: `Write an extremely concise subtitle for this conversation with no more than a few words. All words should be capitalized. Exclude punctuation.
||>Message:
${userMessage.message}
||>Response:
${botMessage.message}
||>Title:`,
};
const titleGenClientOptions = JSON.parse(JSON.stringify(this.options));
titleGenClientOptions.modelOptions = {
model: 'gpt-3.5-turbo',
temperature: 0,
presence_penalty: 0,
frequency_penalty: 0,
};
const titleGenClient = new ChatGPTClient(this.apiKey, titleGenClientOptions);
const result = await titleGenClient.getCompletion([instructionsPayload], null);
// remove any non-alphanumeric characters, replace multiple spaces with 1, and then trim
return result.choices[0].message.content
.replace(/[^a-zA-Z0-9' ]/g, '')
.replace(/\s+/g, ' ')
.trim();
}
async sendMessage(
message,
opts = {},
) {
if (opts.clientOptions && typeof opts.clientOptions === 'object') {
this.setOptions(opts.clientOptions);
}
const conversationId = opts.conversationId || crypto.randomUUID();
const parentMessageId = opts.parentMessageId || crypto.randomUUID();
let conversation = typeof opts.conversation === 'object'
? opts.conversation
: await this.conversationsCache.get(conversationId);
let isNewConversation = false;
if (!conversation) {
conversation = {
messages: [],
createdAt: Date.now(),
};
isNewConversation = true;
}
const shouldGenerateTitle = opts.shouldGenerateTitle && isNewConversation;
const userMessage = {
id: crypto.randomUUID(),
parentMessageId,
role: 'User',
message,
};
conversation.messages.push(userMessage);
// Doing it this way instead of having each message be a separate element in the array seems to be more reliable,
// especially when it comes to keeping the AI in character. It also seems to improve coherency and context retention.
const { prompt: payload, context } = await this.buildPrompt(
conversation.messages,
userMessage.id,
{
isChatGptModel: this.isChatGptModel,
promptPrefix: opts.promptPrefix,
},
);
if (this.options.keepNecessaryMessagesOnly) {
conversation.messages = context;
}
let reply = '';
let result = null;
if (typeof opts.onProgress === 'function') {
await this.getCompletion(
payload,
(progressMessage) => {
if (progressMessage === '[DONE]') {
return;
}
const token = this.isChatGptModel ? progressMessage.choices[0].delta.content : progressMessage.choices[0].text;
// first event's delta content is always undefined
if (!token) {
return;
}
if (this.options.debug) {
console.debug(token);
}
if (token === this.endToken) {
return;
}
opts.onProgress(token);
reply += token;
},
opts.abortController || new AbortController(),
);
} else {
result = await this.getCompletion(
payload,
null,
opts.abortController || new AbortController(),
);
if (this.options.debug) {
console.debug(JSON.stringify(result));
}
if (this.isChatGptModel) {
reply = result.choices[0].message.content;
} else {
reply = result.choices[0].text.replace(this.endToken, '');
}
}
// avoids some rendering issues when using the CLI app
if (this.options.debug) {
console.debug();
}
reply = reply.trim();
const replyMessage = {
id: crypto.randomUUID(),
parentMessageId: userMessage.id,
role: 'ChatGPT',
message: reply,
};
conversation.messages.push(replyMessage);
const returnData = {
response: replyMessage.message,
conversationId,
parentMessageId: replyMessage.parentMessageId,
messageId: replyMessage.id,
details: result || {},
};
if (shouldGenerateTitle) {
conversation.title = await this.generateTitle(userMessage, replyMessage);
returnData.title = conversation.title;
}
await this.conversationsCache.set(conversationId, conversation);
if (this.options.returnConversation) {
returnData.conversation = conversation;
}
return returnData;
}
async buildPrompt(messages, parentMessageId, { isChatGptModel = false, promptPrefix = null }) {
const orderedMessages = this.constructor.getMessagesForConversation(messages, parentMessageId);
promptPrefix = (promptPrefix || this.options.promptPrefix || '').trim();
if (promptPrefix) {
// If the prompt prefix doesn't end with the end token, add it.
if (!promptPrefix.endsWith(`${this.endToken}`)) {
promptPrefix = `${promptPrefix.trim()}${this.endToken}\n\n`;
}
promptPrefix = `${this.startToken}Instructions:\n${promptPrefix}`;
} else {
const currentDateString = new Date().toLocaleDateString(
'en-us',
{ year: 'numeric', month: 'long', day: 'numeric' },
);
promptPrefix = `${this.startToken}Instructions:\nYou are ChatGPT, a large language model trained by OpenAI. Respond conversationally.\nCurrent date: ${currentDateString}${this.endToken}\n\n`;
}
const promptSuffix = `${this.startToken}${this.chatGptLabel}:\n`; // Prompt ChatGPT to respond.
const instructionsPayload = {
role: 'system',
name: 'instructions',
content: promptPrefix,
};
const messagePayload = {
role: 'system',
content: promptSuffix,
};
let currentTokenCount;
if (isChatGptModel) {
currentTokenCount = this.getTokenCountForMessage(instructionsPayload) + this.getTokenCountForMessage(messagePayload);
} else {
currentTokenCount = this.getTokenCount(`${promptPrefix}${promptSuffix}`);
}
let promptBody = '';
const maxTokenCount = this.maxPromptTokens;
const context = [];
// Iterate backwards through the messages, adding them to the prompt until we reach the max token count.
// Do this within a recursive async function so that it doesn't block the event loop for too long.
const buildPromptBody = async () => {
if (currentTokenCount < maxTokenCount && orderedMessages.length > 0) {
const message = orderedMessages.pop();
const roleLabel = message?.isCreatedByUser || message?.role?.toLowerCase() === 'user' ? this.userLabel : this.chatGptLabel;
const messageString = `${this.startToken}${roleLabel}:\n${message?.text ?? message?.message}${this.endToken}\n`;
let newPromptBody;
if (promptBody || isChatGptModel) {
newPromptBody = `${messageString}${promptBody}`;
} else {
// Always insert prompt prefix before the last user message, if not gpt-3.5-turbo.
// This makes the AI obey the prompt instructions better, which is important for custom instructions.
// After a bunch of testing, it doesn't seem to cause the AI any confusion, even if you ask it things
// like "what's the last thing I wrote?".
newPromptBody = `${promptPrefix}${messageString}${promptBody}`;
}
context.unshift(message);
const tokenCountForMessage = this.getTokenCount(messageString);
const newTokenCount = currentTokenCount + tokenCountForMessage;
if (newTokenCount > maxTokenCount) {
if (promptBody) {
// This message would put us over the token limit, so don't add it.
return false;
}
// This is the first message, so we can't add it. Just throw an error.
throw new Error(`Prompt is too long. Max token count is ${maxTokenCount}, but prompt is ${newTokenCount} tokens long.`);
}
promptBody = newPromptBody;
currentTokenCount = newTokenCount;
// wait for next tick to avoid blocking the event loop
await new Promise(resolve => setImmediate(resolve));
return buildPromptBody();
}
return true;
};
await buildPromptBody();
const prompt = `${promptBody}${promptSuffix}`;
if (isChatGptModel) {
messagePayload.content = prompt;
// Add 2 tokens for metadata after all messages have been counted.
currentTokenCount += 2;
}
// Use up to `this.maxContextTokens` tokens (prompt + response), but try to leave `this.maxTokens` tokens for the response.
this.modelOptions.max_tokens = Math.min(this.maxContextTokens - currentTokenCount, this.maxResponseTokens);
if (this.options.debug) {
console.debug(`Prompt : ${prompt}`);
}
if (isChatGptModel) {
return { prompt: [instructionsPayload, messagePayload], context };
}
return { prompt, context };
}
getTokenCount(text) {
return this.gptEncoder.encode(text, 'all').length;
}
/**
* Algorithm adapted from "6. Counting tokens for chat API calls" of
* https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
*
* An additional 2 tokens need to be added for metadata after all messages have been counted.
*
* @param {*} message
*/
getTokenCountForMessage(message) {
let tokensPerMessage;
let nameAdjustment;
if (this.modelOptions.model.startsWith('gpt-4')) {
tokensPerMessage = 3;
nameAdjustment = 1;
} else {
tokensPerMessage = 4;
nameAdjustment = -1;
}
// Map each property of the message to the number of tokens it contains
const propertyTokenCounts = Object.entries(message).map(([key, value]) => {
// Count the number of tokens in the property value
const numTokens = this.getTokenCount(value);
// Adjust by `nameAdjustment` tokens if the property key is 'name'
const adjustment = (key === 'name') ? nameAdjustment : 0;
return numTokens + adjustment;
});
// Sum the number of tokens in all properties and add `tokensPerMessage` for metadata
return propertyTokenCounts.reduce((a, b) => a + b, tokensPerMessage);
}
}
module.exports = ChatGPTClient;
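For a quick sense of the public surface, a hypothetical call might look like this (the key, model options, and handlers are placeholders; `sendMessage` streams tokens through `onProgress` as implemented above):

```js
const ChatGPTClient = require('./ChatGPTClient');

const client = new ChatGPTClient(process.env.OPENAI_API_KEY, {
  modelOptions: { model: 'gpt-3.5-turbo', temperature: 0.8 },
});

(async () => {
  const res = await client.sendMessage('Hello there!', {
    shouldGenerateTitle: true,
    // invoked once per streamed token when streaming is enabled
    onProgress: (token) => process.stdout.write(token),
  });
  console.log('\n', res.conversationId, res.messageId);
})();
```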


@@ -1,8 +1,6 @@
const crypto = require('crypto');
const TextStream = require('../stream');
const BaseClient = require('./BaseClient');
const { google } = require('googleapis');
const { Agent, ProxyAgent } = require('undici');
const { getMessages, saveMessage, saveConvo } = require('../../models');
const {
encoding_for_model: encodingForModel,
get_encoding: getEncoding
@@ -10,23 +8,36 @@ const {
const tokenizersCache = {};
class GoogleAgent {
class GoogleClient extends BaseClient {
constructor(credentials, options = {}) {
super('apiKey', options);
this.client_email = credentials.client_email;
this.project_id = credentials.project_id;
this.private_key = credentials.private_key;
this.sender = 'PaLM2';
this.setOptions(options);
this.currentDateString = new Date().toLocaleDateString('en-us', {
year: 'numeric',
month: 'long',
day: 'numeric'
});
}
/* Google/PaLM2 specific methods */
constructUrl() {
return `https://us-central1-aiplatform.googleapis.com/v1/projects/${this.project_id}/locations/us-central1/publishers/google/models/${this.modelOptions.model}:predict`;
}
async getClient() {
const scopes = ['https://www.googleapis.com/auth/cloud-platform'];
const jwtClient = new google.auth.JWT(this.client_email, null, this.private_key, scopes);
jwtClient.authorize((err) => {
if (err) {
console.log(err);
throw err;
}
});
return jwtClient;
}
/* Required Client methods */
setOptions(options) {
if (this.options && !this.options.replaceOptions) {
// nested options aren't spread properly, so we need to do this manually
@@ -129,39 +140,19 @@ class GoogleAgent {
return this;
}
static getTokenizer(encoding, isModelName = false, extendSpecialTokens = {}) {
if (tokenizersCache[encoding]) {
return tokenizersCache[encoding];
}
let tokenizer;
if (isModelName) {
tokenizer = encodingForModel(encoding, extendSpecialTokens);
} else {
tokenizer = getEncoding(encoding, extendSpecialTokens);
}
tokenizersCache[encoding] = tokenizer;
return tokenizer;
getMessageMapMethod() {
return ((message) => ({
author: message?.author ?? (message.isCreatedByUser ? this.userLabel : this.modelLabel),
content: message?.content ?? message.text
})).bind(this);
}
async getClient() {
const scopes = ['https://www.googleapis.com/auth/cloud-platform'];
const jwtClient = new google.auth.JWT(this.client_email, null, this.private_key, scopes);
jwtClient.authorize((err) => {
if (err) {
console.log(err);
throw err;
}
});
return jwtClient;
}
buildPayload(input, { messages = [] }) {
buildMessages(messages = []) {
const formattedMessages = messages.map(this.getMessageMapMethod());
let payload = {
instances: [
{
messages: [...messages, { author: this.userLabel, content: input }]
messages: formattedMessages,
}
],
parameters: this.options.modelOptions
@@ -175,23 +166,24 @@ class GoogleAgent {
payload.instances[0].examples = this.options.examples;
}
/* TO-DO: text model needs more context since it can't process an array of messages */
if (this.isTextModel) {
payload.instances = [
{
prompt: input
prompt: messages[messages.length -1].content
}
];
}
if (this.options.debug) {
console.debug('buildPayload');
console.debug('GoogleClient buildMessages');
console.dir(payload, { depth: null });
}
return payload;
return { prompt: payload };
}
async getCompletion(input, messages = [], abortController = null) {
async getCompletion(payload, abortController = null) {
if (!abortController) {
abortController = new AbortController();
}
@@ -217,83 +209,27 @@ class GoogleAgent {
}
const client = await this.getClient();
const payload = this.buildPayload(input, { messages });
const res = await client.request({ url, method: 'POST', data: payload });
console.dir(res.data, { depth: null });
return res.data;
}
async loadHistory(conversationId, parentMessageId = null) {
if (this.options.debug) {
console.debug('Loading history for conversation', conversationId, parentMessageId);
}
if (!parentMessageId) {
return [];
}
const messages = (await getMessages({ conversationId })) || [];
if (messages.length === 0) {
this.currentMessages = [];
return [];
}
const orderedMessages = this.constructor.getMessagesForConversation(messages, parentMessageId);
return orderedMessages.map((message) => {
return {
author: message.isCreatedByUser ? this.userLabel : this.modelLabel,
content: message.content
};
});
}
async saveMessageToDatabase(message, user = null) {
await saveMessage({ ...message, unfinished: false });
await saveConvo(user, {
conversationId: message.conversationId,
endpoint: 'google',
getSaveOptions() {
return {
...this.modelOptions
});
};
}
async sendMessage(message, opts = {}) {
if (opts && typeof opts === 'object') {
this.setOptions(opts);
}
console.log('sendMessage', message, opts);
getBuildMessagesOptions() {
// console.log('GoogleClient doesn\'t use getBuildMessagesOptions');
}
const user = opts.user || null;
const conversationId = opts.conversationId || crypto.randomUUID();
const parentMessageId = opts.parentMessageId || '00000000-0000-0000-0000-000000000000';
const userMessageId = opts.overrideParentMessageId || crypto.randomUUID();
const responseMessageId = crypto.randomUUID();
const messages = await this.loadHistory(conversationId, this.options?.parentMessageId);
const userMessage = {
messageId: userMessageId,
parentMessageId,
conversationId,
sender: 'User',
text: message,
isCreatedByUser: true
};
if (typeof opts?.getIds === 'function') {
opts.getIds({
userMessage,
conversationId,
responseMessageId
});
}
console.log('userMessage', userMessage);
await this.saveMessageToDatabase(userMessage, user);
async sendCompletion(payload, opts = {}) {
console.log('GoogleClient: sendCompletion', payload, opts);
let reply = '';
let blocked = false;
try {
const result = await this.getCompletion(message, messages, opts.abortController);
const result = await this.getCompletion(payload, opts.abortController);
blocked = result?.predictions?.[0]?.safetyAttributes?.blocked;
reply =
result?.predictions?.[0]?.candidates?.[0]?.content ||
@@ -312,86 +248,31 @@ class GoogleAgent {
console.error(err);
}
if (this.options.debug) {
console.debug('options');
console.debug(this.options);
}
if (!blocked) {
const textStream = new TextStream(reply, { delay: 0.5 });
await textStream.processTextStream(opts.onProgress);
await this.generateTextStream(reply, opts.onProgress, { delay: 0.5 });
}
const responseMessage = {
messageId: responseMessageId,
conversationId,
parentMessageId: userMessage.messageId,
sender: 'PaLM2',
text: reply,
error: blocked,
isCreatedByUser: false
};
return reply.trim();
}
await this.saveMessageToDatabase(responseMessage, user);
return responseMessage;
/* TO-DO: handle tokens with Google tokenization (NOTE: these methods are required) */
static getTokenizer(encoding, isModelName = false, extendSpecialTokens = {}) {
if (tokenizersCache[encoding]) {
return tokenizersCache[encoding];
}
let tokenizer;
if (isModelName) {
tokenizer = encodingForModel(encoding, extendSpecialTokens);
} else {
tokenizer = getEncoding(encoding, extendSpecialTokens);
}
tokenizersCache[encoding] = tokenizer;
return tokenizer;
}
getTokenCount(text) {
return this.gptEncoder.encode(text, 'all').length;
}
/**
* Algorithm adapted from "6. Counting tokens for chat API calls" of
* https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
*
* An additional 2 tokens need to be added for metadata after all messages have been counted.
*
* @param {*} message
*/
getTokenCountForMessage(message) {
// Map each property of the message to the number of tokens it contains
const propertyTokenCounts = Object.entries(message).map(([key, value]) => {
// Count the number of tokens in the property value
const numTokens = this.getTokenCount(value);
// Subtract 1 token if the property key is 'name'
const adjustment = key === 'name' ? 1 : 0;
return numTokens - adjustment;
});
// Sum the number of tokens in all properties and add 4 for metadata
return propertyTokenCounts.reduce((a, b) => a + b, 4);
}
/**
* Iterate through messages, building an array based on the parentMessageId.
* Each message has an id and a parentMessageId. The parentMessageId is the id of the message that this message is a reply to.
* @param messages
* @param parentMessageId
* @returns {*[]} An array containing the messages in the order they should be displayed, starting with the root message.
*/
static getMessagesForConversation(messages, parentMessageId) {
const orderedMessages = [];
let currentMessageId = parentMessageId;
while (currentMessageId) {
// eslint-disable-next-line no-loop-func
const message = messages.find((m) => m.messageId === currentMessageId);
if (!message) {
break;
}
orderedMessages.unshift(message);
currentMessageId = message.parentMessageId;
}
if (orderedMessages.length === 0) {
return [];
}
return orderedMessages.map((msg) => ({
isCreatedByUser: msg.isCreatedByUser,
content: msg.text
}));
}
}
module.exports = GoogleAgent;
module.exports = GoogleClient;
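As with the other clients, a hypothetical construction might look like the sketch below; the credential values and model name are placeholders, not values taken from this repo:

```js
const GoogleClient = require('./GoogleClient');

const client = new GoogleClient(
  {
    client_email: 'svc@my-project.iam.gserviceaccount.com', // placeholder
    project_id: 'my-project', // placeholder
    private_key: process.env.GOOGLE_PRIVATE_KEY,
  },
  { modelOptions: { model: 'chat-bison' } }, // placeholder model name
);

// sendMessage is inherited from BaseClient; sendCompletion above performs the PaLM2 request
client
  .sendMessage('Hello!', { onProgress: (t) => process.stdout.write(t) })
  .then((res) => console.log('\n', res.text));
```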


@@ -0,0 +1,319 @@
const BaseClient = require('./BaseClient');
const ChatGPTClient = require('./ChatGPTClient');
const { encoding_for_model: encodingForModel, get_encoding: getEncoding } = require('@dqbd/tiktoken');
const { maxTokensMap, genAzureChatCompletion } = require('../../utils');
const tokenizersCache = {};
class OpenAIClient extends BaseClient {
constructor(apiKey, options = {}) {
super(apiKey, options);
this.ChatGPTClient = new ChatGPTClient();
this.buildPrompt = this.ChatGPTClient.buildPrompt.bind(this);
this.getCompletion = this.ChatGPTClient.getCompletion.bind(this);
this.sender = options.sender ?? 'ChatGPT';
this.contextStrategy = options.contextStrategy ? options.contextStrategy.toLowerCase() : 'discard';
this.shouldRefineContext = this.contextStrategy === 'refine';
this.azure = options.azure || false;
if (this.azure) {
this.azureEndpoint = genAzureChatCompletion(this.azure);
}
this.setOptions(options);
}
setOptions(options) {
if (this.options && !this.options.replaceOptions) {
this.options.modelOptions = {
...this.options.modelOptions,
...options.modelOptions,
};
delete options.modelOptions;
this.options = {
...this.options,
...options,
};
} else {
this.options = options;
}
if (this.options.openaiApiKey) {
this.apiKey = this.options.openaiApiKey;
}
const modelOptions = this.options.modelOptions || {};
if (!this.modelOptions) {
this.modelOptions = {
...modelOptions,
model: modelOptions.model || 'gpt-3.5-turbo',
temperature: typeof modelOptions.temperature === 'undefined' ? 0.8 : modelOptions.temperature,
top_p: typeof modelOptions.top_p === 'undefined' ? 1 : modelOptions.top_p,
presence_penalty: typeof modelOptions.presence_penalty === 'undefined' ? 1 : modelOptions.presence_penalty,
stop: modelOptions.stop,
};
}
this.isChatCompletion = this.options.reverseProxyUrl || this.options.localAI || this.modelOptions.model.startsWith('gpt-');
this.isChatGptModel = this.isChatCompletion;
if (this.modelOptions.model === 'text-davinci-003') {
this.isChatCompletion = false;
this.isChatGptModel = false;
}
const { isChatGptModel } = this;
this.isUnofficialChatGptModel = this.modelOptions.model.startsWith('text-chat') || this.modelOptions.model.startsWith('text-davinci-002-render');
this.maxContextTokens = maxTokensMap[this.modelOptions.model] ?? 4095; // 1 less than maximum
this.maxResponseTokens = this.modelOptions.max_tokens || 1024;
this.maxPromptTokens = this.options.maxPromptTokens || (this.maxContextTokens - this.maxResponseTokens);
if (this.maxPromptTokens + this.maxResponseTokens > this.maxContextTokens) {
throw new Error(`maxPromptTokens + max_tokens (${this.maxPromptTokens} + ${this.maxResponseTokens} = ${this.maxPromptTokens + this.maxResponseTokens}) must be less than or equal to maxContextTokens (${this.maxContextTokens})`);
}
this.userLabel = this.options.userLabel || 'User';
this.chatGptLabel = this.options.chatGptLabel || 'Assistant';
this.setupTokens();
this.setupTokenizer();
if (!this.modelOptions.stop) {
const stopTokens = [this.startToken];
if (this.endToken && this.endToken !== this.startToken) {
stopTokens.push(this.endToken);
}
stopTokens.push(`\n${this.userLabel}:`);
stopTokens.push('<|diff_marker|>');
this.modelOptions.stop = stopTokens;
}
if (this.options.reverseProxyUrl) {
this.completionsUrl = this.options.reverseProxyUrl;
} else if (isChatGptModel) {
this.completionsUrl = 'https://api.openai.com/v1/chat/completions';
} else {
this.completionsUrl = 'https://api.openai.com/v1/completions';
}
if (this.azureEndpoint) {
this.completionsUrl = this.azureEndpoint;
}
if (this.azureEndpoint && this.options.debug) {
console.debug(`Using Azure endpoint: ${this.azureEndpoint}`, this.azure);
}
return this;
}
setupTokens() {
if (this.isChatCompletion) {
this.startToken = '||>';
this.endToken = '';
} else if (this.isUnofficialChatGptModel) {
this.startToken = '<|im_start|>';
this.endToken = '<|im_end|>';
} else {
this.startToken = '||>';
this.endToken = '';
}
}
setupTokenizer() {
this.encoding = 'text-davinci-003';
if (this.isChatCompletion) {
this.encoding = 'cl100k_base';
this.gptEncoder = this.constructor.getTokenizer(this.encoding);
} else if (this.isUnofficialChatGptModel) {
this.gptEncoder = this.constructor.getTokenizer(this.encoding, true, {
'<|im_start|>': 100264,
'<|im_end|>': 100265,
});
} else {
try {
this.encoding = this.modelOptions.model;
this.gptEncoder = this.constructor.getTokenizer(this.modelOptions.model, true);
} catch {
this.gptEncoder = this.constructor.getTokenizer(this.encoding, true);
}
}
}
static getTokenizer(encoding, isModelName = false, extendSpecialTokens = {}) {
if (tokenizersCache[encoding]) {
return tokenizersCache[encoding];
}
let tokenizer;
if (isModelName) {
tokenizer = encodingForModel(encoding, extendSpecialTokens);
} else {
tokenizer = getEncoding(encoding, extendSpecialTokens);
}
tokenizersCache[encoding] = tokenizer;
return tokenizer;
}
freeAndResetEncoder() {
try {
if (!this.gptEncoder) {
return;
}
this.gptEncoder.free();
delete tokenizersCache[this.encoding];
delete tokenizersCache.count;
this.setupTokenizer();
} catch (error) {
console.log('freeAndResetEncoder error');
console.error(error);
}
}
getTokenCount(text) {
try {
if (tokenizersCache.count >= 25) {
if (this.options.debug) {
console.debug('freeAndResetEncoder: reached 25 encodings, resetting...');
}
this.freeAndResetEncoder();
}
tokenizersCache.count = (tokenizersCache.count || 0) + 1;
return this.gptEncoder.encode(text, 'all').length;
} catch (error) {
this.freeAndResetEncoder();
return this.gptEncoder.encode(text, 'all').length;
}
}
getSaveOptions() {
return {
chatGptLabel: this.options.chatGptLabel,
promptPrefix: this.options.promptPrefix,
...this.modelOptions
};
}
getBuildMessagesOptions(opts) {
return {
isChatCompletion: this.isChatCompletion,
promptPrefix: opts.promptPrefix,
abortController: opts.abortController,
};
}
async buildMessages(messages, parentMessageId, { isChatCompletion = false, promptPrefix = null }) {
if (!isChatCompletion) {
return await this.buildPrompt(messages, parentMessageId, { isChatGptModel: isChatCompletion, promptPrefix });
}
let payload;
let instructions;
let tokenCountMap;
let promptTokens;
let orderedMessages = this.constructor.getMessagesForConversation(messages, parentMessageId);
promptPrefix = (promptPrefix || this.options.promptPrefix || '').trim();
if (promptPrefix) {
promptPrefix = `Instructions:\n${promptPrefix}`;
instructions = {
role: 'system',
name: 'instructions',
content: promptPrefix
};
if (this.contextStrategy) {
instructions.tokenCount = this.getTokenCountForMessage(instructions);
}
}
const formattedMessages = orderedMessages.map((message) => {
let { role: _role, sender, text } = message;
const role = _role ?? sender;
const content = text ?? '';
const formattedMessage = {
role: role?.toLowerCase() === 'user' ? 'user' : 'assistant',
content,
};
if (this.options?.name && formattedMessage.role === 'user') {
formattedMessage.name = this.options.name;
}
if (this.contextStrategy) {
formattedMessage.tokenCount = message.tokenCount ?? this.getTokenCountForMessage(formattedMessage);
}
return formattedMessage;
});
// TODO: need to handle interleaving instructions better
if (this.contextStrategy) {
({ payload, tokenCountMap, promptTokens, messages } =
await this.handleContextStrategy({ instructions, orderedMessages, formattedMessages }));
}
const result = {
prompt: payload,
promptTokens,
messages,
};
if (tokenCountMap) {
tokenCountMap.instructions = instructions?.tokenCount;
result.tokenCountMap = tokenCountMap;
}
return result;
}
async sendCompletion(payload, opts = {}) {
let reply = '';
let result = null;
if (typeof opts.onProgress === 'function') {
await this.getCompletion(
payload,
(progressMessage) => {
if (progressMessage === '[DONE]') {
return;
}
const token =
this.isChatCompletion ? progressMessage.choices?.[0]?.delta?.content : progressMessage.choices?.[0]?.text;
// first event's delta content is always undefined
if (!token) {
return;
}
if (this.options.debug) {
// console.debug(token);
}
if (token === this.endToken) {
return;
}
opts.onProgress(token);
reply += token;
},
opts.abortController || new AbortController(),
);
} else {
result = await this.getCompletion(
payload,
null,
opts.abortController || new AbortController(),
);
if (this.options.debug) {
console.debug(JSON.stringify(result));
}
if (this.isChatCompletion) {
reply = result.choices[0].message.content;
} else {
reply = result.choices[0].text.replace(this.endToken, '');
}
}
return reply.trim();
}
getTokenCountForResponse(response) {
return this.getTokenCountForMessage({
role: 'assistant',
content: response.text,
});
}
}
module.exports = OpenAIClient;
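
The tokenizer cache above trades a little memory for speed: encoders are cached per encoding name, and getTokenCount frees and rebuilds the encoder after 25 encodings to bound the tokenizer's native memory growth. A rough sketch of that behavior, assuming this file is saved as ./OpenAIClient and using a placeholder key:

const OpenAIClient = require('./OpenAIClient');
// 'gpt-3.5-turbo' selects the cl100k_base encoding in setupTokenizer().
const client = new OpenAIClient('sk-fake-key', {
  modelOptions: { model: 'gpt-3.5-turbo' },
});
// Each call encodes with the cached tiktoken encoder; the 26th call sees the
// counter at 25 and triggers freeAndResetEncoder() before encoding again.
for (let i = 0; i < 30; i++) {
  client.getTokenCount('The quick brown fox jumps over the lazy dog');
}
console.log(client.getTokenCount('hello world')); // 2 tokens in cl100k_base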


@@ -0,0 +1,540 @@
const OpenAIClient = require('./OpenAIClient');
const { ChatOpenAI } = require('langchain/chat_models/openai');
const { CallbackManager } = require('langchain/callbacks');
const { initializeCustomAgent, initializeFunctionsAgent } = require('./agents/');
const { loadTools } = require('./tools/util');
const { SelfReflectionTool } = require('./tools/');
const { HumanChatMessage, AIChatMessage } = require('langchain/schema');
const {
instructions,
imageInstructions,
errorInstructions,
} = require('./prompts/instructions');
class PluginsClient extends OpenAIClient {
constructor(apiKey, options = {}) {
super(apiKey, options);
this.sender = options.sender ?? 'Assistant';
this.tools = [];
this.actions = [];
this.openAIApiKey = apiKey;
this.setOptions(options);
this.executor = null;
}
getActions(input = null) {
let output = 'Internal thoughts & actions taken:\n"';
let actions = input || this.actions;
if (actions[0]?.action && this.functionsAgent) {
actions = actions.map((step) => ({
log: `Action: ${step.action?.tool || ''}\nInput: ${JSON.stringify(step.action?.toolInput) || ''}\nObservation: ${step.observation}`
}));
} else if (actions[0]?.action) {
actions = actions.map((step) => ({
log: `${step.action.log}\nObservation: ${step.observation}`
}));
}
actions.forEach((actionObj, index) => {
output += `${actionObj.log}`;
if (index < actions.length - 1) {
output += '\n';
}
});
return output + '"';
}
buildErrorInput(message, errorMessage) {
const log = errorMessage.includes('Could not parse LLM output:')
? `A formatting error occurred with your response to the human's last message. You didn't follow the formatting instructions. Remember to ${instructions}`
: `You encountered an error while replying to the human's last message. Attempt to answer again or admit an answer cannot be given.\nError: ${errorMessage}`;
return `
${log}
${this.getActions()}
Human's last message: ${message}
`;
}
buildPromptPrefix(result, message) {
if ((result.output && result.output.includes('N/A')) || result.output === undefined) {
return null;
}
if (
result?.intermediateSteps?.length === 1 &&
result?.intermediateSteps[0]?.action?.toolInput === 'N/A'
) {
return null;
}
const internalActions =
result?.intermediateSteps?.length > 0
? this.getActions(result.intermediateSteps)
: 'Internal Actions Taken: None';
const toolBasedInstructions = internalActions.toLowerCase().includes('image')
? imageInstructions
: '';
const errorMessage = result.errorMessage ? `${errorInstructions} ${result.errorMessage}\n` : '';
const preliminaryAnswer =
result.output?.length > 0 ? `Preliminary Answer: "${result.output.trim()}"` : '';
const prefix = preliminaryAnswer
? 'review and improve the answer you generated using plugins in response to the User Message below. The user hasn\'t seen your answer or thoughts yet.'
: 'respond to the User Message below based on your preliminary thoughts & actions.';
return `As a helpful AI Assistant, ${prefix}${errorMessage}\n${internalActions}
${preliminaryAnswer}
Reply conversationally to the User based on your ${
preliminaryAnswer ? 'preliminary answer, ' : ''
}internal actions, thoughts, and observations, making improvements wherever possible, but do not modify URLs.
${
preliminaryAnswer
? ''
: '\nIf there is an incomplete thought or action, you are expected to complete it in your response now.\n'
}You must cite sources if you are using any web links. ${toolBasedInstructions}
Only respond with your conversational reply to the following User Message:
"${message}"`;
}
setOptions(options) {
this.agentOptions = options.agentOptions;
this.functionsAgent = this.agentOptions?.agent === 'functions';
this.agentIsGpt3 = this.agentOptions?.model.startsWith('gpt-3');
if (this.functionsAgent && this.agentOptions.model) {
this.agentOptions.model = this.getFunctionModelName(this.agentOptions.model);
}
super.setOptions(options);
this.isGpt3 = this.modelOptions.model.startsWith('gpt-3');
if (this.reverseProxyUrl) {
this.langchainProxy = this.reverseProxyUrl.match(/.*v1/)[0];
}
}
getSaveOptions() {
return {
chatGptLabel: this.options.chatGptLabel,
promptPrefix: this.options.promptPrefix,
...this.modelOptions,
agentOptions: this.agentOptions,
};
}
saveLatestAction(action) {
this.actions.push(action);
}
getFunctionModelName(input) {
const prefixMap = {
'gpt-4': 'gpt-4-0613',
'gpt-4-32k': 'gpt-4-32k-0613',
'gpt-3.5-turbo': 'gpt-3.5-turbo-0613'
};
const prefix = Object.keys(prefixMap).find(key => input.startsWith(key));
return prefix ? prefixMap[prefix] : 'gpt-3.5-turbo-0613';
}
getBuildMessagesOptions(opts) {
return {
isChatCompletion: true,
promptPrefix: opts.promptPrefix,
abortController: opts.abortController,
};
}
createLLM(modelOptions, configOptions) {
let credentials = { openAIApiKey: this.openAIApiKey };
let configuration = {
apiKey: this.openAIApiKey,
};
if (this.azure) {
credentials = {};
configuration = {};
}
if (this.options.debug) {
console.debug('createLLM: configOptions');
console.debug(configOptions);
}
return new ChatOpenAI({ credentials, configuration, ...modelOptions }, configOptions);
}
async initialize({ user, message, onAgentAction, onChainEnd, signal }) {
const modelOptions = {
modelName: this.agentOptions.model,
temperature: this.agentOptions.temperature
};
const configOptions = {};
if (this.langchainProxy) {
configOptions.basePath = this.langchainProxy;
}
const model = this.createLLM(modelOptions, configOptions);
if (this.options.debug) {
console.debug(`<-----Agent Model: ${model.modelName} | Temp: ${model.temperature}----->`);
}
this.availableTools = await loadTools({
user,
model,
tools: this.options.tools,
functions: this.functionsAgent,
options: {
openAIApiKey: this.openAIApiKey
}
});
// load tools
for (const tool of this.options.tools) {
const validTool = this.availableTools[tool];
if (tool === 'plugins') {
const plugins = await validTool();
this.tools = [...this.tools, ...plugins];
} else if (validTool) {
this.tools.push(await validTool());
}
}
if (this.options.debug) {
console.debug('Requested Tools');
console.debug(this.options.tools);
console.debug('Loaded Tools');
console.debug(this.tools.map((tool) => tool.name));
}
if (this.tools.length > 0 && !this.functionsAgent) {
this.tools.push(new SelfReflectionTool({ message, isGpt3: false }));
} else if (this.tools.length === 0) {
return;
}
const handleAction = (action, callback = null) => {
this.saveLatestAction(action);
if (this.options.debug) {
console.debug('Latest Agent Action ', this.actions[this.actions.length - 1]);
}
if (typeof callback === 'function') {
callback(action);
}
};
// Map Messages to Langchain format
const pastMessages = this.currentMessages.slice(0, -1).map(
msg => msg?.isCreatedByUser || msg?.role?.toLowerCase() === 'user'
? new HumanChatMessage(msg.text)
: new AIChatMessage(msg.text));
// initialize agent
const initializer = this.functionsAgent ? initializeFunctionsAgent : initializeCustomAgent;
this.executor = await initializer({
model,
signal,
pastMessages,
tools: this.tools,
currentDateString: this.currentDateString,
verbose: this.options.debug,
returnIntermediateSteps: true,
callbackManager: CallbackManager.fromHandlers({
async handleAgentAction(action) {
handleAction(action, onAgentAction);
},
async handleChainEnd(action) {
if (typeof onChainEnd === 'function') {
onChainEnd(action);
}
}
})
});
if (this.options.debug) {
console.debug('Loaded agent.');
}
}
async executorCall(message, signal) {
let errorMessage = '';
const maxAttempts = 1;
for (let attempts = 1; attempts <= maxAttempts; attempts++) {
const errorInput = this.buildErrorInput(message, errorMessage);
const input = attempts > 1 ? errorInput : message;
if (this.options.debug) {
console.debug(`Attempt ${attempts} of ${maxAttempts}`);
}
if (this.options.debug && errorMessage.length > 0) {
console.debug('Caught error, input:', input);
}
try {
this.result = await this.executor.call({ input, signal });
break; // Exit the loop if the function call is successful
} catch (err) {
console.error(err);
errorMessage = err.message;
if (attempts === maxAttempts) {
this.result.output = `Encountered an error while attempting to respond. Error: ${err.message}`;
this.result.intermediateSteps = this.actions;
this.result.errorMessage = errorMessage;
break;
}
}
}
}
addImages(intermediateSteps, responseMessage) {
if (!intermediateSteps || !responseMessage) {
return;
}
intermediateSteps.forEach(step => {
const { observation } = step;
if (!observation || !observation.includes('![')) {
return;
}
if (!responseMessage.text.includes(observation)) {
responseMessage.text += '\n' + observation;
if (this.options.debug) {
console.debug('added image from intermediateSteps');
}
}
});
}
async handleResponseMessage(responseMessage, saveOptions, user) {
responseMessage.tokenCount = this.getTokenCountForResponse(responseMessage);
responseMessage.completionTokens = responseMessage.tokenCount;
await this.saveMessageToDatabase(responseMessage, saveOptions, user);
delete responseMessage.tokenCount;
return { ...responseMessage, ...this.result };
}
async sendMessage(message, opts = {}) {
const completionMode = this.options.tools.length === 0;
if (completionMode) {
this.setOptions(opts);
return super.sendMessage(message, opts);
}
console.log('Plugins sendMessage', message, opts);
const {
user,
conversationId,
responseMessageId,
saveOptions,
userMessage,
onAgentAction,
onChainEnd,
} = await this.handleStartMethods(message, opts);
this.currentMessages.push(userMessage);
let { prompt: payload, tokenCountMap, promptTokens, messages } = await this.buildMessages(
this.currentMessages,
userMessage.messageId,
this.getBuildMessagesOptions({
promptPrefix: null,
abortController: this.abortController,
}),
);
if (tokenCountMap) {
console.dir(tokenCountMap, { depth: null });
if (tokenCountMap[userMessage.messageId]) {
userMessage.tokenCount = tokenCountMap[userMessage.messageId];
console.log('userMessage.tokenCount', userMessage.tokenCount);
}
payload = payload.map((message) => {
const messageWithoutTokenCount = message;
delete messageWithoutTokenCount.tokenCount;
return messageWithoutTokenCount;
});
this.handleTokenCountMap(tokenCountMap);
}
this.result = {};
if (messages) {
this.currentMessages = messages;
}
await this.saveMessageToDatabase(userMessage, saveOptions, user);
const responseMessage = {
messageId: responseMessageId,
conversationId,
parentMessageId: userMessage.messageId,
isCreatedByUser: false,
model: this.modelOptions.model,
sender: this.sender,
promptTokens,
};
await this.initialize({
user,
message,
onAgentAction,
onChainEnd,
signal: this.abortController.signal
});
await this.executorCall(message, this.abortController.signal);
// If message was aborted mid-generation
if (this.result?.errorMessage?.length > 0 && this.result?.errorMessage?.includes('cancel')) {
responseMessage.text = 'Cancelled.';
return await this.handleResponseMessage(responseMessage, saveOptions, user);
}
if (this.agentOptions.skipCompletion && this.result.output) {
responseMessage.text = this.result.output;
this.addImages(this.result.intermediateSteps, responseMessage);
await this.generateTextStream(this.result.output, opts.onProgress);
return await this.handleResponseMessage(responseMessage, saveOptions, user);
}
if (this.options.debug) {
console.debug('Plugins completion phase: this.result');
console.debug(this.result);
}
const promptPrefix = this.buildPromptPrefix(this.result, message);
if (this.options.debug) {
console.debug('Plugins: promptPrefix');
console.debug(promptPrefix);
}
payload = await this.buildCompletionPrompt({
messages: this.currentMessages,
promptPrefix,
});
if (this.options.debug) {
console.debug('buildCompletionPrompt Payload');
console.debug(payload);
}
responseMessage.text = await this.sendCompletion(payload, opts);
return await this.handleResponseMessage(responseMessage, saveOptions, user);
}
async buildCompletionPrompt({ messages, promptPrefix: _promptPrefix }) {
if (this.options.debug) {
console.debug('buildCompletionPrompt messages', messages);
}
const orderedMessages = messages;
let promptPrefix = _promptPrefix.trim();
// If the prompt prefix doesn't end with the end token, add it.
if (!promptPrefix.endsWith(`${this.endToken}`)) {
promptPrefix = `${promptPrefix.trim()}${this.endToken}\n\n`;
}
promptPrefix = `${this.startToken}Instructions:\n${promptPrefix}`;
const promptSuffix = `${this.startToken}${this.chatGptLabel ?? 'Assistant'}:\n`;
const instructionsPayload = {
role: 'system',
name: 'instructions',
content: promptPrefix
};
const messagePayload = {
role: 'system',
content: promptSuffix
};
if (this.isGpt3) {
instructionsPayload.role = 'user';
messagePayload.role = 'user';
instructionsPayload.content += `\n${promptSuffix}`;
}
// testing if this works with browser endpoint
if (!this.isGpt3 && this.reverseProxyUrl) {
instructionsPayload.role = 'user';
}
let currentTokenCount =
this.getTokenCountForMessage(instructionsPayload) +
this.getTokenCountForMessage(messagePayload);
let promptBody = '';
const maxTokenCount = this.maxPromptTokens;
// Iterate backwards through the messages, adding them to the prompt until we reach the max token count.
// Do this within a recursive async function so that it doesn't block the event loop for too long.
const buildPromptBody = async () => {
if (currentTokenCount < maxTokenCount && orderedMessages.length > 0) {
const message = orderedMessages.pop();
const isCreatedByUser = message.isCreatedByUser || message.role?.toLowerCase() === 'user';
const roleLabel = isCreatedByUser ? this.userLabel : this.chatGptLabel;
let messageString = `${this.startToken}${roleLabel}:\n${message.text}${this.endToken}\n`;
let newPromptBody = `${messageString}${promptBody}`;
const tokenCountForMessage = this.getTokenCount(messageString);
const newTokenCount = currentTokenCount + tokenCountForMessage;
if (newTokenCount > maxTokenCount) {
if (promptBody) {
// This message would put us over the token limit, so don't add it.
return false;
}
// This is the first message, so we can't add it. Just throw an error.
throw new Error(
`Prompt is too long. Max token count is ${maxTokenCount}, but prompt is ${newTokenCount} tokens long.`
);
}
promptBody = newPromptBody;
currentTokenCount = newTokenCount;
// wait for next tick to avoid blocking the event loop
await new Promise((resolve) => setTimeout(resolve, 0));
return buildPromptBody();
}
return true;
};
await buildPromptBody();
const prompt = promptBody;
messagePayload.content = prompt;
// Add 2 tokens for metadata after all messages have been counted.
currentTokenCount += 2;
if (this.isGpt3 && messagePayload.content.length > 0) {
const context = 'Chat History:\n';
messagePayload.content = `${context}${prompt}`;
currentTokenCount += this.getTokenCount(context);
}
// Use up to `this.maxContextTokens` tokens (prompt + response), but try to leave `this.maxTokens` tokens for the response.
this.modelOptions.max_tokens = Math.min(
this.maxContextTokens - currentTokenCount,
this.maxResponseTokens
);
if (this.isGpt3) {
messagePayload.content += promptSuffix;
return [instructionsPayload, messagePayload];
}
const result = [messagePayload, instructionsPayload];
if (this.functionsAgent && !this.isGpt3) {
result[1].content = `${result[1].content}\n${this.startToken}${this.chatGptLabel}:\nSure thing! Here is the output you requested:\n`;
}
return result.filter((message) => message.content.length > 0);
}
}
module.exports = PluginsClient;
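
When the functions agent is selected, setOptions pins agentOptions.model to a function-calling snapshot via getFunctionModelName. A minimal sketch, assuming this file is saved as ./PluginsClient; the key and options are placeholders:

const PluginsClient = require('./PluginsClient');
const client = new PluginsClient('sk-fake-key', {
  tools: [],
  modelOptions: { model: 'gpt-3.5-turbo' },
  agentOptions: { agent: 'functions', model: 'gpt-4' },
});
// 'gpt-4' matches the 'gpt-4' prefix, so the agent model is pinned to the
// 0613 snapshot that supports function calling:
console.log(client.agentOptions.model); // 'gpt-4-0613'
// Unrecognized models fall back to 'gpt-3.5-turbo-0613':
console.log(client.getFunctionModelName('my-custom-model')); // 'gpt-3.5-turbo-0613'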


@@ -51,6 +51,4 @@ Query: {input}
return AgentExecutor.fromAgentAndTools({ agent, tools, memory, ...rest });
};
-module.exports = {
-initializeCustomAgent
-};
+module.exports = initializeCustomAgent;


@@ -102,8 +102,8 @@ class CustomOutputParser extends ZeroShotAgentOutputParser {
match
);
selectedTool = this.getValidTool(selectedTool);
}
}
if (match && !selectedTool) {
console.log(
'\n\n<----------------------HIT INVALID TOOL PARSING ERROR---------------------->\n\n',


@@ -0,0 +1,120 @@
const { Agent } = require('langchain/agents');
const { LLMChain } = require('langchain/chains');
const { FunctionChatMessage, AIChatMessage } = require('langchain/schema');
const {
ChatPromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate
} = require('langchain/prompts');
const PREFIX = `You are a helpful AI assistant.`;
function parseOutput(message) {
if (message.additional_kwargs.function_call) {
const function_call = message.additional_kwargs.function_call;
return {
tool: function_call.name,
toolInput: function_call.arguments ? JSON.parse(function_call.arguments) : {},
log: message.text
};
} else {
return { returnValues: { output: message.text }, log: message.text };
}
}
class FunctionsAgent extends Agent {
constructor(input) {
super({ ...input, outputParser: undefined });
this.tools = input.tools;
}
lc_namespace = ['langchain', 'agents', 'openai'];
_agentType() {
return 'openai-functions';
}
observationPrefix() {
return 'Observation: ';
}
llmPrefix() {
return 'Thought:';
}
_stop() {
return ['Observation:'];
}
static createPrompt(_tools, fields) {
const { prefix = PREFIX, currentDateString } = fields || {};
return ChatPromptTemplate.fromPromptMessages([
SystemMessagePromptTemplate.fromTemplate(`Date: ${currentDateString}\n${prefix}`),
new MessagesPlaceholder('chat_history'),
HumanMessagePromptTemplate.fromTemplate(`Query: {input}`),
new MessagesPlaceholder('agent_scratchpad'),
]);
}
static fromLLMAndTools(llm, tools, args) {
FunctionsAgent.validateTools(tools);
const prompt = FunctionsAgent.createPrompt(tools, args);
const chain = new LLMChain({
prompt,
llm,
callbacks: args?.callbacks
});
return new FunctionsAgent({
llmChain: chain,
allowedTools: tools.map((t) => t.name),
tools
});
}
async constructScratchPad(steps) {
return steps.flatMap(({ action, observation }) => [
new AIChatMessage('', {
function_call: {
name: action.tool,
arguments: JSON.stringify(action.toolInput)
}
}),
new FunctionChatMessage(observation, action.tool)
]);
}
async plan(steps, inputs, callbackManager) {
// Add scratchpad and stop to inputs
const thoughts = await this.constructScratchPad(steps);
const newInputs = Object.assign({}, inputs, { agent_scratchpad: thoughts });
if (this._stop().length !== 0) {
newInputs.stop = this._stop();
}
// Split inputs between prompt and llm
const llm = this.llmChain.llm;
const valuesForPrompt = Object.assign({}, newInputs);
const valuesForLLM = {
tools: this.tools
};
for (let i = 0; i < this.llmChain.llm.callKeys.length; i++) {
const key = this.llmChain.llm.callKeys[i];
if (key in inputs) {
valuesForLLM[key] = inputs[key];
delete valuesForPrompt[key];
}
}
const promptValue = await this.llmChain.prompt.formatPromptValue(valuesForPrompt);
const message = await llm.predictMessages(
promptValue.toChatMessages(),
valuesForLLM,
callbackManager
);
console.log('message', message);
return parseOutput(message);
}
}
module.exports = FunctionsAgent;
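
parseOutput is the pivot of this agent: a message carrying additional_kwargs.function_call becomes a tool invocation, anything else becomes the final answer. A sketch with hand-built message objects (parseOutput is not exported from this file, so this assumes direct access to it):

// An OpenAI function-call response becomes a tool invocation:
parseOutput({
  text: '',
  additional_kwargs: {
    function_call: { name: 'calculator', arguments: '{"input":"2+2"}' },
  },
});
// => { tool: 'calculator', toolInput: { input: '2+2' }, log: '' }

// A plain message becomes the agent's final answer:
parseOutput({ text: 'The answer is 4.', additional_kwargs: {} });
// => { returnValues: { output: 'The answer is 4.' }, log: 'The answer is 4.' }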


@@ -0,0 +1,35 @@
const { initializeAgentExecutorWithOptions } = require('langchain/agents');
const { BufferMemory, ChatMessageHistory } = require('langchain/memory');
const initializeFunctionsAgent = async ({
tools,
model,
pastMessages,
// currentDateString,
...rest
}) => {
const memory = new BufferMemory({
chatHistory: new ChatMessageHistory(pastMessages),
memoryKey: 'chat_history',
humanPrefix: 'User',
aiPrefix: 'Assistant',
inputKey: 'input',
outputKey: 'output',
returnMessages: true,
});
return await initializeAgentExecutorWithOptions(
tools,
model,
{
agentType: 'openai-functions',
memory,
...rest,
}
);
};
module.exports = initializeFunctionsAgent;
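
A hedged usage sketch of the initializer above: prior turns are replayed through BufferMemory under the chat_history key, and the executor comes back ready for call(). The ChatOpenAI and Calculator import paths are assumptions for the langchain version used here:

const { ChatOpenAI } = require('langchain/chat_models/openai');
const { Calculator } = require('langchain/tools/calculator');
const { HumanChatMessage, AIChatMessage } = require('langchain/schema');
const initializeFunctionsAgent = require('./initializeFunctionsAgent');

(async () => {
  const executor = await initializeFunctionsAgent({
    model: new ChatOpenAI({ modelName: 'gpt-3.5-turbo-0613', temperature: 0 }),
    tools: [new Calculator()],
    pastMessages: [
      new HumanChatMessage('My name is Dan.'),
      new AIChatMessage('Nice to meet you, Dan!'),
    ],
    returnIntermediateSteps: true,
  });
  const result = await executor.call({ input: 'What is 2 + 2, and what is my name?' });
  console.log(result.output);
})();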


@@ -0,0 +1,7 @@
const initializeCustomAgent = require('./CustomAgent/initializeCustomAgent');
const initializeFunctionsAgent = require('./Functions/initializeFunctionsAgent');
module.exports = {
initializeCustomAgent,
initializeFunctionsAgent
};


@@ -1,94 +0,0 @@
require('dotenv').config();
const { KeyvFile } = require('keyv-file');
const { genAzureChatCompletion } = require('../../utils/genAzureEndpoints');
const tiktoken = require('@dqbd/tiktoken');
const tiktokenModels = require('../../utils/tiktokenModels');
const encoding_for_model = tiktoken.encoding_for_model;
const askClient = async ({
text,
parentMessageId,
conversationId,
model,
oaiApiKey,
chatGptLabel,
promptPrefix,
temperature,
top_p,
presence_penalty,
frequency_penalty,
onProgress,
abortController,
userId
}) => {
const { ChatGPTClient } = await import('@waylaidwanderer/chatgpt-api');
const store = {
store: new KeyvFile({ filename: './data/cache.json' })
};
const azure = process.env.AZURE_OPENAI_API_KEY ? true : false;
let promptText = 'You are ChatGPT, a large language model trained by OpenAI.';
if (promptPrefix) {
promptText = promptPrefix;
}
const maxContextTokens = model === 'gpt-4-32k' ? 32767 : model.startsWith('gpt-4') ? 8191 : 4095; // 1 less than maximum
const clientOptions = {
reverseProxyUrl: process.env.OPENAI_REVERSE_PROXY || null,
azure,
maxContextTokens,
modelOptions: {
model,
temperature,
top_p,
presence_penalty,
frequency_penalty
},
chatGptLabel,
promptPrefix,
proxy: process.env.PROXY || null
// debug: true
};
let apiKey = oaiApiKey ? oaiApiKey : process.env.OPENAI_API_KEY || null;
if (azure) {
apiKey = oaiApiKey ? oaiApiKey : process.env.AZURE_OPENAI_API_KEY || null;
clientOptions.reverseProxyUrl = genAzureChatCompletion({
azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION
});
}
const client = new ChatGPTClient(apiKey, clientOptions, store);
const options = {
onProgress,
abortController,
...(parentMessageId && conversationId ? { parentMessageId, conversationId } : {})
};
let usage = {};
let enc = null;
try {
enc = encoding_for_model(tiktokenModels.has(model) ? model : 'gpt-3.5-turbo');
usage.prompt_tokens = (enc.encode(promptText)).length + (enc.encode(text)).length;
} catch (e) {
console.log('Error encoding prompt text', e);
}
const res = await client.sendMessage(text, { ...options, userId });
try {
usage.completion_tokens = (enc.encode(res.response)).length;
enc.free();
usage.total_tokens = usage.prompt_tokens + usage.completion_tokens;
res.usage = usage;
} catch (e) {
console.log('Error encoding response text', e);
}
return res;
};
module.exports = { askClient };

api/app/clients/index.js

@@ -0,0 +1,15 @@
const ChatGPTClient = require('./ChatGPTClient');
const OpenAIClient = require('./OpenAIClient');
const PluginsClient = require('./PluginsClient');
const GoogleClient = require('./GoogleClient');
const TextStream = require('./TextStream');
const toolUtils = require('./tools/util');
module.exports = {
ChatGPTClient,
OpenAIClient,
PluginsClient,
GoogleClient,
TextStream,
...toolUtils
};


@@ -0,0 +1,24 @@
const { PromptTemplate } = require('langchain/prompts');
const refinePromptTemplate = `Your job is to produce a final summary of the following conversation.
We have provided an existing summary up to a certain point: "{existing_answer}"
We have the opportunity to refine the existing summary
(only if needed) with some more context below.
------------
"{text}"
------------
Given the new context, refine the original summary of the conversation.
Do note who is speaking in the conversation to give proper context.
If the context isn't useful, return the original summary.
REFINED CONVERSATION SUMMARY:`;
const refinePrompt = new PromptTemplate({
template: refinePromptTemplate,
inputVariables: ["existing_answer", "text"],
});
module.exports = {
refinePrompt,
};
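
The refine prompt is a standard langchain PromptTemplate, so rendering it is a single (async) format call. A small sketch with hypothetical values, assuming the file is saved as ./refinePrompt:

const { refinePrompt } = require('./refinePrompt');

(async () => {
  const rendered = await refinePrompt.format({
    existing_answer: 'The user asked for a pasta recipe; the assistant listed ingredients.',
    text: 'User: How long should I boil the spaghetti?\nAssistant: About 9 minutes for al dente.',
  });
  console.log(rendered); // the full refine instruction with both variables substituted
})();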


@@ -0,0 +1,371 @@
const { initializeFakeClient } = require('./FakeClient');
jest.mock('../../../lib/db/connectDb');
jest.mock('../../../models', () => {
return function () {
return {
save: jest.fn(),
deleteConvos: jest.fn(),
getConvo: jest.fn(),
getMessages: jest.fn(),
saveMessage: jest.fn(),
updateMessage: jest.fn(),
saveConvo: jest.fn()
};
};
});
jest.mock('langchain/text_splitter', () => {
return {
RecursiveCharacterTextSplitter: jest.fn().mockImplementation(() => {
return { createDocuments: jest.fn().mockResolvedValue([]) };
}),
};
});
jest.mock('langchain/chat_models/openai', () => {
return {
ChatOpenAI: jest.fn().mockImplementation(() => {
return {};
}),
};
});
jest.mock('langchain/chains', () => {
return {
loadSummarizationChain: jest.fn().mockReturnValue({
call: jest.fn().mockResolvedValue({ output_text: 'Refined answer' }),
}),
};
});
let parentMessageId;
let conversationId;
const fakeMessages = [];
const userMessage = 'Hello, ChatGPT!';
const apiKey = 'fake-api-key';
describe('BaseClient', () => {
let TestClient;
const options = {
// debug: true,
modelOptions: {
model: 'gpt-3.5-turbo',
temperature: 0,
}
};
beforeEach(() => {
TestClient = initializeFakeClient(apiKey, options, fakeMessages);
});
test('returns the input messages without instructions when addInstructions() is called with empty instructions', () => {
const messages = [
{ content: 'Hello' },
{ content: 'How are you?' },
{ content: 'Goodbye' },
];
const instructions = '';
const result = TestClient.addInstructions(messages, instructions);
expect(result).toEqual(messages);
});
test('returns the input messages with instructions properly added when addInstructions() is called with non-empty instructions', () => {
const messages = [
{ content: 'Hello' },
{ content: 'How are you?' },
{ content: 'Goodbye' },
];
const instructions = { content: 'Please respond to the question.' };
const result = TestClient.addInstructions(messages, instructions);
const expected = [
{ content: 'Hello' },
{ content: 'How are you?' },
{ content: 'Please respond to the question.' },
{ content: 'Goodbye' },
];
expect(result).toEqual(expected);
});
test('concats messages correctly in concatenateMessages()', () => {
const messages = [
{ name: 'User', content: 'Hello' },
{ name: 'Assistant', content: 'How can I help you?' },
{ name: 'User', content: 'I have a question.' },
];
const result = TestClient.concatenateMessages(messages);
const expected = `User:\nHello\n\nAssistant:\nHow can I help you?\n\nUser:\nI have a question.\n\n`;
expect(result).toBe(expected);
});
test('refines messages correctly in refineMessages()', async () => {
const messagesToRefine = [
{ role: 'user', content: 'Hello', tokenCount: 10 },
{ role: 'assistant', content: 'How can I help you?', tokenCount: 20 }
];
const remainingContextTokens = 100;
const expectedRefinedMessage = {
role: 'assistant',
content: 'Refined answer',
tokenCount: 14 // 'Refined answer'.length
};
const result = await TestClient.refineMessages(messagesToRefine, remainingContextTokens);
expect(result).toEqual(expectedRefinedMessage);
});
test('gets messages within token limit (under limit) correctly in getMessagesWithinTokenLimit()', async () => {
TestClient.maxContextTokens = 100;
TestClient.shouldRefineContext = true;
TestClient.refineMessages = jest.fn().mockResolvedValue({
role: 'assistant',
content: 'Refined answer',
tokenCount: 30
});
const messages = [
{ role: 'user', content: 'Hello', tokenCount: 5 },
{ role: 'assistant', content: 'How can I help you?', tokenCount: 19 },
{ role: 'user', content: 'I have a question.', tokenCount: 18 },
];
const expectedContext = [
{ role: 'user', content: 'Hello', tokenCount: 5 }, // 'Hello'.length
{ role: 'assistant', content: 'How can I help you?', tokenCount: 19 },
{ role: 'user', content: 'I have a question.', tokenCount: 18 },
];
const expectedRemainingContextTokens = 58; // 100 - 5 - 19 - 18
const expectedMessagesToRefine = [];
const result = await TestClient.getMessagesWithinTokenLimit(messages);
expect(result.context).toEqual(expectedContext);
expect(result.remainingContextTokens).toBe(expectedRemainingContextTokens);
expect(result.messagesToRefine).toEqual(expectedMessagesToRefine);
});
test('gets messages within token limit (over limit) correctly in getMessagesWithinTokenLimit()', async () => {
TestClient.maxContextTokens = 50; // Set a lower limit
TestClient.shouldRefineContext = true;
TestClient.refineMessages = jest.fn().mockResolvedValue({
role: 'assistant',
content: 'Refined answer',
tokenCount: 4
});
const messages = [
{ role: 'user', content: 'I need a coffee, stat!', tokenCount: 30 },
{ role: 'assistant', content: 'Sure, I can help with that.', tokenCount: 30 },
{ role: 'user', content: 'Hello', tokenCount: 5 },
{ role: 'assistant', content: 'How can I help you?', tokenCount: 19 },
{ role: 'user', content: 'I have a question.', tokenCount: 18 },
];
const expectedContext = [
{ role: 'user', content: 'Hello', tokenCount: 5 },
{ role: 'assistant', content: 'How can I help you?', tokenCount: 19 },
{ role: 'user', content: 'I have a question.', tokenCount: 18 },
];
const expectedRemainingContextTokens = 8; // 50 - 18 - 19 - 5
const expectedMessagesToRefine = [
{ role: 'user', content: 'I need a coffee, stat!', tokenCount: 30 },
{ role: 'assistant', content: 'Sure, I can help with that.', tokenCount: 30 },
];
const result = await TestClient.getMessagesWithinTokenLimit(messages);
expect(result.context).toEqual(expectedContext);
expect(result.remainingContextTokens).toBe(expectedRemainingContextTokens);
expect(result.messagesToRefine).toEqual(expectedMessagesToRefine);
});
test('handles context strategy correctly in handleContextStrategy()', async () => {
TestClient.addInstructions = jest.fn().mockReturnValue([
{ content: 'Hello' },
{ content: 'How can I help you?' },
{ content: 'Please provide more details.' },
{ content: 'I can assist you with that.' }
]);
TestClient.getMessagesWithinTokenLimit = jest.fn().mockReturnValue({
context: [
{ content: 'How can I help you?' },
{ content: 'Please provide more details.' },
{ content: 'I can assist you with that.' }
],
remainingContextTokens: 80,
messagesToRefine: [
{ content: 'Hello' },
],
refineIndex: 3,
});
TestClient.refineMessages = jest.fn().mockResolvedValue({
role: 'assistant',
content: 'Refined answer',
tokenCount: 30
});
TestClient.getTokenCountForResponse = jest.fn().mockReturnValue(40);
const instructions = { content: 'Please provide more details.' };
const orderedMessages = [
{ content: 'Hello' },
{ content: 'How can I help you?' },
{ content: 'Please provide more details.' },
{ content: 'I can assist you with that.' }
];
const formattedMessages = [
{ content: 'Hello' },
{ content: 'How can I help you?' },
{ content: 'Please provide more details.' },
{ content: 'I can assist you with that.' }
];
const expectedResult = {
payload: [
{
content: 'Refined answer',
role: 'assistant',
tokenCount: 30
},
{ content: 'How can I help you?' },
{ content: 'Please provide more details.' },
{ content: 'I can assist you with that.' }
],
promptTokens: expect.any(Number),
tokenCountMap: {},
messages: expect.any(Array),
};
const result = await TestClient.handleContextStrategy({
instructions,
orderedMessages,
formattedMessages,
});
expect(result).toEqual(expectedResult);
});
describe('sendMessage', () => {
test('sendMessage should return a response message', async () => {
const expectedResult = expect.objectContaining({
sender: TestClient.sender,
text: expect.any(String),
isCreatedByUser: false,
messageId: expect.any(String),
parentMessageId: expect.any(String),
conversationId: expect.any(String)
});
const response = await TestClient.sendMessage(userMessage);
parentMessageId = response.messageId;
conversationId = response.conversationId;
expect(response).toEqual(expectedResult);
});
test('sendMessage should work with provided conversationId and parentMessageId', async () => {
const userMessage = 'Second message in the conversation';
const opts = {
conversationId,
parentMessageId,
getIds: jest.fn(),
onStart: jest.fn()
};
const expectedResult = expect.objectContaining({
sender: TestClient.sender,
text: expect.any(String),
isCreatedByUser: false,
messageId: expect.any(String),
parentMessageId: expect.any(String),
conversationId: opts.conversationId
});
const response = await TestClient.sendMessage(userMessage, opts);
parentMessageId = response.messageId;
expect(response.conversationId).toEqual(conversationId);
expect(response).toEqual(expectedResult);
expect(opts.getIds).toHaveBeenCalled();
expect(opts.onStart).toHaveBeenCalled();
expect(TestClient.getBuildMessagesOptions).toHaveBeenCalled();
expect(TestClient.getSaveOptions).toHaveBeenCalled();
});
test('should return chat history', async () => {
const chatMessages = await TestClient.loadHistory(conversationId, parentMessageId);
expect(TestClient.currentMessages).toHaveLength(4);
expect(chatMessages[0].text).toEqual(userMessage);
});
test('setOptions is called with the correct arguments', async () => {
TestClient.setOptions = jest.fn();
const opts = { conversationId: '123', parentMessageId: '456' };
await TestClient.sendMessage('Hello, world!', opts);
expect(TestClient.setOptions).toHaveBeenCalledWith(opts);
TestClient.setOptions.mockClear();
});
test('loadHistory is called with the correct arguments', async () => {
const opts = { conversationId: '123', parentMessageId: '456' };
await TestClient.sendMessage('Hello, world!', opts);
expect(TestClient.loadHistory).toHaveBeenCalledWith(opts.conversationId, opts.parentMessageId);
});
test('getIds is called with the correct arguments', async () => {
const getIds = jest.fn();
const opts = { getIds };
const response = await TestClient.sendMessage('Hello, world!', opts);
expect(getIds).toHaveBeenCalledWith({
userMessage: expect.objectContaining({ text: 'Hello, world!' }),
conversationId: response.conversationId,
responseMessageId: response.messageId
});
});
test('onStart is called with the correct arguments', async () => {
const onStart = jest.fn();
const opts = { onStart };
await TestClient.sendMessage('Hello, world!', opts);
expect(onStart).toHaveBeenCalledWith(expect.objectContaining({ text: 'Hello, world!' }));
});
test('saveMessageToDatabase is called with the correct arguments', async () => {
const saveOptions = TestClient.getSaveOptions();
const user = {}; // Mock user
const opts = { user };
await TestClient.sendMessage('Hello, world!', opts);
expect(TestClient.saveMessageToDatabase).toHaveBeenCalledWith(
expect.objectContaining({
sender: expect.any(String),
text: expect.any(String),
isCreatedByUser: expect.any(Boolean),
messageId: expect.any(String),
parentMessageId: expect.any(String),
conversationId: expect.any(String)
}),
saveOptions,
user
);
});
test('sendCompletion is called with the correct arguments', async () => {
const payload = {}; // Mock payload
TestClient.buildMessages.mockReturnValue({ prompt: payload, tokenCountMap: null });
const opts = {};
await TestClient.sendMessage('Hello, world!', opts);
expect(TestClient.sendCompletion).toHaveBeenCalledWith(payload, opts);
});
test('getTokenCountForResponse is called with the correct arguments', async () => {
const tokenCountMap = {}; // Mock tokenCountMap
TestClient.buildMessages.mockReturnValue({ prompt: [], tokenCountMap });
TestClient.getTokenCountForResponse = jest.fn();
const response = await TestClient.sendMessage('Hello, world!', {});
expect(TestClient.getTokenCountForResponse).toHaveBeenCalledWith(response);
});
test('returns an object with the correct shape', async () => {
const response = await TestClient.sendMessage('Hello, world!', {});
expect(response).toEqual(expect.objectContaining({
sender: expect.any(String),
text: expect.any(String),
isCreatedByUser: expect.any(Boolean),
messageId: expect.any(String),
parentMessageId: expect.any(String),
conversationId: expect.any(String)
}));
});
});
});


@@ -0,0 +1,185 @@
const crypto = require('crypto');
const BaseClient = require('../BaseClient');
const { maxTokensMap } = require('../../../utils');
class FakeClient extends BaseClient {
constructor(apiKey, options = {}) {
super(apiKey, options);
this.sender = 'AI Assistant';
this.setOptions(options);
}
setOptions(options) {
if (this.options && !this.options.replaceOptions) {
this.options.modelOptions = {
...this.options.modelOptions,
...options.modelOptions,
};
delete options.modelOptions;
this.options = {
...this.options,
...options,
};
} else {
this.options = options;
}
if (this.options.openaiApiKey) {
this.apiKey = this.options.openaiApiKey;
}
const modelOptions = this.options.modelOptions || {};
if (!this.modelOptions) {
this.modelOptions = {
...modelOptions,
model: modelOptions.model || 'gpt-3.5-turbo',
temperature: typeof modelOptions.temperature === 'undefined' ? 0.8 : modelOptions.temperature,
top_p: typeof modelOptions.top_p === 'undefined' ? 1 : modelOptions.top_p,
presence_penalty: typeof modelOptions.presence_penalty === 'undefined' ? 1 : modelOptions.presence_penalty,
stop: modelOptions.stop,
};
}
this.maxContextTokens = maxTokensMap[this.modelOptions.model] ?? 4097;
}
getCompletion() {}
buildMessages() {}
getTokenCount(str) {
return str.length;
}
getTokenCountForMessage(message) {
return message?.content?.length || message.length;
}
}
const initializeFakeClient = (apiKey, options, fakeMessages) => {
let TestClient = new FakeClient(apiKey);
TestClient.options = options;
TestClient.abortController = { abort: jest.fn() };
TestClient.saveMessageToDatabase = jest.fn();
TestClient.loadHistory = jest
.fn()
.mockImplementation((conversationId, parentMessageId = null) => {
if (!conversationId) {
TestClient.currentMessages = [];
return Promise.resolve([]);
}
const orderedMessages = TestClient.constructor.getMessagesForConversation(
fakeMessages,
parentMessageId
);
TestClient.currentMessages = orderedMessages;
return Promise.resolve(orderedMessages);
});
TestClient.getSaveOptions = jest.fn().mockImplementation(() => {
return {};
});
TestClient.getBuildMessagesOptions = jest.fn().mockImplementation(() => {
return {};
});
TestClient.sendCompletion = jest.fn(async () => {
return 'Mock response text';
});
TestClient.sendMessage = jest.fn().mockImplementation(async (message, opts = {}) => {
if (opts && typeof opts === 'object') {
TestClient.setOptions(opts);
}
const user = opts.user || null;
const conversationId = opts.conversationId || crypto.randomUUID();
const parentMessageId = opts.parentMessageId || '00000000-0000-0000-0000-000000000000';
const userMessageId = opts.overrideParentMessageId || crypto.randomUUID();
const saveOptions = TestClient.getSaveOptions();
this.pastMessages = await TestClient.loadHistory(
conversationId,
TestClient.options?.parentMessageId
);
const userMessage = {
text: message,
sender: TestClient.sender,
isCreatedByUser: true,
messageId: userMessageId,
parentMessageId,
conversationId
};
const response = {
sender: TestClient.sender,
text: 'Hello, User!',
isCreatedByUser: false,
messageId: crypto.randomUUID(),
parentMessageId: userMessage.messageId,
conversationId
};
fakeMessages.push(userMessage);
fakeMessages.push(response);
if (typeof opts.getIds === 'function') {
opts.getIds({
userMessage,
conversationId,
responseMessageId: response.messageId
});
}
if (typeof opts.onStart === 'function') {
opts.onStart(userMessage);
}
let { prompt: payload, tokenCountMap } = await TestClient.buildMessages(
this.currentMessages,
userMessage.messageId,
TestClient.getBuildMessagesOptions(opts),
);
if (tokenCountMap) {
payload = payload.map((message, i) => {
const { tokenCount, ...messageWithoutTokenCount } = message;
// userMessage is always the last one in the payload
if (i === payload.length - 1) {
userMessage.tokenCount = message.tokenCount;
console.debug(`Token count for user message: ${tokenCount}`, `Instruction Tokens: ${tokenCountMap.instructions || 'N/A'}`);
}
return messageWithoutTokenCount;
});
TestClient.handleTokenCountMap(tokenCountMap);
}
await TestClient.saveMessageToDatabase(userMessage, saveOptions, user);
response.text = await TestClient.sendCompletion(payload, opts);
if (tokenCountMap && TestClient.getTokenCountForResponse) {
response.tokenCount = TestClient.getTokenCountForResponse(response);
}
await TestClient.saveMessageToDatabase(response, saveOptions, user);
return response;
});
TestClient.buildMessages = jest.fn(async (messages, parentMessageId) => {
const orderedMessages = TestClient.constructor.getMessagesForConversation(messages, parentMessageId);
const formattedMessages = orderedMessages.map((message) => {
let { role: _role, sender, text } = message;
const role = _role ?? sender;
const content = text ?? '';
return {
role: role?.toLowerCase() === 'user' ? 'user' : 'assistant',
content,
};
});
return {
prompt: formattedMessages,
tokenCountMap: null, // Simplified for the mock
};
});
return TestClient;
}
module.exports = { FakeClient, initializeFakeClient };


@@ -0,0 +1,160 @@
const OpenAIClient = require('../OpenAIClient');
describe('OpenAIClient', () => {
let client;
const model = 'gpt-4';
const parentMessageId = '1';
const messages = [
{ role: 'user', sender: 'User', text: 'Hello', messageId: parentMessageId},
{ role: 'assistant', sender: 'Assistant', text: 'Hi', messageId: '2' },
];
beforeEach(() => {
const options = {
// debug: true,
openaiApiKey: 'new-api-key',
modelOptions: {
model,
temperature: 0.7,
},
};
client = new OpenAIClient('test-api-key', options);
client.refineMessages = jest.fn().mockResolvedValue({
role: 'assistant',
content: 'Refined answer',
tokenCount: 30
});
});
describe('setOptions', () => {
it('should set the options correctly', () => {
expect(client.apiKey).toBe('new-api-key');
expect(client.modelOptions.model).toBe(model);
expect(client.modelOptions.temperature).toBe(0.7);
});
});
describe('freeAndResetEncoder', () => {
it('should reset the encoder', () => {
client.freeAndResetEncoder();
expect(client.gptEncoder).toBeDefined();
});
});
describe('getTokenCount', () => {
it('should return the correct token count', () => {
const count = client.getTokenCount('Hello, world!');
expect(count).toBeGreaterThan(0);
});
it('should reset the encoder and count when count reaches 25', () => {
const freeAndResetEncoderSpy = jest.spyOn(client, 'freeAndResetEncoder');
// Call getTokenCount 25 times
for (let i = 0; i < 25; i++) {
client.getTokenCount('test text');
}
expect(freeAndResetEncoderSpy).toHaveBeenCalled();
});
it('should not reset the encoder and count when count is less than 25', () => {
const freeAndResetEncoderSpy = jest.spyOn(client, 'freeAndResetEncoder');
// Call getTokenCount 24 times
for (let i = 0; i < 24; i++) {
client.getTokenCount('test text');
}
expect(freeAndResetEncoderSpy).not.toHaveBeenCalled();
});
it('should handle errors and reset the encoder', () => {
const freeAndResetEncoderSpy = jest.spyOn(client, 'freeAndResetEncoder');
client.gptEncoder.encode = jest.fn().mockImplementation(() => {
throw new Error('Test error');
});
client.getTokenCount('test text');
expect(freeAndResetEncoderSpy).toHaveBeenCalled();
});
});
describe('getSaveOptions', () => {
it('should return the correct save options', () => {
const options = client.getSaveOptions();
expect(options).toHaveProperty('chatGptLabel');
expect(options).toHaveProperty('promptPrefix');
});
});
describe('getBuildMessagesOptions', () => {
it('should return the correct build messages options', () => {
const options = client.getBuildMessagesOptions({ promptPrefix: 'Hello' });
expect(options).toHaveProperty('isChatCompletion');
expect(options).toHaveProperty('promptPrefix');
expect(options.promptPrefix).toBe('Hello');
});
});
describe('buildMessages', () => {
it('should build messages correctly for chat completion', async () => {
const result = await client.buildMessages(messages, parentMessageId, { isChatCompletion: true });
expect(result).toHaveProperty('prompt');
});
it('should build messages correctly for non-chat completion', async () => {
const result = await client.buildMessages(messages, parentMessageId, { isChatCompletion: false });
expect(result).toHaveProperty('prompt');
});
it('should build messages correctly with a promptPrefix', async () => {
const result = await client.buildMessages(messages, parentMessageId, { isChatCompletion: true, promptPrefix: 'Test Prefix' });
expect(result).toHaveProperty('prompt');
const instructions = result.prompt.find(item => item.name === 'instructions');
expect(instructions).toBeDefined();
expect(instructions.content).toContain('Test Prefix');
});
it('should handle context strategy correctly', async () => {
client.contextStrategy = 'refine';
const result = await client.buildMessages(messages, parentMessageId, { isChatCompletion: true });
expect(result).toHaveProperty('prompt');
expect(result).toHaveProperty('tokenCountMap');
});
it('should assign name property for user messages when options.name is set', async () => {
client.options.name = 'Test User';
const result = await client.buildMessages(messages, parentMessageId, { isChatCompletion: true });
const hasUserWithName = result.prompt.some(item => item.role === 'user' && item.name === 'Test User');
expect(hasUserWithName).toBe(true);
});
it('should calculate tokenCount for each message when contextStrategy is set', async () => {
client.contextStrategy = 'refine';
const result = await client.buildMessages(messages, parentMessageId, { isChatCompletion: true });
const hasUserWithTokenCount = result.prompt.some(item => item.role === 'user' && item.tokenCount > 0);
expect(hasUserWithTokenCount).toBe(true);
});
it('should handle promptPrefix from options when promptPrefix argument is not provided', async () => {
client.options.promptPrefix = 'Test Prefix from options';
const result = await client.buildMessages(messages, parentMessageId, { isChatCompletion: true });
const instructions = result.prompt.find(item => item.name === 'instructions');
expect(instructions.content).toContain('Test Prefix from options');
});
it('should handle case when neither promptPrefix argument nor options.promptPrefix is set', async () => {
const result = await client.buildMessages(messages, parentMessageId, { isChatCompletion: true });
const instructions = result.prompt.find(item => item.name === 'instructions');
expect(instructions).toBeUndefined();
});
it('should handle case when getMessagesForConversation returns null or an empty array', async () => {
const messages = [];
const result = await client.buildMessages(messages, parentMessageId, { isChatCompletion: true });
expect(result.prompt).toEqual([]);
});
});
});


@@ -1,7 +1,25 @@
/*
This is a test script to see how much memory is used by the client when encoding.
On my work machine, it was able to process 10,000 encoding requests / 48.686 seconds = approximately 205.4 RPS
I've significantly reduced the amount of encoding needed by saving token counts in the database, so these
numbers should only be hit with a large number of concurrent users.
It would take 103 concurrent users each sending 1 message per second to hit these numbers, which is rather unrealistic;
at that point, outsourcing the encoding to a separate server would be a better solution.
Also, for scaling, we could increase the rate at which the encoder resets; the trade-off is more resource usage on the server.
Initial memory usage: 25.93 megabytes
Peak memory usage: 55 megabytes
Final memory usage: 28.03 megabytes
Post-test (timeout of 15s): 21.91 megabytes
*/
require('dotenv').config();
const { OpenAIClient } = require('../');
function timeout(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
const run = async () => {
const { ChatGPTClient } = await import('@waylaidwanderer/chatgpt-api');
const text = `
The standard Lorem Ipsum passage, used since the 1500s
@@ -37,7 +55,6 @@ const run = async () => {
// Calculate initial percentage of memory used
const initialMemoryUsage = process.memoryUsage().heapUsed;
function printProgressBar(percentageUsed) {
const filledBlocks = Math.round(percentageUsed / 2); // Each block represents 2%
@@ -46,20 +63,20 @@ const run = async () => {
console.log(progressBar);
}
-const iterations = 16000;
+const iterations = 10000;
console.time('loopTime');
// Trying to catch the error doesn't help; all future calls will immediately crash
for (let i = 0; i < iterations; i++) {
try {
console.log(`Iteration ${i}`);
-const client = new ChatGPTClient(apiKey, clientOptions);
+const client = new OpenAIClient(apiKey, clientOptions);
client.getTokenCount(text);
// const encoder = client.constructor.getTokenizer('cl100k_base');
// console.log(`Iteration ${i}: call encode()...`);
// encoder.encode(text, 'all');
// encoder.free();
const memoryUsageDuringLoop = process.memoryUsage().heapUsed;
const percentageUsed = memoryUsageDuringLoop / maxMemory * 100;
printProgressBar(percentageUsed);
@@ -80,10 +97,23 @@ const run = async () => {
// const finalPercentageUsed = finalMemoryUsage / maxMemory * 100;
console.log(`Initial memory usage: ${initialMemoryUsage / 1024 / 1024} megabytes`);
console.log(`Final memory usage: ${finalMemoryUsage / 1024 / 1024} megabytes`);
-setTimeout(() => {
-const memoryUsageAfterTimeout = process.memoryUsage().heapUsed;
-console.log(`Post timeout: ${memoryUsageAfterTimeout / 1024 / 1024} megabytes`);
-} , 10000);
+await timeout(15000);
+const memoryUsageAfterTimeout = process.memoryUsage().heapUsed;
+console.log(`Post timeout: ${memoryUsageAfterTimeout / 1024 / 1024} megabytes`);
}
run();
process.on('uncaughtException', (err) => {
if (!err.message.includes('fetch failed')) {
console.error('There was an uncaught error:');
console.error(err);
}
if (err.message.includes('fetch failed')) {
console.log('fetch failed error caught');
// process.exit(0);
} else {
process.exit(1);
}
});


@@ -0,0 +1,148 @@
const { HumanChatMessage, AIChatMessage } = require('langchain/schema');
const PluginsClient = require('../PluginsClient');
const crypto = require('crypto');
jest.mock('../../../lib/db/connectDb');
jest.mock('../../../models/Conversation', () => {
return function () {
return {
save: jest.fn(),
deleteConvos: jest.fn()
};
};
});
describe('PluginsClient', () => {
let TestAgent;
let options = {
tools: [],
modelOptions: {
model: 'gpt-3.5-turbo',
temperature: 0,
max_tokens: 2
},
agentOptions: {
model: 'gpt-3.5-turbo'
}
};
let parentMessageId;
let conversationId;
const fakeMessages = [];
const userMessage = 'Hello, ChatGPT!';
const apiKey = 'fake-api-key';
beforeEach(() => {
TestAgent = new PluginsClient(apiKey, options);
TestAgent.loadHistory = jest
.fn()
.mockImplementation((conversationId, parentMessageId = null) => {
if (!conversationId) {
TestAgent.currentMessages = [];
return Promise.resolve([]);
}
const orderedMessages = TestAgent.constructor.getMessagesForConversation(
fakeMessages,
parentMessageId
);
const chatMessages = orderedMessages.map((msg) =>
msg?.isCreatedByUser || msg?.role?.toLowerCase() === 'user'
? new HumanChatMessage(msg.text)
: new AIChatMessage(msg.text)
);
TestAgent.currentMessages = orderedMessages;
return Promise.resolve(chatMessages);
});
TestAgent.sendMessage = jest.fn().mockImplementation(async (message, opts = {}) => {
if (opts && typeof opts === 'object') {
TestAgent.setOptions(opts);
}
const conversationId = opts.conversationId || crypto.randomUUID();
const parentMessageId = opts.parentMessageId || '00000000-0000-0000-0000-000000000000';
const userMessageId = opts.overrideParentMessageId || crypto.randomUUID();
this.pastMessages = await TestAgent.loadHistory(
conversationId,
TestAgent.options?.parentMessageId
);
const userMessage = {
text: message,
sender: 'ChatGPT',
isCreatedByUser: true,
messageId: userMessageId,
parentMessageId,
conversationId
};
const response = {
sender: 'ChatGPT',
text: 'Hello, User!',
isCreatedByUser: false,
messageId: crypto.randomUUID(),
parentMessageId: userMessage.messageId,
conversationId
};
fakeMessages.push(userMessage);
fakeMessages.push(response);
return response;
});
});
test('initializes PluginsClient without crashing', () => {
expect(TestAgent).toBeInstanceOf(PluginsClient);
});
test('check setOptions function', () => {
expect(TestAgent.agentIsGpt3).toBe(true);
});
describe('sendMessage', () => {
test('sendMessage should return a response message', async () => {
const expectedResult = expect.objectContaining({
sender: 'ChatGPT',
text: expect.any(String),
isCreatedByUser: false,
messageId: expect.any(String),
parentMessageId: expect.any(String),
conversationId: expect.any(String)
});
const response = await TestAgent.sendMessage(userMessage);
console.log(response);
parentMessageId = response.messageId;
conversationId = response.conversationId;
expect(response).toEqual(expectedResult);
});
test('sendMessage should work with provided conversationId and parentMessageId', async () => {
const userMessage = 'Second message in the conversation';
const opts = {
conversationId,
parentMessageId
};
const expectedResult = expect.objectContaining({
sender: 'ChatGPT',
text: expect.any(String),
isCreatedByUser: false,
messageId: expect.any(String),
parentMessageId: expect.any(String),
conversationId: opts.conversationId
});
const response = await TestAgent.sendMessage(userMessage, opts);
parentMessageId = response.messageId;
expect(response.conversationId).toEqual(conversationId);
expect(response).toEqual(expectedResult);
});
test('should return chat history', async () => {
const chatMessages = await TestAgent.loadHistory(conversationId, parentMessageId);
expect(TestAgent.currentMessages).toHaveLength(4);
expect(chatMessages[0].text).toEqual(userMessage);
});
});
});

View File

@@ -10,7 +10,6 @@ export interface AIPluginToolParams {
model: BaseLanguageModel;
}
export interface PathParameter {
name: string;
description: string;

View File

@@ -2,7 +2,7 @@
// To use this tool, you must pass in a configured OpenAIApi object.
const fs = require('fs');
const { Configuration, OpenAIApi } = require('openai');
const { genAzureEndpoint } = require('../../../utils/genAzureEndpoints');
// const { genAzureEndpoint } = require('../../../utils/genAzureEndpoints');
const { Tool } = require('langchain/tools');
const saveImageFromUrl = require('./saveImageFromUrl');
const path = require('path');
@@ -11,31 +11,31 @@ class OpenAICreateImage extends Tool {
constructor(fields = {}) {
super();
let apiKey = fields.OPENAI_API_KEY || process.env.OPENAI_API_KEY;
let azureKey = fields.AZURE_OPENAI_API_KEY || process.env.AZURE_OPENAI_API_KEY;
let apiKey = fields.DALLE_API_KEY || this.getApiKey();
// let azureKey = fields.AZURE_API_KEY || process.env.AZURE_API_KEY;
let config = { apiKey };
if (azureKey) {
apiKey = azureKey;
const azureConfig = {
apiKey,
azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME || fields.azureOpenAIApiInstanceName,
azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME || fields.azureOpenAIApiDeploymentName,
azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION || fields.azureOpenAIApiVersion
};
config = {
apiKey,
basePath: genAzureEndpoint({
...azureConfig,
}),
baseOptions: {
headers: { 'api-key': apiKey },
params: {
'api-version': azureConfig.azureOpenAIApiVersion // this might change. I got the current value from the sample code at https://oai.azure.com/portal/chat
}
}
};
}
// if (azureKey) {
// apiKey = azureKey;
// const azureConfig = {
// apiKey,
// azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME || fields.azureOpenAIApiInstanceName,
// azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME || fields.azureOpenAIApiDeploymentName,
// azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION || fields.azureOpenAIApiVersion
// };
// config = {
// apiKey,
// basePath: genAzureEndpoint({
// ...azureConfig,
// }),
// baseOptions: {
// headers: { 'api-key': apiKey },
// params: {
// 'api-version': azureConfig.azureOpenAIApiVersion // this might change. I got the current value from the sample code at https://oai.azure.com/portal/chat
// }
// }
// };
// }
this.openaiApi = new OpenAIApi(new Configuration(config));
this.name = 'dall-e';
this.description = `You can generate images with 'dall-e'. This tool is exclusively for visual content.
@@ -46,11 +46,14 @@ Guidelines:
"Subject: [subject], Style: [style], Color: [color], Details: [details], Emotion: [emotion]"
- Generate images only once per human query unless explicitly requested by the user`;
}
// "Subject": "Mona Lisa",
// "Style": "Chinese traditional painting",
// "Color": "Mainly wash tones of ink, with small color blocks in some parts",
// "Details": "Mona Lisa should have long hair, a silk dress, holding a fan. The background should have mountains and trees.",
// "Emotion": "Serene and elegant"
getApiKey() {
const apiKey = process.env.DALLE_API_KEY || '';
if (!apiKey) {
throw new Error('Missing DALLE_API_KEY environment variable.');
}
return apiKey;
}
replaceUnwantedChars(inputString) {
return inputString.replace(/\r\n|\r|\n/g, ' ').replace('"', '').trim();
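A hedged usage sketch for the refactored tool above, assuming DALLE_API_KEY is set and that the tool is invoked through langchain's base Tool.call with a single prompt string, as the description's guidelines suggest:

const OpenAICreateImage = require('./DALL-E');

const dalle = new OpenAICreateImage(); // reads DALLE_API_KEY via getApiKey()
dalle
  .call('Subject: lighthouse at dusk, Style: oil painting, Color: warm tones, Details: crashing waves, Emotion: calm')
  .then((markdownImage) => console.log(markdownImage));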

View File

@@ -6,7 +6,7 @@ const { Tool } = require('langchain/tools');
// this.name = 'requests_get';
// this.headers = headers;
// this.maxOutputLength = maxOutputLength || 2000;
// this.description = `A portal to the internet. Use this when you need to get specific content from a website.
// - Input should be a url (i.e. https://www.google.com). The output will be the text response of the GET request.`;
// }
@@ -27,7 +27,7 @@ const { Tool } = require('langchain/tools');
// this.maxOutputLength = maxOutputLength || Infinity;
// this.description = `Use this when you want to POST to a website.
// - Input should be a json string with two keys: "url" and "data".
// - The value of "url" should be a string, and the value of "data" should be a dictionary of
// - key-value pairs you want to POST to the url as a JSON body.
// - Be careful to always use double quotes for strings in the json string
// - The output will be the text response of the POST request.`;
@@ -63,23 +63,23 @@ class HttpRequestTool extends Tool {
const urlPattern = /"url":\s*"([^"]*)"/;
const methodPattern = /"method":\s*"([^"]*)"/;
const dataPattern = /"data":\s*"([^"]*)"/;
const url = input.match(urlPattern)[1];
const method = input.match(methodPattern)[1];
let data = input.match(dataPattern)[1];
// Parse 'data' back to JSON if possible
try {
data = JSON.parse(data);
} catch (e) {
// If it's not a JSON string, keep it as is
}
let options = {
method: method,
headers: this.headers
};
if (['POST', 'PUT', 'PATCH'].includes(method.toUpperCase()) && data) {
if (typeof data === 'object') {
options.body = JSON.stringify(data);
@@ -88,20 +88,20 @@ class HttpRequestTool extends Tool {
}
options.headers['Content-Type'] = 'application/json';
}
const res = await fetch(url, options);
const text = await res.text();
if (text.includes('<html')) {
return 'This tool is not designed to browse web pages. Only use it for API calls.';
}
return text.slice(0, this.maxOutputLength);
} catch (error) {
console.log(error);
return `${error}`;
}
}
}
}
module.exports = HttpRequestTool;
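A quick sketch of the string input the regex patterns above expect; all three keys must be present, since each match() result is indexed directly (httpbin.org is just an illustrative target, and the call goes through langchain's base Tool.call):

const HttpRequestTool = require('./HttpRequestTool');

const http = new HttpRequestTool();
// "data" is ignored for GET, but the pattern still requires the key to be present.
http
  .call('{"url": "https://httpbin.org/get", "method": "GET", "data": "none"}')
  .then((text) => console.log(text));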

View File

@@ -0,0 +1,23 @@
const GoogleSearchAPI = require('./GoogleSearch');
const HttpRequestTool = require('./HttpRequestTool');
const AIPluginTool = require('./AIPluginTool');
const OpenAICreateImage = require('./DALL-E');
const StructuredSD = require('./structured/StableDiffusion');
const StableDiffusionAPI = require('./StableDiffusion');
const WolframAlphaAPI = require('./Wolfram');
const StructuredWolfram = require('./structured/Wolfram');
const SelfReflectionTool = require('./SelfReflection');
const availableTools = require('./manifest.json');
module.exports = {
availableTools,
GoogleSearchAPI,
HttpRequestTool,
AIPluginTool,
OpenAICreateImage,
StableDiffusionAPI,
StructuredSD,
WolframAlphaAPI,
StructuredWolfram,
SelfReflectionTool
}

View File

@@ -8,12 +8,12 @@
{
"authField": "GOOGLE_CSE_ID",
"label": "Google CSE ID",
"description": "This is your Google Custom Search Engine ID. For instructions on how to obtain this, see <a href='https://github.com/danny-avila/chatgpt-clone/blob/main/guides/GOOGLE_SEARCH.md'>Our Docs</a>."
"description": "This is your Google Custom Search Engine ID. For instructions on how to obtain this, see <a href='https://github.com/danny-avila/LibreChat/blob/main/docs/features/plugins/google_search.md'>Our Docs</a>."
},
{
"authField": "GOOGLE_API_KEY",
"label": "Google API Key",
"description": "This is your Google Custom Search API Key. For instructions on how to obtain this, see <a href='https://github.com/danny-avila/chatgpt-clone/blob/main/guides/GOOGLE_SEARCH.md'>Our Docs</a>."
"description": "This is your Google Custom Search API Key. For instructions on how to obtain this, see <a href='https://github.com/danny-avila/LibreChat/blob/main/docs/features/plugins/google_search.md'>Our Docs</a>."
}
]
},

View File

@@ -0,0 +1,89 @@
// Generates image using stable diffusion webui's api (automatic1111)
const fs = require('fs');
const { StructuredTool } = require('langchain/tools');
const { z } = require('zod');
const path = require('path');
const axios = require('axios');
const sharp = require('sharp');
class StableDiffusionAPI extends StructuredTool {
constructor(fields) {
super();
this.name = 'stable-diffusion';
this.url = fields.SD_WEBUI_URL || this.getServerURL();
this.description = `You can generate images with 'stable-diffusion'. This tool is exclusively for visual content.
Guidelines:
- Visually describe the moods, details, structures, styles, and/or proportions of the image. Remember, the focus is on visual attributes.
- Craft your input by "showing" and not "telling" the imagery. Think in terms of what you'd want to see in a photograph or a painting.
- Here's an example for generating a realistic portrait photo of a man:
"prompt":"photo of a man in black clothes, half body, high detailed skin, coastline, overcast weather, wind, waves, 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3"
"negative_prompt":"semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, out of frame, low quality, ugly, mutation, deformed"
- Generate images only once per human query unless explicitly requested by the user`;
this.schema = z.object({
prompt: z.string().describe("Detailed keywords to describe the subject, using at least 7 keywords to accurately describe the image, separated by comma"),
negative_prompt: z.string().describe("Keywords we want to exclude from the final image, using at least 7 keywords to accurately describe the image, separated by comma")
});
}
replaceNewLinesWithSpaces(inputString) {
return inputString.replace(/\r\n|\r|\n/g, ' ');
}
getMarkdownImageUrl(imageName) {
const imageUrl = path.join(this.relativeImageUrl, imageName).replace(/\\/g, '/').replace('public/', '');
return `![generated image](/${imageUrl})`;
}
getServerURL() {
const url = process.env.SD_WEBUI_URL || '';
if (!url) {
throw new Error('Missing SD_WEBUI_URL environment variable.');
}
return url;
}
async _call(data) {
const url = this.url;
const { prompt, negative_prompt } = data;
const payload = {
prompt,
negative_prompt,
steps: 20
};
const response = await axios.post(`${url}/sdapi/v1/txt2img`, payload);
const image = response.data.images[0];
const pngPayload = { image: `data:image/png;base64,${image}` };
const response2 = await axios.post(`${url}/sdapi/v1/png-info`, pngPayload);
const info = response2.data.info;
// Generate unique name
const imageName = `${Date.now()}.png`;
this.outputPath = path.resolve(__dirname, '..', '..', '..', '..', '..', 'client', 'public', 'images');
const appRoot = path.resolve(__dirname, '..', '..', '..', '..', '..', 'client');
this.relativeImageUrl = path.relative(appRoot, this.outputPath);
// Check if directory exists, if not create it
if (!fs.existsSync(this.outputPath)) {
fs.mkdirSync(this.outputPath, { recursive: true });
}
try {
const buffer = Buffer.from(image.split(',', 1)[0], 'base64');
await sharp(buffer)
.withMetadata({
iptcpng: {
parameters: info
}
})
.toFile(this.outputPath + '/' + imageName);
this.result = this.getMarkdownImageUrl(imageName);
} catch (error) {
console.error('Error while saving the image:', error);
// this.result = theImageUrl;
}
return this.result;
}
}
module.exports = StableDiffusionAPI;
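A minimal usage sketch, assuming SD_WEBUI_URL points at a running automatic1111 instance with its API enabled; StructuredTool.call validates the input object against the zod schema above before _call runs:

const StableDiffusionAPI = require('./structured/StableDiffusion');

const sd = new StableDiffusionAPI({ SD_WEBUI_URL: 'http://127.0.0.1:7860' });
sd.call({
  prompt: 'photo of a red fox, forest, golden hour, shallow depth of field, 8k uhd, dslr, film grain',
  negative_prompt: 'cartoon, drawing, sketch, low quality, blurry, deformed, out of frame, watermark'
}).then((markdown) => console.log(markdown)); // e.g. ![generated image](/images/<timestamp>.png)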

View File

@@ -0,0 +1,72 @@
/* eslint-disable no-useless-escape */
const axios = require('axios');
const { StructuredTool } = require('langchain/tools');
const { z } = require('zod');
class WolframAlphaAPI extends StructuredTool {
constructor(fields) {
super();
this.name = 'wolfram';
this.apiKey = fields.WOLFRAM_APP_ID || this.getAppId();
this.description = `WolframAlpha offers computation, math, curated knowledge, and real-time data. It handles natural language queries and performs complex calculations.
Guidelines include:
- Use English for queries and inform users if information isn't from Wolfram.
- Use "6*10^14" for exponent notation and single-line strings for input.
- Use Markdown for formulas and simplify queries to keywords.
- Use single-letter variable names and named physical constants.
- Include a space between compound units and consider equations without units when solving.
- Make separate calls for each property and choose relevant 'Assumptions' if results aren't relevant.
- The tool also performs data analysis, plotting, and information retrieval.`;
this.schema = z.object({
nl_query: z.string().describe("Natural language query to WolframAlpha following the guidelines"),
});
}
async fetchRawText(url) {
try {
const response = await axios.get(url, { responseType: 'text' });
return response.data;
} catch (error) {
console.error(`Error fetching raw text: ${error}`);
throw error;
}
}
getAppId() {
const appId = process.env.WOLFRAM_APP_ID || '';
if (!appId) {
throw new Error('Missing WOLFRAM_APP_ID environment variable.');
}
return appId;
}
createWolframAlphaURL(query) {
// Clean up query
const formattedQuery = query.replaceAll(/`/g, '').replaceAll(/\n/g, ' ');
const baseURL = 'https://www.wolframalpha.com/api/v1/llm-api';
const encodedQuery = encodeURIComponent(formattedQuery);
const appId = this.apiKey || this.getAppId();
const url = `${baseURL}?input=${encodedQuery}&appid=${appId}`;
return url;
}
async _call(data) {
try {
const { nl_query } = data;
const url = this.createWolframAlphaURL(nl_query);
const response = await this.fetchRawText(url);
return response;
} catch (error) {
if (error.response && error.response.data) {
console.log('Error data:', error.response.data);
return error.response.data;
} else {
console.log(`Error querying Wolfram Alpha`, error.message);
// throw error;
return 'There was an error querying Wolfram Alpha.';
}
}
}
}
module.exports = WolframAlphaAPI;
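A minimal sketch of calling the structured Wolfram tool above, assuming WOLFRAM_APP_ID is set in the environment; the LLM API endpoint returns plain text suitable for direct display:

const WolframAlphaAPI = require('./structured/Wolfram');

const wolfram = new WolframAlphaAPI({});
wolfram
  .call({ nl_query: 'integrate x^2 from 0 to 3' })
  .then((result) => console.log(result));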

View File

@@ -1,3 +1,4 @@
const { getUserPluginAuthValue } = require('../../../../server/services/PluginService');
const { OpenAIEmbeddings } = require('langchain/embeddings/openai');
const { ZapierToolKit } = require('langchain/agents');
const {
@@ -7,14 +8,17 @@ const {
const { ChatOpenAI } = require('langchain/chat_models/openai');
const { Calculator } = require('langchain/tools/calculator');
const { WebBrowser } = require('langchain/tools/webbrowser');
const GoogleSearchAPI = require('./GoogleSearch');
const HttpRequestTool = require('./HttpRequestTool');
const AIPluginTool = require('./AIPluginTool');
const OpenAICreateImage = require('./DALL-E');
const StableDiffusionAPI = require('./StableDiffusion');
const WolframAlphaAPI = require('./Wolfram');
const availableTools = require('./manifest.json');
const { getUserPluginAuthValue } = require('../../../server/services/PluginService');
const {
availableTools,
AIPluginTool,
GoogleSearchAPI,
WolframAlphaAPI,
StructuredWolfram,
HttpRequestTool,
OpenAICreateImage,
StableDiffusionAPI,
StructuredSD,
} = require('../');
const validateTools = async (user, tools = []) => {
try {
@@ -69,13 +73,13 @@ const loadToolWithAuth = async (user, authFields, ToolConstructor, options = {})
};
};
const loadTools = async ({ user, model, tools = [], options = {} }) => {
const loadTools = async ({ user, model, functions = null, tools = [], options = {} }) => {
const toolConstructors = {
calculator: Calculator,
google: GoogleSearchAPI,
wolfram: WolframAlphaAPI,
wolfram: functions ? StructuredWolfram : WolframAlphaAPI,
'dall-e': OpenAICreateImage,
'stable-diffusion': StableDiffusionAPI
'stable-diffusion': functions ? StructuredSD : StableDiffusionAPI
};
const customConstructors = {
@@ -109,9 +113,10 @@ const loadTools = async ({ user, model, tools = [], options = {} }) => {
return [
new HttpRequestTool(),
await AIPluginTool.fromPluginUrl(
"https://www.klarna.com/.well-known/ai-plugin.json", new ChatOpenAI({ openAIApiKey: options.openAIApiKey, temperature: 0 })
),
]
'https://www.klarna.com/.well-known/ai-plugin.json',
new ChatOpenAI({ openAIApiKey: options.openAIApiKey, temperature: 0 })
)
];
}
};
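A sketch of how the new functions flag selects the StructuredTool variants; user and model are assumed to come from the caller, and each returned entry is an async loader that resolves to an instantiated tool, as the tests below exercise:

const { loadTools } = require('./');

async function buildStructuredTools(user, model) {
  // functions: true swaps in StructuredWolfram / StructuredSD
  const toolFunctions = await loadTools({
    user,
    model,
    tools: ['wolfram', 'stable-diffusion'],
    functions: true
  });
  // Each entry is an async loader that resolves to an instantiated tool.
  return Promise.all(Object.values(toolFunctions).map((load) => load()));
}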

View File

@@ -1,14 +1,29 @@
/* eslint-disable jest/no-conditional-expect */
require('dotenv').config({ path: '../../../.env' });
const mongoose = require('mongoose');
const User = require('../../../models/User');
const connectDb = require('../../../lib/db/connectDb');
const { validateTools, loadTools, availableTools } = require('./index');
const PluginService = require('../../../server/services/PluginService');
const mockUser = {
_id: 'fakeId',
save: jest.fn(),
findByIdAndDelete: jest.fn(),
};
var mockPluginService = {
updateUserPluginAuth: jest.fn(),
deleteUserPluginAuth: jest.fn(),
getUserPluginAuthValue: jest.fn()
};
jest.mock('../../../../models/User', () => {
return function() {
return mockUser;
};
});
jest.mock('../../../../server/services/PluginService', () => mockPluginService);
const User = require('../../../../models/User');
const { validateTools, loadTools } = require('./');
const PluginService = require('../../../../server/services/PluginService');
const { BaseChatModel } = require('langchain/chat_models/openai');
const { Calculator } = require('langchain/tools/calculator');
const OpenAICreateImage = require('./DALL-E');
const GoogleSearchAPI = require('./GoogleSearch');
const { availableTools, OpenAICreateImage, GoogleSearchAPI, StructuredSD } = require('../');
describe('Tool Handlers', () => {
let fakeUser;
@@ -21,7 +36,16 @@ describe('Tool Handlers', () => {
const authConfigs = mainPlugin.authConfig;
beforeAll(async () => {
await connectDb();
mockUser.save.mockResolvedValue(undefined);
const userAuthValues = {};
mockPluginService.getUserPluginAuthValue.mockImplementation((userId, authField) => {
return userAuthValues[`${userId}-${authField}`];
});
mockPluginService.updateUserPluginAuth.mockImplementation((userId, authField, _pluginKey, credential) => {
userAuthValues[`${userId}-${authField}`] = credential;
});
fakeUser = new User({
name: 'Fake User',
username: 'fakeuser',
@@ -41,17 +65,11 @@ describe('Tool Handlers', () => {
}
});
// afterEach(async () => {
// // Clean up any test-specific data.
// });
afterAll(async () => {
// Delete the fake user & plugin auth
await User.findByIdAndDelete(fakeUser._id);
await mockUser.findByIdAndDelete(fakeUser._id);
for (const authConfig of authConfigs) {
await PluginService.deleteUserPluginAuth(fakeUser._id, authConfig.authField);
}
await mongoose.connection.close();
});
describe('validateTools', () => {
@@ -128,6 +146,7 @@ describe('Tool Handlers', () => {
try {
await loadTool2();
} catch (error) {
// eslint-disable-next-line jest/no-conditional-expect
expect(error).toBeDefined();
}
});
@@ -154,5 +173,17 @@ describe('Tool Handlers', () => {
});
expect(toolFunctions).toEqual({});
});
it('should return the StructuredTool version when using functions', async () => {
process.env.SD_WEBUI_URL = mockCredential;
toolFunctions = await loadTools({
user: fakeUser._id,
model: BaseChatModel,
tools: ['stable-diffusion'],
functions: true
});
const structuredTool = await toolFunctions['stable-diffusion']();
expect(structuredTool).toBeInstanceOf(StructuredSD);
delete process.env.SD_WEBUI_URL;
});
});
});

View File

@@ -0,0 +1,6 @@
const { validateTools, loadTools } = require('./handleTools');
module.exports = {
validateTools,
loadTools
};

View File

@@ -1,15 +1,15 @@
const { askClient } = require('./clients/chatgpt-client');
const { browserClient } = require('./clients/chatgpt-browser');
const { askBing } = require('./clients/bingai');
const { browserClient } = require('./chatgpt-browser');
const { askBing } = require('./bingai');
const clients = require('./clients');
const titleConvo = require('./titleConvo');
const getCitations = require('../lib/parse/getCitations');
const citeText = require('../lib/parse/citeText');
module.exports = {
askClient,
browserClient,
askBing,
titleConvo,
getCitations,
citeText
citeText,
...clients
};

View File

@@ -1,904 +0,0 @@
const crypto = require('crypto');
const { genAzureChatCompletion } = require('../../utils/genAzureEndpoints');
const {
encoding_for_model: encodingForModel,
get_encoding: getEncoding
} = require('@dqbd/tiktoken');
const { fetchEventSource } = require('@waylaidwanderer/fetch-event-source');
const { Agent, ProxyAgent } = require('undici');
const TextStream = require('../stream');
const { ChatOpenAI } = require('langchain/chat_models/openai');
const { CallbackManager } = require('langchain/callbacks');
const { HumanChatMessage, AIChatMessage } = require('langchain/schema');
const { initializeCustomAgent } = require('./agents/CustomAgent/initializeCustomAgent');
const { getMessages, saveMessage, saveConvo } = require('../../models');
const { loadTools, SelfReflectionTool } = require('./tools');
const {
instructions,
imageInstructions,
errorInstructions,
completionInstructions
} = require('./instructions');
const tokenizersCache = {};
class ChatAgent {
constructor(apiKey, options = {}) {
this.tools = [];
this.actions = [];
this.openAIApiKey = apiKey;
this.azure = options.azure || false;
if (this.azure) {
const { azureOpenAIApiInstanceName, azureOpenAIApiDeploymentName, azureOpenAIApiVersion } =
this.azure;
this.azureEndpoint = genAzureChatCompletion({
azureOpenAIApiInstanceName,
azureOpenAIApiDeploymentName,
azureOpenAIApiVersion
});
}
this.setOptions(options);
this.executor = null;
this.currentDateString = new Date().toLocaleDateString('en-us', {
year: 'numeric',
month: 'long',
day: 'numeric'
});
}
getActions(input = null) {
let output = 'Internal thoughts & actions taken:\n"';
let actions = input || this.actions;
if (actions[0]?.action) {
actions = actions.map((step) => ({
log: `${step.action.log}\nObservation: ${step.observation}`
}));
}
actions.forEach((actionObj, index) => {
output += `${actionObj.log}`;
if (index < actions.length - 1) {
output += '\n';
}
});
return output + '"';
}
buildErrorInput(message, errorMessage) {
const log = errorMessage.includes('Could not parse LLM output:')
? `A formatting error occurred with your response to the human's last message. You didn't follow the formatting instructions. Remember to ${instructions}`
: `You encountered an error while replying to the human's last message. Attempt to answer again or admit an answer cannot be given.\nError: ${errorMessage}`;
return `
${log}
${this.getActions()}
Human's last message: ${message}
`;
}
buildPromptPrefix(result, message) {
if ((result.output && result.output.includes('N/A')) || result.output === undefined) {
return null;
}
if (
result?.intermediateSteps?.length === 1 &&
result?.intermediateSteps[0]?.action?.toolInput === 'N/A'
) {
return null;
}
const internalActions =
result?.intermediateSteps?.length > 0
? this.getActions(result.intermediateSteps)
: 'Internal Actions Taken: None';
const toolBasedInstructions = internalActions.toLowerCase().includes('image')
? imageInstructions
: '';
const errorMessage = result.errorMessage ? `${errorInstructions} ${result.errorMessage}\n` : '';
const preliminaryAnswer =
result.output?.length > 0 ? `Preliminary Answer: "${result.output.trim()}"` : '';
const prefix = preliminaryAnswer
? `review and improve the answer you generated using plugins in response to the User Message below. The answer hasn't been sent to the user yet.`
: 'respond to the User Message below based on your preliminary thoughts & actions.';
return `As ChatGPT, ${prefix}${errorMessage}\n${internalActions}
${preliminaryAnswer}
Reply conversationally to the User based on your ${
preliminaryAnswer ? 'preliminary answer, ' : ''
}internal actions, thoughts, and observations, making improvements wherever possible, but do not modify URLs.
${
preliminaryAnswer
? ''
: '\nIf there is an incomplete thought or action, you are expected to complete it in your response now.\n'
}You must cite sources if you are using any web links. ${toolBasedInstructions}
Only respond with your conversational reply to the following User Message:
"${message}"`;
}
setOptions(options) {
if (this.options && !this.options.replaceOptions) {
// nested options aren't spread properly, so we need to do this manually
this.options.modelOptions = {
...this.options.modelOptions,
...options.modelOptions
};
this.options.agentOptions = {
...this.options.agentOptions,
...options.agentOptions
};
delete options.modelOptions;
delete options.agentOptions;
// now we can merge options
this.options = {
...this.options,
...options
};
} else {
this.options = options;
}
this.agentOptions = this.options.agentOptions || {};
this.agentIsGpt3 = this.agentOptions.model.startsWith('gpt-3');
const modelOptions = this.options.modelOptions || {};
this.modelOptions = {
...modelOptions,
model: modelOptions.model || 'gpt-3.5-turbo',
temperature: typeof modelOptions.temperature === 'undefined' ? 0.8 : modelOptions.temperature,
top_p: typeof modelOptions.top_p === 'undefined' ? 1 : modelOptions.top_p,
presence_penalty:
typeof modelOptions.presence_penalty === 'undefined' ? 0 : modelOptions.presence_penalty,
frequency_penalty:
typeof modelOptions.frequency_penalty === 'undefined' ? 0 : modelOptions.frequency_penalty,
stop: modelOptions.stop
};
this.isChatGptModel = this.modelOptions.model.startsWith('gpt-');
this.isGpt3 = this.modelOptions.model.startsWith('gpt-3');
this.maxContextTokens = this.modelOptions.model === 'gpt-4-32k' ? 32767 : this.modelOptions.model.startsWith('gpt-4') ? 8191 : 4095;
// Reserve 1024 tokens for the response.
// The max prompt tokens is determined by the max context tokens minus the max response tokens.
// Earlier messages will be dropped until the prompt is within the limit.
this.maxResponseTokens = this.modelOptions.max_tokens || 1024;
this.maxPromptTokens =
this.options.maxPromptTokens || this.maxContextTokens - this.maxResponseTokens;
if (this.maxPromptTokens + this.maxResponseTokens > this.maxContextTokens) {
throw new Error(
`maxPromptTokens + max_tokens (${this.maxPromptTokens} + ${this.maxResponseTokens} = ${
this.maxPromptTokens + this.maxResponseTokens
}) must be less than or equal to maxContextTokens (${this.maxContextTokens})`
);
}
this.userLabel = this.options.userLabel || 'User';
this.chatGptLabel = this.options.chatGptLabel || 'ChatGPT';
// Use these faux tokens to help the AI understand the context since we are building the chat log ourselves.
// Trying to use "<|im_start|>" causes the AI to still generate "<" or "<|" at the end sometimes for some reason,
// without tripping the stop sequences, so I'm using "||>" instead.
this.startToken = '||>';
this.endToken = '';
this.gptEncoder = this.constructor.getTokenizer('cl100k_base');
this.completionsUrl = 'https://api.openai.com/v1/chat/completions';
this.reverseProxyUrl = this.options.reverseProxyUrl || process.env.OPENAI_REVERSE_PROXY;
if (this.reverseProxyUrl) {
this.completionsUrl = this.reverseProxyUrl;
this.langchainProxy = this.reverseProxyUrl.substring(0, this.reverseProxyUrl.indexOf('v1') + 'v1'.length);
}
if (this.azureEndpoint) {
this.completionsUrl = this.azureEndpoint;
}
if (this.azureEndpoint && this.options.debug) {
console.debug(`Using Azure endpoint: ${this.azureEndpoint}`, this.azure);
}
}
static getTokenizer(encoding, isModelName = false, extendSpecialTokens = {}) {
if (tokenizersCache[encoding]) {
return tokenizersCache[encoding];
}
let tokenizer;
if (isModelName) {
tokenizer = encodingForModel(encoding, extendSpecialTokens);
} else {
tokenizer = getEncoding(encoding, extendSpecialTokens);
}
tokenizersCache[encoding] = tokenizer;
return tokenizer;
}
async getCompletion(input, onProgress, abortController = null) {
if (!abortController) {
abortController = new AbortController();
}
const modelOptions = this.modelOptions;
if (typeof onProgress === 'function') {
modelOptions.stream = true;
}
if (this.isChatGptModel) {
modelOptions.messages = input;
} else {
modelOptions.prompt = input;
}
const { debug } = this.options;
const url = this.completionsUrl;
if (debug) {
console.debug();
console.debug(url);
console.debug(modelOptions);
console.debug();
}
const opts = {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify(modelOptions),
dispatcher: new Agent({
bodyTimeout: 0,
headersTimeout: 0
})
};
if (this.azureEndpoint) {
opts.headers['api-key'] = this.azure.azureOpenAIApiKey;
} else if (this.openAIApiKey) {
opts.headers.Authorization = `Bearer ${this.openAIApiKey}`;
}
if (this.options.proxy) {
opts.dispatcher = new ProxyAgent(this.options.proxy);
}
if (modelOptions.stream) {
// eslint-disable-next-line no-async-promise-executor
return new Promise(async (resolve, reject) => {
try {
let done = false;
await fetchEventSource(url, {
...opts,
signal: abortController.signal,
async onopen(response) {
if (response.status === 200) {
return;
}
if (debug) {
// console.debug(response);
}
let error;
try {
const body = await response.text();
error = new Error(`Failed to send message. HTTP ${response.status} - ${body}`);
error.status = response.status;
error.json = JSON.parse(body);
} catch {
error = error || new Error(`Failed to send message. HTTP ${response.status}`);
}
throw error;
},
onclose() {
if (debug) {
console.debug('Server closed the connection unexpectedly, returning...');
}
// workaround for private API not sending [DONE] event
if (!done) {
onProgress('[DONE]');
abortController.abort();
resolve();
}
},
onerror(err) {
if (debug) {
console.debug(err);
}
// rethrow to stop the operation
throw err;
},
onmessage(message) {
if (debug) {
// console.debug(message);
}
if (!message.data || message.event === 'ping') {
return;
}
if (message.data === '[DONE]') {
onProgress('[DONE]');
abortController.abort();
resolve();
done = true;
return;
}
onProgress(JSON.parse(message.data));
}
});
} catch (err) {
reject(err);
}
});
}
const response = await fetch(url, {
...opts,
signal: abortController.signal
});
if (response.status !== 200) {
const body = await response.text();
const error = new Error(`Failed to send message. HTTP ${response.status} - ${body}`);
error.status = response.status;
try {
error.json = JSON.parse(body);
} catch {
error.body = body;
}
throw error;
}
return response.json();
}
async loadHistory(conversationId, parentMessageId = null) {
if (this.options.debug) {
console.debug('Loading history for conversation', conversationId, parentMessageId);
}
const messages = (await getMessages({ conversationId })) || [];
if (messages.length === 0) {
this.currentMessages = [];
return [];
}
const orderedMessages = this.constructor.getMessagesForConversation(messages, parentMessageId);
// Convert Message documents into appropriate ChatMessage instances
const chatMessages = orderedMessages.map((msg) =>
msg?.isCreatedByUser || msg?.role.toLowerCase() === 'user'
? new HumanChatMessage(msg.text)
: new AIChatMessage(msg.text)
);
this.currentMessages = orderedMessages;
return chatMessages;
}
async saveMessageToDatabase(message, user = null) {
await saveMessage({ ...message, unfinished: false });
await saveConvo(user, {
conversationId: message.conversationId,
endpoint: 'gptPlugins',
chatGptLabel: this.options.chatGptLabel,
promptPrefix: this.options.promptPrefix,
...this.modelOptions,
agentOptions: this.agentOptions
});
}
saveLatestAction(action) {
this.actions.push(action);
}
async initialize({ user, message, onAgentAction, onChainEnd, signal }) {
const modelOptions = {
modelName: this.agentOptions.model,
temperature: this.agentOptions.temperature
};
const configOptions = {};
if (this.langchainProxy) {
configOptions.basePath = this.langchainProxy;
}
const model = this.azure
? new ChatOpenAI({
...this.azure,
...modelOptions
})
: new ChatOpenAI(
{
openAIApiKey: this.openAIApiKey,
...modelOptions
},
configOptions
// {
// basePath: 'http://localhost:8080/v1'
// }
);
if (this.options.debug) {
console.debug(`<-----Agent Model: ${model.modelName} | Temp: ${model.temperature}----->`);
}
this.availableTools = await loadTools({
user,
model,
tools: this.options.tools,
options: {
openAIApiKey: this.openAIApiKey
}
});
// load tools
for (const tool of this.options.tools) {
const validTool = this.availableTools[tool];
if (tool === 'plugins') {
const plugins = await validTool();
this.tools = [...this.tools, ...plugins];
} else if (validTool) {
this.tools.push(await validTool());
}
}
if (this.options.debug) {
console.debug('Requested Tools');
console.debug(this.options.tools);
console.debug('Loaded Tools');
console.debug(this.tools.map((tool) => tool.name));
}
if (this.tools.length > 0) {
this.tools.push(new SelfReflectionTool({ message, isGpt3: false }));
} else if (this.tools.length === 0) {
return;
}
const handleAction = (action, callback = null) => {
this.saveLatestAction(action);
if (this.options.debug) {
console.debug('Latest Agent Action ', this.actions[this.actions.length - 1]);
}
if (typeof callback === 'function') {
callback(action);
}
};
// initialize agent
this.executor = await initializeCustomAgent({
model,
signal,
tools: this.tools,
pastMessages: this.pastMessages,
currentDateString: this.currentDateString,
verbose: this.options.debug,
returnIntermediateSteps: true,
callbackManager: CallbackManager.fromHandlers({
async handleAgentAction(action) {
handleAction(action, onAgentAction);
},
async handleChainEnd(action) {
if (typeof onChainEnd === 'function') {
onChainEnd(action);
}
}
})
});
if (this.options.debug) {
console.debug('Loaded agent.');
}
}
async sendApiMessage(messages, userMessage, opts = {}) {
// Doing it this way instead of having each message be a separate element in the array seems to be more reliable,
// especially when it comes to keeping the AI in character. It also seems to improve coherency and context retention.
let payload = await this.buildPrompt({
messages: [
...messages,
{
messageId: userMessage.messageId,
parentMessageId: userMessage.parentMessageId,
role: 'User',
text: userMessage.text
}
],
...opts
});
let reply = '';
let result = {};
if (typeof opts.onProgress === 'function') {
await this.getCompletion(
payload,
(progressMessage) => {
if (progressMessage === '[DONE]') {
return;
}
const token = this.isChatGptModel
? progressMessage.choices[0].delta.content
: progressMessage.choices[0].text;
// first event's delta content is always undefined
if (!token) {
return;
}
if (token === this.endToken) {
return;
}
opts.onProgress(token);
reply += token;
},
opts.abortController || new AbortController()
);
} else {
result = await this.getCompletion(
payload,
null,
opts.abortController || new AbortController()
);
if (this.options.debug) {
console.debug(JSON.stringify(result));
}
if (this.isChatGptModel) {
reply = result.choices[0].message.content;
} else {
reply = result.choices[0].text.replace(this.endToken, '');
}
}
if (this.options.debug) {
console.debug();
}
return reply.trim();
}
async executorCall(message, signal) {
let errorMessage = '';
const maxAttempts = 1;
for (let attempts = 1; attempts <= maxAttempts; attempts++) {
const errorInput = this.buildErrorInput(message, errorMessage);
const input = attempts > 1 ? errorInput : message;
if (this.options.debug) {
console.debug(`Attempt ${attempts} of ${maxAttempts}`);
}
if (this.options.debug && errorMessage.length > 0) {
console.debug('Caught error, input:', input);
}
try {
this.result = await this.executor.call({ input, signal });
break; // Exit the loop if the function call is successful
} catch (err) {
console.error(err);
errorMessage = err.message;
if (attempts === maxAttempts) {
this.result.output = `Encountered an error while attempting to respond. Error: ${err.message}`;
this.result.intermediateSteps = this.actions;
this.result.errorMessage = errorMessage;
break;
}
}
}
}
async sendMessage(message, opts = {}) {
if (opts && typeof opts === 'object') {
this.setOptions(opts);
}
console.log('sendMessage', message, opts);
const user = opts.user || null;
const { onAgentAction, onChainEnd, onProgress } = opts;
const conversationId = opts.conversationId || crypto.randomUUID();
const parentMessageId = opts.parentMessageId || '00000000-0000-0000-0000-000000000000';
const userMessageId = opts.overrideParentMessageId || crypto.randomUUID();
const responseMessageId = crypto.randomUUID();
this.pastMessages = await this.loadHistory(conversationId, this.options?.parentMessageId);
const userMessage = {
messageId: userMessageId,
parentMessageId,
conversationId,
sender: 'User',
text: message,
isCreatedByUser: true
};
if (typeof opts?.getIds === 'function') {
opts.getIds({
userMessage,
conversationId,
responseMessageId
});
}
if (typeof opts?.onStart === 'function') {
opts.onStart(userMessage);
}
await this.saveMessageToDatabase(userMessage, user);
this.result = {};
const responseMessage = {
messageId: responseMessageId,
conversationId,
parentMessageId: userMessage.messageId,
isCreatedByUser: false,
model: this.modelOptions.model,
sender: 'ChatGPT'
};
if (this.options.debug) {
console.debug('options');
console.debug(this.options);
}
const completionMode = this.options.tools.length === 0;
if (!completionMode) {
await this.initialize({
user,
message,
onAgentAction,
onChainEnd,
signal: opts.abortController.signal
});
await this.executorCall(message, opts.abortController.signal);
}
// If message was aborted mid-generation
if (this.result?.errorMessage?.length > 0 && this.result?.errorMessage?.includes('cancel')) {
responseMessage.text = 'Cancelled.';
await this.saveMessageToDatabase(responseMessage, user);
return { ...responseMessage, ...this.result };
}
if (!this.agentIsGpt3 && this.result.output) {
responseMessage.text = this.result.output;
await this.saveMessageToDatabase(responseMessage, user);
const textStream = new TextStream(this.result.output);
await textStream.processTextStream(onProgress);
return { ...responseMessage, ...this.result };
}
if (this.options.debug) {
console.debug('this.result', this.result);
}
const userProvidedPrefix = completionMode && this.options?.promptPrefix?.length > 0;
const promptPrefix = userProvidedPrefix
? this.options.promptPrefix
: this.buildPromptPrefix(this.result, message);
if (this.options.debug) {
console.debug('promptPrefix', promptPrefix);
}
const finalReply = await this.sendApiMessage(this.currentMessages, userMessage, { ...opts, completionMode, promptPrefix });
responseMessage.text = finalReply;
await this.saveMessageToDatabase(responseMessage, user);
return { ...responseMessage, ...this.result };
}
async buildPrompt({ messages, promptPrefix: _promptPrefix, completionMode = false, isChatGptModel = true }) {
if (this.options.debug) {
console.debug('buildPrompt messages', messages);
}
const orderedMessages = messages;
let promptPrefix = _promptPrefix;
if (promptPrefix) {
promptPrefix = promptPrefix.trim();
// If the prompt prefix doesn't end with the end token, add it.
if (!promptPrefix.endsWith(`${this.endToken}`)) {
promptPrefix = `${promptPrefix.trim()}${this.endToken}\n\n`;
}
promptPrefix = `${this.startToken}Instructions:\n${promptPrefix}`;
} else {
promptPrefix = `${this.startToken}${completionInstructions} ${this.currentDateString}${this.endToken}\n\n`;
}
const promptSuffix = `${this.startToken}${this.chatGptLabel}:\n`; // Prompt ChatGPT to respond.
const instructionsPayload = {
role: 'system',
name: 'instructions',
content: promptPrefix
};
const messagePayload = {
role: 'system',
content: promptSuffix
};
if (this.isGpt3) {
instructionsPayload.role = 'user';
messagePayload.role = 'user';
}
if (this.isGpt3 && completionMode) {
instructionsPayload.content += `\n${promptSuffix}`;
}
// testing if this works with browser endpoint
if (!this.isGpt3 && this.reverseProxyUrl) {
instructionsPayload.role = 'user';
}
let currentTokenCount;
if (isChatGptModel) {
currentTokenCount =
this.getTokenCountForMessage(instructionsPayload) +
this.getTokenCountForMessage(messagePayload);
} else {
currentTokenCount = this.getTokenCount(`${promptPrefix}${promptSuffix}`);
}
let promptBody = '';
const maxTokenCount = this.maxPromptTokens;
// Iterate backwards through the messages, adding them to the prompt until we reach the max token count.
// Do this within a recursive async function so that it doesn't block the event loop for too long.
const buildPromptBody = async () => {
if (currentTokenCount < maxTokenCount && orderedMessages.length > 0) {
const message = orderedMessages.pop();
// const roleLabel = message.role === 'User' ? this.userLabel : this.chatGptLabel;
const roleLabel = message.role;
let messageString = `${this.startToken}${roleLabel}:\n${message.text}${this.endToken}\n`;
let newPromptBody;
if (promptBody || isChatGptModel) {
newPromptBody = `${messageString}${promptBody}`;
} else {
// Always insert prompt prefix before the last user message, if not gpt-3.5-turbo.
// This makes the AI obey the prompt instructions better, which is important for custom instructions.
// After a bunch of testing, it doesn't seem to cause the AI any confusion, even if you ask it things
// like "what's the last thing I wrote?".
newPromptBody = `${promptPrefix}${messageString}${promptBody}`;
}
const tokenCountForMessage = this.getTokenCount(messageString);
const newTokenCount = currentTokenCount + tokenCountForMessage;
if (newTokenCount > maxTokenCount) {
if (promptBody) {
// This message would put us over the token limit, so don't add it.
return false;
}
// This is the first message, so we can't add it. Just throw an error.
throw new Error(
`Prompt is too long. Max token count is ${maxTokenCount}, but prompt is ${newTokenCount} tokens long.`
);
}
promptBody = newPromptBody;
currentTokenCount = newTokenCount;
// wait for next tick to avoid blocking the event loop
await new Promise((resolve) => setTimeout(resolve, 0));
return buildPromptBody();
}
return true;
};
await buildPromptBody();
// const prompt = `${promptBody}${promptSuffix}`;
const prompt = promptBody;
if (isChatGptModel) {
messagePayload.content = prompt;
// Add 2 tokens for metadata after all messages have been counted.
currentTokenCount += 2;
}
if (this.isGpt3 && messagePayload.content.length > 0) {
const context = `Chat History:\n`;
messagePayload.content = `${context}${prompt}`;
currentTokenCount += this.getTokenCount(context);
}
// Use up to `this.maxContextTokens` tokens (prompt + response), but try to leave `this.maxTokens` tokens for the response.
this.modelOptions.max_tokens = Math.min(
this.maxContextTokens - currentTokenCount,
this.maxResponseTokens
);
if (this.isGpt3 && !completionMode) {
messagePayload.content += promptSuffix;
return [instructionsPayload, messagePayload];
}
if (isChatGptModel) {
const result = [messagePayload, instructionsPayload];
return result.filter((message) => message.content.length > 0);
}
this.completionPromptTokens = currentTokenCount;
return prompt;
}
getTokenCount(text) {
return this.gptEncoder.encode(text, 'all').length;
}
/**
* Algorithm adapted from "6. Counting tokens for chat API calls" of
* https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
*
* An additional 2 tokens need to be added for metadata after all messages have been counted.
*
* @param {*} message
*/
getTokenCountForMessage(message) {
// Map each property of the message to the number of tokens it contains
const propertyTokenCounts = Object.entries(message).map(([key, value]) => {
// Count the number of tokens in the property value
const numTokens = this.getTokenCount(value);
// Subtract 1 token if the property key is 'name'
const adjustment = key === 'name' ? 1 : 0;
return numTokens - adjustment;
});
// Sum the number of tokens in all properties and add 4 for metadata
return propertyTokenCounts.reduce((a, b) => a + b, 4);
}
/**
* Iterate through messages, building an array based on the parentMessageId.
* Each message has an id and a parentMessageId. The parentMessageId is the id of the message that this message is a reply to.
* @param messages
* @param parentMessageId
* @returns {*[]} An array containing the messages in the order they should be displayed, starting with the root message.
*/
static getMessagesForConversation(messages, parentMessageId) {
const orderedMessages = [];
let currentMessageId = parentMessageId;
while (currentMessageId) {
// eslint-disable-next-line no-loop-func
const message = messages.find((m) => m.messageId === currentMessageId);
if (!message) {
break;
}
orderedMessages.unshift(message);
currentMessageId = message.parentMessageId;
}
if (orderedMessages.length === 0) {
return [];
}
return orderedMessages.map((msg) => ({
messageId: msg.messageId,
parentMessageId: msg.parentMessageId,
role: msg.isCreatedByUser ? 'User' : 'ChatGPT',
text: msg.text
}));
}
/**
* Extracts the action tool values from the intermediate steps array.
* Each step object in the array contains an action object with a tool property.
* This function returns the unique tool values joined into a single string.
*
* @param {Object[]} intermediateSteps - An array of intermediate step objects.
* @returns {string} A string of the action tool values from each step.
*/
extractToolValues(intermediateSteps) {
const tools = intermediateSteps.map((step) => step.action.tool);
if (tools.length === 0) {
return '';
}
const uniqueTools = [...new Set(tools)];
if (tools.length === 1) {
return tools[0] + ' plugin';
}
return uniqueTools.join(' plugin, ') + ' plugin';
}
}
module.exports = ChatAgent;
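A tiny worked example of the parent-chain walk in getMessagesForConversation above, using hypothetical ids: starting from the given parentMessageId, messages are unshifted back to the root, yielding display order.

const messages = [
  { messageId: '1', parentMessageId: null, isCreatedByUser: true, text: 'Hi' },
  { messageId: '2', parentMessageId: '1', isCreatedByUser: false, text: 'Hello!' },
  { messageId: '3', parentMessageId: '2', isCreatedByUser: true, text: 'How are you?' }
];
const ordered = ChatAgent.getMessagesForConversation(messages, '3');
console.log(ordered.map((m) => m.role)); // => [ 'User', 'ChatGPT', 'User' ]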

View File

@@ -1,92 +0,0 @@
const mongoose = require('mongoose');
const ChatAgent = require('./ChatAgent');
const connectDb = require('../../lib/db/connectDb');
const Conversation = require('../../models/Conversation');
describe('ChatAgent', () => {
let TestAgent;
let options = {
tools: [],
modelOptions: {
model: 'gpt-3.5-turbo',
temperature: 0,
max_tokens: 2
},
agentOptions: {
model: 'gpt-3.5-turbo',
}
};
let parentMessageId;
let conversationId;
const userMessage = 'Hello, ChatGPT!';
const apiKey = process.env.OPENAI_API_KEY;
beforeAll(async () => {
await connectDb();
});
beforeEach(() => {
TestAgent = new ChatAgent(apiKey, options);
});
afterAll(async () => {
// Delete the messages and conversation created by the test
await Conversation.deleteConvos(null, { conversationId });
await mongoose.connection.close();
});
test('initializes ChatAgent without crashing', () => {
expect(TestAgent).toBeInstanceOf(ChatAgent);
});
test('check setOptions function', () => {
expect(TestAgent.agentIsGpt3).toBe(true);
});
describe('sendMessage', () => {
test('sendMessage should return a response message', async () => {
const expectedResult = expect.objectContaining({
sender: 'ChatGPT',
text: expect.any(String),
isCreatedByUser: false,
messageId: expect.any(String),
parentMessageId: expect.any(String),
conversationId: expect.any(String)
});
const response = await TestAgent.sendMessage(userMessage);
console.log(response);
parentMessageId = response.messageId;
conversationId = response.conversationId;
expect(response).toEqual(expectedResult);
});
test('sendMessage should work with provided conversationId and parentMessageId', async () => {
const userMessage = 'Second message in the conversation';
const opts = {
conversationId,
parentMessageId
};
const expectedResult = expect.objectContaining({
sender: 'ChatGPT',
text: expect.any(String),
isCreatedByUser: false,
messageId: expect.any(String),
parentMessageId: expect.any(String),
conversationId: opts.conversationId
});
const response = await TestAgent.sendMessage(userMessage, opts);
parentMessageId = response.messageId;
expect(response.conversationId).toEqual(conversationId);
expect(response).toEqual(expectedResult);
});
test('should return chat history', async () => {
const chatMessages = await TestAgent.loadHistory(conversationId, parentMessageId);
expect(TestAgent.currentMessages).toHaveLength(4);
expect(chatMessages[0].text).toEqual(userMessage);
});
});
});

View File

@@ -1,77 +0,0 @@
const {
ChainStepExecutor,
LLMPlanner,
PlanOutputParser,
PlanAndExecuteAgentExecutor
} = require('langchain/experimental/plan_and_execute');
const { LLMChain } = require('langchain/chains');
const { ChatAgent, AgentExecutor } = require('langchain/agents');
const { BufferMemory, ChatMessageHistory } = require('langchain/memory');
const {
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate
} = require('langchain/prompts');
const DEFAULT_STEP_EXECUTOR_HUMAN_CHAT_MESSAGE_TEMPLATE = `{chat_history}
Previous steps: {previous_steps}
Current objective: {current_step}
{agent_scratchpad}
You may extract and combine relevant data from your previous steps when responding to me.`;
const PLANNER_SYSTEM_PROMPT_MESSAGE_TEMPLATE = [
`Let's first understand the problem and devise a plan to solve the problem.`,
`Please output the plan starting with the header "Plan:"`,
`and then followed by a numbered list of steps.`,
`Please make the plan the minimum number of steps required`,
`to answer the query or complete the task accurately and precisely.`,
`Your steps should be general, and should not require a specific method to solve a step. If the task is a question,`,
`the final step in the plan must be the following: "Given the above steps taken,`,
`please respond to the original query."`,
`At the end of your plan, say "<END_OF_PLAN>"`
].join(' ');
const PLANNER_CHAT_PROMPT = /* #__PURE__ */ ChatPromptTemplate.fromPromptMessages([
/* #__PURE__ */ SystemMessagePromptTemplate.fromTemplate(PLANNER_SYSTEM_PROMPT_MESSAGE_TEMPLATE),
/* #__PURE__ */ HumanMessagePromptTemplate.fromTemplate(`{input}`)
]);
const initializePAEAgent = async ({ tools: _tools, model: llm, pastMessages, ...rest }) => {
// removed currentDateString
const tools = _tools.filter((tool) => tool.name !== 'self-reflection');
const memory = new BufferMemory({
chatHistory: new ChatMessageHistory(pastMessages),
// returnMessages: true, // commenting this out retains memory
memoryKey: 'chat_history',
humanPrefix: 'User',
aiPrefix: 'Assistant',
inputKey: 'input',
outputKey: 'output'
});
const plannerLlmChain = new LLMChain({
llm,
prompt: PLANNER_CHAT_PROMPT,
memory
});
const planner = new LLMPlanner(plannerLlmChain, new PlanOutputParser());
const agent = ChatAgent.fromLLMAndTools(llm, tools, {
humanMessageTemplate: DEFAULT_STEP_EXECUTOR_HUMAN_CHAT_MESSAGE_TEMPLATE
});
const stepExecutor = new ChainStepExecutor(
AgentExecutor.fromAgentAndTools({ agent, tools, memory, ...rest })
);
return new PlanAndExecuteAgentExecutor({
planner,
stepExecutor
});
};
module.exports = {
initializePAEAgent
};
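A hedged usage sketch for the initializer above, following the async IIFE style of the other demos; it assumes model is a ChatOpenAI instance, tools are loaded langchain tools, and pastMessages are HumanChatMessage/AIChatMessage objects:

(async () => {
  const executor = await initializePAEAgent({
    tools,
    model,
    pastMessages,
    verbose: true
  });
  const { output } = await executor.call({ input: 'Summarize the last topic we discussed.' });
  console.log(output);
})();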

View File

@@ -1,31 +0,0 @@
require('dotenv').config();
const { ChatOpenAI } = require( "langchain/chat_models/openai");
const { initializeAgentExecutorWithOptions } = require( "langchain/agents");
const HttpRequestTool = require('../tools/HttpRequestTool');
const AIPluginTool = require('../tools/AIPluginTool');
const run = async () => {
const openAIApiKey = process.env.OPENAI_API_KEY;
const tools = [
new HttpRequestTool(),
await AIPluginTool.fromPluginUrl(
"https://www.klarna.com/.well-known/ai-plugin.json", new ChatOpenAI({ temperature: 0, openAIApiKey })
),
];
const agent = await initializeAgentExecutorWithOptions(
tools,
new ChatOpenAI({ temperature: 0, openAIApiKey }),
{ agentType: "chat-zero-shot-react-description", verbose: true }
);
const result = await agent.call({
input: "what t shirts are available in klarna?",
});
console.log({ result });
};
(async () => {
await run();
})();

View File

@@ -1,47 +0,0 @@
require('dotenv').config();
const fs = require( "fs");
const yaml = require( "js-yaml");
const { OpenAI } = require( "langchain/llms/openai");
const { JsonSpec } = require( "langchain/tools");
const { createOpenApiAgent, OpenApiToolkit } = require( "langchain/agents");
const run = async () => {
let data;
try {
const yamlFile = fs.readFileSync("./app/langchain/demos/klarna.yaml", "utf8");
data = yaml.load(yamlFile);
if (!data) {
throw new Error("Failed to load OpenAPI spec");
}
} catch (e) {
console.error(e);
return;
}
const headers = {
"Content-Type": "application/json",
// Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
};
const model = new OpenAI({ temperature: 0 });
const toolkit = new OpenApiToolkit(new JsonSpec(data), model, headers);
const executor = createOpenApiAgent(model, toolkit, { verbose: true });
const input = `Find me some medium sized blue shirts`;
console.log(`Executing with input "${input}"...`);
const result = await executor.call({ input });
console.log(`Got output ${result.output}`);
console.log(
`Got intermediate steps ${JSON.stringify(
result.intermediateSteps,
null,
2
)}`
);
};
(async () => {
await run();
})();

View File

@@ -1,79 +0,0 @@
openapi: 3.0.1
servers:
- url: https://www.klarna.com/us/shopping
info:
title: Open AI Klarna product Api
version: v0
x-apisguru-categories:
- ecommerce
x-logo:
url: https://www.klarna.com/static/img/social-prod-imagery-blinds-beauty-default.jpg
x-origin:
- format: openapi
url: https://www.klarna.com/us/shopping/public/openai/v0/api-docs/
version: "3.0"
x-providerName: klarna.com
x-serviceName: openai
tags:
- description: Open AI Product Endpoint. Query for products.
name: open-ai-product-endpoint
paths:
/public/openai/v0/products:
get:
deprecated: false
operationId: productsUsingGET
parameters:
- description: A precise query that matches one very small category or product that needs to be searched for to find the products the user is looking for. If the user explicitly stated what they want, use that as a query. The query is as specific as possible to the product name or category mentioned by the user in its singular form, and don't contain any clarifiers like latest, newest, cheapest, budget, premium, expensive or similar. The query is always taken from the latest topic, if there is a new topic a new query is started.
in: query
name: q
required: true
schema:
type: string
- description: number of products returned
in: query
name: size
required: false
schema:
type: integer
- description: maximum price of the matching product in local currency, filters results
in: query
name: budget
required: false
schema:
type: integer
responses:
"200":
content:
application/json:
schema:
$ref: "#/components/schemas/ProductResponse"
description: Products found
"503":
description: one or more services are unavailable
summary: API for fetching Klarna product information
tags:
- open-ai-product-endpoint
components:
schemas:
Product:
properties:
attributes:
items:
type: string
type: array
name:
type: string
price:
type: string
url:
type: string
title: Product
type: object
ProductResponse:
properties:
products:
items:
$ref: "#/components/schemas/Product"
type: array
title: ProductResponse
type: object

View File

@@ -1,32 +0,0 @@
require('dotenv').config();
const { Calculator } = require('langchain/tools/calculator');
const { SerpAPI } = require('langchain/tools');
const { ChatOpenAI } = require('langchain/chat_models/openai');
const { PlanAndExecuteAgentExecutor } = require('langchain/experimental/plan_and_execute');
const tools = [
new Calculator(),
new SerpAPI(process.env.SERPAPI_API_KEY || '', {
location: 'Austin,Texas,United States',
hl: 'en',
gl: 'us'
})
];
const model = new ChatOpenAI({
temperature: 0,
modelName: 'gpt-3.5-turbo',
verbose: true,
openAIApiKey: process.env.OPENAI_API_KEY
});
const executor = PlanAndExecuteAgentExecutor.fromLLMAndTools({
llm: model,
tools
});
(async () => {
const result = await executor.call({
input: `Who is the current president of the United States? What is their current age raised to the second power?`
});
console.log({ result });
})();

File diff suppressed because it is too large

View File

@@ -1,10 +0,0 @@
const SelfReflectionTool = require('./SelfReflection');
const availableTools = require('./manifest.json');
const { validateTools, loadTools } = require('./handleTools');
module.exports = {
validateTools,
loadTools,
availableTools,
SelfReflectionTool
};

View File

@@ -1,21 +1,6 @@
// const { Configuration, OpenAIApi } = require('openai');
const _ = require('lodash');
const { genAzureChatCompletion } = require('../utils/genAzureEndpoints');
// const proxyEnvToAxiosProxy = (proxyString) => {
// if (!proxyString) return null;
// const regex = /^([^:]+):\/\/(?:([^:@]*):?([^:@]*)@)?([^:]+)(?::(\d+))?/;
// const [, protocol, username, password, host, port] = proxyString.match(regex);
// const proxyConfig = {
// protocol,
// host,
// port: port ? parseInt(port) : undefined,
// auth: username && password ? { username, password } : undefined
// };
// return proxyConfig;
// };
const { genAzureChatCompletion, getAzureCredentials } = require('../utils/');
const titleConvo = async ({ text, response, oaiApiKey }) => {
let title = 'New Chat';
@@ -34,7 +19,7 @@ const titleConvo = async ({ text, response, oaiApiKey }) => {
||>Title:`
};
const azure = process.env.AZURE_OPENAI_API_KEY ? true : false;
const azure = process.env.AZURE_API_KEY ? true : false;
const options = {
azure,
reverseProxyUrl: process.env.OPENAI_REVERSE_PROXY || null,
@@ -53,12 +38,8 @@ const titleConvo = async ({ text, response, oaiApiKey }) => {
let apiKey = oaiApiKey || process.env.OPENAI_API_KEY;
if (azure) {
apiKey = process.env.AZURE_OPENAI_API_KEY;
titleGenClientOptions.reverseProxyUrl = genAzureChatCompletion({
azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION
});
apiKey = process.env.AZURE_API_KEY;
titleGenClientOptions.reverseProxyUrl = genAzureChatCompletion(getAzureCredentials());
}
const titleGenClient = new ChatGPTClient(apiKey, titleGenClientOptions);

View File

@@ -14,6 +14,7 @@ module.exports = {
error,
unfinished,
cancelled,
tokenCount = null,
plugin = null,
model = null,
}) {
@@ -31,6 +32,7 @@ module.exports = {
error,
unfinished,
cancelled,
tokenCount,
plugin,
model
},
@@ -43,14 +45,41 @@ module.exports = {
parentMessageId,
sender,
text,
isCreatedByUser
isCreatedByUser,
tokenCount,
};
} catch (err) {
console.error(`Error saving message: ${err}`);
throw new Error('Failed to save message.');
}
},
async updateMessage(message) {
try {
const { messageId, ...update } = message;
const updatedMessage = await Message.findOneAndUpdate(
{ messageId },
update,
{ new: true }
);
if (!updatedMessage) {
throw new Error('Message not found.');
}
return {
messageId: updatedMessage.messageId,
conversationId: updatedMessage.conversationId,
parentMessageId: updatedMessage.parentMessageId,
sender: updatedMessage.sender,
text: updatedMessage.text,
isCreatedByUser: updatedMessage.isCreatedByUser,
tokenCount: updatedMessage.tokenCount,
};
} catch (err) {
console.error(`Error updating message: ${err}`);
throw new Error('Failed to update message.');
}
},
async deleteMessagesSince({ messageId, conversationId }) {
try {
const message = await Message.findOne({ messageId }).exec();
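
A minimal usage sketch for the new updateMessage method (the messageId and fields are hypothetical): it destructures messageId as the lookup key, passes the remaining fields to findOneAndUpdate, and throws if no document matches:

const { updateMessage } = require('../models');
(async () => {
  // Hypothetical: attach a token count to a previously saved message
  const updated = await updateMessage({
    messageId: 'abc-123',          // lookup key; throws if no message matches
    tokenCount: 42,                // remaining fields become the update payload
    text: 'edited response text'
  });
  console.log(updated.tokenCount); // 42
})();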

View File

@@ -25,7 +25,7 @@ const userSchema = mongoose.Schema(
type: String,
lowercase: true,
required: [true, "can't be blank"],
match: [/^[a-zA-Z0-9_]+$/, 'is invalid'],
match: [/^[a-zA-Z0-9_-]+$/, 'is invalid'],
index: true
},
email: {
@@ -45,7 +45,7 @@ const userSchema = mongoose.Schema(
type: String,
trim: true,
minlength: 8,
maxlength: 60
maxlength: 128
},
avatar: {
type: String,
@@ -65,6 +65,16 @@ const userSchema = mongoose.Schema(
unique: true,
sparse: true
},
openidId: {
type: String,
unique: true,
sparse: true
},
githubId: {
type: String,
unique: true,
sparse: true
},
plugins: {
type: Array,
default: []
@@ -157,9 +167,9 @@ module.exports.validateUser = (user) => {
username: Joi.string()
.min(2)
.max(80)
.regex(/^[a-zA-Z0-9_]+$/)
.regex(/^[a-zA-Z0-9_-]+$/)
.required(),
password: Joi.string().min(8).max(60).allow('').allow(null)
password: Joi.string().min(8).max(128).allow('').allow(null)
};
return schema.validate(user);
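
The relaxed pattern now accepts hyphens, which likely matters once GitHub logins (whose usernames may contain hyphens) are stored against the same field. A quick check with hypothetical inputs:

const usernameRegex = /^[a-zA-Z0-9_-]+$/;
console.log(usernameRegex.test('john_doe')); // true (unchanged)
console.log(usernameRegex.test('john-doe')); // true (newly allowed)
console.log(usernameRegex.test('john doe')); // false (still rejected)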

View File

@@ -1,10 +1,11 @@
const { getMessages, saveMessage, deleteMessagesSince, deleteMessages } = require('./Message');
const { getMessages, saveMessage, updateMessage, deleteMessagesSince, deleteMessages } = require('./Message');
const { getConvoTitle, getConvo, saveConvo } = require('./Conversation');
const { getPreset, getPresets, savePreset, deletePresets } = require('./Preset');
module.exports = {
getMessages,
saveMessage,
updateMessage,
deleteMessagesSince,
deleteMessages,

View File

@@ -31,6 +31,12 @@ const messageSchema = mongoose.Schema(
type: String
// required: true
},
tokenCount: {
type: Number
},
refinedTokenCount: {
type: Number
},
sender: {
type: String,
required: true,
@@ -41,6 +47,9 @@ const messageSchema = mongoose.Schema(
required: true,
meiliIndex: true
},
refinedMessageText: {
type: String
},
isCreatedByUser: {
type: Boolean,
required: true,

View File

@@ -1,6 +0,0 @@
{
"ignore": [
"api/data/",
"data"
]
}

View File

@@ -1,19 +1,12 @@
{
"name": "chat-backend",
"version": "0.5.0",
"name": "@librechat/backend",
"version": "0.5.3",
"description": "",
"scripts": {
"start": "echo 'please run this from the root directory'",
"server-dev": "echo 'please run this from the root directory'",
"watch": "cross-env NODE_ENV=development npx nodemon server/index.js",
"test": "cross-env NODE_ENV=test jest",
"test:ci": "jest --ci",
"test2": "node --inspect app/langchain/test2.js",
"test3": "node --inspect app/langchain/test3.js",
"test4": "node --inspect app/langchain/test4.js",
"test5": "node --inspect app/langchain/test5.js",
"test8": "node --inspect app/langchain/test8.js",
"langchain": "node app/langchain/test2.js"
"test:ci": "jest --ci"
},
"repository": {
"type": "git",
@@ -39,6 +32,7 @@
"dotenv": "^16.0.3",
"eslint": "^8.41.0",
"express": "^4.18.2",
"express-session": "^1.17.3",
"googleapis": "^118.0.0",
"handlebars": "^4.7.7",
"html": "^1.0.0",
@@ -47,12 +41,13 @@
"jsonwebtoken": "^9.0.0",
"keyv": "^4.5.2",
"keyv-file": "^0.2.0",
"langchain": "^0.0.92",
"langchain": "^0.0.103",
"lodash": "^4.17.21",
"meilisearch": "^0.33.0",
"mongoose": "^7.1.1",
"nodemailer": "^6.9.1",
"openai": "^3.2.1",
"openid-client": "^5.4.2",
"passport": "^0.6.0",
"passport-facebook": "^3.0.0",
"passport-google-oauth20": "^2.0.0",
@@ -65,6 +60,7 @@
"devDependencies": {
"jest": "^29.5.0",
"nodemon": "^2.0.20",
"path": "^0.12.7"
"path": "^0.12.7",
"supertest": "^6.3.3"
}
}

View File

@@ -25,7 +25,7 @@ const isPluginAuthenticated = (plugin) => {
const getAvailablePluginsController = async (req, res) => {
try {
fs.readFile(
path.join(__dirname, '..', '..', 'app', 'langchain', 'tools', 'manifest.json'),
path.join(__dirname, '..', '..', 'app', 'clients', 'tools', 'manifest.json'),
'utf8',
(err, data) => {
if (err) {

View File

@@ -1,11 +1,12 @@
const express = require('express');
const session = require('express-session');
const connectDb = require('../lib/db/connectDb');
const migrateDb = require('../lib/db/migrateDb');
const indexSync = require('../lib/db/indexSync');
const path = require('path');
const cors = require('cors');
const routes = require('./routes');
const errorController = require('./controllers/error.controller');
const errorController = require('./controllers/ErrorController');
const passport = require('passport');
const port = process.env.PORT || 3080;
const host = process.env.HOST || 'localhost';
@@ -41,6 +42,20 @@ config.validate(); // Validate the config
if (process.env.FACEBOOK_CLIENT_ID && process.env.FACEBOOK_CLIENT_SECRET) {
require('../strategies/facebookStrategy');
}
if (process.env.GITHUB_CLIENT_ID && process.env.GITHUB_CLIENT_SECRET) {
require('../strategies/githubStrategy');
}
if (process.env.OPENID_CLIENT_ID && process.env.OPENID_CLIENT_SECRET &&
process.env.OPENID_ISSUER && process.env.OPENID_SCOPE &&
process.env.OPENID_SESSION_SECRET) {
app.use(session({
secret: process.env.OPENID_SESSION_SECRET,
resave: false,
saveUninitialized: false
}));
app.use(passport.session());
require('../strategies/openidStrategy');
}
app.use('/oauth', routes.oauth);
// api endpoint
app.use('/api/auth', routes.auth);
@@ -54,6 +69,7 @@ config.validate(); // Validate the config
app.use('/api/tokenizer', routes.tokenizer);
app.use('/api/endpoints', routes.endpoints);
app.use('/api/plugins', routes.plugins);
app.use('/api/config', routes.config);
// static files
app.get('/*', function (req, res) {

View File

@@ -0,0 +1,56 @@
const request = require('supertest');
const express = require('express');
const routes = require('../');
const app = express();
app.use('/api/config', routes.config);
afterEach(() => {
delete process.env.APP_TITLE;
delete process.env.GOOGLE_CLIENT_ID;
delete process.env.GOOGLE_CLIENT_SECRET;
delete process.env.OPENID_CLIENT_ID;
delete process.env.OPENID_CLIENT_SECRET;
delete process.env.OPENID_ISSUER;
delete process.env.OPENID_SESSION_SECRET;
delete process.env.OPENID_BUTTON_LABEL;
delete process.env.OPENID_AUTH_URL;
delete process.env.GITHUB_CLIENT_ID;
delete process.env.GITHUB_CLIENT_SECRET;
delete process.env.DOMAIN_SERVER;
delete process.env.ALLOW_REGISTRATION;
});
//TODO: This works/passes locally but http request tests fail with 404 in CI. Need to figure out why.
// eslint-disable-next-line jest/no-disabled-tests
describe.skip('GET /', () => {
it('should return 200 and the correct body', async () => {
process.env.APP_TITLE = 'Test Title';
process.env.GOOGLE_CLIENT_ID = 'Test Google Client Id';
process.env.GOOGLE_CLIENT_SECRET = 'Test Google Client Secret';
process.env.OPENID_CLIENT_ID = 'Test OpenID Id';
process.env.OPENID_CLIENT_SECRET = 'Test OpenID Secret';
process.env.OPENID_ISSUER = 'Test OpenID Issuer';
process.env.OPENID_SESSION_SECRET = 'Test Secret';
process.env.OPENID_BUTTON_LABEL = 'Test OpenID';
process.env.OPENID_AUTH_URL = 'http://test-server.com';
process.env.GITHUB_CLIENT_ID = 'Test Github client Id';
process.env.GITHUB_CLIENT_SECRET = 'Test Github client Secret';
process.env.DOMAIN_SERVER = 'http://test-server.com';
process.env.ALLOW_REGISTRATION = 'true';
const response = await request(app).get('/');
expect(response.statusCode).toBe(200);
expect(response.body).toEqual({
appTitle: 'Test Title',
googleLoginEnabled: true,
openidLoginEnabled: true,
openidLabel: 'Test OpenID',
openidImageUrl: 'http://test-server.com',
githubLoginEnabled: true,
serverDomain: 'http://test-server.com',
registrationEnabled: 'true',
});
});
});

View File

@@ -1,7 +1,7 @@
const express = require('express');
const crypto = require('crypto');
const router = express.Router();
const { getChatGPTBrowserModels } = require('../endpoints');
// const { getChatGPTBrowserModels } = require('../endpoints');
const { browserClient } = require('../../../app/');
const { saveMessage, getConvoTitle, saveConvo, getConvo } = require('../../../models');
const { handleError, sendMessage, createOnProgress, handleText } = require('./handlers');
@@ -38,9 +38,9 @@ router.post('/', requireJwtAuth, async (req, res) => {
token: req.body?.token ?? null
};
const availableModels = getChatGPTBrowserModels();
if (availableModels.find((model) => model === endpointOption.model) === undefined)
return handleError(res, { text: 'Illegal request: model' });
// const availableModels = getChatGPTBrowserModels();
// if (availableModels.find((model) => model === endpointOption.model) === undefined)
// return handleError(res, { text: 'Illegal request: model' });
console.log('ask log', {
userMessage,

View File

@@ -1,286 +0,0 @@
const express = require('express');
const crypto = require('crypto');
const router = express.Router();
const addToCache = require('./addToCache');
const { getOpenAIModels } = require('../endpoints');
const { titleConvo, askClient } = require('../../../app/');
const { saveMessage, getConvoTitle, saveConvo, getConvo } = require('../../../models');
const { handleError, sendMessage, createOnProgress, handleText } = require('./handlers');
const requireJwtAuth = require('../../../middleware/requireJwtAuth');
const abortControllers = new Map();
router.post('/abort', requireJwtAuth, async (req, res) => {
const { abortKey } = req.body;
console.log(`req.body`, req.body);
if (!abortControllers.has(abortKey)) {
return res.status(404).send('Request not found');
}
const { abortController } = abortControllers.get(abortKey);
abortControllers.delete(abortKey);
const ret = await abortController.abortAsk();
console.log('Aborted request', abortKey);
console.log('Aborted message:', ret);
res.send(JSON.stringify(ret));
});
router.post('/', requireJwtAuth, async (req, res) => {
const {
endpoint,
text,
overrideParentMessageId = null,
parentMessageId,
conversationId: oldConversationId
} = req.body;
if (text.length === 0) return handleError(res, { text: 'Prompt empty or too short' });
if (endpoint !== 'openAI') return handleError(res, { text: 'Illegal request' });
// build user message
const conversationId = oldConversationId || crypto.randomUUID();
const isNewConversation = !oldConversationId;
const userMessageId = crypto.randomUUID();
const userParentMessageId = parentMessageId || '00000000-0000-0000-0000-000000000000';
const userMessage = {
messageId: userMessageId,
sender: 'User',
text,
parentMessageId: userParentMessageId,
conversationId,
isCreatedByUser: true
};
// build endpoint option
const endpointOption = {
model: req.body?.model ?? 'gpt-3.5-turbo',
chatGptLabel: req.body?.chatGptLabel ?? null,
promptPrefix: req.body?.promptPrefix ?? null,
temperature: req.body?.temperature ?? 1,
top_p: req.body?.top_p ?? 1,
presence_penalty: req.body?.presence_penalty ?? 0,
frequency_penalty: req.body?.frequency_penalty ?? 0
};
const availableModels = getOpenAIModels();
if (availableModels.find((model) => model === endpointOption.model) === undefined)
return handleError(res, { text: 'Illegal request: model' });
console.log('ask log', {
userMessage,
endpointOption,
conversationId
});
if (!overrideParentMessageId) {
await saveMessage(userMessage);
await saveConvo(req.user.id, {
...userMessage,
...endpointOption,
conversationId,
endpoint
});
}
// eslint-disable-next-line no-use-before-define
return await ask({
isNewConversation,
userMessage,
endpointOption,
conversationId,
preSendRequest: true,
overrideParentMessageId,
req,
res
});
});
const ask = async ({
isNewConversation,
userMessage,
endpointOption,
conversationId,
preSendRequest = true,
overrideParentMessageId = null,
req,
res
}) => {
let { text, parentMessageId: userParentMessageId, messageId: userMessageId } = userMessage;
const userId = req.user.id;
let responseMessageId = crypto.randomUUID();
res.writeHead(200, {
Connection: 'keep-alive',
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache, no-transform',
'Access-Control-Allow-Origin': '*',
'X-Accel-Buffering': 'no'
});
if (preSendRequest) sendMessage(res, { message: userMessage, created: true });
try {
let lastSavedTimestamp = 0;
const { onProgress: progressCallback, getPartialText } = createOnProgress({
onProgress: ({ text }) => {
const currentTimestamp = Date.now();
if (currentTimestamp - lastSavedTimestamp > 500) {
lastSavedTimestamp = currentTimestamp;
saveMessage({
messageId: responseMessageId,
sender: endpointOption?.chatGptLabel || 'ChatGPT',
conversationId,
parentMessageId: overrideParentMessageId || userMessageId,
text: text,
unfinished: true,
cancelled: false,
error: false
});
}
}
});
let abortController = new AbortController();
abortController.abortAsk = async function () {
this.abort();
const responseMessage = {
messageId: responseMessageId,
sender: endpointOption?.chatGptLabel || 'ChatGPT',
conversationId,
parentMessageId: overrideParentMessageId || userMessageId,
text: getPartialText(),
unfinished: false,
cancelled: true,
error: false
};
saveMessage(responseMessage);
await addToCache({ endpoint: 'openAI', endpointOption, userMessage, responseMessage });
return {
title: await getConvoTitle(req.user.id, conversationId),
final: true,
conversation: await getConvo(req.user.id, conversationId),
requestMessage: userMessage,
responseMessage: responseMessage
};
};
const abortKey = conversationId;
abortControllers.set(abortKey, { abortController, ...endpointOption });
const oaiApiKey = req.body?.token ?? null;
let response = await askClient({
text,
parentMessageId: userParentMessageId,
conversationId,
oaiApiKey,
...endpointOption,
onProgress: progressCallback.call(null, {
res,
text,
parentMessageId: overrideParentMessageId || userMessageId
}),
abortController,
userId
});
abortControllers.delete(abortKey);
console.log('CLIENT RESPONSE', response);
const newConversationId = response.conversationId || conversationId;
const newUserMessageId = response.parentMessageId || userMessageId;
const newResponseMessageId = response.messageId;
// STEP1 generate response message
response.text = response.response || '**ChatGPT refused to answer.**';
let responseMessage = {
conversationId: newConversationId,
messageId: responseMessageId,
newMessageId: newResponseMessageId,
parentMessageId: overrideParentMessageId || newUserMessageId,
text: await handleText(response),
sender: endpointOption?.chatGptLabel || 'ChatGPT',
unfinished: false,
cancelled: false,
error: false
};
await saveMessage(responseMessage);
responseMessage.messageId = newResponseMessageId;
// STEP2 update the conversation
let conversationUpdate = { conversationId: newConversationId, endpoint: 'openAI' };
if (conversationId != newConversationId)
if (isNewConversation) {
// change the conversationId to new one
conversationUpdate = {
...conversationUpdate,
conversationId: conversationId,
newConversationId: newConversationId
};
} else {
// create new conversation
conversationUpdate = {
...conversationUpdate,
...endpointOption
};
}
await saveConvo(req.user.id, conversationUpdate);
conversationId = newConversationId;
// STEP3 update the user message
userMessage.conversationId = newConversationId;
userMessage.messageId = newUserMessageId;
// If response has parentMessageId, the fake userMessage.messageId should be updated to the real one.
if (!overrideParentMessageId)
await saveMessage({
...userMessage,
messageId: userMessageId,
newMessageId: newUserMessageId
});
userMessageId = newUserMessageId;
sendMessage(res, {
title: await getConvoTitle(req.user.id, conversationId),
final: true,
conversation: await getConvo(req.user.id, conversationId),
requestMessage: userMessage,
responseMessage: responseMessage
});
res.end();
if (userParentMessageId == '00000000-0000-0000-0000-000000000000') {
const title = await titleConvo({
endpoint: endpointOption?.endpoint,
text,
response: responseMessage,
oaiApiKey
});
await saveConvo(req.user.id, {
conversationId: conversationId,
title
});
}
} catch (error) {
console.error(error);
const errorMessage = {
messageId: responseMessageId,
sender: endpointOption?.chatGptLabel || 'ChatGPT',
conversationId,
parentMessageId: overrideParentMessageId || userMessageId,
unfinished: false,
cancelled: false,
error: true,
text: error.message
};
await saveMessage(errorMessage);
handleError(res, errorMessage);
}
};
module.exports = router;

View File

@@ -1,8 +1,8 @@
const express = require('express');
const router = express.Router();
const crypto = require('crypto');
const { titleConvo } = require('../../../app/');
const GoogleClient = require('../../../app/google/GoogleClient');
const { titleConvo, GoogleClient } = require('../../../app');
// const GoogleClient = require('../../../app/google/GoogleClient');
const { saveMessage, getConvoTitle, saveConvo, getConvo } = require('../../../models');
const { handleError, sendMessage, createOnProgress } = require('./handlers');
const requireJwtAuth = require('../../../middleware/requireJwtAuth');
@@ -27,7 +27,7 @@ router.post('/', requireJwtAuth, async (req, res) => {
}
};
const availableModels = ['chat-bison', 'text-bison'];
const availableModels = ['chat-bison', 'text-bison', 'codechat-bison'];
if (availableModels.find((model) => model === endpointOption.modelOptions.model) === undefined) {
return handleError(res, { text: `Illegal request: model` });
}

View File

@@ -1,9 +1,7 @@
const express = require('express');
const router = express.Router();
const { titleConvo } = require('../../../app/');
const { getOpenAIModels } = require('../endpoints');
const ChatAgent = require('../../../app/langchain/ChatAgent');
const { validateTools } = require('../../../app/langchain/tools');
const { titleConvo, validateTools, PluginsClient } = require('../../../app');
const { abortMessage, getAzureCredentials } = require('../../../utils');
const { saveMessage, getConvoTitle, saveConvo, getConvo } = require('../../../models');
const {
handleError,
@@ -17,20 +15,7 @@ const requireJwtAuth = require('../../../middleware/requireJwtAuth');
const abortControllers = new Map();
router.post('/abort', requireJwtAuth, async (req, res) => {
const { abortKey } = req.body;
console.log(`req.body`, req.body);
if (!abortControllers.has(abortKey)) {
return res.status(404).send('Request not found');
}
const { abortController } = abortControllers.get(abortKey);
abortControllers.delete(abortKey);
const ret = await abortController.abortAsk();
console.log('Aborted request', abortKey);
console.log('Aborted message:', ret);
res.send(JSON.stringify(ret));
return await abortMessage(req, res, abortControllers);
});
router.post('/', requireJwtAuth, async (req, res) => {
@@ -39,14 +24,15 @@ router.post('/', requireJwtAuth, async (req, res) => {
if (endpoint !== 'gptPlugins') return handleError(res, { text: 'Illegal request' });
const agentOptions = req.body?.agentOptions ?? {
agent: 'functions',
skipCompletion: true,
model: 'gpt-3.5-turbo',
// model: 'gpt-4', // for agent model
temperature: 0,
// top_p: 1,
// presence_penalty: 0,
// frequency_penalty: 0
};
const tools = req.body?.tools.map((tool) => tool.pluginKey) ?? [];
// build endpoint option
const endpointOption = {
@@ -60,26 +46,19 @@ router.post('/', requireJwtAuth, async (req, res) => {
presence_penalty: req.body?.presence_penalty ?? 0,
frequency_penalty: req.body?.frequency_penalty ?? 0
},
agentOptions
agentOptions: {
...agentOptions,
// agent: 'functions'
}
};
const availableModels = getOpenAIModels();
if (availableModels.find((model) => model === endpointOption.modelOptions.model) === undefined) {
return handleError(res, { text: `Illegal request: model` });
}
// console.log('ask log', {
// text,
// conversationId,
// endpointOption
// });
console.log('ask log');
console.dir({ text, conversationId, endpointOption }, { depth: null });
// eslint-disable-next-line no-use-before-define
return await ask({
text,
endpoint,
endpointOption,
conversationId,
parentMessageId,
@@ -88,7 +67,7 @@ router.post('/', requireJwtAuth, async (req, res) => {
});
});
const ask = async ({ text, endpointOption, parentMessageId = null, conversationId, req, res }) => {
const ask = async ({ text, endpoint, endpointOption, parentMessageId = null, conversationId, req, res }) => {
res.writeHead(200, {
Connection: 'keep-alive',
'Content-Type': 'text/event-stream',
@@ -182,21 +161,23 @@ const ask = async ({ text, endpointOption, parentMessageId = null, conversationI
endpointOption.tools = await validateTools(user, endpointOption.tools);
const clientOptions = {
debug: true,
endpoint,
reverseProxyUrl: process.env.OPENAI_REVERSE_PROXY || null,
proxy: process.env.PROXY || null,
...endpointOption
};
if (process.env.AZURE_OPENAI_API_KEY) {
clientOptions.azure = {
azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY,
azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION
};
let oaiApiKey = req.body?.token ?? process.env.OPENAI_API_KEY;
if (process.env.PLUGINS_USE_AZURE) {
clientOptions.azure = getAzureCredentials();
oaiApiKey = clientOptions.azure.azureOpenAIApiKey;
}
const chatAgent = new ChatAgent(process.env.OPENAI_API_KEY, clientOptions);
if (oaiApiKey && oaiApiKey.includes('azure') && !clientOptions.azure) {
clientOptions.azure = JSON.parse(req.body?.token) ?? getAzureCredentials();
oaiApiKey = clientOptions.azure.azureOpenAIApiKey;
}
const chatAgent = new PluginsClient(oaiApiKey, clientOptions);
const onAgentAction = (action) => {
const formattedAction = formatAction(action);
@@ -225,6 +206,7 @@ const ask = async ({ text, endpointOption, parentMessageId = null, conversationI
onAgentAction,
onChainEnd,
onStart,
...endpointOption,
onProgress: progressCallback.call(null, {
res,
text,
@@ -238,8 +220,8 @@ const ask = async ({ text, endpointOption, parentMessageId = null, conversationI
response.parentMessageId = overrideParentMessageId;
}
// console.log('CLIENT RESPONSE');
// console.dir(response, { depth: null });
console.log('CLIENT RESPONSE');
console.dir(response, { depth: null });
response.plugin = { ...plugin, loading: false };
await saveMessage(response);
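
A sketch of the environment needed to route the Plugins endpoint through Azure via getAzureCredentials; the key, instance, deployment, and version values below are hypothetical placeholders:

PLUGINS_USE_AZURE=true
AZURE_API_KEY=your-azure-key
AZURE_OPENAI_API_INSTANCE_NAME=my-instance
AZURE_OPENAI_API_DEPLOYMENT_NAME=gpt-35-turbo
AZURE_OPENAI_API_VERSION=2023-03-15-preview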

View File

@@ -91,19 +91,22 @@ const handleText = async (response, bing = false) => {
return text;
};
const isObject = (item) => item && typeof item === 'object' && !Array.isArray(item);
const getString = (input) => isObject(input) ? JSON.stringify(input) : input ;
function formatSteps(steps) {
let output = '';
for (let i = 0; i < steps.length; i++) {
const step = steps[i];
const actionInput = step.action.toolInput;
const actionInput = getString(step.action.toolInput);
const observation = step.observation;
if (actionInput === 'N/A' || observation?.trim()?.length === 0) {
continue;
}
output += `Input: ${actionInput}\nOutput: ${observation}`;
output += `Input: ${actionInput}\nOutput: ${getString(observation)}`;
if (steps.length > 1 && i !== steps.length - 1) {
output += '\n---\n';
@@ -128,12 +131,14 @@ function formatAction(action) {
const formattedAction = {
plugin: capitalizeWords(action.tool) || action.tool,
input: action.toolInput,
input: getString(action.toolInput),
thought: action.log.includes('Thought: ')
? action.log.split('\n')[0].replace('Thought: ', '')
: action.log.split('\n')[0]
};
formattedAction.thought = getString(formattedAction.thought);
if (action.tool.toLowerCase() === 'self-reflection' || formattedAction.plugin === 'N/A') {
formattedAction.inputStr = `{\n\tthought: ${formattedAction.input}${
!formattedAction.thought.includes(formattedAction.input)
@@ -142,7 +147,9 @@ function formatAction(action) {
}\n}`;
formattedAction.inputStr = formattedAction.inputStr.replace('N/A - ', '');
} else {
formattedAction.inputStr = `{\n\tplugin: ${formattedAction.plugin}\n\tinput: ${formattedAction.input}\n\tthought: ${formattedAction.thought}\n}`;
const hasThought = formattedAction.thought.length > 0;
const thought = hasThought ? `\n\tthought: ${formattedAction.thought}` : '';
formattedAction.inputStr = `{\n\tplugin: ${formattedAction.plugin}\n\tinput: ${formattedAction.input}\n${thought}}`;
}
return formattedAction;
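
The new getString guard matters because the functions agent can emit tool inputs as objects rather than strings; a quick illustration with hypothetical inputs:

const isObject = (item) => item && typeof item === 'object' && !Array.isArray(item);
const getString = (input) => (isObject(input) ? JSON.stringify(input) : input);
console.log(getString('blue shirts'));            // 'blue shirts'
console.log(getString({ query: 'blue shirts' })); // '{"query":"blue shirts"}'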

View File

@@ -1,17 +1,18 @@
const express = require('express');
const router = express.Router();
// const askAzureOpenAI = require('./askAzureOpenAI';)
const askOpenAI = require('./askOpenAI');
const askGoogle = require('./askGoogle');
// const askOpenAI = require('./askOpenAI');
const openAI = require('./openAI');
const google = require('./google');
const askBingAI = require('./askBingAI');
const gptPlugins = require('./gptPlugins');
const askChatGPTBrowser = require('./askChatGPTBrowser');
const askGPTPlugins = require('./askGPTPlugins');
// router.use('/azureOpenAI', askAzureOpenAI);
router.use('/openAI', askOpenAI);
router.use('/google', askGoogle);
router.use(['/azureOpenAI', '/openAI'], openAI);
router.use('/google', google);
router.use('/bingAI', askBingAI);
router.use('/chatGPTBrowser', askChatGPTBrowser);
router.use('/gptPlugins', askGPTPlugins);
router.use('/gptPlugins', gptPlugins);
module.exports = router;

View File

@@ -0,0 +1,211 @@
const express = require('express');
const router = express.Router();
const { titleConvo, OpenAIClient } = require('../../../app');
const { getAzureCredentials, abortMessage } = require('../../../utils');
const { saveMessage, getConvoTitle, saveConvo, getConvo } = require('../../../models');
const {
handleError,
sendMessage,
createOnProgress,
} = require('./handlers');
const requireJwtAuth = require('../../../middleware/requireJwtAuth');
const abortControllers = new Map();
router.post('/abort', requireJwtAuth, async (req, res) => {
return await abortMessage(req, res, abortControllers);
});
router.post('/', requireJwtAuth, async (req, res) => {
const { endpoint, text, parentMessageId, conversationId } = req.body;
if (text.length === 0) return handleError(res, { text: 'Prompt empty or too short' });
const isOpenAI = endpoint === 'openAI' || endpoint === 'azureOpenAI';
if (!isOpenAI) return handleError(res, { text: 'Illegal request' });
// build endpoint option
const endpointOption = {
chatGptLabel: req.body?.chatGptLabel ?? null,
promptPrefix: req.body?.promptPrefix ?? null,
modelOptions: {
model: req.body?.model ?? 'gpt-3.5-turbo',
temperature: req.body?.temperature ?? 1,
top_p: req.body?.top_p ?? 1,
presence_penalty: req.body?.presence_penalty ?? 0,
frequency_penalty: req.body?.frequency_penalty ?? 0
}
};
console.log('ask log');
console.dir({ text, conversationId, endpointOption }, { depth: null });
// eslint-disable-next-line no-use-before-define
return await ask({
text,
endpointOption,
conversationId,
parentMessageId,
endpoint,
req,
res
});
});
const ask = async ({ text, endpointOption, parentMessageId = null, endpoint, conversationId, req, res }) => {
res.writeHead(200, {
Connection: 'keep-alive',
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache, no-transform',
'Access-Control-Allow-Origin': '*',
'X-Accel-Buffering': 'no'
});
let userMessage;
let userMessageId;
let responseMessageId;
let lastSavedTimestamp = 0;
const newConvo = !conversationId;
const { overrideParentMessageId = null } = req.body;
const user = req.user.id;
const getIds = (data) => {
userMessage = data.userMessage;
userMessageId = userMessage.messageId;
responseMessageId = data.responseMessageId;
if (!conversationId) {
conversationId = data.conversationId;
}
};
const { onProgress: progressCallback, getPartialText } = createOnProgress({
onProgress: ({ text: partialText }) => {
const currentTimestamp = Date.now();
if (currentTimestamp - lastSavedTimestamp > 500) {
lastSavedTimestamp = currentTimestamp;
saveMessage({
messageId: responseMessageId,
sender: 'ChatGPT',
conversationId,
parentMessageId: overrideParentMessageId || userMessageId,
text: partialText,
model: endpointOption.modelOptions.model,
unfinished: true,
cancelled: false,
error: false
});
}
}
});
const abortController = new AbortController();
abortController.abortAsk = async function () {
this.abort();
const responseMessage = {
messageId: responseMessageId,
sender: endpointOption?.chatGptLabel || 'ChatGPT',
conversationId,
parentMessageId: overrideParentMessageId || userMessageId,
text: getPartialText(),
model: endpointOption.modelOptions.model,
unfinished: false,
cancelled: true,
error: false,
};
saveMessage(responseMessage);
return {
title: await getConvoTitle(req.user.id, conversationId),
final: true,
conversation: await getConvo(req.user.id, conversationId),
requestMessage: userMessage,
responseMessage: responseMessage
};
};
const onStart = (userMessage) => {
sendMessage(res, { message: userMessage, created: true });
abortControllers.set(userMessage.conversationId, { abortController, ...endpointOption });
};
try {
const clientOptions = {
// debug: true,
// contextStrategy: 'refine',
reverseProxyUrl: process.env.OPENAI_REVERSE_PROXY || null,
proxy: process.env.PROXY || null,
endpoint,
...endpointOption
};
let oaiApiKey = req.body?.token ?? process.env.OPENAI_API_KEY;
if (process.env.AZURE_API_KEY && endpoint === 'azureOpenAI') {
clientOptions.azure = JSON.parse(req.body?.token) ?? getAzureCredentials();
// clientOptions.reverseProxyUrl = process.env.AZURE_REVERSE_PROXY ?? genAzureChatCompletion({ ...clientOptions.azure });
oaiApiKey = clientOptions.azure.azureOpenAIApiKey;
}
const client = new OpenAIClient(oaiApiKey, clientOptions);
let response = await client.sendMessage(text, {
user,
parentMessageId,
conversationId,
overrideParentMessageId,
getIds,
onStart,
onProgress: progressCallback.call(null, {
res,
text,
parentMessageId: overrideParentMessageId || userMessageId
}),
abortController
});
if (overrideParentMessageId) {
response.parentMessageId = overrideParentMessageId;
}
console.log('promptTokens, completionTokens:', response.promptTokens, response.completionTokens);
await saveMessage(response);
sendMessage(res, {
title: await getConvoTitle(req.user.id, conversationId),
final: true,
conversation: await getConvo(req.user.id, conversationId),
requestMessage: userMessage,
responseMessage: response
});
res.end();
if (parentMessageId == '00000000-0000-0000-0000-000000000000' && newConvo) {
const title = await titleConvo({ text, response });
await saveConvo(req.user.id, {
conversationId,
title
});
}
} catch (error) {
console.error(error);
const partialText = getPartialText();
if (partialText?.length > 2) {
return await abortMessage(req, res, abortControllers);
} else {
const errorMessage = {
messageId: responseMessageId,
sender: 'ChatGPT',
conversationId,
parentMessageId: userMessageId,
unfinished: false,
cancelled: false,
error: true,
text: error.message
};
await saveMessage(errorMessage);
handleError(res, errorMessage);
}
}
};
module.exports = router;

View File

@@ -4,9 +4,9 @@ const {
resetPasswordController,
// refreshController,
registrationController
} = require('../controllers/auth.controller');
const { loginController } = require('../controllers/auth/login.controller');
const { logoutController } = require('../controllers/auth/logout.controller');
} = require('../controllers/AuthController');
const { loginController } = require('../controllers/auth/LoginController');
const { logoutController } = require('../controllers/auth/LogoutController');
const requireJwtAuth = require('../../middleware/requireJwtAuth');
const requireLocalAuth = require('../../middleware/requireLocalAuth');

View File

@@ -0,0 +1,25 @@
const express = require('express');
const router = express.Router();
router.get('/', async function (req, res) {
try {
const appTitle = process.env.APP_TITLE || 'LibreChat';
const googleLoginEnabled = !!process.env.GOOGLE_CLIENT_ID && !!process.env.GOOGLE_CLIENT_SECRET;
const openidLoginEnabled = !!process.env.OPENID_CLIENT_ID
&& !!process.env.OPENID_CLIENT_SECRET
&& !!process.env.OPENID_ISSUER
&& !!process.env.OPENID_SESSION_SECRET;
const openidLabel = process.env.OPENID_BUTTON_LABEL || 'Login with OpenID';
const openidImageUrl = process.env.OPENID_IMAGE_URL;
const githubLoginEnabled = !!process.env.GITHUB_CLIENT_ID && !!process.env.GITHUB_CLIENT_SECRET;
const serverDomain = process.env.DOMAIN_SERVER || 'http://localhost:3080';
const registrationEnabled = process.env.ALLOW_REGISTRATION === 'true';
return res.status(200).send({appTitle, googleLoginEnabled, openidLoginEnabled, openidLabel, openidImageUrl, githubLoginEnabled, serverDomain, registrationEnabled});
} catch (err) {
console.error(err);
return res.status(500).send({error: err.message});
}
});
module.exports = router;

View File

@@ -1,10 +1,11 @@
const express = require('express');
const router = express.Router();
const { availableTools } = require('../../app/langchain/tools');
const { availableTools } = require('../../app/clients/tools');
const getOpenAIModels = () => {
let models = ['gpt-4', 'text-davinci-003', 'gpt-3.5-turbo', 'gpt-3.5-turbo-0301'];
if (process.env.OPENAI_MODELS) models = String(process.env.OPENAI_MODELS).split(',');
const getOpenAIModels = (opts = { azure: false }) => {
let models = ['gpt-4', 'gpt-4-0613', 'gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-0613', 'gpt-3.5-turbo-0301', 'text-davinci-003' ];
const key = opts.azure ? 'AZURE_OPENAI_MODELS' : 'OPENAI_MODELS';
if (process.env[key]) models = String(process.env[key]).split(',');
return models;
};
@@ -16,6 +17,13 @@ const getChatGPTBrowserModels = () => {
return models;
};
const getPluginModels = () => {
let models = ['gpt-4', 'gpt-4-0613', 'gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-0613', 'gpt-3.5-turbo-0301'];
if (process.env.PLUGIN_MODELS) models = String(process.env.PLUGIN_MODELS).split(',');
return models;
};
let i = 0;
router.get('/', async function (req, res) {
let key, palmUser;
@@ -23,7 +31,7 @@ router.get('/', async function (req, res) {
key = require('../../data/auth.json');
} catch (e) {
if (i === 0) {
console.log("No 'auth.json' file (service account key) found in /api/data/ for PaLM models");
console.log('No \'auth.json\' file (service account key) found in /api/data/ for PaLM models');
i++;
}
}
@@ -38,15 +46,19 @@ router.get('/', async function (req, res) {
const google =
key || palmUser
? { userProvide: palmUser, availableModels: ['chat-bison', 'text-bison'] }
? { userProvide: palmUser, availableModels: ['chat-bison', 'text-bison', 'codechat-bison'] }
: false;
const azureOpenAI = !!process.env.AZURE_OPENAI_API_KEY;
const apiKey = process.env.OPENAI_API_KEY || process.env.AZURE_OPENAI_API_KEY;
const openAI = apiKey
? { availableModels: getOpenAIModels(), userProvide: apiKey === 'user_provided' }
const openAIApiKey = process.env.OPENAI_API_KEY;
const azureOpenAIApiKey = process.env.AZURE_API_KEY;
const userProvidedOpenAI = openAIApiKey ? openAIApiKey === 'user_provided' : azureOpenAIApiKey === 'user_provided';
const openAI = openAIApiKey
? { availableModels: getOpenAIModels(), userProvide: openAIApiKey === 'user_provided' }
: false;
const gptPlugins = apiKey
? { availableModels: ['gpt-4', 'gpt-3.5-turbo', 'gpt-3.5-turbo-0301'], availableTools }
const azureOpenAI = azureOpenAIApiKey
? { availableModels: getOpenAIModels({ azure: true}), userProvide: azureOpenAIApiKey === 'user_provided' }
: false;
const gptPlugins = openAIApiKey || azureOpenAIApiKey
? { availableModels: getPluginModels(), availableTools, availableAgents: ['classic', 'functions'], userProvide: userProvidedOpenAI }
: false;
const bingAI = process.env.BINGAI_TOKEN
? { userProvide: process.env.BINGAI_TOKEN == 'user_provided' }
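
The model lists above can be overridden per backend with the comma-separated env vars the code reads (OPENAI_MODELS, AZURE_OPENAI_MODELS, PLUGIN_MODELS); a hypothetical .env sketch:

OPENAI_MODELS=gpt-4,gpt-3.5-turbo,gpt-3.5-turbo-16k
AZURE_OPENAI_MODELS=gpt-35-turbo,gpt-4
PLUGIN_MODELS=gpt-4-0613,gpt-3.5-turbo-0613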

View File

@@ -10,6 +10,7 @@ const oauth = require('./oauth');
const { router: endpoints } = require('./endpoints');
const plugins = require('./plugins');
const user = require('./user');
const config = require('./config');
module.exports = {
search,
@@ -23,5 +24,6 @@ module.exports = {
user,
tokenizer,
endpoints,
plugins
plugins,
config
};

View File

@@ -62,4 +62,57 @@ router.get(
}
);
router.get(
'/openid',
passport.authenticate('openid', {
session: false
})
);
router.get(
'/openid/callback',
passport.authenticate('openid', {
failureRedirect: `${domains.client}/login`,
failureMessage: true,
session: false
}),
(req, res) => {
const token = req.user.generateToken();
res.cookie('token', token, {
expires: new Date(Date.now() + eval(process.env.SESSION_EXPIRY)),
httpOnly: false,
secure: isProduction
});
res.redirect(domains.client);
}
);
router.get(
'/github',
passport.authenticate('github', {
scope: ['user:email', 'read:user'],
session: false
})
);
router.get(
'/github/callback',
passport.authenticate('github', {
failureRedirect: `${domains.client}/login`,
failureMessage: true,
session: false,
scope: ['user:email', 'read:user']
}),
(req, res) => {
const token = req.user.generateToken();
res.cookie('token', token, {
expires: new Date(Date.now() + eval(process.env.SESSION_EXPIRY)),
httpOnly: false,
secure: isProduction
});
res.redirect(domains.client);
}
);
module.exports = router;

View File

@@ -1,5 +1,5 @@
const PluginAuth = require('../../models/schema/pluginAuthSchema');
const { encrypt, decrypt } = require('../../utils/crypto');
const { encrypt, decrypt } = require('../../utils/');
const getUserPluginAuthValue = async (user, authField) => {
try {

View File

@@ -1,19 +1,18 @@
const User = require('../../models/User');
const Token = require('../../models/schema/tokenSchema');
const sendEmail = require('../../utils/sendEmail');
const crypto = require('crypto');
const bcrypt = require('bcryptjs');
const { registerSchema } = require('../../strategies/validators');
const migrateDataToFirstUser = require('../../utils/migrateDataToFirstUser');
const { sendEmail } = require('../../utils');
const config = require('../../../config/loader');
const domains = config.domains;
/**
* Logout user
*
* @param {Object} user
* @param {*} refreshToken
* @returns
* @param {Object} user
* @param {*} refreshToken
* @returns
*/
const logoutUser = async (user, refreshToken) => {
try {
@@ -38,7 +37,7 @@ const logoutUser = async (user, refreshToken) => {
* Register a new user
*
* @param {Object} user <email, password, name, username>
* @returns
* @returns
*/
const registerUser = async (user) => {
const { error } = registerSchema.validate(user);
@@ -63,7 +62,7 @@ const registerUser = async (user) => {
{ name: 'Request params:', value: user },
{ name: 'Existing user:', value: existingUser }
);
// Sleep for 1 second
await new Promise((resolve) => setTimeout(resolve, 1000));
@@ -92,20 +91,17 @@ const registerUser = async (user) => {
newUser.password = hash;
newUser.save();
if (isFirstRegisteredUser) {
migrateDataToFirstUser(newUser);
}
return { status: 200, user: newUser };
} catch (err) {
return { status: 500, message: err?.message || 'Something went wrong' };
}
};
/**
* Request password reset
*
* @param {String} email
* @returns
* @param {String} email
* @returns
*/
const requestPasswordReset = async (email) => {
const user = await User.findOne({ email });
@@ -142,10 +138,10 @@ const requestPasswordReset = async (email) => {
/**
* Reset Password
*
* @param {*} userId
* @param {String} token
* @param {String} password
* @returns
* @param {*} userId
* @param {String} token
* @param {String} password
* @returns
*/
const resetPassword = async (userId, token, password) => {
let passwordResetToken = await Token.findOne({ userId });

View File

@@ -0,0 +1,47 @@
const passport = require('passport');
const { Strategy: GitHubStrategy } = require('passport-github2');
const config = require('../../config/loader');
const domains = config.domains;
const User = require('../models/User');
// GitHub strategy
const githubLogin = new GitHubStrategy(
{
clientID: process.env.GITHUB_CLIENT_ID,
clientSecret: process.env.GITHUB_CLIENT_SECRET,
callbackURL: `${domains.server}${process.env.GITHUB_CALLBACK_URL}`,
proxy: false,
scope: ['user:email'] // Request email scope
},
async (accessToken, refreshToken, profile, cb) => {
try {
let email;
if (profile.emails && profile.emails.length > 0) {
email = profile.emails[0].value;
}
const oldUser = await User.findOne({ email });
if (oldUser) {
return cb(null, oldUser);
}
const newUser = await new User({
provider: 'github',
githubId: profile.id,
username: profile.username,
email,
emailVerified: profile.emails[0].verified,
name: profile.displayName,
avatar: profile.photos[0].value
}).save();
cb(null, newUser);
} catch (err) {
console.error(err);
cb(err);
}
}
);
passport.use(githubLogin);
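
For this strategy to load (see the conditional require in server/index.js above), an environment like the following is needed; the callback path is an assumption based on the /oauth/github/callback route, and the other values are placeholders:

GITHUB_CLIENT_ID=your-client-id
GITHUB_CLIENT_SECRET=your-client-secret
GITHUB_CALLBACK_URL=/oauth/github/callback
DOMAIN_SERVER=http://localhost:3080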

View File

@@ -0,0 +1,123 @@
const passport = require('passport');
const jwt = require('jsonwebtoken');
const { Issuer, Strategy: OpenIDStrategy } = require('openid-client');
const axios = require('axios');
const fs = require('fs');
const path = require('path');
const config = require('../../config/loader');
const domains = config.domains;
const User = require('../models/User');
let crypto;
try {
crypto = require('node:crypto');
} catch (err) {
console.error('crypto support is disabled!');
}
const downloadImage = async (url, imagePath, accessToken) => {
try {
const response = await axios.get(url, {
headers: {
'Authorization': `Bearer ${accessToken}`
},
responseType: 'arraybuffer'
});
fs.mkdirSync(path.dirname(imagePath), { recursive: true });
fs.writeFileSync(imagePath, response.data);
const fileName = path.basename(imagePath);
return `/images/openid/${fileName}`;
} catch (error) {
console.error(`Error downloading image at URL "${url}": ${error}`);
return '';
}
};
Issuer.discover(process.env.OPENID_ISSUER)
.then(issuer => {
const client = new issuer.Client({
client_id: process.env.OPENID_CLIENT_ID,
client_secret: process.env.OPENID_CLIENT_SECRET,
redirect_uris: [domains.server + process.env.OPENID_CALLBACK_URL]
});
const openidLogin = new OpenIDStrategy(
{
client,
params: {
scope: process.env.OPENID_SCOPE
}
},
async (tokenset, userinfo, done) => {
try {
let user = await User.findOne({ openidId: userinfo.sub });
if (!user) {
user = await User.findOne({ email: userinfo.email });
}
let fullName = '';
if (userinfo.given_name && userinfo.family_name) {
fullName = userinfo.given_name + ' ' + userinfo.family_name;
} else if (userinfo.given_name) {
fullName = userinfo.given_name;
} else if (userinfo.family_name) {
fullName = userinfo.family_name;
}
if (!user) {
user = new User({
provider: 'openid',
openidId: userinfo.sub,
username: userinfo.given_name || '',
email: userinfo.email || '',
emailVerified: userinfo.email_verified || false,
name: fullName
});
} else {
user.provider = 'openid';
user.openidId = userinfo.sub;
user.username = userinfo.given_name || '';
user.name = fullName;
}
if (userinfo.picture) {
const imageUrl = userinfo.picture;
let fileName;
if (crypto) {
const hash = crypto.createHash('sha256');
hash.update(userinfo.sub);
fileName = hash.digest('hex') + '.png';
} else {
fileName = userinfo.sub + '.png';
}
const imagePath = path.join(__dirname, '..', '..', 'client', 'public', 'images', 'openid', fileName);
const imagePathOrEmpty = await downloadImage(imageUrl, imagePath, tokenset.access_token);
user.avatar = imagePathOrEmpty;
} else {
user.avatar = '';
}
await user.save();
done(null, user);
} catch (err) {
done(err);
}
}
);
passport.use('openid', openidLogin);
})
.catch(err => {
console.error(err);
});
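
A hypothetical .env sketch for the OpenID flow. The first five variables gate the strategy in server/index.js; the scope value and callback path are assumptions, and the last two are optional values surfaced by /api/config for the login button:

OPENID_ISSUER=https://accounts.example.com
OPENID_CLIENT_ID=your-client-id
OPENID_CLIENT_SECRET=your-client-secret
OPENID_SCOPE=openid profile email
OPENID_SESSION_SECRET=any-long-random-string
OPENID_CALLBACK_URL=/oauth/openid/callback
OPENID_BUTTON_LABEL=Login with OpenID
OPENID_IMAGE_URL=https://example.com/logo.png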

View File

@@ -2,7 +2,7 @@ const Joi = require('joi');
const loginSchema = Joi.object().keys({
email: Joi.string().trim().email().required(),
password: Joi.string().trim().min(6).max(20).required()
password: Joi.string().trim().min(8).max(128).required()
});
const registerSchema = Joi.object().keys({
@@ -11,11 +11,11 @@ const registerSchema = Joi.object().keys({
.trim()
.min(2)
.max(20)
.regex(/^[a-zA-Z0-9_]+$/)
.regex(/^[a-zA-Z0-9_-]+$/)
.required(),
email: Joi.string().trim().email().required(),
password: Joi.string().trim().min(6).max(20).required(),
confirm_password: Joi.string().trim().min(6).max(20).required()
password: Joi.string().trim().min(8).max(128).required(),
confirm_password: Joi.string().trim().min(8).max(128).required()
});
module.exports = {

api/utils/abortMessage.js Normal file
View File

@@ -0,0 +1,18 @@
async function abortMessage(req, res, abortControllers) {
const { abortKey } = req.body;
console.log(`req.body`, req.body);
if (!abortControllers.has(abortKey)) {
return res.status(404).send('Request not found');
}
const { abortController } = abortControllers.get(abortKey);
abortControllers.delete(abortKey);
const ret = await abortController.abortAsk();
console.log('Aborted request', abortKey);
console.log('Aborted message:', ret);
res.send(JSON.stringify(ret));
}
module.exports = abortMessage;

api/utils/azureUtils.js Normal file
View File

@@ -0,0 +1,22 @@
const genAzureEndpoint = ({ azureOpenAIApiInstanceName, azureOpenAIApiDeploymentName }) => {
return `https://${azureOpenAIApiInstanceName}.openai.azure.com/openai/deployments/${azureOpenAIApiDeploymentName}`;
}
const genAzureChatCompletion = ({
azureOpenAIApiInstanceName,
azureOpenAIApiDeploymentName,
azureOpenAIApiVersion
}) => {
return `https://${azureOpenAIApiInstanceName}.openai.azure.com/openai/deployments/${azureOpenAIApiDeploymentName}/chat/completions?api-version=${azureOpenAIApiVersion}`;
}
const getAzureCredentials = () => {
return {
azureOpenAIApiKey: process.env.AZURE_API_KEY ?? process.env.AZURE_OPENAI_API_KEY,
azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION
}
}
module.exports = { genAzureEndpoint, genAzureChatCompletion, getAzureCredentials };
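
A quick sketch of what these helpers produce (the instance, deployment, and version values are hypothetical, and the require path assumes a caller inside api/utils):

const { genAzureChatCompletion } = require('./azureUtils');
const url = genAzureChatCompletion({
  azureOpenAIApiInstanceName: 'my-instance',
  azureOpenAIApiDeploymentName: 'gpt-35-turbo',
  azureOpenAIApiVersion: '2023-03-15-preview'
});
// https://my-instance.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-03-15-preview
console.log(url);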

Some files were not shown because too many files have changed in this diff