Compare commits


115 Commits

Author SHA1 Message Date
Danny Avila
e4c91dfbea feat: Plugins endpoint - Reverse Engineering of official Plugins features (#197)
* components for plugins in progress

* WIP: add langchain client implementation for tools/plugins
feat(langchain): add loadHistory function for loading chat history from database
feat(langchain): add saveMessageToDatabase function for saving chat messages to database

* chore(Memory.js): remove Memory.js file from the project directory.

* WIP: adding plugin functionality
——————————————————
fix(eslintrc.js): change arrow-parens rule to always require parentheses

refactor(agent.js): reorganize imports and add new imports
feat(agent.js): add support for saving and loading chat history
feat(agent.js): add support for saving messages to database
feat(agent.js): add ChatAgent class with initialize and sendMessage methods

fix(langchain): use getConvo and saveMessage functions from models.js instead of Conversation and Message models
feat(langchain): add user parameter to loadHistory and saveMessageToDatabase functions
chore(package.json): update langchain package version to 0.0.59 and add langchain script to run test2.js file
——————————————————

* WIP: testing agent initialization

* WIP: testing various agent methods

feat(agent.js): add CustomChatAgent class and initializeAgentExecutorWithOptions method
feat(customChatAgent.js): add CustomPromptTemplate and CustomOutputParser classes

refactor(langchain): uncomment code for input2 and options
feat(langchain): add input1 to read comments on a youtube video
docs(langchain): remove commented code and add whitespace to package.json

* WIP: feat: plugin endpoint, backend class working

* feat(agent.js): add support for Zapier NLA API key
feat(agent.js): add ZapierToolKit to tools if zapierApiKey is provided
feat(customAgent.js): change prompt prefix and suffix to reflect new task-based prompt
feat(test4.js): add test for new task-based prompt

* style(langchain): improve readability and add comments to code
feat(langchain): update prompt message for custom agent
fix(langchain): update message format in test4.js

* style(customAgent.js): remove unnecessary capitalization and rephrase some sentences
test(langchain): add test2 and test3 scripts to package.json

* chore(customAgent.js): fix typo in comment, change "an" to "identical"

* WIP: gpt-4 testing

* feat(langchain): add AIPluginTool and HumanTool classes
fix(langchain): remove zapierApiKey option from ChatAgent constructor
refactor(langchain): update langchain package to v0.0.64
misc(langchain): update test2, test3, and test4 scripts to use --inspect flag

* feat(langchain): add GoogleSearchAPI tool for searching the web using Google Custom Search API

* feat(askGPTPlugins.js): add support for progress callback in ask function
fix(agent.js): pass progress callback to sendApiMessage function

* refactor(agent.js): load tools from options and initialize them in constructor
feat(agent.js): add support for environment variable SERPAPI_API_KEY
feat(agent.js): add support for environment variable ZAPIER_NLA_API_KEY
docs(agent.js): remove commented out code and add comments to clarify code

* chore(langchain): remove unused files loadHistory.js and saveMessage.js

* feat(validateTools.js): add function to validate API keys for supported tools

* feat(langchain): update langchain package to version 0.0.66
feat(langchain): add support for GPT-4 model
fix(server/index.js): fix uncaughtException handler to ignore 'fetch failed' errors

* refactor(agent.js): remove FORMAT_INSTRUCTIONS and replace with a more concise message
refactor(agent.js): remove unused variable 'errorMessage'
refactor(agent.js): change 'result' variable initialization to an empty object instead of null
refactor(agent.js): change error message when response generation fails
refactor(agent.js): change output message when response generation fails
refactor(agent.js): change output message when response generation succeeds

* chore(langchain): comment out unused model in ChatAgent constructor
feat(langchain): add test5 script to package.json for running test5.js script

* refactor(agent.js): change response to answer and update message
refactor(test3.js, test5.js): remove commented out code and add comments

The changes in agent.js are to improve the message that is returned to the user. The word "response" has been changed to "answer" to better reflect the output of the chatbot. The message has also been updated to provide clearer instructions to the user.

The changes in test3.js and test5.js are to remove commented out code and add comments to improve readability.

* docs: update links to LOCAL_INSTALL.md and defaultSystemMessage.md
fix: fix typo in BingAI/Settings.jsx
feat: add Dockerfile for app containerization

docs(google_search.md): add guide for setting up Google Custom Search API key and ID

* docs: update link to system message guidelines in Bing AI Settings component
docs: update link to system message guidelines in GOOGLE_SEARCH.md
feat: add JAILBREAK_INFO.md guide for Bing AI jailbreak mode system message guidelines

* style(api): remove unnecessary quotes and empty values from .env.example
style(agent.js): refactor getActions method to accept an input parameter
feat(agent.js): add handleChainEnd method to CustomChatAgent class
style(customAgent.js): add a new line to the end of the file
style(test5.js): comment out unused variable and update input1 variable
style(googleSearch.js): change tool name to kebab-case

* chore(langchain): comment out handleChainEnd method in agent.js
feat(langchain): add browser tool to ChatAgent in test2.js
feat(langchain): add modelOptions to ChatAgent in test2.js
feat(langchain): change question in input1 and request article review summary in test5.js

* fix(askGPTPlugins.js): fix syntax error by removing extra comma in parentMessageId field
feat(askGPTPlugins.js): add default value of null to parentMessageId parameter in ask function

* fix(askGPTPlugins.js): change endpoint string from 'GPTPlugins' to 'gptPlugins'
feat(endpoints.js): add support for gptPlugins endpoint
feat(PresetItem.jsx): add support for gptPlugins endpoint
feat(HoverButtons.jsx): add support for gptPlugins endpoint
feat(createPayload.ts): add support for gptPlugins endpoint
feat(types.ts): add gptPlugins endpoint to EModelEndpoint enum
feat(endpoints.js): add gptPlugins endpoint to availableEndpoints selector
feat(cleanupPreset.js): add support for gptPlugins endpoint
feat(getDefaultConversation.js): add support for gptPlugins endpoint
feat(getIcon.jsx): add support for gptPlugins endpoint
feat(handleSubmit.js): add support for gptPlugins endpoint

* refactor(agent.js): remove debug option from options object
refactor(agent.js): change tool name from 'google-search' to 'google'
refactor(agent.js): update description for 'google' tool
feat(agent.js): add support for citing sources when using web links in response message
fix(agent.js): update error message to not mention error to user
feat(agent.js): add unique message ids for user message and response message
feat(agent.js): limit number of search results to 5 in 'google' tool
refactor(validateTools.js): add console log to show valid tools

* feat(askGPTPlugins.js): add support for GPT-3.5-turbo model and validate model option
refactor(askGPTPlugins.js): remove unused imports and variables
refactor(askGPTPlugins.js): remove commented code
refactor(askGPTPlugins.js): remove unused parameters in ask function
feat(ask/index.js): add askGPTPlugins route to router

* feat(NewConversationMenu): add alpha tag to gptPlugins endpoint and rename it to Plugins

* refactor(askGPTPlugins.js): remove commented code and unused imports
feat(askGPTPlugins.js): add support for debug option in endpointOption
feat(askGPTPlugins.js): add support for chatGptLabel, promptPrefix, temperature, top_p, presence_penalty, and frequency_penalty in endpointOption
feat(askGPTPlugins.js): add support for sending plugin and pluginend events
feat(askGPTPlugins.js): add onAgentAction and onChainEnd callbacks to ChatAgent.sendMessage
refactor(titleConvo.js): comment out unused imports
refactor(validateTools.js): comment out console.log statement
refactor(agent.js): change saveMessage to include unfinished property
feat(agent.js): add endpoint property to saveConvo call in saveMessageToDatabase
feat(askGPTPlugins.js): add validateTools import and use it to validate endpointOption.tools before passing to ChatAgent constructor
feat(askGPTPlugins.js

* refactor(MessageHeader.jsx): extract plugins section into a separate variable and add support for gptPlugins endpoint
fix(MessageHeader.jsx): disable clicking on non-clickable endpoints

* components for plugins in progress

* feat(Plugin.jsx): add plugin prop to Plugin component and display plugin name
feat(Plugin.jsx): add loading state and display loading spinner
feat(Plugin.jsx): add Disclosure component to Plugin component
feat(Plugin.jsx): add Disclosure.Panel to Plugin component to display team pricing information
feat(Spinner.jsx): add classProp prop to Spinner component to allow for custom styling
feat(Landing.jsx): add Plugin component to Landing page for testing

testing gpt plugins

feat(plugins): Milestone commit

- Add formatAction function to format plugin actions.
- Add prefix.js file to store the prefix message for ChatAgent.
- Update ask function to include plugin object to store plugin data.
- Update onAgentAction and onChainEnd functions to format plugin data and send intermediate messages.
- Update response object to include plugin data.

The `handlers.js` file now includes a `formatAction` function that formats the action object for display in the UI. The `createOnProgress` function now returns a `sendIntermediateMessage` function that sends intermediate messages to the client.
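
For illustration, the handler shapes might look roughly like the sketch below (field names and signatures are assumptions, not the actual LibreChat code):

```js
// Illustrative sketch only: formatAction reduces a LangChain agent action to a small
// object the UI can render as a plugin step; createOnProgress returns a
// sendIntermediateMessage helper that streams that partial state to the client
// before the final answer arrives.
const formatAction = (action) => ({
  plugin: action.tool,                        // tool the agent selected
  input: action.toolInput,                    // input passed to the tool
  thought: (action.log || '').split('\n')[0], // first line of the agent's reasoning
});

const createOnProgress = (send) => {
  const sendIntermediateMessage = (res, { plugin }) =>
    send(res, { text: '', plugin, intermediate: true });
  return { sendIntermediateMessage };
};

module.exports = { formatAction, createOnProgress };
```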

feat(client): add support for plugins in messages

This commit adds support for plugins in messages. It includes changes to the `handlers.js`, `index.jsx`, `CodeBlock.jsx`, `Message.jsx`, `MessageHeader.jsx`, and `Plugin.jsx` files.

The `index.jsx` file now includes a `plugin` property in the `messageHandler` function.

The `CodeBlock.jsx` file now includes a `plugin` property that determines the language of the code block.

The `Message.jsx` file now includes a `Plugin` component that displays the plugin used in the message.

The `MessageHeader.jsx` file now includes a `Plugins` component that displays the enabled plugins.

feat(langchain): add OpenAICreateImage tool for generating images based on user prompts
fix(langchain): update validateTools to include create-image tool
fix(langchain): save plugin data to messageSchema
fix(server/routes/askGPTPlugins.js): save userMessage and response to messageSchema

feat(langchain): add SelfReflectionTool

Add a new tool to the LangChain agent, SelfReflectionTool, which enhances the agent's self-awareness by reflecting on its thoughts before taking action. The tool provides a space for the agent to explore and organize its ideas in response to the user's message.

Also, update the prefix message to reflect the changes in the agent's behavior and the way it should engage with the user. The prefix message now emphasizes the use of tools when necessary, and relying on the agent's knowledge for creative requests. It also provides clear instructions on how to use the 'Action' input and how to carry out tasks in the sequence written by the human.

Finally, update the OpenAICreateImage tool to return the image URL in markdown format. The tool replaces newlines and spaces in the input text with hyphens to create a valid markdown link.
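
As an illustration, the markdown formatting step could be sketched like this (function name is an assumption):

```js
// Sketch of the formatting described above: newlines and spaces in the prompt are
// collapsed to hyphens so the alt text stays a single valid token inside the
// markdown image syntax.
function toMarkdownImage(inputText, imageUrl) {
  const altText = inputText.trim().replace(/[\s]+/g, '-');
  return `![${altText}](${imageUrl})`;
}

// Example:
// toMarkdownImage('a red fox\nin the snow', 'https://example.com/img.png')
// => '![a-red-fox-in-the-snow](https://example.com/img.png)'
```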

Milestone commit: better error handling with custom output parser, dir and file re-org

style(langchain): fix formatting and add comments to prefix.js
fix(langchain): remove commented out code in test6.js
feat(langchain): reduce maxAttempts from 3 to 2 in CustomChatAgent's buildPromptPrefix method
feat(langchain): add null check for result.output in CustomChatAgent's buildPromptPrefix method

style(langchain): improve consistency and readability of code

This commit improves the consistency and readability of the code in the langchain directory. Specifically, it:

- Changes the case of the "Thought" output in the CustomChatAgent class to match the "Thought" output in the SelfReflectionTool class.
- Adds a currentDateString property to the CustomChatAgent class to avoid repeating the same code in multiple places.
- Updates the prefix in the prefix.js file to match the current objectives of the ChatGPT model.
- Changes the description of the OpenAICreateImage tool to request a description of the image to be generated.
- Updates the tools used by the ChatAgent in the askGPTPlugins.js file to include the Google and Browser tools instead of the Calculator and Create-Image tools.

feat: add wolfram, improve image creation, rename to dall-e

* refactor(langchain): update language and formatting in various files

- Update tool-based instructions to use proper Markdown syntax for image URLs
- Adjust temperature for modelOptions in CustomChatAgent class
- Comment out console.debug statement in CustomChatAgent class
- Update prefix in initializeCustomAgent function to use proper line breaks
- Update prefix in instructions.js to use proper line breaks and change "user" to "human"
- Update input in test6.js to use Ezra Pound instead of Hemingway
- Update return statement in OpenAICreateImage class to use "generated-image" as alt-text
- Update description in SelfReflectionTool class to provide clearer instructions
- Update tools in ask function in askGPTPlugins.js to use only the DALL-E tool and enable debug mode

feat(ask): add support for DALL-E tool in formatAction function
feat(ask): add support for self-reflection tool in formatAction function
feat(Plugin.jsx): add support for self-reflection tool in Plugin component
fix(Plugin.jsx): fix Plugin component to not display 'None' when latest is not available

* docs(openaiCreateImage.js): update tool description to clarify usage

* feat(agent.js): add message parameter to initialize function
feat(agent.js): pass message parameter to SelfReflectionTool constructor
feat(customAgent.js): add longestToolName variable to CustomOutputParser
feat(openaiCreateImage.js): replace new lines with spaces in prompt parameter
feat(selfReflection.js): add message parameter to SelfReflectionTool constructor
feat(selfReflection.js): add placeholder response to selfReflect function

* feat: frontend plugin selection

* fix: agent updates, available tools via endpoint config

* fix: improve frontend plugin selection

* feat: further customize agent and bypass executor when no tools are provided

* fix: key issue in multiselect and allow setting changes during convo in plugins endpoint

* fix: convo will save modelOptions, fix persistent errors with agent

* fix: add looser final answer parsing and edit action formatting

* fix: handle edge case where stop token is not hit and causes long parsing error

* feat: trying new prompt for image creation

* fix: improvements based on gpt-3.5

* feat: allow setting model options throughout plugin conversation

* fix: agent adjustments

* improve final reply for gpt-4, gpt-3.5 needs a more stable approach

* fix: better context output for gpt-3.5

* fix: added clarification for better context output for gpt-3.5

* feat(PluginsOptions): add advanced mode to show/hide options
style(PluginsOptions): add styles for advanced mode and show/hide options

* minor changes to styling

* refactor(langchain): add support for custom GPT-4 agent

This commit adds support for a custom GPT-4 agent in the langchain
module. The `CustomGpt4Agent` class extends the `ZeroShotAgent` class
and includes a new `createPrompt` method that generates a prompt
template for the agent. The `initializeCustomAgent` function has been
updated to use the `CustomGpt4Agent` class when the model is not GPT-3.

The `instructions.js` file has also been updated to include new
instructions for the GPT-4 agent. The `formatInstructions` method
has been removed and replaced with `gpt4Instructions`, and `prefix2`
and `suffix2` have been added to include the new instructions.
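
Roughly, the override pattern could look like the sketch below (import path and option names are assumptions based on langchain 0.0.x, not the actual file):

```js
const { ZeroShotAgent } = require('langchain/agents'); // path assumed for langchain 0.0.x

// Rough sketch of the described pattern: the GPT-4 variant supplies its own
// prefix/suffix to the static createPrompt instead of the default zero-shot instructions.
class CustomGpt4Agent extends ZeroShotAgent {
  static createPrompt(tools, { prefix2 = '', suffix2 = '', currentDateString = '' } = {}) {
    return super.createPrompt(tools, {
      prefix: `${prefix2}\nDate: ${currentDateString}`,
      suffix: suffix2,
    });
  }
}

module.exports = CustomGpt4Agent;
```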

feat(langchain): add custom output parser for langchain agents

This commit adds a custom output parser for langchain agents. The new parser is called CustomOutputParser and it extends ZeroShotAgentOutputParser. It takes a fields object as a parameter and sets the tools and longestToolName properties. It also sets the finishToolNameRegex property to match the final answer. The parse method of the CustomOutputParser class takes a text parameter and returns an object with returnValues, log, and toolInput properties.

This commit also adds a Gpt4OutputParser class with the same structure: it extends ZeroShotAgentOutputParser, takes a fields object, sets the tools, longestToolName, and finishToolNameRegex properties, and its parse method takes a text parameter and returns an object with returnValues, log, and toolInput properties.
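
A minimal sketch of that parse contract, assuming the langchain 0.0.x import path and simple regex matching (not the actual implementation):

```js
const { ZeroShotAgentOutputParser } = require('langchain/agents'); // path assumed for langchain 0.0.x

// Sketch only: a final-answer match returns returnValues; otherwise the tool name and
// its input are pulled from the "Action:" / "Action Input:" lines of the model output.
class OutputParser extends ZeroShotAgentOutputParser {
  constructor(fields = {}) {
    super(fields);
    this.tools = fields.tools || [];
    this.longestToolName = this.tools.reduce((max, t) => Math.max(max, t.name.length), 0);
    this.finishToolNameRegex = /Final Answer:/i;
  }

  async parse(text) {
    if (this.finishToolNameRegex.test(text)) {
      const output = text.split(this.finishToolNameRegex).pop().trim();
      return { returnValues: { output }, log: text };
    }
    const tool = (text.match(/Action:\s*(.*)/) || [])[1];
    const toolInput = (text.match(/Action Input:\s*([\s\S]*)/) || [])[1];
    return { tool: tool?.trim(), toolInput: toolInput?.trim(), log: text };
  }
}

module.exports = OutputParser;
```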

feat(langchain): add isGpt3 parameter to

* Stable Diffusion Plugin (#204)

* Added stable diffusion plugin

* Added example prompt

* Fixed naming

* Removed brackets in the prompt

* fix: improved agent for gpt-3.5

* fix: outparser, gpt3 instructions, and wolfram error handling

* chore: update langchain to 0.0.71

* fix: long parsing action input fix

* fix: make plugin select close on clicking label/button

* fix: make plugin select close on clicking label/button

* fix: wolfram input formatting and gpt-3 payload without plugins

* chore(api): update axios package version to 1.3.4
feat(api): add requireJwtAuth middleware to askGPTPlugins endpoint
fix(api): replace session user with user id in askGPTPlugins endpoint

docs(LOCAL_INSTALL.md): update guide for local installation and testing

This commit updates the guide for local installation and testing of the
ChatGPT-Clone app. It includes instructions for locally running the app,
updating the app version, and running tests. It also includes a new
option for running the app using Docker. The commit also fixes some
typos and formatting issues.

* add reverseProxy to plugins client

* chore(Dockerfile-app): add Dockerfile for building and running the app in a container
docs: remove outdated guides on Google search and Bing jailbreak mode

docs(LOCAL_INSTALL.md): remove outdated Windows installation instructions and update MeiliSearch configuration file

* fix: handle n/a parsing error better, reduce token waste if no agentic behavior is needed

* style: fix formatting and add parentheses around arrow function parameter
style: change hover background color to white and dark hover background color to gray-700

* chore: re-organize agent dir and files

* feat(ChatAgent.js): add support for PlanAndExecuteAgentExecutor
feat(PlanAndExecuteAgentExecutor.js): add PlanAndExecuteAgentExecutor class
feat(planExecutor.js): add demo for PlanAndExecuteAgentExecutor

* feat: add azure support to plugins

* refactor(utils): add basePath endpoint for genAzureEndpoint
feat(api): add support for Azure OpenAI API in various modules and tools
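
For reference, such a base-path helper could be sketched as follows (parameter names are assumptions):

```js
// Azure OpenAI exposes deployments under
// https://{instance}.openai.azure.com/openai/deployments/{deployment}
// Parameter names here are assumptions, not the actual utils signature.
const genAzureEndpoint = ({ azureOpenAIApiInstanceName, azureOpenAIApiDeploymentName }) =>
  `https://${azureOpenAIApiInstanceName}.openai.azure.com` +
  `/openai/deployments/${azureOpenAIApiDeploymentName}`;

module.exports = genAzureEndpoint;
```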

* feat: add plugin api for fetching available tools

* feat: add data service for getting available plugins

* feat: first iteration plugin store UI

* refactor: rename files to follow proper naming convention

* feat: Plugin store UI components

* feat: create separate user routes, service, controller, and add plugins to user model

* feat: create data service for adding and removing plugins per user

* feat: UI for adding and removing plugins, displaying plugins in dropdown based on what user has installed

* fix: merge conflicts from main

* fix: fix plugin items titles

* fix: tool.value -> tool.pluginKey

* fix: testing returnDirect for self-reflection

* fix: add browser tool to manifest

* refactor(outputParser.js): remove commented out code
feat(outputParser.js): add support for thought input when there is no action input

* handling 'use tool' edge case

* merge main to langchain

* fix(User.js, auth.service.js, localStrategy.js): change deprecated Joi.validate() to schema.validate() method (#322)

* fix(auth.service.js): fixes deprecated error callback in mongoose save method (#323)

* chore: run formatting script with new rules

* refactor: add requiresAuth to manifest, fix uninstall button

* version with plugin auth as dialog modal

* feat: Complete frontend for plugin auth

* frontend styling updates

* feat: api for plugin auth

* feat: Add tooltip with field description to plugin auth form

* fix: issue with plugin that has no auth

* feat(tools): add support for user-specific API keys

This commit adds support for user-specific API keys for the following tools:
- Google Search API
- Web Browser
- SerpAPI
- Zapier
- DALL-E
- Wolfram Alpha API

It also adds support for OpenAI API key for the Web Browser tool.

The `validateTools` function now takes a `user` parameter and checks for user-specific API keys before falling back to environment variables.

The `loadTools` function now takes a `user` parameter and initializes the tools with user-specific API keys if available.
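
A minimal sketch of that fallback order, with assumed tool keys, field names, and helper paths:

```js
const { getUserPluginAuthValue } = require('./PluginService'); // hypothetical path

// Assumed mapping of tool keys to the credential fields they need.
const authFields = {
  google: ['GOOGLE_CSE_ID', 'GOOGLE_API_KEY'],
  wolfram: ['WOLFRAM_APP_ID'],
  'stable-diffusion': ['SD_WEBUI_URL'],
};

// Sketch only: prefer a user-stored credential, fall back to an environment variable,
// and drop any tool that has neither.
async function validateTools(user, tools = []) {
  const valid = [];
  for (const tool of tools) {
    const fields = authFields[tool] || [];
    const checks = await Promise.all(
      fields.map(async (field) => (await getUserPluginAuthValue(user, field)) || process.env[field])
    );
    if (checks.every(Boolean)) {
      valid.push(tool);
    }
  }
  return valid;
}
```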

The `manifest.json` file has been updated to include the new `authConfig` fields for the tools that support user-specific API keys.

The `askGPTPlugins.js` file has been updated to use the `validateTools` function with the `user` parameter.

refactor(ChatAgent.js): add user parameter to initialize function and pass it to loadTools function

refactor(tools/index.js): set default value for tools parameter in validateTools function
refactor(askGPTPlugins.js): remove duplicate user variable declaration and use the one from req object

* refactor(ChatAgent.js): await validTool() before pushing to this.tools array
refactor(tools/index.js): use Map instead of Set to store valid tools
refactor(tools/index.js): filter availableTools to only validate tools passed in
refactor(PluginController.js): filter out duplicate plugins by pluginKey
refactor(crypto.js): use environment variables for encryption key and initialization vector
feat(PluginService.js): add null check for pluginAuth in getUserPluginAuthValue()

* feat(api): add credentials key and IV to .env.example for securely storing credentials

* Adds testing for handling tools, introducing a test env to the backend
Fixes bugs & optimizes code as revealed through testing, including:
- wolfram.js: fixes bug where wolfram was not handling authentication
- ChatAgent.js: ChatAgent modified to reflect 'handleTools' changes
- handleTools.js: Moves logic out of index file
- handleTools.js: loadTools: returns only requested tools
- handleTools.js: validTools: correctly returns tools based on authentication

* test(index.test.js): add test to validate a tool from an environment variable

* test(tools): add test for initializing an authenticated tool through Environment Variables

* refactor(ChatAgent.js): remove commented out code and unused imports

* refactor(ChatAgent.js): move instructions to a separate file and import them
fix(ChatAgent.js): replace hardcoded instructions with imported ones

* refactor(ChatAgent.js): change import path for TextStream
refactor(stream.js): remove unused TextStream class

* chore(.gitignore): add .env.test to gitignore
refactor(ChatAgent.js): rename CustomChatAgent to ChatAgent
test(ChatAgent.test.js): add tests for ChatAgent class
refactor(outputParser.js): remove OldOutputParser class
refactor(outputParser.js): rename CustomOutputParser to OutputParser
docs(.env.test.example): add comment explaining how to use OPENAI_API_KEY
refactor(jestSetup.js): use dotenv to load environment variables from .env.test file

* Various optimizations and config, add tests for PluginStoreDialog

* test(ChatAgent.test.js): add test to check if chat history is returned correctly

* test: unit tests for plugin store

* test: add frontend-test script to root package.json

* feat(ChatAgent.js, askGPTPlugins.js): add support for aborting chat requests (in progress)

* test: add more client tests

* feat(ChatAgent): allow plugin requests to be cancelled

* feat(ChatAgent): allow message regeneration

* feat(ChatAgent): remember last selected tools

* Remove plugins we don't yet have from manifest.json

* fix(ChatAgent.js): increase maxAttempts from 1 to 2
fix(ChatAgent.js): change error message to 'Cancelled.' if message was aborted mid-generation
fix(openaiCreateImage.js): replace unwanted characters in input string
fix(handlers.js): compare action.tool in lowercase to 'self-reflection'

* fix(ChatAgent): Fix up plugin I/O formatting for n/a actions

* refactor(Plugin.jsx): remove unused import statement
feat(Plugin.jsx): add Plugin component with svg paths and styles

* refactor: simplify credential encryption/decryption by using a single key and IV for all environments. Update crypto.js and .env.example files accordingly.
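
A minimal sketch of such a single-key/IV helper using Node's crypto module (environment variable names are assumptions):

```js
const crypto = require('crypto');

// Sketch only: a 32-byte key and 16-byte IV are supplied as hex in environment
// variables (names assumed) and reused for every credential in every environment.
const key = Buffer.from(process.env.CREDS_KEY, 'hex'); // 32 bytes for aes-256-cbc
const iv = Buffer.from(process.env.CREDS_IV, 'hex');   // 16 bytes

function encrypt(value) {
  const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
  return cipher.update(value, 'utf8', 'hex') + cipher.final('hex');
}

function decrypt(encrypted) {
  const decipher = crypto.createDecipheriv('aes-256-cbc', key, iv);
  return decipher.update(encrypted, 'hex', 'utf8') + decipher.final('utf8');
}

module.exports = { encrypt, decrypt };
```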

* fix(ChatAgent.js): reduce maxAttempts from 2 to 1
feat(ChatAgent.js): add model information to responseMessage object
feat(Message.js): add model field to messageSchema
feat(Message.js): add model field to message object
feat(Message.jsx): pass model information to getIcon function
feat(getIcon.jsx): add Plugin component and handle plugin messages differently

* feat(askGPTPlugins.js): add model property to the ask function response object
feat(EndpointItem.jsx): add message property to the EndpointItem component
feat(MessageHeader.jsx): add Plugin icon to the plugins section
feat(MessageHeader.jsx): change alpha to beta in the plugins section
feat(svg): add Plugin, GPTIcon, and BingIcon components to the svg folder
refactor(EndpointItems.jsx): remove unused import statement

* refactor(googleSearch.js, wolfram.js): change error handling to return a message instead of throwing an error

* refactor(CustomAgent): remove commented code and change return object to include returnValues property

* feat(CustomAgent.js): add currentDateString to createPrompt method options
deps(api/package.json): update langchain to v0.0.81

* fix: do not show pagination if the maxPage is 1

* Add Zapier back to manifest (accidentally removed)

* chore(api): update langchain dependency to version 0.0.84

* feat(DALL-E.js): add DALL-E tool for generating images using OpenAI's DALL-E API
refactor(handleTools.js): update import for DALL-E tool
refactor(index.test.js): update import for DALL-E tool
refactor(stablediffusion.js): add check for image directory existence before saving image
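
The directory guard could be sketched like this (the output path is an assumption):

```js
const fs = require('fs');
const path = require('path');

// Sketch only: make sure the output directory exists before writing the generated
// image to disk, creating intermediate directories if needed.
function saveImage(buffer, filename, dir = path.join(__dirname, '..', 'public', 'images')) {
  if (!fs.existsSync(dir)) {
    fs.mkdirSync(dir, { recursive: true });
  }
  const filepath = path.join(dir, filename);
  fs.writeFileSync(filepath, buffer);
  return filepath;
}
```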

* refactor(CustomAgent): rename instructions prefix variable to gpt3 and add gpt4 instructions
feat(CustomAgent): add support for gpt-4 model
fix(initializeCustomAgent.js): pass model name to createPrompt method
fix(outputParser.js): set selectedTool to 'self-reflection' when tool parsing fails

* style(langchain/tools): update guidelines for image creation in DALL-E and StableDiffusion

- Update guidelines for image creation in DALL-E and StableDiffusion tools
- Emphasize the importance of "showing" and not "telling" the imagery in crafting input
- Update formatting for the example prompt for generating a realistic portrait photo of a man
- Generate images only once per human query unless explicitly requested by the user

* docs(tools): update tool descriptions for DALL-E and Stable Diffusion

- Update the description for DALL-E tool to indicate that it is exclusively for visual content and provide guidelines for generating images with a focus on visual attributes.
- Update the description for Stable Diffusion tool to indicate that it is exclusively for visual content and provide guidelines for generating images with a focus on visual attributes.

* chore(api): update "@waylaidwanderer/chatgpt-api" dependency to version "^1.36.3"

* refactor(ChatAgent.js): use environment variable for reverse proxy url
refactor(ChatAgent.js): use environment variable for openai base path
refactor(instructions.js): update gpt3 and gpt3-v2 instructions
refactor(outputParser.js): update finishToolNameRegex in CustomOutputParser class

* refactor(DALL-E.js): change apiKey and azureKey fields to uppercase
refactor(googleSearch.js): change cx and apiKey fields to uppercase
feat(manifest.json): add authConfig field for Stable Diffusion WebUI API URL
refactor(stablediffusion.js): add url field to constructor and change getServerURL() to this.url
refactor(wolfram.js): change apiKey field to uppercase WOLFRAM_APP_ID

* refactor(handleTools.js): simplify tool loading and add support for custom tool constructors and options
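
A sketch of the constructor-map idea, with assumed tool keys and import paths:

```js
const { Calculator } = require('langchain/tools');  // path assumed for langchain 0.0.x
const GoogleSearchAPI = require('./GoogleSearch');  // hypothetical local paths
const StableDiffusionAPI = require('./StableDiffusion');

// Sketch only: each plugin key maps to a Tool class, and loadTools instantiates
// only the keys that were requested, passing user info and per-tool options through.
const toolConstructors = {
  calculator: Calculator,
  google: GoogleSearchAPI,
  'stable-diffusion': StableDiffusionAPI,
};

function loadTools({ user, tools = [], options = {} }) {
  return tools
    .filter((key) => toolConstructors[key]) // skip unknown or unavailable tools
    .map((key) => new toolConstructors[key]({ user, ...(options[key] || {}) }));
}

module.exports = { toolConstructors, loadTools };
```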

* refactor(handleTools.js): remove commented out code and unused imports

* refactor(handleTools.js, index.js): change file name from wolfram.js to Wolfram.js and selfReflection.js to SelfReflection.js to follow PascalCase convention

* refactor(outputParser.js, askGPTPlugins.js): improve code readability and remove unnecessary comments

* feat(GoogleSearch.js): add GoogleSearchAPI tool to allow agents to use the Google Custom Search API
feat(SelfReflection.js): add SelfReflectionTool to allow agents to reflect on their thoughts and actions
feat(StableDiffusion.js): add StableDiffusionAPI tool to allow agents to generate images using stable diffusion webui's api

feat(Wolfram.js): add WolframAlphaAPI tool for computation, math, curated knowledge & real-time data through WolframAlpha.

* testing openai specs

* doc: fix link in .env.example

* package-update

* fix(MultiSelectDropDown.jsx): handle null or undefined values in availableValues array

* refactor(DALL-E.js, StableDiffusion.js): remove 'dist/' from image path
feat(docker-compose.yml): add comments for reverse proxy configuration

* chore(.gitignore): ignore client/public/images/
fix(DALL-E.js, StableDiffusion.js): change image path from dist/ to public/
feat(index.js): add support for serving static files from client/public/ directory

* fix: remove selected tool when uninstalled

* plugin options in progress

* fix: fix issue with uninstalling a plugin that is in use and typescript errors

* feat(gptPlugins): add Preset support for GPT Plugins endpoint
feat(ChatAgent.js): add support for agentOptions object
feat(convoSchema.js): add agentOptions field to conversation schema
feat(defaults.js): add agentOptions object to defaults
feat(presetSchema.js): add agentOptions field to preset schema
feat(askGPTPlugins.js): add support for agentOptions object in request body

feat(EditPresetDialog.jsx): add support for showing/hiding GPT Plugins agent settings
feat(EditPresetDialog.jsx): add support for setting GPT Plugins agent options
fix(EndpointOptionsDialog.jsx): change endpoint name from 'gptPlugins' to 'Plugins'

feat(AgentSettings.jsx): add AgentSettings component for GPT plugins configuration

feat(client): add GPT Plugins settings component and endpoint to Settings component
fix(client): remove unused imports in GoogleOptions component

feat(PluginsOptions): add support for agent settings and refactor code
feat(PluginsOptions): add GPTIcon to show/hide agent settings button
feat(index.ts): export SVG components

feat(GPTIcon.jsx): add className prop to GPTIcon component
feat(GPTIcon.jsx): import cn function from utils
feat(BingIcon.tsx): export BingIcon component
feat(index.ts): export BingIcon component
feat(index.ts): export MessagesSquared component
refactor(cleanupPreset.js): add default values for agentOptions in gptPlugins endpoint

feat(getDefaultConversation.js, handleSubmit.js): add agentOptions object to conversation object for GPT plugins endpoint. Update default temperature value to 0.8. Add chatGptLabel and promptPrefix properties to conversation object.

* fix: set default convo back to null

* refactor(ChatAgent.js, askGPTPlugins.js, AgentSettings.jsx): change variable names for better readability and remove redundant code

* test: add RecoilRoot to layout-test-utils

* refactor(askGPTPlugins.js): remove redundant code and use endpointOption directly
feat(askGPTPlugins.js): add validation for tools in endpointOption before using it

* chore(ChatAgent.js, Settings.jsx): add agentOptions to saveConvo function and adjust Settings component height

The ChatAgent.js file was modified to include the agentOptions object in the saveConvo function. The Settings.jsx file was modified to adjust the height of the component to ensure that all content is visible.

* refactor(ChatAgent.js): extract reverseProxyUrl option to a class property and add support for it
feat(ChatAgent.js): add support for completionMode option in sendApiMessage method
feat(ChatAgent.js): add support for user-provided promptPrefix in buildPrompt method

* feat(plugins): allow preset change mid conversation

* chore: update OPENAI_KEY to OPENAI_API_KEY in .github/playwright.yml and api/.env.example
refactor(chatgpt-client.js): update OPENAI_KEY to OPENAI_API_KEY
feat(langchain): add demo-aiplugin.js and demo-yaml.js, remove test2.js, test3.js, and test4.js

chore: remove unused test files
fix(titleConvo.js): fix typo in environment variable name
fix(askGPTPlugins.js): fix typo in environment variable name
fix(endpoints.js): fix typo in environment variable name
docs: update installation guide to use OPENAI_API_KEY instead of OPENAI_KEY in .env file

* fix(index.test.js): change import of GoogleSearchAPI to use uppercase G in GoogleSearch

* chore(api): bump langchain version

* feat(PluginController.js): authenticate plugins from environment variables if they are set
feat(PluginStoreDialog.tsx): show plugin auth form only if plugin is not authenticated by env var and require authentication
feat(types.ts): add authenticated field to TPlugin type definition

* docs: update google_search.md and add stable_diffusion.md

* Update stable_diffusion.md

* refactor(Wolfram.js): remove newline characters from query before encoding
docs(wolfram.md): add instructions for setting WOLFRAM_APP_ID in api.env to bypass prompt for AppID in plugin

* refactor(Wolfram.js): replace deprecated replaceAll method with replace method

* Update wolfram.md

* fix(askGPTPlugins): error message will reference correct Parent Message

* refactor(chatgpt-client.js, ChatAgent.js): simplify maxContextTokens calculation and add promptPrefix parameter to buildPrompt method

* docs: initial draft of intro to plugins

* Update introduction.md

* Update introduction.md

* Feature: User/Reg cleanup + Install / Upgrade script for langchain (#427)

* test: login tests

* test: finish login tests

* test: initial tests for registration

* test: registration specs

* feature: Init an app config file
- Simplifies the ENV vars too
- Legacy fallbacks for older builds

* refactor(auth): Refactor log in/out controllers
- Moves both login and logout controllers to their own file

* chore(jwt): Throw warning if secret is default

* feature(frontend): Ability to disable registration

* feature(env): Env in the root + version support
i.e. .env.prod, .env.dev, .env.test

* feature: Upgrade .env script for users

* chore(config): Refactor and remove legacy env refs

* feature(upgrade): Upgrade script for .env changes

* feature: Install script and upgrade script

* bugfix: Uncomment line to remove old .env file

* chore: rename OPENAI_KEY to OPENAI_API_KEY

* chore: Cleanup config changes/bugs

* bugfix: Fix config and node env issues

* bugfix: Config validation logic

* bugfix: Handle unusual env configs gracefully

* bugfix: Revert route changes and fix registration disable calling

* bugfix: Fix env issues in frontend

* bugfix: Fix login

* bugfix: Fix frontend envs

* bugfix: Fix frontend jest tests

* bugfix: Fix upgrade scripts

* bugfix: Allow install in non-tty envs

* bugfix(windows): Use cross-env to set for windows

* bugfix(env): Handle .env being incorrect to begin with for client domain

* chore(merge-conflict): Update to LibreChat

* chore(merge-conflict): Update to package-lock

---------

Co-authored-by: Daniel D Orlando <dan@danorlando.com>

* chore: comment out unused agent options

* Update langchain plugins docs (#461)

* Update: install docs (LibreChat) (#458)

* Release: rename project from ChatGPT Clone to LibreChat

Release: rename project from ChatGPT Clone to LibreChat

* Release: rename project from ChatGPT Clone to LibreChat

Release: rename project from ChatGPT Clone to LibreChat

* Release: rename project from ChatGPT Clone to LibreChat

Release: rename project from ChatGPT Clone to LibreChat

* Release: rename project from ChatGPT Clone to LibreChat

Release: rename project from ChatGPT Clone to LibreChat

* Update documentation_guidelines.md

* Update introduction.md

add link to readme

* Update stable_diffusion.md

add link back to readme

* Update wolfram.md

add link back to readme

* Update README.md

add Plugins to ToC

* feat(ChatAgent.js): add support for langchainProxy configuration option

Add a new configuration option `langchainProxy` to the ChatAgent class. If the option is set, the `basePath` configuration option of the `ChatOpenAI` instance is set to the base path of `langchainProxy`.
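
A rough sketch of that option, assuming the langchain 0.0.x `ChatOpenAI(fields, configuration)` constructor shape (function name and path are assumptions):

```js
const { ChatOpenAI } = require('langchain/chat_models'); // path assumed for langchain 0.0.x

// Sketch only: when langchainProxy is set, the client configuration's basePath points
// at the proxy instead of api.openai.com.
function createChatModel({ openAIApiKey, modelName = 'gpt-3.5-turbo', langchainProxy }) {
  const configuration = langchainProxy
    ? { basePath: langchainProxy.replace(/\/+$/, '') } // strip trailing slashes
    : undefined;
  return new ChatOpenAI({ openAIApiKey, modelName }, configuration);
}

module.exports = createChatModel;
```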

* bugfix(errors): Possible workaround for error flashing (#463)

* Test/user auth system client tests (#462)

* test: login tests

* test: finish login tests

* test: initial tests for registration

* test: registration specs

* chore(api): update langchain dependency to version 0.0.91

* Update introduction.md

* Update introduction.md

* Update introduction.md

* fix: no longer renders html in markdown content
fix: patch XSS vulnerability completely by handling cursor on the frontend without css/html

* fix(Content.jsx): fix cursor logic so it never shows for static messages

* bugfix(langchain): Upgrade script, docker, env and docs (#465)

* bugfix(errors): Remove incorrect manual fix from misunderstanding

* chore(env): Let's not make a .env.prod and instead use the prod values in the default root .env
- .env.dev will still be created

* chore(upgrade.js): Let's tell the user about .env.dev if we create it

* bugfix(env): Move to full name environments for vite
- .env.prod => .env.production
- .env.dev => .env.development

* chore(env-example): Explain how to get google login working in production

* bugfix(oauth): Minor fix to point isProduction to a correct value

* bugfix: Typo in public

* chore(docs): Update docs to note the changes to .env

* chore(docs): Include note on how to get google auth working in dev and how to disable registration

* bugfix: Fix missing env changes

* bugfix: Fix up docker to work with new env / npm changes

* Update .env.example

Clean up the .env by removing the palm2 instruction and fix formatting

* chore(docker): Simplify Docker deployments
- Needs work to support dev env/hotreload

* bugfix: Remove volume map for client dir

* chore(env-example): Change instructions to be more user centric

---------

Co-authored-by: Fuegovic <32828263+fuegovic@users.noreply.github.com>

* update: install docs (#466)

* Add files via upload

* Update apis-and-tokens.md

* Update apis-and-tokens.md

* Update docker_install.md

* Update linux_install.md

* Rename apis-and-tokens.md to apis_and_tokens.md

* Update docker_install.md

* Update linux_install.md

* Update mac_install.md

* Update linux_install.md

* Update docker_install.md

* Update windows_install.md

* Update apis_and_tokens.md

* Update mac_install.md

* Update linux_install.md

* Update docker_install.md

* Update README.md

* Update README.md : Breaking Changes

---------

Co-authored-by: Danny Avila <110412045+danny-avila@users.noreply.github.com>

* Update README.md (#468)

add new API/Token docs to Toc

* docs: guide on how to create your own plugin

* Update make_your_own.md

* Update make_your_own.md

* feat(docker): add build args for frontend variables in Dockerfile
feat(docker-compose): add build args for frontend variables in docker-compose.yml

* Update docker_install.md

* Update docker_install.md

* Update docker_install.md

* Update docker_install.md

* docs: update (#469)

* Update: make_your_own.md

* Update README.md

add `make_your_own.md` to ToC

* Update linux_install.md

* Update mac_install.md

* Update windows_install.md

* Update apis_and_tokens.md

* Update docker_install.md

* Update docker_install.md

* Update linux_install.md

* Update mac_install.md

* Update windows_install.md

* Update apis_and_tokens.md

* Update user_auth_system.md

* Update docker_install.md

clean up of repeated information

* Update docker_install.md

* Update docker_install.md

typo

* fix: fix issue with pluginstore next and prev buttons going out of bounds

* fix: add icon for web browser plugin

* docs(GoogleSearch.js): update description of GoogleSearchAPI class to be more descriptive of its functionality

* feat(ask/handlers.js): add cursor to indicate ongoing progress of a long-running task
fix(Content.jsx): handle null content in the message stream by replacing it with an empty string (with a space so a text space is rendered)

* Update README.md

* Update README.md

* fix: plugin option stacking order

* update: web browser icon (#470)

* Delete web-browser.png

* update: web browser icon

* Update readme (#472)

* Update README.md

Discord badge now displays the number of online users
Project description has been updated to reflect current status
Feature section has been updated to reflect current capabilities
Sponsors section is now located just above the contributors section
Roadmap has been removed as it was outdated.

* Delete roadmap.md

Roadmap has been removed to streamline document maintenance.

* Update README.md

* Update README.md

* Delete CHANGELOG.md

* fix: pluginstore in mobile view getting clipped and not scrolling

* docs(linux_install.md): remove duplicate git clone command

* chore(Dockerfile): comment out nginx-client build stage
docs(README.md): update installation instructions and mention docker-compose changes
docs(features/plugins/introduction.md): bold plugin names and add emphasis to notes

* feat: add superscript and subscript support to markdown rendering
refactor: support markdown citations for BingAI

* refactor: support markdown citations for BingAI

---------

Co-authored-by: David Shin <42793498+dncc89@users.noreply.github.com>
Co-authored-by: Daniel D Orlando <dan@danorlando.com>
Co-authored-by: LaraClara <2524209+ClaraLeigh@users.noreply.github.com>
Co-authored-by: Fuegovic <32828263+fuegovic@users.noreply.github.com>
2023-06-10 19:10:03 -04:00
Fuegovic
aaa20309a0 Update: install docs (LibreChat) (#458)
* Release: rename project from ChatGPT Clone to LibreChat

Release: rename project from ChatGPT Clone to LibreChat

* Release: rename project from ChatGPT Clone to LibreChat

Release: rename project from ChatGPT Clone to LibreChat

* Release: rename project from ChatGPT Clone to LibreChat

Release: rename project from ChatGPT Clone to LibreChat

* Release: rename project from ChatGPT Clone to LibreChat

Release: rename project from ChatGPT Clone to LibreChat

* Update documentation_guidelines.md
2023-06-06 07:44:53 -04:00
Danny Avila
8c4a3b2729 bump version to 0.4.8 (#455) 2023-06-05 14:43:51 -04:00
Danny Avila
638faf9850 Release: rename project from ChatGPT Clone to LibreChat in various files and configurations (#454) 2023-06-05 14:24:08 -04:00
Danny Avila
f845192d2d Update README.md 2023-06-05 14:12:11 -04:00
Danny Avila
dfd93909e8 Update README.md 2023-06-05 14:11:59 -04:00
Danny Avila
47d0184990 npm all prod(deps): bump mdast-util-from-markdown from 1.3.0 to 1.3.1 (#447) (#453)
Bumps [mdast-util-from-markdown](https://github.com/syntax-tree/mdast-util-from-markdown) from 1.3.0 to 1.3.1.
- [Release notes](https://github.com/syntax-tree/mdast-util-from-markdown/releases)
- [Commits](https://github.com/syntax-tree/mdast-util-from-markdown/compare/1.3.0...1.3.1)

---
updated-dependencies:
- dependency-name: mdast-util-from-markdown
  dependency-type: indirect
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 12:15:53 -04:00
Danny Avila
ee59fa40f5 npm all prod(deps): bump @babel/plugin-transform-react-jsx (#446) (#452)
Bumps [@babel/plugin-transform-react-jsx](https://github.com/babel/babel/tree/HEAD/packages/babel-plugin-transform-react-jsx) from 7.21.5 to 7.22.3.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.22.3/packages/babel-plugin-transform-react-jsx)

---
updated-dependencies:
- dependency-name: "@babel/plugin-transform-react-jsx"
  dependency-type: indirect
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 12:10:41 -04:00
Danny Avila
68b3731d06 npm all prod(deps): bump micromark-extension-gfm-task-list-item (#445) (#451)
Bumps [micromark-extension-gfm-task-list-item](https://github.com/micromark/micromark-extension-gfm-task-list-item) from 1.0.4 to 1.0.5.
- [Release notes](https://github.com/micromark/micromark-extension-gfm-task-list-item/releases)
- [Commits](https://github.com/micromark/micromark-extension-gfm-task-list-item/compare/1.0.4...1.0.5)

---
updated-dependencies:
- dependency-name: micromark-extension-gfm-task-list-item
  dependency-type: indirect
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 12:05:58 -04:00
Danny Avila
e4889ff8bb npm all prod(deps): bump postcss-double-position-gradients (#444) (#450)
Bumps [postcss-double-position-gradients](https://github.com/csstools/postcss-plugins/tree/HEAD/plugins/postcss-double-position-gradients) from 4.0.3 to 4.0.4.
- [Changelog](https://github.com/csstools/postcss-plugins/blob/main/plugins/postcss-double-position-gradients/CHANGELOG.md)
- [Commits](https://github.com/csstools/postcss-plugins/commits/HEAD/plugins/postcss-double-position-gradients)

---
updated-dependencies:
- dependency-name: postcss-double-position-gradients
  dependency-type: indirect
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 12:01:23 -04:00
Danny Avila
2c026d11a5 npm all prod(deps): bump @babel/helper-member-expression-to-functions (#443) (#449)
Bumps [@babel/helper-member-expression-to-functions](https://github.com/babel/babel/tree/HEAD/packages/babel-helper-member-expression-to-functions) from 7.21.5 to 7.22.3.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.22.3/packages/babel-helper-member-expression-to-functions)

---
updated-dependencies:
- dependency-name: "@babel/helper-member-expression-to-functions"
  dependency-type: indirect
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 11:55:15 -04:00
Danny Avila
1a252170f5 chore(package): bump meilisearch (#448)
* npm api prod(deps): bump meilisearch from 0.32.5 to 0.33.0 in /api (#436)

Bumps [meilisearch](https://github.com/meilisearch/meilisearch-js) from 0.32.5 to 0.33.0.
- [Release notes](https://github.com/meilisearch/meilisearch-js/releases)
- [Commits](https://github.com/meilisearch/meilisearch-js/compare/v0.32.5...v0.33.0)

---
updated-dependencies:
- dependency-name: meilisearch
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(package): update meilisearch package from 0.32.3 to 0.33.0
chore(package): update cross-fetch package from 3.1.5 to 3.1.6 in meilisearch package dependencies

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 11:48:05 -04:00
Danny Avila
d40aaa703d chore: change dependabot settings for all packages (#442) 2023-06-05 11:22:14 -04:00
Danny Avila
44005258fc chore: create develop branch and change dependabot settings (#435) 2023-06-05 10:35:04 -04:00
Danny Avila
19495a461d chore(.gitignore): add .env.test to gitignore (#424)
feat(api): update @waylaidwanderer/chatgpt-api to version 1.37.0
2023-06-03 08:23:11 -04:00
Danny Avila
fcf068dddf style(NewConversationMenu): change dropdown menu background color to dark gray (#419)
style(EndpointItem): change active background color to light gray
2023-06-02 00:35:43 -04:00
Anirudh
7468b3011f Added Settings Modal (#342)
* Improve UI with style changes and add Settings button

- Improved the UI of the `Input` and `Message` components.
- Added a `Settings` button to the `NavLinks` component.
- Introduced a `Settings` component to handle user settings.
- Refactored the `Dialog` component for consistency.

* Revert not needed changes

* Updated style.css to only work for select

* feat: Remove Dark Mode component and add theme selection feature

This commit removes the Dark Mode component from the navigation bar and replaces it with a theme selection dropdown menu in the Settings dialog. The implementation of the theme selection feature includes a function that allows the user to set the theme based on the system, light, or dark mode.

* Add auto theme setting to Settings component.

This commit adds a new state variable to keep track of whether the auto theme is enabled or not. It also registers an event listener to update the theme based on system preference changes. The event listener is removed when the component is unmounted.
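
A minimal sketch of that listener lifecycle (hook name and the source of `setTheme` are assumptions):

```jsx
import { useEffect } from 'react';

// Sketch only: watch the prefers-color-scheme media query while auto theme is enabled,
// and remove the listener when the component unmounts.
export function useSystemTheme(enabled, setTheme) {
  useEffect(() => {
    if (!enabled) {
      return;
    }
    const media = window.matchMedia('(prefers-color-scheme: dark)');
    const onChange = (e) => setTheme(e.matches ? 'dark' : 'light');
    setTheme(media.matches ? 'dark' : 'light'); // apply the current system preference
    media.addEventListener('change', onChange);
    return () => media.removeEventListener('change', onChange); // cleanup on unmount
  }, [enabled, setTheme]);
}
```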

* Improve user experience by allowing customized themes
- Create `selectedOption` state to track user-selected theme
- Remove unused `isAutoTheme` state variable

* feat(Nav): Add SVG icon to settings gear

This commit adds an SVG icon to the settings gear in the Navigation component's Settings file. The new SVG icon replaces the previous GearIcon component.

* refactor(ui): Update overlay background color

This commit updates the background color of the overlay in the AlertDialog and Dialog components by changing the classes applied to the elements. The new color is a transition from `bg-black/50 backdrop-blur-sm` to `bg-gray-500/90 dark:bg-gray-800/90`. This change improves the readability of the dialog boxes.

* Refactor ThemeContext to include system theme and fix bug in Settings

The ThemeContext now includes a "system" theme, and ClearConvos no longer relies on the "selected option" state to update the theme. This fixes the bug that occurred when the system theme changed.

* Refactor DialogTemplate styles and color scheme

Adjusted the color scheme of the DialogTemplate component to dark mode, updated the background color to gray-900 and removed unnecessary classes.

* Refactor: Change button logic to require confirmation before clearing convos

This commit refactors the code by adding a confirmation dialog to prompt for a user's confirmation before clearing all conversations in the Settings.jsx file. The change ensures the user is aware of the irreversible action before initiating the clearConvos function. Additionally, the commit updates the clear chat button's class name and changes the button's onClick logic to call the confirmClearConvos function instead of directly invoking the clearConvos method.

* Refactor component name to reflect functionality change.

- Changed component name from ClearConvos to Settings to support potential future use cases.

* Refactor conversation clearing functionality in `Settings.jsx`

This commit optimizes the conversation clearing functionality in the `Settings.jsx` component by removing the `confirmClearConvos` function and directly calling the `clearConvos` function on confirmation. This change will simplify the code and improve the user experience.

* Refactor Input component UI styles

Simplify Input component styles by simplifying the gradient background, removing border color styles, and updating button styles.

* feat: Add e2e test for Settings modal

This commit adds an e2e test to verify whether the Settings modal is displayed on the landing page. It uses a headless browser to navigate to the page and interacts with it to verify if the dialog and its components are visible.

* test: Add Navigation and Settings tests

Add Navigation and Settings tests to verify that the navigation bar and Settings button are visible and that the Settings modal displays the expected content. The settings modal verification includes checking whether the modal is visible, if the modal title, tab list, clear conversation button and theme are present, and if the theme option can be selected to change the mode.

* Quick fix

* feat(navbar): Add confirmation before clearing conversations

Adds a confirmation step to prevent accidentally clearing conversations. Before, clicking the "Clear" button immediately cleared all conversations. With this change, the first click changes the text to "Confirm Clear", and a second click clears all conversations.

* Add click functionality to the navigation bar and improve UI design

The code introduced click functionality to the nav bar and improved the user interface. It also used the new theme select feature to change the theme to dark.

* test: Add test for dark mode theme change

Refactor the test for Navigation suite to check for the 'dark' class in the HTML element when the 'dark' theme is selected in the modal. This ensures that the dark mode theme change works correctly, and improves test coverage.

* Improve navigation test clarity

This commit improves code clarity and adds more detailed test assertions to the navigation suite. New assert statements are added to check whether the modal theme selection changes the theme and that the HTML element receives the 'dark' class. A new function `changeMode` was introduced to avoid code repetition. A short description was added to the commit message to adhere to best practices.

* Hotfix

* Removed repetition

* Refactor: Change text-gray-400 to text-white/50 to make the Tailwind classes cleaner

* style: Update CSS classes to improve the conversation UI

- Update Conversation component to improve UX
- Changed styling for group hover effect using shades of gray
- Improved color contrast of the Message component for easy readability
- Replaced class names in buildTree.js with a new class name
- Added a new color theme (gray-1000) in tailwind.config to replace an old background color.

* Refactor EndpointItem, EndpointItems, and NewConversationMenu for better user experience

- The `EndpointItem` component now accepts an `isSelected` prop instead of `onSelect` to better reflect its usage in `EndpointItems` and `NewConversationMenu`.
- `EndpointItems` component now has a `selectedEndpoint` prop to highlight the selected item in the list.
- `NewConversationMenu` now has a gap between the endpoint options to improve user experience.

* Added error messages

* refactor: Improve endpoint menu highlighting and error handling

In the UI, when the user selects an endpoint, the active class is now properly set. In the error handling function, `isJson` is now a private function called by `getError`, which provides better parsing of error messages, and returns more succinct messages upon encountering specific errors. Finally, a new end-to-end test has been added to check if the active class is properly set on selecting an endpoint in the new conversation menu.

* test: Add Conversation and Change Path of Auth JSON

In the Landing spec, test the functionality to create conversations and check that the number of items has increased. In the Popup spec, change the path of the Auth JSON used by the context.

* Fixed logo issues

* Make everything not rounded

* Added time

---------

Co-authored-by: Danny Avila <110412045+danny-avila@users.noreply.github.com>
2023-06-02 00:32:35 -04:00
Anirudh
dade7b450f feat: Add clear button to search bar (#328)
* feat: Add clear button to search bar

This commit introduces a clear button to the SearchBar component using the X icon from Lucide-React. When the user enters a query in the input field, the clear button appears allowing them to easily remove the search term. The clear button is hidden when there is no search term entered.

* Refactor SearchBar component to improve user experience

Changed SearchBar's input field to add padding on the left side and an absolutely positioned search icon. Also added an absolutely positioned X icon on the right side when there is an input value, ensuring a better user experience.

* Refactor SearchBar component to show Clear Search icon dynamically

This commit changes the SearchBar React component to render the Clear Search X icon only when the input field has a value. A showClearIcon state is added with the useState hook and updated every time the input value changes, and the useEffect hook handles the case when the user clears the input value. This improves UX by making it clear to the user that the icon is clickable and will clear the search query.
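
A simplified sketch of that behavior, assuming the component receives its value and handlers as props (prop names are assumptions):

```jsx
import { useEffect, useState } from 'react';
import { X } from 'lucide-react';

export default function SearchBar({ value, onChange, onClear }) {
  // Only render the clear icon while the input has a value.
  const [showClearIcon, setShowClearIcon] = useState(false);

  useEffect(() => {
    setShowClearIcon(value.length > 0); // hides the icon again once cleared
  }, [value]);

  return (
    <div className="relative">
      <input value={value} onChange={onChange} placeholder="Search messages" />
      {showClearIcon && (
        <X className="absolute right-2 top-2 cursor-pointer" onClick={onClear} />
      )}
    </div>
  );
}
```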

* Improve UX: Add styling to clear button & export button

This commit modifies the NavLinks component to improve user experience by removing the rounded styling from the "Clear conversations" and "Export conversations" buttons. Prior to this change, the buttons had rounded styling.

* Refactor submit button styling for improved accessibility and readability.

Changed submit button styling for better accessibility and readability, including adjustments to padding and hover effects. The new styles ensure that the button is easily clickable for all users, while also improving its visual appearance.

* hotfix

* Improve UI styling in Conversation component

Changed the background color and hover effect of the conversation link in Conversation component to make it more visually appealing. The previous background color was '#2A2B32' and now it's 'gray-800'. The 'px-4' class has also been changed to 'hover:pr-4' for better readability.

---------

Co-authored-by: Danny Avila <110412045+danny-avila@users.noreply.github.com>
2023-06-02 00:11:34 -04:00
Danny Avila
7fbf27c5aa chore(.gitignore): add client/public/images/ to ignore list (#417)
refactor(chatgpt-client.js): free encoder memory after use
feat(chatgpt-client.tokens.js): add script to test memory usage of ChatGPTClient
2023-06-02 00:08:19 -04:00
Fuegovic
4705975e59 feat:add hyperlink to bing.com in SetTokenDialog (#414) 2023-05-31 00:41:01 -04:00
Danny Avila
2f59c82bec chore(api): update chatgpt-api package version to 1.36.3 (#404)
docs(api): update BINGAI_TOKEN instructions in .env.example
docs(client): update BINGAI_TOKEN instructions in SetTokenDialog component
2023-05-29 11:00:51 -04:00
Fuegovic
6a34978e98 Fix: typo and phrasing (#393)
* Update FEATURE-REQUEST.yml

Fix typo and phrasing

* Update pull_request_template.md

add one option to type of change
2023-05-28 17:55:57 -04:00
Fuegovic
d437e4b8cd update: "documents" folder to "docs" (#391)
* Rename .github/PULL_REQUEST_TEMPLATE/PULL-REQUEST.md to .github/pull_request_template.md

fix: Pull Request Template Location

* documents -> docs

* Update windows_install.md

Fix: Docker hyperlink

* Update linux_install.md

Fix: Layout (step 6)

* Rename docs/contributions/code_of_conduct.md to CODE_OF_CONDUCT.md

fix: Code of Conduct location according to GitHub's Guide

* Update CODE_OF_CONDUCT.md

Update: Contact info

* Update README.md

Update: Code of Conduct hyperlink in TOC

* Update CODE_OF_CONDUCT.md

Update: Link to ReadMe

* Update CONTRIBUTORS.md

update: add new name to the list

* Update and rename docs/contributions/contributor_guidelines.md to CONTRIBUTING.md

fix: change location according to GitHub's standards

* Delete CONTRIBUTORS.md

delete: contributor.md from root (already present in readme)

* Update SECURITY.md

* Update CONTRIBUTING.md

Update discord link to point to rules

* Update README.md

Update discord link to point to rules

* Update README.md

fix: ToC
2023-05-27 07:03:28 -04:00
Fuegovic
f40a2f8ee8 update: documentation (#389)
* Update docker_install.md

update Bing Token instructions

* Update linux_install.md

Update Bing Token Instructions
Add # markers to sections

* Update mac_install.md

Update Bing Token Instructions
Fix Formatting
Recommend Docker

* Update windows_install.md

Update Bing Token Instructions

* Update linux_install.md

Recommend Docker

* Create QUESTION.yml

Questions Template

* Update QUESTION.yml

fix syntax

* Update QUESTION.yml

* Update QUESTION.yml

* Create FEATURE-REQUEST

* Rename FEATURE-REQUEST to FEATURE-REQUEST.yml

add file extension
2023-05-26 22:22:11 -04:00
Danny Avila
2d31c9f8b6 chore: bump package versions to 0.4.7 (#388) 2023-05-26 17:56:23 -04:00
Danny Avila
fd5afc09a2 chore(tests): add e2e tests for messaging suite (#387)
* feat(NewConversationMenu): add id to the new conversation menu button
refactor(EndpointItem): remove onSelect prop and setTokenDialogOpen state variable
test(messages.spec.js): add e2e test for messaging suite to check if textbox is focused after receiving message

* test(Input): add test id to input field for e2e testing
test(messages.spec.js): add endpoint variable and refactor test to check if textbox is focused after receiving message

* test(messages.spec.js): refactor test to use a variable for message content

Refactored the test to use a variable for message content instead of a hardcoded string.
2023-05-26 17:34:08 -04:00
Danny Avila
c0845ad0b1 Fix Input losing focus (#382)
* fix(PaLM2): input losing focus on message stream ending

* fix(askOpenAI.js): fix typo in variable name from newUserMassageId to newUserMessageId

* feat(chatgpt-browser.js, askBingAI.js, askChatGPTBrowser.js): add onEventMessage callback to browserClient

Add an onEventMessage callback to browserClient to handle event messages from the server.
- askChatGPTBrowser.js: add a getPartialMessage variable to store the partial message text, remove the preSendRequest parameter, move the sendMessage call to the onEventMessage callback, and check that getPartialMessage is not null or undefined before appending it to the error message.
- askBingAI.js: fix a typo in the variable name newUserMassageId to newUserMessageId.
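
An illustrative sketch of that flow, assuming the client exposes an onEventMessage option and the event payload carries the message parts (all names here are assumptions, not the actual node-chatgpt-api surface):

```js
// Accumulate partial text from server events so it can be appended to the
// error message if the stream fails part-way through.
async function askWithPartialRecovery(browserClient, text, onProgress) {
  let partialMessage = '';

  const options = {
    onEventMessage: (eventMessage) => {
      const data = JSON.parse(eventMessage.data);
      const part = data?.message?.content?.parts?.[0];
      if (part) {
        partialMessage = part;
        onProgress(partialMessage); // stream the partial text to the caller
      }
    },
  };

  try {
    return await browserClient.sendMessage(text, options);
  } catch (error) {
    const suffix = partialMessage ? `\n\nPartial text received:\n${partialMessage}` : '';
    throw new Error(`${error.message}${suffix}`);
  }
}
```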

* fix(bing): input no longer loses focus as convoId is persisted from the beginning of the conversation

* refactor(Input): remove unused code and fix input autofocus
feat(package.json): add e2e:test-auth script to test authentication flow with saved storage
2023-05-26 14:32:13 -04:00
Danny Avila
11b98d3d13 refactor(chatgpt-client.js): initialize usage object with empty object instead of null (#386)
refactor(chatgpt-client.js): simplify usage object assignment
2023-05-26 09:43:35 -04:00
Danny Avila
4f17e69f1b fix(tokenizer): error handle encoding for invalid encoding data (#385) 2023-05-26 09:40:08 -04:00
Danny Avila
b912e7a3dd Update BUG-REPORT.yml 2023-05-26 09:02:32 -04:00
Danny Avila
743a9315ff Update BUG-REPORT.yml 2023-05-26 09:00:33 -04:00
Danny Avila
ea2135a237 chore(api): remove unused crypto dependency from package.json (#381) 2023-05-25 14:54:46 -04:00
Dan Orlando
6a1983bc6c refactor: remove bcrypt (#375) 2023-05-25 14:54:24 -04:00
Fuegovic
07796d9e48 Update BUG-REPORT.yml (#379)
remove tags other than "bug" from the bug report
2023-05-25 13:17:03 -04:00
Danny Avila
634849ec12 fix(Bing): Use full cookies string instead of just _U cookie (#369) 2023-05-23 13:58:18 -04:00
Danny Avila
112c6c5b19 fix (PaLM2): messages will properly regenerate (#368)
* making progress to fix regen for PaLM

* fix (PaLM2): messages will properly regenerate
2023-05-23 06:55:23 -04:00
Olivier Contant
b8c3ae5e8f Update SECURITY.md (#367)
Include OWASP reference to Vulnerability Disclosure process.
2023-05-23 06:41:01 -04:00
Danny Avila
07fa0f39fd Fix (PaLM2): Persist PaLM presets after initial message (#366)
* refactor(askGoogle.js): extract saveConvo function call to a separate function
feat(askGoogle.js): add endpoint property to the conversation object
refactor(handleSubmit.js): rename chatGptLabel to modelLabel in useMessageHandler function

* refactor(askGoogle.js): remove unused endpointOption spread operator
2023-05-22 20:50:10 -04:00
Dan Orlando
4eda4542b7 feat: Setup Unit Test Environment and Refactor Typescript Config (#365)
* modify tsconfig and set up unit tests

* generate .d.ts files

* setup project dependencies and configuration for unit tests

* Add test setup and layout-test-utils along with first spec

* Add paths back to tsconfig

* remove type=module from package.json

* Add typescript definition for .env

* update package-lock
2023-05-22 20:49:48 -04:00
dependabot[bot]
dbfef342e2 npm all prod(deps): bump fast-redact from 3.1.2 to 3.2.0 (#360)
Bumps [fast-redact](https://github.com/davidmarkclements/fast-redact) from 3.1.2 to 3.2.0.
- [Release notes](https://github.com/davidmarkclements/fast-redact/releases)
- [Commits](https://github.com/davidmarkclements/fast-redact/compare/v3.1.2...v3.2.0)

---
updated-dependencies:
- dependency-name: fast-redact
  dependency-type: indirect
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-22 13:28:19 -04:00
Danny Avila
735eb159db npm client dev(deps-dev): bump @vitejs/plugin-react from 3.1.0 to 4.0.0 (#364) 2023-05-22 13:23:17 -04:00
Danny Avila
bf911074cf npm client prod(deps): bump @types/node from 18.16.14 to 20.2.3 (#363) 2023-05-22 12:33:52 -04:00
dependabot[bot]
7ec061c694 npm all prod(deps): bump @csstools/postcss-oklab-function (#356)
Bumps [@csstools/postcss-oklab-function](https://github.com/csstools/postcss-plugins/tree/HEAD/plugins/postcss-oklab-function) from 2.2.1 to 2.2.2.
- [Changelog](https://github.com/csstools/postcss-plugins/blob/main/plugins/postcss-oklab-function/CHANGELOG.md)
- [Commits](https://github.com/csstools/postcss-plugins/commits/HEAD/plugins/postcss-oklab-function)

---
updated-dependencies:
- dependency-name: "@csstools/postcss-oklab-function"
  dependency-type: indirect
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-22 12:21:55 -04:00
Danny Avila
a6f3eb4c0d npm client prod(deps): bump esbuild from 0.17.15 to 0.17.19 (#361) 2023-05-22 12:17:34 -04:00
dependabot[bot]
dc5f9d8474 npm all prod(deps): bump eslint from 8.40.0 to 8.41.0 (#354)
Bumps [eslint](https://github.com/eslint/eslint) from 8.40.0 to 8.41.0.
- [Release notes](https://github.com/eslint/eslint/releases)
- [Changelog](https://github.com/eslint/eslint/blob/main/CHANGELOG.md)
- [Commits](https://github.com/eslint/eslint/compare/v8.40.0...v8.41.0)

---
updated-dependencies:
- dependency-name: eslint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-22 12:10:50 -04:00
dependabot[bot]
c1349fbfaa npm all prod(deps): bump tar from 6.1.14 to 6.1.15 (#353)
Bumps [tar](https://github.com/isaacs/node-tar) from 6.1.14 to 6.1.15.
- [Release notes](https://github.com/isaacs/node-tar/releases)
- [Changelog](https://github.com/isaacs/node-tar/blob/main/CHANGELOG.md)
- [Commits](https://github.com/isaacs/node-tar/compare/v6.1.14...v6.1.15)

---
updated-dependencies:
- dependency-name: tar
  dependency-type: indirect
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-22 12:04:41 -04:00
Olivier Contant
6f9da5f7df Update coding_conventions.md (#350)
Fixed a typo (missing letter).
2023-05-22 00:09:33 -04:00
Danny Avila
10de50416b feat(HoverButtons.jsx): enable message regeneration for bingAI endpoint (#349)
feat(HoverButtons.jsx): add active class to copy button only if message is not created by user
2023-05-21 14:01:46 -04:00
Danny Avila
4beb06aa4b Minor fixes: tokenizer, default Bing toneStyle, SiblingSwitch (#348)
* fix: tokenizer will count completion tokens correctly, remove global var, will allow unofficial models for alternative endpoints

* refactor(askBingAI.js, Settings.jsx, types.ts, cleanupPreset.js, getDefaultConversation.js, handleSubmit.js): change default toneStyle to 'creative' instead of 'fast' for Bing AI endpoint.

* fix(SiblingSwitch): correctly appears now
style(HoverButtons.jsx): add 'active' class to hover buttons
2023-05-21 12:43:06 -04:00
Danny Avila
791b515937 Cleanup root dir, move dev-related files into /documents/ (#347)
* chore: cleanup root dir and move extraneous dev related files to documents/dev

* chore: cleanup root dir and move extraneous dev related files to documents/dev
2023-05-21 08:56:06 -04:00
Fuegovic
8d4ef16b7f docs : update the documentation (#345)
* Add files via upload

* Delete documents/report_templates directory

* Update PR-TEMPLATE.md

* Update README.md

removed templates from TOC

* Update SECURITY.md

- update to follow documentation guidelines
- update discord link to point to issues

* Update SECURITY.md

* Update README.md

add security to TOC

* Delete pull_request_template.md

moved to .github

* Rename PR-TEMPLATE.md to PULL-REQUEST.md

* Update mac_install.md

clean up and update

* Update windows_install.md

fix formatting and change update instructions

* Update windows_install.md

add docker recommendation

* Update windows_install.md

* Update mac_install.md
2023-05-21 08:42:16 -04:00
Danny Avila
5964b71e14 Setup tests with new user system (#344)
* chore(.gitignore): add auth.json to gitignore
test(landing.spec.js): remove commented out code and add check for landing page title
test(login.spec.js): add test for login page title
feat(package.json): add e2e:auth script to generate auth.json storage file for e2e tests

* test(landing.spec.js): add beforeEach hook to create a new browser context with auth.json storage state
test(landing.spec.js): change test name from 'landing page' to 'Landing title'
fix(package.json): change e2e:auth script to save auth.json in e2e directory
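
A sketch of the beforeEach hook described above, assuming Playwright and an auth.json saved under e2e/ (the path, URL, and title pattern are assumptions):

```js
const { test, expect } = require('@playwright/test');

test.describe('Landing suite', () => {
  let context;
  let page;

  test.beforeEach(async ({ browser }) => {
    // Reuse the saved storage state so tests run as an authenticated user.
    context = await browser.newContext({ storageState: 'e2e/auth.json' });
    page = await context.newPage();
    await page.goto('http://localhost:3080/');
  });

  test.afterEach(async () => {
    await context.close();
  });

  test('Landing title', async () => {
    await expect(page).toHaveTitle(/Chat/);
  });
});
```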
2023-05-20 09:00:45 -04:00
Danny Avila
8c7ad09977 style(mobile.css): decrease z-index of .nav-mask to 35 (#337) 2023-05-19 21:09:07 -04:00
Danny Avila
cef2668f53 style(NewConversationMenu): add z-index to Dialog and DropdownMenuContent (#335)
style(mobile.css): decrease z-index of .nav to 40
2023-05-19 19:58:53 -04:00
Danny Avila
ab7cfc6041 Hotfix (#334)
* style(NavLinks.jsx): add 'as="div"' to Menu.Item components
refactor(Nav.jsx): remove unused code and add isMobile function to check if user is on mobile device

* conditionally render menuitem with search

---------

Co-authored-by: stunt_pilot <twitchstuntpilot@gmail.com>
2023-05-19 19:37:56 -04:00
Danny Avila
a9444b66a1 Release 0.4.6 (#332) 2023-05-19 16:21:45 -04:00
Danny Avila
ec561fcd7f Fixes all Nav Menu related errors and bugs (#331)
* chore(client): update lucide-react package to version 0.220.0
style(client): change color of MessageHeader component text to gray-500
style(client): change color of nav-close-button to gray-400 and nav-open-button to gray-500
feat(client): add Panel component to replace svg icons in Nav component

* fix: forwardRef errors in Nav Menu

* refactor(SearchBar.jsx): change clearSearch prop destructuring to props destructuring
refactor(SearchBar.jsx): add ref prop to SearchBar component
refactor(getIcon.jsx): remove unused imports
refactor(getIcon.jsx): add nullish coalescing operator to user.name and user.avatar properties

* fix (NavLinks): modals no longer close on nav menu close

* style(ExportModel.jsx): remove unnecessary z-index property from a div element

* style(ExportModel.jsx): remove trailing whitespace in input element

* refactor(Message.jsx): remove unused cancelled variable
fix(Message.jsx): fix error message length exceeding 512 characters
refactor(MenuItem.jsx): remove unused MenuItem component
2023-05-19 16:02:41 -04:00
Anirudh
ee2b3e4fb2 Refactor UI styles & configurations (#324)
* Refactor UI styles & configurations

-  Modify button styles and their color schemes to create a consistent user experience when interacting with buttons.
-  Adjust the design of the search bar to a more user-friendly layout by changing its background color and styling.
-  Create a responsive mobile behavior for the navigation bar to hide it behind a menu icon instead of permanently displaying it.

* Update .gitignore to exclude unnecessary files for Meilisearch

Update .gitignore to exclude meilisearch.exe and data.ms/*, which are not necessary for Meilisearch.

* feat: Add getCurrentBreakpoint function to get current breakpoint

This commit adds a getCurrentBreakpoint function to determine the current breakpoint of the viewport. The function uses fullConfig to find the largest configured breakpoint that fits the current window width and returns it. It also updates the useEffect hook to use getCurrentBreakpoint instead of checking whether the userAgent matches a mobile regex.
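
A sketch of how such a helper might look with Tailwind's resolveConfig (the config path is an assumption):

```js
import resolveConfig from 'tailwindcss/resolveConfig';
import tailwindConfig from '../../tailwind.config.cjs'; // assumed path

const fullConfig = resolveConfig(tailwindConfig);

// Return the largest configured breakpoint whose min-width fits the window.
export function getCurrentBreakpoint() {
  const entries = Object.entries(fullConfig.theme.screens)
    .map(([name, size]) => [name, parseInt(size, 10)])
    .sort((a, b) => a[1] - b[1]);

  let current = entries.length ? entries[0][0] : 'sm';
  for (const [name, minWidth] of entries) {
    if (window.innerWidth >= minWidth) {
      current = name;
    }
  }
  return current;
}
```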

* Update tailwind import path in Nav component

The import path for the tailwind config was updated in the Nav component to match the new project structure. This ensures that the correct Tailwind styles are applied to the component and improves maintainability.

* Add ThemeContext and cn utility function to Nav component

This commit adds the ThemeContext (via useContext) and the cn utility function (via import) to the Nav component's dependencies. It also modifies a class name with a ternary operator that toggles based on the theme value provided by ThemeContext.
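
A minimal sketch of that pattern (import paths, component name, and class names are assumptions):

```jsx
import { useContext } from 'react';
import { ThemeContext } from '~/hooks/ThemeContext'; // assumed path
import { cn } from '~/utils';                        // assumed utility location

export default function NavCloseButton({ onClick }) {
  const { theme } = useContext(ThemeContext);

  // Toggle classes with a ternary based on the current theme value.
  return (
    <button
      onClick={onClick}
      className={cn(
        'nav-close-button px-2 py-1',
        theme === 'dark' ? 'text-gray-400 hover:text-gray-300' : 'text-gray-500 hover:text-gray-600'
      )}
    >
      Close
    </button>
  );
}
```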

* Update Nav button styles for better visibility

Changed the button styles for the Nav close and open buttons to enhance visibility. The text color for both buttons now changes on hover, to gray and gray-600 respectively.

* Improve message header styles and add transition effects

This commit updates the MessageHeader component styles by adjusting the text color, as well as adding transition effects to enhance the hover experience. The commit also tweaks mobile styles by adding a transition effect to `.nav` when resizing the window to mobile size.

* Refactor the message header component styling for better visual contrast

The message header component was refactored to improve its visual contrast by changing the text color for better readability. The styles of the component were modified to improve hover behavior as well as transition effects. The setSaveAsDialogShow method was shifted to the onClick prop to only execute when the endpoint is not 'chatGPTBrowser'.

* refactor: Update styling of MessageHeader and Nav buttons

This commit refactors the CSS styling of the buttons in the MessageHeader and Nav components, updating the text and hover colors for the dark and light themes.
2023-05-19 10:51:34 -04:00
Danny Avila
67716f0d2d fix(auth.service.js): fixes deprecated error callback in mongoose save method (#323) 2023-05-18 20:08:35 -04:00
Danny Avila
e56d90e45a fix(User.js, auth.service.js, localStrategy.js): change deprecated Joi.validate() to schema.validate() method (#322) 2023-05-18 17:39:06 -04:00
Danny Avila
92eee52c52 feat (presets): hide/show endpoints, increase preset menu size in general and dynamic to endpoints (#320) 2023-05-18 16:01:16 -04:00
Danny Avila
f4d995be4c chore: dependabot updates (#319) 2023-05-18 15:32:03 -04:00
dependabot[bot]
fbdfbdd620 npm all prod(deps): bump react-router-dom from 6.11.1 to 6.11.2 (#308)
Bumps [react-router-dom](https://github.com/remix-run/react-router/tree/HEAD/packages/react-router-dom) from 6.11.1 to 6.11.2.
- [Release notes](https://github.com/remix-run/react-router/releases)
- [Changelog](https://github.com/remix-run/react-router/blob/main/packages/react-router-dom/CHANGELOG.md)
- [Commits](https://github.com/remix-run/react-router/commits/react-router-dom@6.11.2/packages/react-router-dom)

---
updated-dependencies:
- dependency-name: react-router-dom
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-18 15:28:36 -04:00
dependabot[bot]
fec733e10b npm client dev(deps-dev): bump source-map-loader in /client (#307)
Bumps [source-map-loader](https://github.com/webpack-contrib/source-map-loader) from 1.1.3 to 4.0.1.
- [Release notes](https://github.com/webpack-contrib/source-map-loader/releases)
- [Changelog](https://github.com/webpack-contrib/source-map-loader/blob/master/CHANGELOG.md)
- [Commits](https://github.com/webpack-contrib/source-map-loader/compare/v1.1.3...v4.0.1)

---
updated-dependencies:
- dependency-name: source-map-loader
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-18 15:28:19 -04:00
dependabot[bot]
23905dd344 npm all prod(deps): bump jake from 10.8.5 to 10.8.6 (#306)
Bumps [jake](https://github.com/jakejs/jake) from 10.8.5 to 10.8.6.
- [Changelog](https://github.com/jakejs/jake/blob/main/changelog.md)
- [Commits](https://github.com/jakejs/jake/compare/v10.8.5...v10.8.6)

---
updated-dependencies:
- dependency-name: jake
  dependency-type: indirect
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-18 15:27:58 -04:00
dependabot[bot]
ec13d74b84 npm client prod(deps): bump class-variance-authority in /client (#304)
Bumps [class-variance-authority](https://github.com/joe-bell/cva) from 0.4.0 to 0.6.0.
- [Release notes](https://github.com/joe-bell/cva/releases)
- [Commits](https://github.com/joe-bell/cva/compare/v0.4.0...v0.6.0)

---
updated-dependencies:
- dependency-name: class-variance-authority
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-18 15:27:40 -04:00
dependabot[bot]
231906161b npm client dev(deps-dev): bump typescript from 4.9.5 to 5.0.4 in /client (#303)
Bumps [typescript](https://github.com/Microsoft/TypeScript) from 4.9.5 to 5.0.4.
- [Release notes](https://github.com/Microsoft/TypeScript/releases)
- [Commits](https://github.com/Microsoft/TypeScript/compare/v4.9.5...v5.0.4)

---
updated-dependencies:
- dependency-name: typescript
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-18 15:27:20 -04:00
dependabot[bot]
5c787035e5 npm all prod(deps): bump remark-parse from 10.0.1 to 10.0.2 (#302)
Bumps [remark-parse](https://github.com/remarkjs/remark) from 10.0.1 to 10.0.2.
- [Release notes](https://github.com/remarkjs/remark/releases)
- [Changelog](https://github.com/remarkjs/remark/blob/main/changelog.md)
- [Commits](https://github.com/remarkjs/remark/compare/remark-parse@10.0.1...remark-parse@10.0.2)

---
updated-dependencies:
- dependency-name: remark-parse
  dependency-type: indirect
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-18 15:27:00 -04:00
dependabot[bot]
2694690ed0 npm all prod(deps): bump cross-fetch from 3.1.5 to 3.1.6 (#301)
Bumps [cross-fetch](https://github.com/lquixada/cross-fetch) from 3.1.5 to 3.1.6.
- [Release notes](https://github.com/lquixada/cross-fetch/releases)
- [Changelog](https://github.com/lquixada/cross-fetch/blob/v3.1.6/CHANGELOG.md)
- [Commits](https://github.com/lquixada/cross-fetch/compare/v3.1.5...v3.1.6)

---
updated-dependencies:
- dependency-name: cross-fetch
  dependency-type: indirect
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-18 15:26:39 -04:00
dependabot[bot]
562bf8c920 npm api prod(deps): bump joi from 14.3.1 to 17.9.2 in /api (#300)
Bumps [joi](https://github.com/hapijs/joi) from 14.3.1 to 17.9.2.
- [Commits](https://github.com/hapijs/joi/compare/v14.3.1...v17.9.2)

---
updated-dependencies:
- dependency-name: joi
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-18 15:26:14 -04:00
dependabot[bot]
2781154df3 npm all prod(deps): bump mongoose from 6.11.1 to 7.1.1 (#299)
Bumps [mongoose](https://github.com/Automattic/mongoose) from 6.11.1 to 7.1.1.
- [Release notes](https://github.com/Automattic/mongoose/releases)
- [Changelog](https://github.com/Automattic/mongoose/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Automattic/mongoose/compare/6.11.1...7.1.1)

---
updated-dependencies:
- dependency-name: mongoose
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-18 15:25:55 -04:00
dependabot[bot]
691b6d9029 npm api prod(deps): bump mongoose from 6.11.1 to 7.1.1 in /api (#298)
Bumps [mongoose](https://github.com/Automattic/mongoose) from 6.11.1 to 7.1.1.
- [Release notes](https://github.com/Automattic/mongoose/releases)
- [Changelog](https://github.com/Automattic/mongoose/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Automattic/mongoose/compare/6.11.1...7.1.1)

---
updated-dependencies:
- dependency-name: mongoose
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-18 15:25:36 -04:00
dependabot[bot]
c6c3054c22 npm client prod(deps): bump filenamify from 5.1.1 to 6.0.0 in /client (#297)
Bumps [filenamify](https://github.com/sindresorhus/filenamify) from 5.1.1 to 6.0.0.
- [Release notes](https://github.com/sindresorhus/filenamify/releases)
- [Commits](https://github.com/sindresorhus/filenamify/compare/v5.1.1...v6.0.0)

---
updated-dependencies:
- dependency-name: filenamify
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-18 15:25:17 -04:00
Danny Avila
d71b61ad71 minor fixes (#318)
* refactor(SearchBar.jsx): extract onChange function to a separate function and add onKeyDown event listener to prevent spacebar from propagating

* refactor(SearchBar.jsx): extract onChange function to a separate function and add onKeyDown event listener to prevent spacebar from propagating
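
A small sketch of that keydown handler (component and prop names are assumptions):

```jsx
export default function SearchInput({ value, onChange }) {
  // Stop the spacebar keydown from bubbling up to parent menu components,
  // which would otherwise intercept the key press.
  const handleKeyDown = (e) => {
    if (e.key === ' ') {
      e.stopPropagation();
    }
  };

  return <input value={value} onChange={onChange} onKeyDown={handleKeyDown} />;
}
```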

* refactor(SearchBar.jsx): remove unused React import statement
2023-05-18 15:22:48 -04:00
Dan Orlando
47533736e3 fix: turn off react-in-jsx-scope rule (#317) 2023-05-18 15:12:19 -04:00
Dan Orlando
a17b878617 refactor: reformat files to require parens around params (#316) 2023-05-18 14:44:07 -04:00
dependabot[bot]
91ef4872d6 npm api prod(deps): bump meilisearch from 0.31.1 to 0.32.3 in /api (#296)
Bumps [meilisearch](https://github.com/meilisearch/meilisearch-js) from 0.31.1 to 0.32.3.
- [Release notes](https://github.com/meilisearch/meilisearch-js/releases)
- [Commits](https://github.com/meilisearch/meilisearch-js/compare/v0.31.1...v0.32.3)

---
updated-dependencies:
- dependency-name: meilisearch
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-18 14:30:23 -04:00
Danny Avila
c1ddd07166 chore(EditPresetDialog.jsx): fix formatting and linting issues
feat(Settings.jsx): change max-height of the settings dialog to fit the content better
2023-05-18 14:20:41 -04:00
Dan Orlando
7fdc862042 Build/Refactor: lint pre-commit hook and reformat repo to spec (#314)
* build/refactor: move lint/prettier packages to project root, install husky, add pre-commit hook

* refactor: reformat files

* build: put full eslintrc back with all rules
2023-05-18 14:09:31 -04:00
Danny Avila
8d75b25104 Fixes (#313)
* refactor(endpoints.js): remove console.log statement
refactor(index.html): change title to "ChatGPT Clone"

* feat(Chat.jsx): set document title to conversation title or VITE_APP_TITLE or 'Chat' if conversation is null
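
A sketch of that effect, assuming a Vite environment variable and a conversation object (the hook name is illustrative):

```jsx
import { useEffect } from 'react';

// Keep the browser tab title in sync with the active conversation.
export function useDocumentTitle(conversation) {
  useEffect(() => {
    document.title = conversation?.title || import.meta.env.VITE_APP_TITLE || 'Chat';
  }, [conversation]);
}
```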
2023-05-18 07:45:07 -04:00
Danny Avila
26152d7e5f feat(api): add support for user-provided OpenAI API key (#311)
- Add support for user-provided OpenAI API key by setting OPENAI_KEY to
  "user_provided" in .env.example
- Pass oaiApiKey to titleConvo function in titleConvo.js
- Pass oaiApiKey to askClient function in askOpenAI.js
- Modify openAI object in endpoints.js to include a userProvide property
  based on whether OPENAI_KEY is set to "user_provided" or not (see the sketch below).
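
A rough sketch of what that endpoints entry could look like (field names other than userProvide are assumptions):

```js
// endpoints.js (sketch): tell the client whether the user must supply a key.
const { OPENAI_KEY, OPENAI_MODELS } = process.env;

const endpointsConfig = {
  openAI: {
    availableModels: (OPENAI_MODELS || 'gpt-3.5-turbo').split(','),
    userProvide: OPENAI_KEY === 'user_provided',
  },
};

module.exports = endpointsConfig;
```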
2023-05-17 21:58:56 -04:00
John Chen
61a4231feb fix duplicate instructions (#310) 2023-05-17 20:18:50 -04:00
Olivier Contant
c9b035a0bd Docs/security guideline (#295)
* Create dependabot.yml

Initial dependabot.yml

* Create SECURITY.md

Guidelines for security researchers to report vulnerabilities and communicate discoveries to our project community.

* Update SECURITY.md

Changed wording for the initial contact via the Discord channel and added a GitHub Issues guideline.
2023-05-17 19:23:58 -04:00
dncc89
44ea3601c9 feat: Frontend app title environment variable (#291)
* Add app name change support

* fix indentation
2023-05-17 19:23:13 -04:00
Pawan Kumar
782a899ab3 calculate and add token usage to streaming chat (#287) 2023-05-17 19:22:35 -04:00
Anirudh
14104b276f Added functionality to allow users to set custom api keys (#276)
* Added functionality to allow users to set custom api keys

* Added error handling

* Changed token to apiKey

* Changed apiKey to oaiApiKey

* added azure openai ui

* Removed logging

* Changed configure to Use

* Made checked position more rounded

* Made setting api key optional if it is openai

* Modified error handling

* Add support for insufficient_quota errors

* Fixed faulty error detection

* removed logging
2023-05-17 19:21:30 -04:00
Danny Avila
08f3a77d58 Update README.md 2023-05-17 14:48:47 -04:00
Danny Avila
ca26732cb8 Update docker_install.md 2023-05-17 09:58:10 -04:00
Danny Avila
dbf45196ee Release 0.4.5 (#282)
* Release 0.4.5

* Update @waylaidwanderer/node-chatgpt-api to latest version
* Update dockerfiles to use workspaces and ensure packages are @ latest
* Remove package-lock.json files from workspace directories as no longer needed

* refactor(api): remove deprecated text-davinci-002-render-paid model from CHATGPT_MODELS
refactor(api/client): change model comparison to use startsWith() instead of === for GPT-4 models
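
The model comparison change amounts to something like this (a sketch, not the exact client code):

```js
// Treat any GPT-4 variant (e.g. 'gpt-4-0314') as a GPT-4 model,
// instead of matching only the exact string 'gpt-4'.
const isGpt4 = (model) => model.startsWith('gpt-4');

isGpt4('gpt-4');         // true
isGpt4('gpt-4-0314');    // true (missed by the old strict-equality check)
isGpt4('gpt-3.5-turbo'); // false
```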
2023-05-16 14:30:24 -04:00
Fuegovic
45a2aaf7b8 docs : add basic info document in multiple languages (#285)
* Create multilingual_information.md

add a multilingual document with basic information about the project for non-native English speakers

* Update README toc to add multilingual info

add the multilingual info doc to the table of content (under General Information)
2023-05-16 13:30:52 -04:00
Dan Orlando
c02c62f3b1 fix: fix link to coding conventions doc in contributor guidelines (#283)
* doc: coding conventions and proposal submissions

* make coding_contention.md path relative in contributor guidelines

* fix: remove / from coding conventions link
2023-05-16 12:09:17 -04:00
Dan Orlando
4718674688 doc: coding conventions and proposal submissions (#250)
* doc: coding conventions and proposal submissions

* doc: add code standards to TOC

* make coding_contention.md path relative in contributor guidelines
2023-05-16 09:50:16 -04:00
Danny Avila
0e3c115368 Update README.md 2023-05-16 09:48:54 -04:00
Danny Avila
cc506c23af Update README.md 2023-05-16 06:53:56 -04:00
Danny Avila
1f77d94b7e Update README.md 2023-05-16 06:52:47 -04:00
Danny Avila
5711ff27ee fix(getIcon.jsx): match initial styling better with official (#277) 2023-05-15 12:15:33 -04:00
David Shin
3120602d6a feat: Add user icon in messages (#275)
* Update GPT4 model icon

* Add user icon support in messages
2023-05-15 11:51:58 -04:00
David Shin
9f36e195bc Update GPT4 model icon (#274) 2023-05-15 10:08:30 -04:00
Fuegovic
9de7da91a7 Fix: install instructions (#272)
* Update windows_install.md

removed -dev argument

* Update mac_install.md

removed `-dev` arguments

* Update linux_install.md

removed "-dev" argument

* Update windows_install.md

correction to update procedure

* Update windows_install.md

update bat file instructions

* Update mac_install.md

update bash command

* Update linux_install.md

update bash script and update instructions

* Update linux_install.md

fix mistake in update instruction
2023-05-15 07:49:49 -04:00
Danny Avila
501a15a18f Release 0.4.4 (#271) 2023-05-14 20:39:40 -04:00
Danny Avila
6049c9e3ff Fix react errors, max context tokens, and preset mobile view (#269)
* fix: react errors

* fix: max tokens issue

* fix: max tokens issue
2023-05-14 17:26:21 -04:00
Pawan Kumar
262b402606 fix code to adjust max_tokens according to model selection (#263) 2023-05-14 12:16:38 -04:00
Danny Avila
56ea9563b8 refactor(style.css): change font file paths (#268) 2023-05-14 12:12:56 -04:00
Anirudh
2cd6612620 Fonts (#261) 2023-05-14 12:06:53 -04:00
Danny Avila
5d40396fb2 refactor(Conversation.js): change default pageSize from 12 to 14 in getConvosByPage and getConvosQueried functions. Remove unnecessary parentheses and curly braces in getConvosQueried function. Remove unnecessary parentheses in deleteConvos function. (#267) 2023-05-14 11:45:18 -04:00
Anirudh
93dd1eb036 Add Popup Menu to Save Space in Sidebar (#260)
---------

Co-authored-by: Danny Avila <110412045+danny-avila@users.noreply.github.com>
2023-05-14 11:42:17 -04:00
Anton Volnuhin
542a46dc7c Correct the typo in auth.json for accessing Google Palm (#266)
Co-authored-by: Anton Volnuhin <anton@volnuhin.ru>
2023-05-14 11:25:22 -04:00
Anirudh
bf31b1fea0 Msg Clipboard to checkmark (optimistic UX) (#247)
* revert unintended package-lock.json change

* used default checkmark which is included in project

---------

Co-authored-by: Danny Avila <110412045+danny-avila@users.noreply.github.com>
2023-05-14 09:00:20 -04:00
Danny Avila
25d4529ff9 Release v0.4.3 2023-05-13 17:10:19 -04:00
Danny Avila
33d7c67c04 Release v0.4.3 2023-05-13 17:09:25 -04:00
Danny Avila
dc8f762bac Release v0.4.3 2023-05-13 17:08:28 -04:00
Danny Avila
49041e16c7 chore: bump package versions to 0.4.3 (#265) 2023-05-13 16:59:45 -04:00
Danny Avila
3414690e42 Feat: PaLM 2 (#262)
* feat(api): add googleapis package to package.json
feat(api): add reqDemo.js file to make a request to Google Cloud AI Platform API to get a response from a chatbot model.

* feat: add PaLM2 support

* feat(conversationPreset.js): add support for topP and topK for google endpoint
feat(askGoogle.js): add support for topP and topK for google endpoint
feat(ask/index.js): add google endpoint
feat(endpoints.js): add google endpoint
feat(MessageHeader.jsx): add support for modelLabel for google endpoint
feat(PresetItem.jsx): add support for modelLabel for google endpoint
feat(HoverButtons.jsx): add support for google endpoint
feat(createPayload.ts): add google endpoint
feat(types.ts): add google endpoint
feat(store/endpoints.js): add google endpoint
feat(cleanupPreset.js): add support for topP and topK for google endpoint
feat(getDefaultConversation.js): add support for topP and topK for google endpoint
feat(handleSubmit.js): add support for topP and topK for google endpoint

* fix: messages payload

* refactor(GoogleClient.js): set maxContextTokens based on isTextModel value
feat(GoogleClient.js): add delay option to TextStream constructor
feat(getIcon.jsx): add support for google endpoint and PaLM2 model label

* feat: palm frontend changes

* feat(askGoogle.js): set default example to empty input and output
feat(Examples.jsx): add ability to add and remove examples
refactor(Settings.jsx): remove examples from props and setOption function

style(GoogleOptions): remove unnecessary whitespace after Settings2 import
feat(GoogleOptions): add addExample and removeExample functions to manage examples
fix(cleanupPreset): set default example to [{ input: '', output: ''}]
fix(getDefaultConversation): set default example to [{ input: '', output: ''}]
fix(handleSubmit): set default example to [{ input: '', output: ''}]

* style(client): adjust height of settings and examples components to 350px
fix(client): fix path to palm.png image in getIcon.jsx file

* style(EndpointOptionsPopover.jsx, Examples.jsx, Settings.jsx): improve button styles and update input placeholders

* feat (palm): finalize examples on the frontend

* feat(GoogleClient.js): filter out empty examples in options
feat(GoogleClient.js): add support for promptPrefix in buildPayload method
feat(GoogleClient.js): add support for examples in buildPayload method
feat(conversationPreset.js): add maxOutputTokens field to conversation preset schema
feat(presetSchema.js): add examples field to preset schema
feat(askGoogle.js): add support for examples and promptPrefix in endpointOption
feat(EditPresetDialog.jsx): add Examples component for Google endpoint
feat(EditPresetDialog.jsx): add button to show/hide Examples component
feat(EditPresetDialog.jsx): add functionality to add, remove, and edit examples in Examples component
feat(EndpointOptionsDialog.jsx): change endpoint name to PaLM for Google endpoint
feat(Settings.jsx): add maxHeight prop to limit height of Settings component in EditPresetDialog and EndpointOptionsDialog

fix(Settings.jsx): add examples prop to ChatGPTBrowser component
fix(EndpointItem.jsx): add alternate name for google endpoint
fix(MessageHeader.jsx): change title for google endpoint to PaLM
feat(endpoints.js): add google endpoint to endpointsConfig
fix(cleanupPreset.js): add missing comma in examples array

* chore: change endpoint order

* feat(PaLM 2): complete for testing

* fix(PaLM): handle blocked messages
2023-05-13 16:29:06 -04:00
LaraClara
95c97561ae chore: NPM Workspaces and scripts (#244)
* chore: NPM Workspaces and scripts
- Allows everything to be run in the root directory

* chore:Update package-lock after workspace change

* docs: Minor docs typo fix
- most people run in dev mode (i.e. Vite runs the server); this defaults to that method
2023-05-12 09:40:14 -04:00
Danny Avila
8bb4d7d590 Release 0.4.2 2023-05-11 16:46:27 -04:00
348 changed files with 44509 additions and 35638 deletions

View File

@@ -1,2 +1,4 @@
**/node_modules
api/.env
.env
client/dist/images

View File

@@ -10,27 +10,27 @@
# Set Node env to development if running in dev mode.
HOST=localhost
PORT=3080
NODE_ENV=production
# Change this to proxy any API request.
# It's useful if your machine has difficulty calling the original API server.
# PROXY=
# Change this to your MongoDB URI if different. I recommend appending chatgpt-clone.
MONGO_URI=mongodb://127.0.0.1:27017/chatgpt-clone
# Change this to your MongoDB URI if different. I recommend appending LibreChat.
MONGO_URI=mongodb://127.0.0.1:27017/LibreChat
##########################
# OpenAI Endpoint:
##########################
# Access key from OpenAI platform.
# Leave it blank to disable this feature.
OPENAI_KEY=
# Leave it blank to disable this feature.
# Set to "user_provided" to allow the user to provide their API key from the UI.
OPENAI_API_KEY=user_provided
# Identify the available models, separated by commas *without spaces*.
# The first will be default.
# Leave it blank to use internal settings.
OPENAI_MODELS=gpt-3.5-turbo,gpt-3.5-turbo-0301,text-davinci-003,gpt-4
OPENAI_MODELS=gpt-3.5-turbo,gpt-3.5-turbo-0301,text-davinci-003,gpt-4,gpt-4-0314
# Reverse proxy settings for OpenAI:
# https://github.com/waylaidwanderer/node-chatgpt-api#using-a-reverse-proxy
@@ -42,24 +42,26 @@ OPENAI_MODELS=gpt-3.5-turbo,gpt-3.5-turbo-0301,text-davinci-003,gpt-4
# To use Azure with this project, set the following variables. These will be used to build the API URL.
# Chat completion:
# `https://{AZURE_OPENAI_API_INSTANCE_NAME}.openai.azure.com/openai/deployments/{AZURE_OPENAI_API_DEPLOYMENT_NAME}/chat/completions?api-version={AZURE_OPENAI_API_VERSION}`;
# `https://{AZURE_OPENAI_API_INSTANCE_NAME}.openai.azure.com/openai/deployments/{AZURE_OPENAI_API_DEPLOYMENT_NAME}/chat/completions?api-version={AZURE_OPENAI_API_VERSION}`;
# You should also consider changing the `OPENAI_MODELS` variable above to the models available in your instance/deployment.
# Note: I've noticed that the Azure API is much faster than the OpenAI API, so the streaming looks almost instantaneous.
# Note "AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME" and "AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME" are optional but might be used in the future
# AZURE_OPENAI_API_KEY=
# AZURE_OPENAI_API_INSTANCE_NAME=
# AZURE_OPENAI_API_DEPLOYMENT_NAME=
# AZURE_OPENAI_API_VERSION=
# AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME= # Optional, but may be used in future updates
# AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME= # Optional, but may be used in future updates
# AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME=
# AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME=
##########################
# BingAI Endpoint:
##########################
# Also used for Sydney and jailbreak
# BingAI Tokens: the "_U" cookies value from bing.com
# To get your Access token for Bing, login to https://www.bing.com
# Use dev tools or an extension while logged into the site to copy the content of the _U cookie.
#If this fails, follow these instructions https://github.com/danny-avila/LibreChat/issues/370#issuecomment-1560382302 to provide the full cookie strings.
# Set to "user_provided" to allow the user to provide its token from the UI.
# Leave it blank to disable this endpoint.
BINGAI_TOKEN="user_provided"
@@ -82,12 +84,56 @@ CHATGPT_TOKEN="user_provided"
# Identify the available models, separated by commas. The first will be default.
# Leave it blank to use internal settings.
CHATGPT_MODELS=text-davinci-002-render-sha,text-davinci-002-render-paid,gpt-4
CHATGPT_MODELS=text-davinci-002-render-sha,gpt-4
# NOTE: you can add gpt-4-plugins, gpt-4-code-interpreter, and gpt-4-browsing to the list above and use the models for these features;
# however, the view/display portion of these features are not supported, but you can use the underlying models, which have higher token context
# Also: text-davinci-002-render-paid is deprecated as of May 2023
# Reverse proxy settings for ChatGPT
# https://github.com/waylaidwanderer/node-chatgpt-api#using-a-reverse-proxy
# By default, the server will use the node-chatgpt-api recommended proxy (a third party server).
# CHATGPT_REVERSE_PROXY=
# Reverse proxy setting for OpenAI
# https://github.com/waylaidwanderer/node-chatgpt-api#using-a-reverse-proxy
# By default it will use the node-chatgpt-api recommended proxy, (it's a third party server)
# CHATGPT_REVERSE_PROXY=<YOUR REVERSE PROXY>
#############################
# Plugins:
#############################
# For securely storing credentials, you need a fixed key and IV. You can set them here for prod and dev environments
# If you don't set them, the app will crash on startup.
# You need a 32-byte key (64 characters in hex) and 16-byte IV (32 characters in hex)
# Use this replit to generate some quickly: https://replit.com/@daavila/crypto#index.js
# Here are some examples (THESE ARE NOT SECURE!)
CREDS_KEY=f34be427ebb29de8d88c107a71546019685ed8b241d8f2ed00c3df97ad2566f0
CREDS_IV=e2341419ec3dd3d19b13a1a87fafcbfb
# AI-Assisted Google Search
# This bot supports searching google for answers to your questions with assistance from GPT!
# See detailed instructions here: https://github.com/danny-avila/chatgpt-clone/blob/main/docs/features/plugins/google_search.md
GOOGLE_API_KEY=
GOOGLE_CSE_ID=
# StableDiffusion WebUI
# This bot supports StableDiffusion WebUI, using its API to generate requested images.
SD_WEBUI_URL=http://0.0.0.0:7860
##########################
# PaLM (Google) Endpoint:
##########################
# Follow the instruction here to setup:
# https://github.com/danny-avila/LibreChat/blob/main/docs/install/apis_and_tokens.md
PALM_KEY="user_provided"
# In case you need a reverse proxy for this endpoint:
# GOOGLE_REVERSE_PROXY=
##########################
# Proxy: To be Used by all endpoints
##########################
PROXY=
##########################
# Search:
@@ -118,6 +164,10 @@ MEILI_MASTER_KEY=DrhYf7zENyR6AlUCKmnz0eYASOQdl6zxH7s7MKFSfFCt
# User System:
##########################
# JWT Secrets
JWT_SECRET=secret
JWT_REFRESH_SECRET=secret
# Google:
# Add your Google Client ID and Secret here, you must register an app with Google Cloud to get these values
# https://cloud.google.com/
@@ -125,22 +175,32 @@ GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
GOOGLE_CALLBACK_URL=/oauth/google/callback
#JWT:
JWT_SECRET_DEV=secret
# Add a secure secret for production if deploying to live domain.
JWT_SECRET_PROD=secret
# Set the expiration delay for the secure cookie with the JWT token
# Delay is in millisecond e.g. 7 days is 1000*60*60*24*7
SESSION_EXPIRY=1000 * 60 * 60 * 24 * 7
SESSION_EXPIRY=(1000 * 60 * 60 * 24) * 7
# Site URLs:
# Don't forget to set Node env to development in the Server configuration section above
# if you want to run in dev mode
CLIENT_URL_DEV=http://localhost:3090
SERVER_URL_DEV=http://localhost:3080
###########################
# Application Domains
###########################
# Change these values to domain if deploying:
CLIENT_URL_PROD=http://localhost:3080
SERVER_URL_PROD=http://localhost:3080
# Note: server = backend, client = public (the client is the url you visit)
# For the google login to work in dev mode, you will likely need to change DOMAIN_SERVER to localhost:3090 or place it in .env.development
DOMAIN_CLIENT=http://localhost:3080
DOMAIN_SERVER=http://localhost:3080
###########################
# Frontend Configuration (Vite):
###########################
# Custom app name, this text will be displayed in the landing page and the footer.
VITE_APP_TITLE="LibreChat"
# Enable Social Login
# This enables/disables the Login with Google button on the login page.
# Set to true if you have registered the app with google cloud services
# and have set the GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET in the /api/.env file
VITE_SHOW_GOOGLE_LOGIN_OPTION=false
# Allow Public Registration
ALLOW_REGISTRATION=true

115
.eslintrc.js Normal file
View File

@@ -0,0 +1,115 @@
module.exports = {
env: {
browser: true,
es2021: true,
node: true,
commonjs: true,
es6: true
},
extends: [
'eslint:recommended',
'plugin:react/recommended',
'plugin:react-hooks/recommended',
"plugin:jest/recommended",
'prettier'
],
parser: '@typescript-eslint/parser',
parserOptions: {
ecmaVersion: 'latest',
sourceType: 'module',
ecmaFeatures: {
jsx: true
}
},
plugins: ['react', 'react-hooks', '@typescript-eslint'],
rules: {
'react/react-in-jsx-scope': 'off',
'@typescript-eslint/ban-ts-comment': ['error', { 'ts-ignore': 'allow-with-description' }],
indent: ['error', 2, { SwitchCase: 1 }],
'max-len': [
'error',
{
code: 150,
ignoreStrings: true,
ignoreTemplateLiterals: true,
ignoreComments: true
}
],
'linebreak-style': 0,
// "arrow-parens": [2, "as-needed", { requireForBlockBody: true }],
// 'no-plusplus': ['error', { allowForLoopAfterthoughts: true }],
'no-console': 'off',
'import/extensions': 'off',
'no-use-before-define': [
'error',
{
functions: false
}
],
'no-promise-executor-return': 'off',
'no-param-reassign': 'off',
'no-continue': 'off',
'no-restricted-syntax': 'off',
'react/prop-types': ['off'],
'react/display-name': ['off']
},
overrides: [
{
files: ['**/*.ts', '**/*.tsx'],
rules: {
'no-unused-vars': 'off', // off because it conflicts with '@typescript-eslint/no-unused-vars'
'react/display-name': 'off',
'@typescript-eslint/no-unused-vars': 'warn'
}
},
{
files: ['rollup.config.js', '.eslintrc.js', 'jest.config.js'],
env: {
node: true,
}
},
{
files: [
'**/*.test.js',
'**/*.test.jsx',
'**/*.test.ts',
'**/*.test.tsx',
'**/*.spec.js',
'**/*.spec.jsx',
'**/*.spec.ts',
'**/*.spec.tsx',
'setupTests.js'
],
env: {
jest: true,
node: true
},
rules: {
'react/display-name': 'off',
'react/prop-types': 'off',
'react/no-unescaped-entities': 'off'
}
},
{
files: '**/*.+(ts)',
parser: '@typescript-eslint/parser',
parserOptions: {
project: './client/tsconfig.json'
},
plugins: ['@typescript-eslint/eslint-plugin', 'jest'],
extends: [
'plugin:@typescript-eslint/eslint-recommended',
'plugin:@typescript-eslint/recommended'
]
}
],
settings: {
react: {
createClass: 'createReactClass', // Regex for Component Factory to use,
// default to "createReactClass"
pragma: 'React', // Pragma to use, default to "React"
fragment: 'Fragment', // Fragment to use (may be a property of <pragma>), default to "Fragment"
version: 'detect' // React version. "detect" automatically picks the version you have installed.
}
}
};

64
.github/ISSUE_TEMPLATE/BUG-REPORT.yml vendored Normal file
View File

@@ -0,0 +1,64 @@
name: Bug Report
description: File a bug report
title: "[Bug]: "
labels: ["bug"]
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this bug report!
- type: input
id: contact
attributes:
label: Contact Details
description: How can we get in touch with you if we need more info?
placeholder: ex. email@example.com
validations:
required: false
- type: textarea
id: what-happened
attributes:
label: What happened?
description: Also tell us, what did you expect to happen?
placeholder: Please give as many details as possible
validations:
required: true
- type: textarea
id: steps-to-reproduce
attributes:
label: Steps to Reproduce
description: Please list the steps needed to reproduce the issue.
placeholder: "1. Step 1\n2. Step 2\n3. Step 3"
validations:
required: true
- type: dropdown
id: browsers
attributes:
label: What browsers are you seeing the problem on?
multiple: true
options:
- Firefox
- Chrome
- Safari
- Microsoft Edge
- Mobile (iOS)
- Mobile (Android)
- type: textarea
id: logs
attributes:
label: Relevant log output
description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
render: shell
- type: textarea
id: screenshots
attributes:
label: Screenshots
description: If applicable, add screenshots to help explain your problem. You can drag and drop, paste images directly here or link to them.
- type: checkboxes
id: terms
attributes:
label: Code of Conduct
description: By submitting this issue, you agree to follow our [Code of Conduct](https://github.com/danny-avila/chatgpt-clone/blob/main/documents/contributions/code_of_conduct.md)
options:
- label: I agree to follow this project's Code of Conduct
required: true

View File

@@ -0,0 +1,57 @@
name: Feature Request
description: File a feature request
title: "Enhancement: "
labels: ["enhancement"]
body:
- type: markdown
attributes:
value: |
Thank you for taking the time to fill this out!
- type: input
id: contact
attributes:
label: Contact Details
description: How can we contact you if we need more information?
placeholder: ex. email@example.com
validations:
required: false
- type: textarea
id: what
attributes:
label: What features would you like to see added?
description: Please provide as many details as possible.
placeholder: Please provide as many details as possible.
validations:
required: true
- type: textarea
id: details
attributes:
label: More details
description: Please provide additional details if needed.
placeholder: Please provide additional details if needed.
validations:
required: true
- type: dropdown
id: subject
attributes:
label: Which components are impacted by your request?
multiple: true
options:
- General
- UI
- Endpoints
- Plugins
- Other
- type: textarea
id: screenshots
attributes:
label: Pictures
description: If relevant, please include images to help clarify your request. You can drag and drop images directly here, paste them, or provide a link to them.
- type: checkboxes
id: terms
attributes:
label: Code of Conduct
description: By submitting this issue, you agree to follow our [Code of Conduct](https://github.com/danny-avila/chatgpt-clone/blob/main/documents/contributions/code_of_conduct.md)
options:
- label: I agree to follow this project's Code of Conduct
required: true

58
.github/ISSUE_TEMPLATE/QUESTION.yml vendored Normal file
View File

@@ -0,0 +1,58 @@
name: Question
description: Ask your question
title: "[Question]: "
labels: ["question"]
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill this!
- type: input
id: contact
attributes:
label: Contact Details
description: How can we get in touch with you if we need more info?
placeholder: ex. email@example.com
validations:
required: false
- type: textarea
id: what-is-your-question
attributes:
label: What is your question?
description: Please give as many details as possible
placeholder: Please give as many details as possible
validations:
required: true
- type: textarea
id: more-details
attributes:
label: More Details
description: Please provide more details if needed.
placeholder: Please provide more details if needed.
validations:
required: true
- type: dropdown
id: browsers
attributes:
label: What is the main subject of your question?
multiple: true
options:
- Documentation
- Installation
- UI
- Endpoints
- User System/OAuth
- Other
- type: textarea
id: screenshots
attributes:
label: Screenshots
description: If applicable, add screenshots to help explain your problem. You can drag and drop, paste images directly here or link to them.
- type: checkboxes
id: terms
attributes:
label: Code of Conduct
description: By submitting this issue, you agree to follow our [Code of Conduct](https://github.com/danny-avila/chatgpt-clone/blob/main/documents/contributions/code_of_conduct.md)
options:
- label: I agree to follow this project's Code of Conduct
required: true

47
.github/dependabot.yml vendored Normal file
View File

@@ -0,0 +1,47 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
- package-ecosystem: "npm" # See documentation for possible values
directory: "/api" # Location of package manifests
target-branch: "develop"
versioning-strategy: increase-if-necessary
schedule:
interval: "weekly"
allow:
# Allow both direct and indirect updates for all packages
- dependency-type: "all"
commit-message:
prefix: "npm api prod"
prefix-development: "npm api dev"
include: "scope"
- package-ecosystem: "npm" # See documentation for possible values
directory: "/client" # Location of package manifests
target-branch: "develop"
versioning-strategy: increase-if-necessary
schedule:
interval: "weekly"
allow:
# Allow both direct and indirect updates for all packages
- dependency-type: "all"
commit-message:
prefix: "npm client prod"
prefix-development: "npm client dev"
include: "scope"
- package-ecosystem: "npm" # See documentation for possible values
directory: "/" # Location of package manifests
target-branch: "develop"
versioning-strategy: increase-if-necessary
schedule:
interval: "weekly"
allow:
# Allow both direct and indirect updates for all packages
- dependency-type: "all"
commit-message:
prefix: "npm all prod"
prefix-development: "npm all dev"
include: "scope"

View File

@@ -13,7 +13,7 @@ jobs:
BINGAI_TOKEN: ${{ secrets.BINGAI_TOKEN }}
CHATGPT_TOKEN: ${{ secrets.CHATGPT_TOKEN }}
MONGO_URI: ${{ secrets.MONGO_URI }}
OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
steps:
- uses: actions/checkout@v3
- uses: actions/setup-node@v3

View File

@@ -1,40 +1,35 @@
Please include a summary of the changes and the related issue. Please also include relevant motivation and context. List any dependencies that are required for this change.
Fixes # (issue)
## Type of change
Please delete options that are not relevant.
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update
# How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration
- [ ] Test A
- [ ] Test B
**Test Configuration**:
* Firmware version:
* Hardware:
* Toolchain:
* SDK:
# Checklist:
- [ ] My code follows the style guidelines of this project
- [ ] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes
- [ ] Any dependent changes have been merged and published in downstream modules
##
## [Go Back to ReadMe](../../README.md)
Please include a summary of the changes and the related issue. Please also include relevant motivation and context. List any dependencies that are required for this change.
## Type of change
Please delete options that are not relevant.
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update
- [ ] Documentation update
## How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration:
##
### **Test Configuration**:
##
## Checklist:
- [ ] My code follows the style guidelines of this project
- [ ] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes
- [ ] Any dependent changes have been merged and published in downstream modules

.gitignore vendored

@@ -26,6 +26,7 @@ dist/
public/main.js
public/main.js.map
public/main.js.LICENSE.txt
client/public/images/
client/public/main.js
client/public/main.js.map
client/public/main.js.LICENSE.txt
@@ -48,6 +49,9 @@ bower_components/
# Environment
.npmrc
.env
!.env.example
!.env.test.example
.env*
cache.json
api/data/
owner.yml
@@ -59,8 +63,10 @@ src/style - official.css
/playwright/.cache/
.DS_Store
*.code-workspace
.idea
junit.xml
# meilisearch
meilisearch
data.ms/*
auth.json

.husky/pre-commit Executable file

@@ -0,0 +1,5 @@
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"
npx lint-staged

.prettierrc.js Normal file

@@ -0,0 +1,19 @@
module.exports = {
printWidth: 100,
useTabs: false,
tabWidth: 2,
semi: true,
singleQuote: true,
// bracketSpacing: false,
trailingComma: 'none',
arrowParens: 'always',
embeddedLanguageFormatting: 'auto',
insertPragma: false,
proseWrap: 'preserve',
quoteProps: 'as-needed',
requirePragma: false,
rangeStart: 0,
endOfLine: 'auto',
jsxBracketSameLine: false,
jsxSingleQuote: false,
};


@@ -1,83 +0,0 @@
# Changelog
<details open>
<summary><strong>2023-05-11</strong></summary>
**Released [v0.4.2](https://github.com/danny-avila/chatgpt-clone/releases/tag/v0.4.2)**
ChatGPT-Clone received some important upgrades and improvements. A new contributor, [@qcgm1978](https://github.com/qcgm1978), makes their first contribution by adding a null check for adaptiveCards variable. Additionally, support for titling conversations with the Azure endpoint is added by [@danny-avila](https://github.com/danny-avila) in PR [#234](https://github.com/danny-avila/chatgpt-clone/pull/234). In PR [#235](https://github.com/danny-avila/chatgpt-clone/pull/235), [@danny-avila](https://github.com/danny-avila) also makes some necessary fixes to titling, quotation marks, and endpoints being unavailable with only the Azure key provided. The logging system is now powered by Pino and sanitization, thanks to [@danorlando](https://github.com/danorlando) in PR [#227](https://github.com/danny-avila/chatgpt-clone/pull/227). To bulletproof the Docker container, the .dockerignore file is updated to include the client/.env file by [@danny-avila](https://github.com/danny-avila) in PR [#241](https://github.com/danny-avila/chatgpt-clone/pull/241). This issue was brought to our attention on discord.
There is active work on the new Plugins feature, on converting the frontend to TypeScript, and on integrating PaLM 2, Google's new generative AI accessible via API, into the project as a new endpoint.
You can check the full changelog between [v0.4.1](https://github.com/danny-avila/chatgpt-clone/releases/tag/v0.4.1) and [v0.4.2](https://github.com/danny-avila/chatgpt-clone/releases/tag/v0.4.2) [here](https://github.com/danny-avila/chatgpt-clone/compare/v0.4.1...v0.4.2).
For discussion and suggestions, you can join us: **[community discord server](https://discord.gg/NGaa9RPCft)**
</details>
<details>
<summary><strong>2023-05-09</strong></summary>
**Released [v0.4.1](https://github.com/danny-avila/chatgpt-clone/releases/tag/v0.4.1)**
* update user system section of readme by @danorlando in #207
* remove github-passport and update package.lock files by @danorlando in #208
* Update README.md by @fuegovic in #209
* fix: fix browser refresh redirecting to /chat/new by @danorlando in #210
* fix: fix issue with validation when google account has multiple spaces in username by @danorlando in #211
* chore: update docker image version to use latest by @danny-avila in #218
* update documentation structure by @fuegovic in #220
* Feat: Add Azure support by @danny-avila in #219
* Update Message.js by @DavidDev1334 in #191
⚠️ **IMPORTANT:** Since v0.4.0, you should register and log in with a local account (email and password) for the first-time sign-up. If you log in for the first time with a social login account (e.g. Google, Facebook, etc.), the conversations and presets that you created before the user system was implemented will NOT be migrated to that account.
⚠️ **Breaking - new Env Variables:** Since v0.4.0, you will need to add the new env variables from .env.example for the app to work, even if you're not using multiple users for your purposes.
For discussion and suggestions, you can join us: **[community discord server](https://discord.gg/NGaa9RPCft)**
</details>
<details>
<summary><strong>2023-05-07</strong></summary>
**Released [v0.4.0](https://github.com/danny-avila/chatgpt-clone/releases/tag/v0.4.0)**, Introducing the User/Auth System and OAuth2/Social Login! You can now register and log in with an email account or use Google login. Your previous conversations and presets will migrate to your new profile upon creation. Check out the details in the [User/Auth System](#userauth-system) section of the README.md.
⚠️ **IMPORTANT:** You should register and log in with a local account (email and password) for the first-time sign-up. If you log in for the first time with a social login account (e.g. Google, Facebook, etc.), the conversations and presets that you created before the user system was implemented will NOT be migrated to that account.
⚠️ **Breaking - new Env Variables:** You will need to add the new env variables from .env.example for the app to work, even if you're not using multiple users for your purposes.
For discussion and suggestions, you can join us: **[community discord server](https://discord.gg/NGaa9RPCft)**
</details>
<details>
<summary><strong>2023-04-05</strong></summary>
**Released [v0.3.0](https://github.com/danny-avila/chatgpt-clone/releases/tag/v0.3.0)**, Introducing more customization for both OpenAI & BingAI conversations! This is one of the biggest updates yet and will make integrating future LLMs a lot easier, providing a lot of customization features as well, including sharing presets! Please feel free to share them in the **[community discord server](https://discord.gg/NGaa9RPCft)**
</details>
<details>
<summary><strong>2023-03-23</strong></summary>
**Released [v0.1.0](https://github.com/danny-avila/chatgpt-clone/releases/tag/v0.1.0)**, **searching messages/conversations is live!** Up next are more custom parameters for customGpts. Join the Discord server for more immediate assistance and updates: **[community discord server](https://discord.gg/NGaa9RPCft)**
</details>
<details>
<summary><strong>2023-03-22</strong></summary>
**Released [v0.0.6](https://github.com/danny-avila/chatgpt-clone/releases/tag/v0.0.6)**, the latest stable release before **Searching messages** goes live tomorrow. See exact updates to date in the tag link. By request, there is now also a **[community discord server](https://s
</details>
<details>
<summary><strong>2023-03-20</strong></summary>
**Searching messages** is almost here as I test more of its functionality. There have been a lot of great feature requests and contributions, and I will work on some soon, namely: further customizing the custom GPT params with sliders similar to the OpenAI playground, and including the custom params and system messages available to Bing.
The above features are next, and then I will have to focus on building the **test environment.** I would **greatly appreciate** help in this area with any test environment you're familiar with (mocha, chai, jest, playwright, puppeteer). This is to aid the velocity of contributing and to save the time I spend debugging.
On that note, I had to switch the default branch due to some breaking changes that haven't been straightforward to debug, mainly related to node-chat-gpt, the main dependency of the project. Thankfully, my working branch, now switched to default as main, is working as expected.
</details>
##
## [Go Back to ReadMe](README.md)


@@ -59,8 +59,8 @@ representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
https://t.me/proffapt.
reported to the community leaders responsible for enforcement here on GitHub or
on the official [Discord Server](https://discord.gg/uDyZ5Tzhct).
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
@@ -129,4 +129,4 @@ https://www.contributor-covenant.org/translations.
##
## [Go Back to ReadMe](../../README.md)
## [Go Back to ReadMe](README.md)


@@ -5,11 +5,19 @@ such as bug reports, documentation improvements, feature requests, and code cont
## Contributing Guidelines
When contributing to this repository, please first discuss the change you wish to make via [issue](https://github.com/danny-avila/chatgpt-clone/issues) or
join our [Discord community](https://discord.gg/NGaa9RPCft).
If the feature you would like to contribute has not already received prior approval from the project maintainers (i.e. the feature is currently on the roadmap or on the [trello board]()), please submit a proposal in the [proposals category](https://github.com/danny-avila/chatgpt-clone/discussions/categories/proposals) of the discussions board before beginning work on it.
- Proposals should include specific implementation details, including the areas of the application that will be affected by the change, designs if applicable, and any other relevant information that might be required for a speedy review.
- Proposals are not required for small changes, bug fixes, or documentation improvements.
- Small changes and bug fixes should be tied to an [issue](https://github.com/danny-avila/chatgpt-clone/issues) and included in the corresponding pull request for tracking purposes.
*Please note that a pull request involving a feature that has not been reviewed and approved by the project maintainers may be rejected.*
If you would like to discuss the changes you wish to make, join our [Discord community](https://discord.gg/uDyZ5Tzhct).
## Our Standards
Please read our [Coding Standards and Conventions](docs/contributions/coding_conventions.md) before beginning a contribution.
Examples of behavior that contributes to creating a positive environment
include:
@@ -172,6 +180,6 @@ Apply the following naming conventions to branches, labels, and other Git-relate
##
## [Go Back to ReadMe](../../README.md)
## [Go Back to ReadMe](README.md)


@@ -1,26 +0,0 @@
# Contributors List
We appreciate all the contributors who helped make this project possible:
- danny-avila (Admin)
- wtlyu (Contributor)
- danorlando (Contributor)
- alfredo-f (Contributor)
- HyunggyuJang (Contributor)
- fuegovic (Contributor)
- DavidDev1334
- toordog (Contributor)
- heathriel (External Contributor)
- hackreactor-bot (Contributor)
- git-bruh (Contributor)
- zhangsean (Contributor)
- llk89 (Contributor)
- adamrb (Contributor)
If you have contributed to this project and would like to be added to the list of contributors, please submit a pull request updating this file with your name and GitHub username.
##
## [Go Back to ReadMe](README.md)


@@ -1,38 +1,30 @@
FROM node:19-alpine AS react-client
WORKDIR /client
# copy package.json into the container at /client
COPY /client/.env /client/.env
COPY /client/package*.json /client/
# install dependencies
# Base node image
FROM node:19-alpine AS node
COPY . /app
# Install dependencies
WORKDIR /app
RUN npm ci
# Copy the current directory contents into the container at /client
COPY /client/ /client/
# Set the memory limit for Node.js
ENV NODE_OPTIONS="--max-old-space-size=2048"
# Build artifacts
RUN npm run build
FROM node:19-alpine AS node-api
WORKDIR /api
# copy package.json into the container at /api
COPY /api/package*.json /api/
# install dependencies
RUN npm ci
# Copy the current directory contents into the container at /api
COPY /api/ /api/
# Copy the client side code
COPY --from=react-client /client/dist /client/dist
# Make port 3080 available to the world outside this container
# Frontend variables as build args
ARG VITE_APP_TITLE
ARG VITE_SHOW_GOOGLE_LOGIN_OPTION
# You will need to add your VITE variables to the docker-compose file
ENV VITE_APP_TITLE=$VITE_APP_TITLE
ENV VITE_SHOW_GOOGLE_LOGIN_OPTION=$VITE_SHOW_GOOGLE_LOGIN_OPTION
# React client build
ENV NODE_OPTIONS="--max-old-space-size=2048"
RUN npm run frontend
# Node API setup
EXPOSE 3080
# Expose the server to 0.0.0.0
ENV HOST=0.0.0.0
# Run the app when the container launches
CMD ["npm", "start"]
CMD ["npm", "run", "backend"]
# Optional: for client with nginx routing
FROM nginx:stable-alpine AS nginx-client
WORKDIR /usr/share/nginx/html
COPY --from=react-client /client/dist /usr/share/nginx/html
# Add your nginx.conf
COPY /client/nginx.conf /etc/nginx/conf.d/default.conf
ENTRYPOINT ["nginx", "-g", "daemon off;"]
# FROM nginx:stable-alpine AS nginx-client
# WORKDIR /usr/share/nginx/html
# COPY --from=node /app/client/dist /usr/share/nginx/html
# COPY client/nginx.conf /etc/nginx/conf.d/default.conf
# ENTRYPOINT ["nginx", "-g", "daemon off;"]

README.md

@@ -1,135 +1,152 @@
<p align="center">
<a href="https://discord.gg/NGaa9RPCft">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://user-images.githubusercontent.com/110412045/228325485-9d3e618f-a980-44fe-89e9-d6d39164680e.png">
<img src="https://user-images.githubusercontent.com/110412045/228325485-9d3e618f-a980-44fe-89e9-d6d39164680e.png" height="128">
</picture>
<h1 align="center">ChatGPT Clone</h1>
<h1 align="center">LibreChat</h1>
</a>
</p>
<p align="center">
<a aria-label="Join the community on Discord" href="https://discord.gg/NGaa9RPCft">
<img alt="" src="https://img.shields.io/badge/Join%20the%20community-blueviolet.svg?style=for-the-badge&logo=DISCORD&labelColor=000000&logoWidth=20">
<a href="https://discord.gg/NGaa9RPCft">
<img src="https://img.shields.io/discord/1086345563026489514?label=&logo=discord&style=for-the-badge&logoWidth=20&labelColor=000000&color=blueviolet">
</a>
<a aria-label="Sponsors" href="#sponsors">
<img alt="" src="https://img.shields.io/badge/SPONSORS-brightgreen.svg?style=for-the-badge&labelColor=000000&logoWidth=20">
</a>
</p>
## All AI Conversations under One Roof. ##
Assistant AIs are the future and OpenAI revolutionized this movement with ChatGPT. While numerous UIs exist, this app commemorates the original styling of ChatGPT, with the ability to integrate any current/future AI models, while integrating and improving upon original client features, such as conversation/message search and prompt templates (currently WIP). Through this clone, you can avoid ChatGPT Plus in favor of free or pay-per-call APIs. I will soon deploy a demo of this app. Feel free to contribute, clone, or fork. Currently dockerized.
## All-In-One AI Conversations with LibreChat ##
LibreChat brings together the future of assistant AIs with the revolutionary technology of OpenAI's ChatGPT. Celebrating the original styling, LibreChat gives you the ability to integrate multiple AI models. It also integrates and enhances original client features such as conversation and message search, prompt templates and plugins.
With LibreChat, you no longer need to opt for ChatGPT Plus and can instead use free or pay-per-call APIs. We welcome contributions, cloning, and forking to enhance the capabilities of this advanced chatbot platform.
![clone3](https://user-images.githubusercontent.com/110412045/230538752-9b99dc6e-cd02-483a-bff0-6c6e780fa7ae.gif)
<!-- ![clone3](https://user-images.githubusercontent.com/110412045/230538752-9b99dc6e-cd02-483a-bff0-6c6e780fa7ae.gif) -->
https://github.com/danny-avila/LibreChat/assets/110412045/c1eb0c0f-41f6-4335-b982-84b278b53d59
# Features
- Response streaming identical to ChatGPT through server-sent events
- UI from original ChatGPT, including Dark mode
- AI model selection (through 3 endpoints: OpenAI API, BingAI, and ChatGPT Browser)
- Create, Save, & Share custom presets for OpenAI and BingAI endpoints - [More info on customization here](https://github.com/danny-avila/chatgpt-clone/releases/tag/v0.3.0)
- Edit and Resubmit messages just like the official site (with conversation branching)
- AI model selection (through 5 endpoints: OpenAI API, BingAI, ChatGPT Browser, PaLM2, Plugins)
- Create, Save, & Share custom presets - [More info on prompt presets here](https://github.com/danny-avila/chatgpt-clone/releases/tag/v0.3.0)
- Edit and Resubmit messages with conversation branching
- Search all messages/conversations - [More info here](https://github.com/danny-avila/chatgpt-clone/releases/tag/v0.1.0)
- Integrating plugins soon
- Plugins now available (including web access, image generation and more)
##
# Sponsors
---
# ⚠️ **Breaking Changes** ⚠️
Note: These changes only apply to users who are updating from a previous version of the app.
Sponsored by <a href="https://github.com/DavidDev1334"><b>@DavidDev1334</b></a>, <a href="https://github.com/mjtechguy"><b>@mjtechguy</b></a>, <a href="https://github.com/Pharrcyde"><b>@Pharrcyde</b></a>, & <a href="https://github.com/fuegovic"><b>@fuegovic</b></a>
- We have simplified the configuration process by using a single `.env` file in the root folder instead of separate `/api/.env` and `/client/.env` files.
- If you had installed a previous version, you can run `npm run upgrade` to automatically copy the content of both files to the new `.env` file and back up the old ones in the root dir.
- If you are installing the project for the first time, it's recommended you run the installation script `npm run install` to guide your local setup (otherwise, continue to use Docker).
- The docker-compose file has had some changes. Review the [new docker instructions](docs/install/docker_install.md) to make sure you are set up properly. This is still the simplest and most effective method.
- The upgrade script requires both `/api/.env` and `/client/.env` files to run properly. If you get an error about a missing client env file, just rename the `/client/.env.example` file to `/client/.env` and run the script again.
- We have renamed the `OPENAI_KEY` variable to `OPENAI_API_KEY` to match the official documentation. The upgrade script should do this automatically for you, but please double-check that your key is correct in the new `.env` file (a manual rename is sketched just after this list).
- After running the upgrade script, the `OPENAI_API_KEY` variable might be placed in a different section in the new `.env` file than before. This does not affect the functionality of the app, but if you want to keep it organized, you can look for it near the bottom of the file and move it to its usual section.
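If you prefer to do the `OPENAI_KEY` rename by hand rather than through the upgrade script, a minimal Node.js sketch along these lines should work (this assumes a consolidated root `.env` already exists; the file name `rename-key.js` is only illustrative and not part of the project):

```js
// rename-key.js: a minimal manual sketch, not the project's upgrade script.
// Assumes a consolidated .env file already exists in the root folder.
const fs = require('fs');

const envPath = '.env';
const contents = fs.readFileSync(envPath, 'utf8');

// Rename the variable only where it starts a line, keeping its value intact.
const updated = contents.replace(/^OPENAI_KEY=/m, 'OPENAI_API_KEY=');

fs.writeFileSync(envPath, updated);
console.log('Renamed OPENAI_KEY to OPENAI_API_KEY (if it was present).');
```

Run it once with `node rename-key.js` from the root folder, then verify the key's value in `.env`.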
##
<details open>
<summary><strong>2023-05-11</strong></summary>
**Released [v0.4.2](https://github.com/danny-avila/chatgpt-clone/releases/tag/v0.4.2)**
ChatGPT-Clone received some important upgrades and improvements. A new contributor, [@qcgm1978](https://github.com/qcgm1978), makes their first contribution by adding a null check for adaptiveCards variable. Additionally, support for titling conversations with the Azure endpoint is added by [@danny-avila](https://github.com/danny-avila) in PR [#234](https://github.com/danny-avila/chatgpt-clone/pull/234). In PR [#235](https://github.com/danny-avila/chatgpt-clone/pull/235), [@danny-avila](https://github.com/danny-avila) also makes some necessary fixes to titling, quotation marks, and endpoints being unavailable with only the Azure key provided. The logging system is now powered by Pino and sanitization, thanks to [@danorlando](https://github.com/danorlando) in PR [#227](https://github.com/danny-avila/chatgpt-clone/pull/227). To bulletproof the Docker container, the .dockerignore file is updated to include the client/.env file by [@danny-avila](https://github.com/danny-avila) in PR [#241](https://github.com/danny-avila/chatgpt-clone/pull/241). This issue was brought to our attention on discord.
- For enhanced security, we are now asking for crypto keys for securely storing credentials in the `.env` file. Crypto keys are used to encrypt and decrypt sensitive data such as passwords and access keys. If you don't set them, the app will crash on startup.
- You need to fill the following variables in the `.env` file with 32-byte (64 characters in hex) or 16-byte (32 characters in hex) values:
- `CREDS_KEY` (32-byte)
- `CREDS_IV` (16-byte)
- `JWT_SECRET` (32-byte, optional but recommended)
- You can use this replit to generate some crypto keys quickly: https://replit.com/@daavila/crypto#index.js (a local Node.js alternative is sketched just after this list)
- Make sure you keep your crypto keys safe and don't share them with anyone.
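If you'd rather generate the keys locally, a one-off Node.js sketch like the following produces values of the required lengths (the file name `generate-keys.js` is only illustrative); copy the printed lines into your `.env`:

```js
// generate-keys.js: a local alternative to the replit linked above (illustrative only).
const crypto = require('crypto');

// 32 random bytes -> 64 hex characters; 16 random bytes -> 32 hex characters.
console.log(`CREDS_KEY=${crypto.randomBytes(32).toString('hex')}`);
console.log(`CREDS_IV=${crypto.randomBytes(16).toString('hex')}`);
console.log(`JWT_SECRET=${crypto.randomBytes(32).toString('hex')}`);
```

Run it with `node generate-keys.js` and keep the output out of version control.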
There is active work on the new Plugins feature, on converting the frontend to TypeScript, and on integrating PaLM 2, Google's new generative AI accessible via API, into the project as a new endpoint.
We apologize for any inconvenience caused by these changes. We hope you enjoy the new and improved version of our app!
You can check the full changelog between [v0.4.1](https://github.com/danny-avila/chatgpt-clone/releases/tag/v0.4.1) and [v0.4.2](https://github.com/danny-avila/chatgpt-clone/releases/tag/v0.4.2) [here](https://github.com/danny-avila/chatgpt-clone/compare/v0.4.1...v0.4.2).
---
⚠️ **IMPORTANT:** Since v0.4.0, you should register and log in with a local account (email and password) for the first-time sign-up. If you log in for the first time with a social login account (e.g. Google, Facebook, etc.), the conversations and presets that you created before the user system was implemented will NOT be migrated to that account.
## Changelog
- Keep up with the latest updates by visiting the releases page - [Releases](https://github.com/danny-avila/LibreChat/releases)
⚠️ **Breaking - new Env Variables:** Since v0.4.0, you will need to add the new env variables from .env.example for the app to work, even if you're not using multiple users for your purposes.
For discussion and suggestions, you can join us: **[community discord server](https://discord.gg/NGaa9RPCft)**
</details>
[Past Updates](CHANGELOG.md)
##
---
<h1>Table of Contents</h1>
<details open>
<summary><strong>Getting Started</strong></summary>
* [Docker Install](/documents/install/docker_install.md)
* [Linux Install](documents/install/linux_install.md)
* [Mac Install](documents/install/mac_install.md)
* [Windows Install](documents/install/windows_install.md)
* [Docker Install](/docs/install/docker_install.md)
* [Linux Install](docs/install/linux_install.md)
* [Mac Install](docs/install/mac_install.md)
* [Windows Install](docs/install/windows_install.md)
* [APIs and Tokens](docs/install/apis_and_tokens.md)
</details>
<details>
<summary><strong>General Information</strong></summary>
* [Project Origin](documents/general_info/project_origin.md)
* [Roadmap](documents/general_info/roadmap.md)
* [Tech Stack](documents/general_info/tech_stack.md)
* [Code of Conduct](CODE_OF_CONDUCT.md)
* [Project Origin](docs/general_info/project_origin.md)
* [Multilingual Information](docs/general_info/multilingual_information.md)
* [Tech Stack](docs/general_info/tech_stack.md)
* [Changelog](CHANGELOG.md)
* [Bing Jailbreak Info](documents/general_info/bing_jailbreak_info.md)
* [Bing Jailbreak Info](docs/general_info/bing_jailbreak_info.md)
</details>
<details>
<summary><strong>Features</strong></summary>
* [User Auth System](documents/features/user_auth_system.md)
* [Proxy](documents/features/proxy.md)
* **Plugins**
* [Introduction](docs/features/plugins/introduction.md)
* [Google](docs/features/plugins/google_search.md)
* [Stable Diffusion](docs/features/plugins/stable_diffusion.md)
* [Wolfram](docs/features/plugins/wolfram.md)
* [Make Your Own Plugin](docs/features/plugins/make_your_own.md)
* [User Auth System](docs/features/user_auth_system.md)
* [Proxy](docs/features/proxy.md)
</details>
<details>
<summary><strong>Cloud Deployment</strong></summary>
* [Heroku](documents/deployment/heroku.md)
* [Heroku](docs/deployment/heroku.md)
</details>
<details>
<summary><strong>Contributions</strong></summary>
* [Code of Conduct](documents/contributions/code_of_conduct.md)
* [Contributor Guidelines](documents/contributions/contributor_guidelines.md)
* [Documentation Guidelines](documents/contributions/documentation_guidelines.md)
* [Testing](documents/contributions/testing.md)
* [Pull Request Template](documents/contributions/pull_request_template.md)
* [Contributors](CONTRIBUTORS.md)
* [Contributor Guidelines](CONTRIBUTING.md)
* [Documentation Guidelines](docs/contributions/documentation_guidelines.md)
* [Code Standards and Conventions](docs/contributions/coding_conventions.md)
* [Testing](docs/contributions/testing.md)
* [Security](SECURITY.md)
* [Trello Board](https://trello.com/b/17z094kq/chatgpt-clone)
</details>
<details>
<summary><strong>Report Templates</strong></summary>
* [Bug Report Template](documents/report_templates/bug_report_template.md)
* [Custom Issue Template](documents/report_templates/custom_issue_template.md)
* [Feature Request Template](documents/report_templates/feature_request_template.md)
</details>
---
##
### [Alternative Documentation](https://chatgpt-clone.gitbook.io/chatgpt-clone-docs/get-started/docker)
## Star History
##
[![Star History Chart](https://api.star-history.com/svg?repos=danny-avila/chatgpt-clone&type=Date)](https://star-history.com/#danny-avila/chatgpt-clone&Date)
## Contributing
---
## Sponsors
Sponsored by <a href="https://github.com/DavidDev1334"><b>@DavidDev1334</b></a>, <a href="https://github.com/mjtechguy"><b>@mjtechguy</b></a>, <a href="https://github.com/Pharrcyde"><b>@Pharrcyde</b></a>, & <a href="https://github.com/fuegovic"><b>@fuegovic</b></a>
---
## Contributors
Contributions, suggestions, bug reports and fixes are welcome!
Please read the documentation before you do!
---
For new features, components, or extensions, please open an issue and discuss before sending a PR.
- Join the [Discord community](https://discord.gg/NGaa9RPCft)
## License
This project is licensed under the [MIT License](LICENSE.md).
##
- Join the [Discord community](https://discord.gg/uDyZ5Tzhct)
This project exists in its current state thanks to all the people who contribute.
---
<a href="https://github.com/danny-avila/chatgpt-clone/graphs/contributors">
<img src="https://contrib.rocks/image?repo=danny-avila/chatgpt-clone" />
</a>

SECURITY.md Normal file

@@ -0,0 +1,55 @@
# Security Policy
## Reporting a Vulnerability
We take security seriously and appreciate the efforts of security researchers to improve the security of our codebase.
If you discover a security vulnerability within our project, please follow these guidelines to report it to us:
**Note: Only report sensitive vulnerability details via the GitHub Security Advisory system. All other communication channels are public and should be used only to initiate first contact and to establish a private communication channel.**
### Communication channels
- **Option 1: GitHub Security Advisory System**: We encourage you to use GitHub's Security Advisory system to report any security vulnerabilities you find. This allows us to receive vulnerability reports directly through GitHub. You can find more information on how to submit a security advisory report in the [GitHub Security Advisories documentation](https://docs.github.com/en/code-security/getting-started-with-security-vulnerability-alerts/about-github-security-advisories).
- **Option 2: GitHub Issues**: You can initiate first contact via GitHub Issues. **Please note that initial contact through GitHub Issues should not include any sensitive details.**
- **Option 3: Discord Server**: You can join our [Discord community](https://discord.gg/5rbRxn4uME) and initiate first contact in the `#issues` channel. **Please note that initial contact through Discord should not include any sensitive details.**
_After initial contact, we will establish a private communication channel for further discussion._
### When submitting a vulnerability report, please provide us with the following information:
- A clear description of the vulnerability, including steps to reproduce it
- The version(s) of the project affected by the vulnerability
- Any additional information that may be useful for understanding and addressing the issue
We will make every effort to acknowledge your report within 72 hours and keep you informed of its progress towards resolution.
## Security Updates and Patching
We are committed to maintaining the security of our open-source project named LibreChat and promptly addressing any identified vulnerabilities. To ensure the security of our project, we follow these practices:
- We prioritize security updates for the current major release of our software.
- We actively monitor the GitHub Security Advisory system and the `#issues` channel on Discord for any vulnerability reports.
- We promptly review and validate reported vulnerabilities and take appropriate actions to address them.
- We release security patches and updates in a timely manner to mitigate any identified vulnerabilities.
Please note that as a security-conscious community, we may not always disclose detailed information about security issues until we have determined that doing so would not put our users or the project at risk. We appreciate your understanding and cooperation in these matters.
## Scope
This security policy applies to the following GitHub repository:
- Repository: [LibreChat](https://github.com/danny-avila/chatgpt-clone)
## Contact
If you have any questions or concerns regarding the security of our project, please join our [Discord community](https://discord.gg/NGaa9RPCft) and report them in the appropriate channel.
You can also reach out to us by [opening an issue](https://github.com/danny-avila/chatgpt-clone/issues/new) on GitHub.
Please note that the response time may vary depending on the nature and severity of the inquiry.
## Acknowledgments
We would like to express our gratitude to the security researchers and community members who help us improve the security of our project. Your contributions are invaluable, and we sincerely appreciate your efforts.
## Bug Bounty Program
We do not currently have a bug bounty program in place. However, we welcome and appreciate any security-related contributions through pull requests (PRs) that address vulnerabilities in our codebase.
We believe in the power of collaboration to improve the security of our project and invite you to join us in making it more robust.
**Reference**
- https://cheatsheetseries.owasp.org/cheatsheets/Vulnerability_Disclosure_Cheat_Sheet.html
##
## [Go Back to ReadMe](README.md)


@@ -1,39 +0,0 @@
module.exports = {
env: {
es2021: true,
node: true
},
extends: ['eslint:recommended'],
overrides: [],
parserOptions: {
ecmaVersion: 'latest',
sourceType: 'module'
},
rules: {
indent: ['error', 2, { SwitchCase: 1 }],
'max-len': [
'error',
{
code: 150,
ignoreStrings: true,
ignoreTemplateLiterals: true,
ignoreComments: true
}
],
'linebreak-style': 0,
'arrow-parens': [2, 'as-needed', { requireForBlockBody: true }],
// 'no-plusplus': ['error', { allowForLoopAfterthoughts: true }],
'no-console': 'off',
'import/extensions': 'off',
'no-use-before-define': [
'error',
{
functions: false
}
],
'no-promise-executor-return': 'off',
'no-param-reassign': 'off',
'no-continue': 'off',
'no-restricted-syntax': 'off'
}
};


@@ -1,22 +0,0 @@
{
"arrowParens": "always",
"bracketSpacing": true,
"endOfLine": "lf",
"htmlWhitespaceSensitivity": "css",
"insertPragma": false,
"singleAttributePerLine": true,
"bracketSameLine": false,
"jsxBracketSameLine": false,
"jsxSingleQuote": false,
"printWidth": 110,
"proseWrap": "preserve",
"quoteProps": "as-needed",
"requirePragma": false,
"semi": true,
"singleQuote": true,
"tabWidth": 2,
"trailingComma": "none",
"useTabs": false,
"vueIndentScriptAndStyle": false,
"parser": "babel"
}


@@ -16,16 +16,17 @@ const askBing = async ({
token,
onProgress
}) => {
const { BingAIClient } = await import('og-chatgpt-api');
const { BingAIClient } = await import('@waylaidwanderer/chatgpt-api');
const store = {
store: new KeyvFile({ filename: './data/cache.json' })
};
const bingAIClient = new BingAIClient({
// "_U" cookie from bing.com
userToken: process.env.BINGAI_TOKEN == 'user_provided' ? token : process.env.BINGAI_TOKEN ?? null,
// userToken:
// process.env.BINGAI_TOKEN == 'user_provided' ? token : process.env.BINGAI_TOKEN ?? null,
// If the above doesn't work, provide all your cookies as a string instead
// cookies: '',
cookies: process.env.BINGAI_TOKEN == 'user_provided' ? token : process.env.BINGAI_TOKEN ?? null,
debug: false,
cache: store,
host: process.env.BINGAI_HOST || null,


@@ -8,19 +8,22 @@ const browserClient = async ({
model,
token,
onProgress,
onEventMessage,
abortController,
userId
}) => {
const { ChatGPTBrowserClient } = await import('og-chatgpt-api');
const { ChatGPTBrowserClient } = await import('@waylaidwanderer/chatgpt-api');
const store = {
store: new KeyvFile({ filename: './data/cache.json' })
};
const clientOptions = {
// Warning: This will expose your access token to a third party. Consider the risks before using this.
reverseProxyUrl: process.env.CHATGPT_REVERSE_PROXY || 'https://ai.fakeopen.com/api/conversation',
reverseProxyUrl:
process.env.CHATGPT_REVERSE_PROXY || 'https://ai.fakeopen.com/api/conversation',
// Access token from https://chat.openai.com/api/auth/session
accessToken: process.env.CHATGPT_TOKEN == 'user_provided' ? token : process.env.CHATGPT_TOKEN ?? null,
accessToken:
process.env.CHATGPT_TOKEN == 'user_provided' ? token : process.env.CHATGPT_TOKEN ?? null,
model: model,
debug: false,
proxy: process.env.PROXY || null,
@@ -28,7 +31,7 @@ const browserClient = async ({
};
const client = new ChatGPTBrowserClient(clientOptions, store);
let options = { onProgress, abortController };
let options = { onProgress, onEventMessage, abortController };
if (!!parentMessageId && !!conversationId) {
options = { ...options, parentMessageId, conversationId };


@@ -1,12 +1,16 @@
require('dotenv').config();
const { KeyvFile } = require('keyv-file');
const { genAzureEndpoint } = require('../../utils/genAzureEndpoints');
const { genAzureChatCompletion } = require('../../utils/genAzureEndpoints');
const tiktoken = require('@dqbd/tiktoken');
const tiktokenModels = require('../../utils/tiktokenModels');
const encoding_for_model = tiktoken.encoding_for_model;
const askClient = async ({
text,
parentMessageId,
conversationId,
model,
oaiApiKey,
chatGptLabel,
promptPrefix,
temperature,
@@ -23,12 +27,17 @@ const askClient = async ({
};
const azure = process.env.AZURE_OPENAI_API_KEY ? true : false;
let promptText = 'You are ChatGPT, a large language model trained by OpenAI.';
if (promptPrefix) {
promptText = promptPrefix;
}
const maxContextTokens = model === 'gpt-4-32k' ? 32767 : model.startsWith('gpt-4') ? 8191 : 4095; // 1 less than maximum
const clientOptions = {
reverseProxyUrl: process.env.OPENAI_REVERSE_PROXY || null,
azure,
maxContextTokens,
modelOptions: {
model: model,
model,
temperature,
top_p,
presence_penalty,
@@ -36,30 +45,49 @@ const askClient = async ({
},
chatGptLabel,
promptPrefix,
proxy: process.env.PROXY || null,
debug: false
proxy: process.env.PROXY || null
// debug: true
};
let apiKey = process.env.OPENAI_KEY;
let apiKey = oaiApiKey ? oaiApiKey : process.env.OPENAI_API_KEY || null;
if (azure) {
apiKey = process.env.AZURE_OPENAI_API_KEY;
clientOptions.reverseProxyUrl = genAzureEndpoint({
azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
apiKey = oaiApiKey ? oaiApiKey : process.env.AZURE_OPENAI_API_KEY || null;
clientOptions.reverseProxyUrl = genAzureChatCompletion({
azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION
});
}
const client = new ChatGPTClient(apiKey, clientOptions, store);
const options = {
onProgress,
abortController,
...(parentMessageId && conversationId ? { parentMessageId, conversationId } : {})
};
let usage = {};
let enc = null;
try {
enc = encoding_for_model(tiktokenModels.has(model) ? model : 'gpt-3.5-turbo');
usage.prompt_tokens = (enc.encode(promptText)).length + (enc.encode(text)).length;
} catch (e) {
console.log('Error encoding prompt text', e);
}
const res = await client.sendMessage(text, { ...options, userId });
try {
usage.completion_tokens = (enc.encode(res.response)).length;
enc.free();
usage.total_tokens = usage.prompt_tokens + usage.completion_tokens;
res.usage = usage;
} catch (e) {
console.log('Error encoding response text', e);
}
return res;
};


@@ -0,0 +1,89 @@
require('dotenv').config();
const run = async () => {
const { ChatGPTClient } = await import('@waylaidwanderer/chatgpt-api');
const text = `
The standard Lorem Ipsum passage, used since the 1500s
"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum."
Section 1.10.32 of "de Finibus Bonorum et Malorum", written by Cicero in 45 BC
"Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut aliquid ex ea commodi consequatur? Quis autem vel eum iure reprehenderit qui in ea voluptate velit esse quam nihil molestiae consequatur, vel illum qui dolorem eum fugiat quo voluptas nulla pariatur?"
1914 translation by H. Rackham
"But I must explain to you how all this mistaken idea of denouncing pleasure and praising pain was born and I will give you a complete account of the system, and expound the actual teachings of the great explorer of the truth, the master-builder of human happiness. No one rejects, dislikes, or avoids pleasure itself, because it is pleasure, but because those who do not know how to pursue pleasure rationally encounter consequences that are extremely painful. Nor again is there anyone who loves or pursues or desires to obtain pain of itself, because it is pain, but because occasionally circumstances occur in which toil and pain can procure him some great pleasure. To take a trivial example, which of us ever undertakes laborious physical exercise, except to obtain some advantage from it? But who has any right to find fault with a man who chooses to enjoy a pleasure that has no annoying consequences, or one who avoids a pain that produces no resultant pleasure?"
Section 1.10.33 of "de Finibus Bonorum et Malorum", written by Cicero in 45 BC
"At vero eos et accusamus et iusto odio dignissimos ducimus qui blanditiis praesentium voluptatum deleniti atque corrupti quos dolores et quas molestias excepturi sint occaecati cupiditate non provident, similique sunt in culpa qui officia deserunt mollitia animi, id est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio. Nam libero tempore, cum soluta nobis est eligendi optio cumque nihil impedit quo minus id quod maxime placeat facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet ut et voluptates repudiandae sint et molestiae non recusandae. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat."
1914 translation by H. Rackham
"On the other hand, we denounce with righteous indignation and dislike men who are so beguiled and demoralized by the charms of pleasure of the moment, so blinded by desire, that they cannot foresee the pain and trouble that are bound to ensue; and equal blame belongs to those who fail in their duty through weakness of will, which is the same as saying through shrinking from toil and pain. These cases are perfectly simple and easy to distinguish. In a free hour, when our power of choice is untrammelled and when nothing prevents our being able to do what we like best, every pleasure is to be welcomed and every pain avoided. But in certain circumstances and owing to the claims of duty or the obligations of business it will frequently occur that pleasures have to be repudiated and annoyances accepted. The wise man therefore always holds in these matters to this principle of selection: he rejects pleasures to secure other greater pleasures, or else he endures pains to avoid worse pains."
`;
const model = 'gpt-3.5-turbo';
const maxContextTokens = model === 'gpt-4' ? 8191 : model === 'gpt-4-32k' ? 32767 : 4095; // 1 less than maximum
const clientOptions = {
reverseProxyUrl: process.env.OPENAI_REVERSE_PROXY || null,
maxContextTokens,
modelOptions: {
model,
},
proxy: process.env.PROXY || null,
debug: true
};
let apiKey = process.env.OPENAI_API_KEY;
const maxMemory = 0.05 * 1024 * 1024 * 1024;
// Calculate initial percentage of memory used
const initialMemoryUsage = process.memoryUsage().heapUsed;
function printProgressBar(percentageUsed) {
const filledBlocks = Math.round(percentageUsed / 2); // Each block represents 2%
const emptyBlocks = 50 - filledBlocks; // Total blocks is 50 (each represents 2%), so the rest are empty
const progressBar = '[' + '█'.repeat(filledBlocks) + ' '.repeat(emptyBlocks) + '] ' + percentageUsed.toFixed(2) + '%';
console.log(progressBar);
}
const iterations = 16000;
console.time('loopTime');
// Trying to catch the error doesn't help; all future calls will immediately crash
for (let i = 0; i < iterations; i++) {
try {
console.log(`Iteration ${i}`);
const client = new ChatGPTClient(apiKey, clientOptions);
client.getTokenCount(text);
// const encoder = client.constructor.getTokenizer('cl100k_base');
// console.log(`Iteration ${i}: call encode()...`);
// encoder.encode(text, 'all');
// encoder.free();
const memoryUsageDuringLoop = process.memoryUsage().heapUsed;
const percentageUsed = memoryUsageDuringLoop / maxMemory * 100;
printProgressBar(percentageUsed);
if (i === (iterations - 1)) {
console.log(' done');
// encoder.free();
}
} catch (e) {
console.log(`caught error! in Iteration ${i}`);
console.log(e);
}
}
console.timeEnd('loopTime');
// Calculate final percentage of memory used
const finalMemoryUsage = process.memoryUsage().heapUsed;
// const finalPercentageUsed = finalMemoryUsage / maxMemory * 100;
console.log(`Initial memory usage: ${initialMemoryUsage / 1024 / 1024} megabytes`);
console.log(`Final memory usage: ${finalMemoryUsage / 1024 / 1024} megabytes`);
setTimeout(() => {
const memoryUsageAfterTimeout = process.memoryUsage().heapUsed;
console.log(`Post timeout: ${memoryUsageAfterTimeout / 1024 / 1024} megabytes`);
} , 10000);
}
run();


@@ -0,0 +1,397 @@
const crypto = require('crypto');
const TextStream = require('../stream');
const { google } = require('googleapis');
const { Agent, ProxyAgent } = require('undici');
const { getMessages, saveMessage, saveConvo } = require('../../models');
const {
encoding_for_model: encodingForModel,
get_encoding: getEncoding
} = require('@dqbd/tiktoken');
const tokenizersCache = {};
class GoogleAgent {
constructor(credentials, options = {}) {
this.client_email = credentials.client_email;
this.project_id = credentials.project_id;
this.private_key = credentials.private_key;
this.setOptions(options);
this.currentDateString = new Date().toLocaleDateString('en-us', {
year: 'numeric',
month: 'long',
day: 'numeric'
});
}
constructUrl() {
return `https://us-central1-aiplatform.googleapis.com/v1/projects/${this.project_id}/locations/us-central1/publishers/google/models/${this.modelOptions.model}:predict`;
}
setOptions(options) {
if (this.options && !this.options.replaceOptions) {
// nested options aren't spread properly, so we need to do this manually
this.options.modelOptions = {
...this.options.modelOptions,
...options.modelOptions
};
delete options.modelOptions;
// now we can merge options
this.options = {
...this.options,
...options
};
} else {
this.options = options;
}
this.options.examples = this.options.examples.filter(
(obj) => obj.input.content !== '' && obj.output.content !== ''
);
const modelOptions = this.options.modelOptions || {};
this.modelOptions = {
...modelOptions,
// set some good defaults (check for undefined in some cases because they may be 0)
model: modelOptions.model || 'chat-bison',
temperature: typeof modelOptions.temperature === 'undefined' ? 0.2 : modelOptions.temperature, // 0 - 1, 0.2 is recommended
topP: typeof modelOptions.topP === 'undefined' ? 0.95 : modelOptions.topP, // 0 - 1, default: 0.95
topK: typeof modelOptions.topK === 'undefined' ? 40 : modelOptions.topK // 1-40, default: 40
// stop: modelOptions.stop // no stop method for now
};
this.isChatModel = this.modelOptions.model.startsWith('chat-');
const { isChatModel } = this;
this.isTextModel = this.modelOptions.model.startsWith('text-');
const { isTextModel } = this;
this.maxContextTokens = this.options.maxContextTokens || (isTextModel ? 8000 : 4096);
// The max prompt tokens is determined by the max context tokens minus the max response tokens.
// Earlier messages will be dropped until the prompt is within the limit.
this.maxResponseTokens = this.modelOptions.maxOutputTokens || 1024;
this.maxPromptTokens =
this.options.maxPromptTokens || this.maxContextTokens - this.maxResponseTokens;
if (this.maxPromptTokens + this.maxResponseTokens > this.maxContextTokens) {
throw new Error(
`maxPromptTokens + maxOutputTokens (${this.maxPromptTokens} + ${this.maxResponseTokens} = ${
this.maxPromptTokens + this.maxResponseTokens
}) must be less than or equal to maxContextTokens (${this.maxContextTokens})`
);
}
this.userLabel = this.options.userLabel || 'User';
this.modelLabel = this.options.modelLabel || 'Assistant';
if (isChatModel) {
// Use these faux tokens to help the AI understand the context since we are building the chat log ourselves.
// Trying to use "<|im_start|>" causes the AI to still generate "<" or "<|" at the end sometimes for some reason,
// without tripping the stop sequences, so I'm using "||>" instead.
this.startToken = '||>';
this.endToken = '';
this.gptEncoder = this.constructor.getTokenizer('cl100k_base');
} else if (isTextModel) {
this.startToken = '<|im_start|>';
this.endToken = '<|im_end|>';
this.gptEncoder = this.constructor.getTokenizer('text-davinci-003', true, {
'<|im_start|>': 100264,
'<|im_end|>': 100265
});
} else {
// Previously I was trying to use "<|endoftext|>" but there seems to be some bug with OpenAI's token counting
// system that causes only the first "<|endoftext|>" to be counted as 1 token, and the rest are not treated
// as a single token. So we're using this instead.
this.startToken = '||>';
this.endToken = '';
try {
this.gptEncoder = this.constructor.getTokenizer(this.modelOptions.model, true);
} catch {
this.gptEncoder = this.constructor.getTokenizer('text-davinci-003', true);
}
}
if (!this.modelOptions.stop) {
const stopTokens = [this.startToken];
if (this.endToken && this.endToken !== this.startToken) {
stopTokens.push(this.endToken);
}
stopTokens.push(`\n${this.userLabel}:`);
stopTokens.push('<|diff_marker|>');
// I chose not to do one for `modelLabel` because I've never seen it happen
this.modelOptions.stop = stopTokens;
}
if (this.options.reverseProxyUrl) {
this.completionsUrl = this.options.reverseProxyUrl;
} else {
this.completionsUrl = this.constructUrl();
}
return this;
}
static getTokenizer(encoding, isModelName = false, extendSpecialTokens = {}) {
if (tokenizersCache[encoding]) {
return tokenizersCache[encoding];
}
let tokenizer;
if (isModelName) {
tokenizer = encodingForModel(encoding, extendSpecialTokens);
} else {
tokenizer = getEncoding(encoding, extendSpecialTokens);
}
tokenizersCache[encoding] = tokenizer;
return tokenizer;
}
async getClient() {
const scopes = ['https://www.googleapis.com/auth/cloud-platform'];
const jwtClient = new google.auth.JWT(this.client_email, null, this.private_key, scopes);
jwtClient.authorize((err) => {
if (err) {
console.log(err);
throw err;
}
});
return jwtClient;
}
buildPayload(input, { messages = [] }) {
let payload = {
instances: [
{
messages: [...messages, { author: this.userLabel, content: input }]
}
],
parameters: this.options.modelOptions
};
if (this.options.promptPrefix) {
payload.instances[0].context = this.options.promptPrefix;
}
if (this.options.examples.length > 0) {
payload.instances[0].examples = this.options.examples;
}
if (this.isTextModel) {
payload.instances = [
{
prompt: input
}
];
}
if (this.options.debug) {
console.debug('buildPayload');
console.dir(payload, { depth: null });
}
return payload;
}
async getCompletion(input, messages = [], abortController = null) {
if (!abortController) {
abortController = new AbortController();
}
const { debug } = this.options;
const url = this.completionsUrl;
if (debug) {
console.debug();
console.debug(url);
console.debug(this.modelOptions);
console.debug();
}
const opts = {
method: 'POST',
agent: new Agent({
bodyTimeout: 0,
headersTimeout: 0
}),
signal: abortController.signal
};
if (this.options.proxy) {
opts.agent = new ProxyAgent(this.options.proxy);
}
const client = await this.getClient();
const payload = this.buildPayload(input, { messages });
const res = await client.request({ url, method: 'POST', data: payload });
console.dir(res.data, { depth: null });
return res.data;
}
async loadHistory(conversationId, parentMessageId = null) {
if (this.options.debug) {
console.debug('Loading history for conversation', conversationId, parentMessageId);
}
if (!parentMessageId) {
return [];
}
const messages = (await getMessages({ conversationId })) || [];
if (messages.length === 0) {
this.currentMessages = [];
return [];
}
const orderedMessages = this.constructor.getMessagesForConversation(messages, parentMessageId);
return orderedMessages.map((message) => {
return {
author: message.isCreatedByUser ? this.userLabel : this.modelLabel,
content: message.content
};
});
}
async saveMessageToDatabase(message, user = null) {
await saveMessage({ ...message, unfinished: false });
await saveConvo(user, {
conversationId: message.conversationId,
endpoint: 'google',
...this.modelOptions
});
}
async sendMessage(message, opts = {}) {
if (opts && typeof opts === 'object') {
this.setOptions(opts);
}
console.log('sendMessage', message, opts);
const user = opts.user || null;
const conversationId = opts.conversationId || crypto.randomUUID();
const parentMessageId = opts.parentMessageId || '00000000-0000-0000-0000-000000000000';
const userMessageId = opts.overrideParentMessageId || crypto.randomUUID();
const responseMessageId = crypto.randomUUID();
const messages = await this.loadHistory(conversationId, this.options?.parentMessageId);
const userMessage = {
messageId: userMessageId,
parentMessageId,
conversationId,
sender: 'User',
text: message,
isCreatedByUser: true
};
if (typeof opts?.getIds === 'function') {
opts.getIds({
userMessage,
conversationId,
responseMessageId
});
}
console.log('userMessage', userMessage);
await this.saveMessageToDatabase(userMessage, user);
let reply = '';
let blocked = false;
try {
const result = await this.getCompletion(message, messages, opts.abortController);
blocked = result?.predictions?.[0]?.safetyAttributes?.blocked;
reply =
result?.predictions?.[0]?.candidates?.[0]?.content ||
result?.predictions?.[0]?.content ||
'';
if (blocked === true) {
reply = `Google blocked a proper response to your message:\n${JSON.stringify(
result.predictions[0].safetyAttributes
)}${reply.length > 0 ? `\nAI Response:\n${reply}` : ''}`;
}
if (this.options.debug) {
console.debug('result');
console.debug(result);
}
} catch (err) {
console.error(err);
}
if (this.options.debug) {
console.debug('options');
console.debug(this.options);
}
if (!blocked) {
const textStream = new TextStream(reply, { delay: 0.5 });
await textStream.processTextStream(opts.onProgress);
}
const responseMessage = {
messageId: responseMessageId,
conversationId,
parentMessageId: userMessage.messageId,
sender: 'PaLM2',
text: reply,
error: blocked,
isCreatedByUser: false
};
await this.saveMessageToDatabase(responseMessage, user);
return responseMessage;
}
getTokenCount(text) {
return this.gptEncoder.encode(text, 'all').length;
}
/**
* Algorithm adapted from "6. Counting tokens for chat API calls" of
* https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
*
* An additional 2 tokens need to be added for metadata after all messages have been counted.
*
* @param {*} message
*/
getTokenCountForMessage(message) {
// Map each property of the message to the number of tokens it contains
const propertyTokenCounts = Object.entries(message).map(([key, value]) => {
// Count the number of tokens in the property value
const numTokens = this.getTokenCount(value);
// Subtract 1 token if the property key is 'name'
const adjustment = key === 'name' ? 1 : 0;
return numTokens - adjustment;
});
// Sum the number of tokens in all properties and add 4 for metadata
return propertyTokenCounts.reduce((a, b) => a + b, 4);
}
/**
* Iterate through messages, building an array based on the parentMessageId.
* Each message has an id and a parentMessageId. The parentMessageId is the id of the message that this message is a reply to.
* @param messages
* @param parentMessageId
* @returns {*[]} An array containing the messages in the order they should be displayed, starting with the root message.
*/
static getMessagesForConversation(messages, parentMessageId) {
const orderedMessages = [];
let currentMessageId = parentMessageId;
while (currentMessageId) {
// eslint-disable-next-line no-loop-func
const message = messages.find((m) => m.messageId === currentMessageId);
if (!message) {
break;
}
orderedMessages.unshift(message);
currentMessageId = message.parentMessageId;
}
if (orderedMessages.length === 0) {
return [];
}
return orderedMessages.map((msg) => ({
isCreatedByUser: msg.isCreatedByUser,
content: msg.text
}));
}
}
module.exports = GoogleAgent;
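The static getMessagesForConversation helper above walks the parent chain backwards from the given parentMessageId to the root, then returns the thread in display order. A minimal sketch of the traversal, using hypothetical message data rather than anything from this diff:

// Hypothetical three-message thread: 'a' is the root, 'c' is the latest reply.
const thread = [
  { messageId: 'a', parentMessageId: null, isCreatedByUser: true, text: 'Hi there' },
  { messageId: 'b', parentMessageId: 'a', isCreatedByUser: false, text: 'Hello! How can I help?' },
  { messageId: 'c', parentMessageId: 'b', isCreatedByUser: true, text: 'Tell me a joke' }
];
// Starting from 'c', the helper unshifts each ancestor, yielding root-first order:
// [ { isCreatedByUser: true, content: 'Hi there' },
//   { isCreatedByUser: false, content: 'Hello! How can I help?' },
//   { isCreatedByUser: true, content: 'Tell me a joke' } ]
const ordered = GoogleAgent.getMessagesForConversation(thread, 'c');
console.log(ordered);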

View File

@@ -0,0 +1,904 @@
const crypto = require('crypto');
const { genAzureChatCompletion } = require('../../utils/genAzureEndpoints');
const {
encoding_for_model: encodingForModel,
get_encoding: getEncoding
} = require('@dqbd/tiktoken');
const { fetchEventSource } = require('@waylaidwanderer/fetch-event-source');
const { Agent, ProxyAgent } = require('undici');
const TextStream = require('../stream');
const { ChatOpenAI } = require('langchain/chat_models/openai');
const { CallbackManager } = require('langchain/callbacks');
const { HumanChatMessage, AIChatMessage } = require('langchain/schema');
const { initializeCustomAgent } = require('./agents/CustomAgent/initializeCustomAgent');
const { getMessages, saveMessage, saveConvo } = require('../../models');
const { loadTools, SelfReflectionTool } = require('./tools');
const {
instructions,
imageInstructions,
errorInstructions,
completionInstructions
} = require('./instructions');
const tokenizersCache = {};
class ChatAgent {
constructor(apiKey, options = {}) {
this.tools = [];
this.actions = [];
this.openAIApiKey = apiKey;
this.azure = options.azure || false;
if (this.azure) {
const { azureOpenAIApiInstanceName, azureOpenAIApiDeploymentName, azureOpenAIApiVersion } =
this.azure;
this.azureEndpoint = genAzureChatCompletion({
azureOpenAIApiInstanceName,
azureOpenAIApiDeploymentName,
azureOpenAIApiVersion
});
}
this.setOptions(options);
this.executor = null;
this.currentDateString = new Date().toLocaleDateString('en-us', {
year: 'numeric',
month: 'long',
day: 'numeric'
});
}
getActions(input = null) {
let output = 'Internal thoughts & actions taken:\n"';
let actions = input || this.actions;
if (actions[0]?.action) {
actions = actions.map((step) => ({
log: `${step.action.log}\nObservation: ${step.observation}`
}));
}
actions.forEach((actionObj, index) => {
output += `${actionObj.log}`;
if (index < actions.length - 1) {
output += '\n';
}
});
return output + '"';
}
buildErrorInput(message, errorMessage) {
const log = errorMessage.includes('Could not parse LLM output:')
? `A formatting error occurred with your response to the human's last message. You didn't follow the formatting instructions. Remember to ${instructions}`
: `You encountered an error while replying to the human's last message. Attempt to answer again or admit an answer cannot be given.\nError: ${errorMessage}`;
return `
${log}
${this.getActions()}
Human's last message: ${message}
`;
}
buildPromptPrefix(result, message) {
if ((result.output && result.output.includes('N/A')) || result.output === undefined) {
return null;
}
if (
result?.intermediateSteps?.length === 1 &&
result?.intermediateSteps[0]?.action?.toolInput === 'N/A'
) {
return null;
}
const internalActions =
result?.intermediateSteps?.length > 0
? this.getActions(result.intermediateSteps)
: 'Internal Actions Taken: None';
const toolBasedInstructions = internalActions.toLowerCase().includes('image')
? imageInstructions
: '';
const errorMessage = result.errorMessage ? `${errorInstructions} ${result.errorMessage}\n` : '';
const preliminaryAnswer =
result.output?.length > 0 ? `Preliminary Answer: "${result.output.trim()}"` : '';
const prefix = preliminaryAnswer
? `review and improve the answer you generated using plugins in response to the User Message below. The answer hasn't been sent to the user yet.`
: 'respond to the User Message below based on your preliminary thoughts & actions.';
return `As ChatGPT, ${prefix}${errorMessage}\n${internalActions}
${preliminaryAnswer}
Reply conversationally to the User based on your ${
preliminaryAnswer ? 'preliminary answer, ' : ''
}internal actions, thoughts, and observations, making improvements wherever possible, but do not modify URLs.
${
preliminaryAnswer
? ''
: '\nIf there is an incomplete thought or action, you are expected to complete it in your response now.\n'
}You must cite sources if you are using any web links. ${toolBasedInstructions}
Only respond with your conversational reply to the following User Message:
"${message}"`;
}
setOptions(options) {
if (this.options && !this.options.replaceOptions) {
// nested options aren't spread properly, so we need to do this manually
this.options.modelOptions = {
...this.options.modelOptions,
...options.modelOptions
};
this.options.agentOptions = {
...this.options.agentOptions,
...options.agentOptions
};
delete options.modelOptions;
delete options.agentOptions;
// now we can merge options
this.options = {
...this.options,
...options
};
} else {
this.options = options;
}
this.agentOptions = this.options.agentOptions || {};
this.agentIsGpt3 = (this.agentOptions.model || '').startsWith('gpt-3');
const modelOptions = this.options.modelOptions || {};
this.modelOptions = {
...modelOptions,
model: modelOptions.model || 'gpt-3.5-turbo',
temperature: typeof modelOptions.temperature === 'undefined' ? 0.8 : modelOptions.temperature,
top_p: typeof modelOptions.top_p === 'undefined' ? 1 : modelOptions.top_p,
presence_penalty:
typeof modelOptions.presence_penalty === 'undefined' ? 0 : modelOptions.presence_penalty,
frequency_penalty:
typeof modelOptions.frequency_penalty === 'undefined' ? 0 : modelOptions.frequency_penalty,
stop: modelOptions.stop
};
this.isChatGptModel = this.modelOptions.model.startsWith('gpt-');
this.isGpt3 = this.modelOptions.model.startsWith('gpt-3');
this.maxContextTokens = this.modelOptions.model === 'gpt-4-32k' ? 32767 : this.modelOptions.model.startsWith('gpt-4') ? 8191 : 4095;
// Reserve 1024 tokens for the response.
// The max prompt tokens is determined by the max context tokens minus the max response tokens.
// Earlier messages will be dropped until the prompt is within the limit.
this.maxResponseTokens = this.modelOptions.max_tokens || 1024;
this.maxPromptTokens =
this.options.maxPromptTokens || this.maxContextTokens - this.maxResponseTokens;
if (this.maxPromptTokens + this.maxResponseTokens > this.maxContextTokens) {
throw new Error(
`maxPromptTokens + max_tokens (${this.maxPromptTokens} + ${this.maxResponseTokens} = ${
this.maxPromptTokens + this.maxResponseTokens
}) must be less than or equal to maxContextTokens (${this.maxContextTokens})`
);
}
this.userLabel = this.options.userLabel || 'User';
this.chatGptLabel = this.options.chatGptLabel || 'ChatGPT';
// Use these faux tokens to help the AI understand the context since we are building the chat log ourselves.
// Trying to use "<|im_start|>" causes the AI to still generate "<" or "<|" at the end sometimes for some reason,
// without tripping the stop sequences, so I'm using "||>" instead.
this.startToken = '||>';
this.endToken = '';
this.gptEncoder = this.constructor.getTokenizer('cl100k_base');
this.completionsUrl = 'https://api.openai.com/v1/chat/completions';
this.reverseProxyUrl = this.options.reverseProxyUrl || process.env.OPENAI_REVERSE_PROXY;
if (this.reverseProxyUrl) {
this.completionsUrl = this.reverseProxyUrl;
this.langchainProxy = this.reverseProxyUrl.substring(0, this.reverseProxyUrl.indexOf('v1') + 'v1'.length);
}
if (this.azureEndpoint) {
this.completionsUrl = this.azureEndpoint;
}
if (this.azureEndpoint && this.options.debug) {
console.debug(`Using Azure endpoint: ${this.azureEndpoint}`, this.azure);
}
}
static getTokenizer(encoding, isModelName = false, extendSpecialTokens = {}) {
if (tokenizersCache[encoding]) {
return tokenizersCache[encoding];
}
let tokenizer;
if (isModelName) {
tokenizer = encodingForModel(encoding, extendSpecialTokens);
} else {
tokenizer = getEncoding(encoding, extendSpecialTokens);
}
tokenizersCache[encoding] = tokenizer;
return tokenizer;
}
async getCompletion(input, onProgress, abortController = null) {
if (!abortController) {
abortController = new AbortController();
}
const modelOptions = this.modelOptions;
if (typeof onProgress === 'function') {
modelOptions.stream = true;
}
if (this.isChatGptModel) {
modelOptions.messages = input;
} else {
modelOptions.prompt = input;
}
const { debug } = this.options;
const url = this.completionsUrl;
if (debug) {
console.debug();
console.debug(url);
console.debug(modelOptions);
console.debug();
}
const opts = {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify(modelOptions),
dispatcher: new Agent({
bodyTimeout: 0,
headersTimeout: 0
})
};
if (this.azureEndpoint) {
opts.headers['api-key'] = this.azure.azureOpenAIApiKey;
} else if (this.openAIApiKey) {
opts.headers.Authorization = `Bearer ${this.openAIApiKey}`;
}
if (this.options.proxy) {
opts.dispatcher = new ProxyAgent(this.options.proxy);
}
if (modelOptions.stream) {
// eslint-disable-next-line no-async-promise-executor
return new Promise(async (resolve, reject) => {
try {
let done = false;
await fetchEventSource(url, {
...opts,
signal: abortController.signal,
async onopen(response) {
if (response.status === 200) {
return;
}
if (debug) {
// console.debug(response);
}
let error;
try {
const body = await response.text();
error = new Error(`Failed to send message. HTTP ${response.status} - ${body}`);
error.status = response.status;
error.json = JSON.parse(body);
} catch {
error = error || new Error(`Failed to send message. HTTP ${response.status}`);
}
throw error;
},
onclose() {
if (debug) {
console.debug('Server closed the connection unexpectedly, returning...');
}
// workaround for private API not sending [DONE] event
if (!done) {
onProgress('[DONE]');
abortController.abort();
resolve();
}
},
onerror(err) {
if (debug) {
console.debug(err);
}
// rethrow to stop the operation
throw err;
},
onmessage(message) {
if (debug) {
// console.debug(message);
}
if (!message.data || message.event === 'ping') {
return;
}
if (message.data === '[DONE]') {
onProgress('[DONE]');
abortController.abort();
resolve();
done = true;
return;
}
onProgress(JSON.parse(message.data));
}
});
} catch (err) {
reject(err);
}
});
}
const response = await fetch(url, {
...opts,
signal: abortController.signal
});
if (response.status !== 200) {
const body = await response.text();
const error = new Error(`Failed to send message. HTTP ${response.status} - ${body}`);
error.status = response.status;
try {
error.json = JSON.parse(body);
} catch {
error.body = body;
}
throw error;
}
return response.json();
}
async loadHistory(conversationId, parentMessageId = null) {
if (this.options.debug) {
console.debug('Loading history for conversation', conversationId, parentMessageId);
}
const messages = (await getMessages({ conversationId })) || [];
if (messages.length === 0) {
this.currentMessages = [];
return [];
}
const orderedMessages = this.constructor.getMessagesForConversation(messages, parentMessageId);
// Convert Message documents into appropriate ChatMessage instances
const chatMessages = orderedMessages.map((msg) =>
msg?.isCreatedByUser || msg?.role?.toLowerCase() === 'user'
? new HumanChatMessage(msg.text)
: new AIChatMessage(msg.text)
);
this.currentMessages = orderedMessages;
return chatMessages;
}
async saveMessageToDatabase(message, user = null) {
await saveMessage({ ...message, unfinished: false });
await saveConvo(user, {
conversationId: message.conversationId,
endpoint: 'gptPlugins',
chatGptLabel: this.options.chatGptLabel,
promptPrefix: this.options.promptPrefix,
...this.modelOptions,
agentOptions: this.agentOptions
});
}
saveLatestAction(action) {
this.actions.push(action);
}
async initialize({ user, message, onAgentAction, onChainEnd, signal }) {
const modelOptions = {
modelName: this.agentOptions.model,
temperature: this.agentOptions.temperature
};
const configOptions = {};
if (this.langchainProxy) {
configOptions.basePath = this.langchainProxy;
}
const model = this.azure
? new ChatOpenAI({
...this.azure,
...modelOptions
})
: new ChatOpenAI(
{
openAIApiKey: this.openAIApiKey,
...modelOptions
},
configOptions
// {
// basePath: 'http://localhost:8080/v1'
// }
);
if (this.options.debug) {
console.debug(`<-----Agent Model: ${model.modelName} | Temp: ${model.temperature}----->`);
}
this.availableTools = await loadTools({
user,
model,
tools: this.options.tools,
options: {
openAIApiKey: this.openAIApiKey
}
});
// load tools
for (const tool of this.options.tools) {
const validTool = this.availableTools[tool];
if (tool === 'plugins') {
const plugins = await validTool();
this.tools = [...this.tools, ...plugins];
} else if (validTool) {
this.tools.push(await validTool());
}
}
if (this.options.debug) {
console.debug('Requested Tools');
console.debug(this.options.tools);
console.debug('Loaded Tools');
console.debug(this.tools.map((tool) => tool.name));
}
if (this.tools.length > 0) {
this.tools.push(new SelfReflectionTool({ message, isGpt3: false }));
} else {
return;
}
const handleAction = (action, callback = null) => {
this.saveLatestAction(action);
if (this.options.debug) {
console.debug('Latest Agent Action ', this.actions[this.actions.length - 1]);
}
if (typeof callback === 'function') {
callback(action);
}
};
// initialize agent
this.executor = await initializeCustomAgent({
model,
signal,
tools: this.tools,
pastMessages: this.pastMessages,
currentDateString: this.currentDateString,
verbose: this.options.debug,
returnIntermediateSteps: true,
callbackManager: CallbackManager.fromHandlers({
async handleAgentAction(action) {
handleAction(action, onAgentAction);
},
async handleChainEnd(action) {
if (typeof onChainEnd === 'function') {
onChainEnd(action);
}
}
})
});
if (this.options.debug) {
console.debug('Loaded agent.');
}
}
async sendApiMessage(messages, userMessage, opts = {}) {
// Doing it this way instead of having each message be a separate element in the array seems to be more reliable,
// especially when it comes to keeping the AI in character. It also seems to improve coherency and context retention.
let payload = await this.buildPrompt({
messages: [
...messages,
{
messageId: userMessage.messageId,
parentMessageId: userMessage.parentMessageId,
role: 'User',
text: userMessage.text
}
],
...opts
});
let reply = '';
let result = {};
if (typeof opts.onProgress === 'function') {
await this.getCompletion(
payload,
(progressMessage) => {
if (progressMessage === '[DONE]') {
return;
}
const token = this.isChatGptModel
? progressMessage.choices[0].delta.content
: progressMessage.choices[0].text;
// first event's delta content is always undefined
if (!token) {
return;
}
if (token === this.endToken) {
return;
}
opts.onProgress(token);
reply += token;
},
opts.abortController || new AbortController()
);
} else {
result = await this.getCompletion(
payload,
null,
opts.abortController || new AbortController()
);
if (this.options.debug) {
console.debug(JSON.stringify(result));
}
if (this.isChatGptModel) {
reply = result.choices[0].message.content;
} else {
reply = result.choices[0].text.replace(this.endToken, '');
}
}
if (this.options.debug) {
console.debug();
}
return reply.trim();
}
async executorCall(message, signal) {
let errorMessage = '';
const maxAttempts = 1;
for (let attempts = 1; attempts <= maxAttempts; attempts++) {
const errorInput = this.buildErrorInput(message, errorMessage);
const input = attempts > 1 ? errorInput : message;
if (this.options.debug) {
console.debug(`Attempt ${attempts} of ${maxAttempts}`);
}
if (this.options.debug && errorMessage.length > 0) {
console.debug('Caught error, input:', input);
}
try {
this.result = await this.executor.call({ input, signal });
break; // Exit the loop if the function call is successful
} catch (err) {
console.error(err);
errorMessage = err.message;
if (attempts === maxAttempts) {
this.result.output = `Encountered an error while attempting to respond. Error: ${err.message}`;
this.result.intermediateSteps = this.actions;
this.result.errorMessage = errorMessage;
break;
}
}
}
}
async sendMessage(message, opts = {}) {
if (opts && typeof opts === 'object') {
this.setOptions(opts);
}
console.log('sendMessage', message, opts);
const user = opts.user || null;
const { onAgentAction, onChainEnd, onProgress } = opts;
const conversationId = opts.conversationId || crypto.randomUUID();
const parentMessageId = opts.parentMessageId || '00000000-0000-0000-0000-000000000000';
const userMessageId = opts.overrideParentMessageId || crypto.randomUUID();
const responseMessageId = crypto.randomUUID();
this.pastMessages = await this.loadHistory(conversationId, this.options?.parentMessageId);
const userMessage = {
messageId: userMessageId,
parentMessageId,
conversationId,
sender: 'User',
text: message,
isCreatedByUser: true
};
if (typeof opts?.getIds === 'function') {
opts.getIds({
userMessage,
conversationId,
responseMessageId
});
}
if (typeof opts?.onStart === 'function') {
opts.onStart(userMessage);
}
await this.saveMessageToDatabase(userMessage, user);
this.result = {};
const responseMessage = {
messageId: responseMessageId,
conversationId,
parentMessageId: userMessage.messageId,
isCreatedByUser: false,
model: this.modelOptions.model,
sender: 'ChatGPT'
};
if (this.options.debug) {
console.debug('options');
console.debug(this.options);
}
const completionMode = this.options.tools.length === 0;
if (!completionMode) {
await this.initialize({
user,
message,
onAgentAction,
onChainEnd,
signal: opts.abortController.signal
});
await this.executorCall(message, opts.abortController.signal);
}
// If message was aborted mid-generation
if (this.result?.errorMessage?.length > 0 && this.result?.errorMessage?.includes('cancel')) {
responseMessage.text = 'Cancelled.';
await this.saveMessageToDatabase(responseMessage, user);
return { ...responseMessage, ...this.result };
}
if (!this.agentIsGpt3 && this.result.output) {
responseMessage.text = this.result.output;
await this.saveMessageToDatabase(responseMessage, user);
const textStream = new TextStream(this.result.output);
await textStream.processTextStream(onProgress);
return { ...responseMessage, ...this.result };
}
if (this.options.debug) {
console.debug('this.result', this.result);
}
const userProvidedPrefix = completionMode && this.options?.promptPrefix?.length > 0;
const promptPrefix = userProvidedPrefix
? this.options.promptPrefix
: this.buildPromptPrefix(this.result, message);
if (this.options.debug) {
console.debug('promptPrefix', promptPrefix);
}
const finalReply = await this.sendApiMessage(this.currentMessages, userMessage, { ...opts, completionMode, promptPrefix });
responseMessage.text = finalReply;
await this.saveMessageToDatabase(responseMessage, user);
return { ...responseMessage, ...this.result };
}
async buildPrompt({ messages, promptPrefix: _promptPrefix, completionMode = false, isChatGptModel = true }) {
if (this.options.debug) {
console.debug('buildPrompt messages', messages);
}
const orderedMessages = messages;
let promptPrefix = _promptPrefix;
if (promptPrefix) {
promptPrefix = promptPrefix.trim();
// If the prompt prefix doesn't end with the end token, add it.
if (!promptPrefix.endsWith(`${this.endToken}`)) {
promptPrefix = `${promptPrefix.trim()}${this.endToken}\n\n`;
}
promptPrefix = `${this.startToken}Instructions:\n${promptPrefix}`;
} else {
promptPrefix = `${this.startToken}${completionInstructions} ${this.currentDateString}${this.endToken}\n\n`;
}
const promptSuffix = `${this.startToken}${this.chatGptLabel}:\n`; // Prompt ChatGPT to respond.
const instructionsPayload = {
role: 'system',
name: 'instructions',
content: promptPrefix
};
const messagePayload = {
role: 'system',
content: promptSuffix
};
if (this.isGpt3) {
instructionsPayload.role = 'user';
messagePayload.role = 'user';
}
if (this.isGpt3 && completionMode) {
instructionsPayload.content += `\n${promptSuffix}`;
}
// testing if this works with browser endpoint
if (!this.isGpt3 && this.reverseProxyUrl) {
instructionsPayload.role = 'user';
}
let currentTokenCount;
if (isChatGptModel) {
currentTokenCount =
this.getTokenCountForMessage(instructionsPayload) +
this.getTokenCountForMessage(messagePayload);
} else {
currentTokenCount = this.getTokenCount(`${promptPrefix}${promptSuffix}`);
}
let promptBody = '';
const maxTokenCount = this.maxPromptTokens;
// Iterate backwards through the messages, adding them to the prompt until we reach the max token count.
// Do this within a recursive async function so that it doesn't block the event loop for too long.
const buildPromptBody = async () => {
if (currentTokenCount < maxTokenCount && orderedMessages.length > 0) {
const message = orderedMessages.pop();
// const roleLabel = message.role === 'User' ? this.userLabel : this.chatGptLabel;
const roleLabel = message.role;
let messageString = `${this.startToken}${roleLabel}:\n${message.text}${this.endToken}\n`;
let newPromptBody;
if (promptBody || isChatGptModel) {
newPromptBody = `${messageString}${promptBody}`;
} else {
// Always insert prompt prefix before the last user message, if not gpt-3.5-turbo.
// This makes the AI obey the prompt instructions better, which is important for custom instructions.
// After a bunch of testing, it doesn't seem to cause the AI any confusion, even if you ask it things
// like "what's the last thing I wrote?".
newPromptBody = `${promptPrefix}${messageString}${promptBody}`;
}
const tokenCountForMessage = this.getTokenCount(messageString);
const newTokenCount = currentTokenCount + tokenCountForMessage;
if (newTokenCount > maxTokenCount) {
if (promptBody) {
// This message would put us over the token limit, so don't add it.
return false;
}
// This is the first message, so we can't add it. Just throw an error.
throw new Error(
`Prompt is too long. Max token count is ${maxTokenCount}, but prompt is ${newTokenCount} tokens long.`
);
}
promptBody = newPromptBody;
currentTokenCount = newTokenCount;
// wait for next tick to avoid blocking the event loop
await new Promise((resolve) => setTimeout(resolve, 0));
return buildPromptBody();
}
return true;
};
await buildPromptBody();
// const prompt = `${promptBody}${promptSuffix}`;
const prompt = promptBody;
if (isChatGptModel) {
messagePayload.content = prompt;
// Add 2 tokens for metadata after all messages have been counted.
currentTokenCount += 2;
}
if (this.isGpt3 && messagePayload.content.length > 0) {
const context = `Chat History:\n`;
messagePayload.content = `${context}${prompt}`;
currentTokenCount += this.getTokenCount(context);
}
// Use up to `this.maxContextTokens` tokens (prompt + response), but try to leave `this.maxResponseTokens` tokens for the response.
this.modelOptions.max_tokens = Math.min(
this.maxContextTokens - currentTokenCount,
this.maxResponseTokens
);
if (this.isGpt3 && !completionMode) {
messagePayload.content += promptSuffix;
return [instructionsPayload, messagePayload];
}
if (isChatGptModel) {
const result = [messagePayload, instructionsPayload];
return result.filter((message) => message.content.length > 0);
}
this.completionPromptTokens = currentTokenCount;
return prompt;
}
getTokenCount(text) {
return this.gptEncoder.encode(text, 'all').length;
}
/**
* Algorithm adapted from "6. Counting tokens for chat API calls" of
* https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
*
* An additional 2 tokens need to be added for metadata after all messages have been counted.
*
* @param {*} message
*/
getTokenCountForMessage(message) {
// Map each property of the message to the number of tokens it contains
const propertyTokenCounts = Object.entries(message).map(([key, value]) => {
// Count the number of tokens in the property value
const numTokens = this.getTokenCount(value);
// Subtract 1 token if the property key is 'name'
const adjustment = key === 'name' ? 1 : 0;
return numTokens - adjustment;
});
// Sum the number of tokens in all properties and add 4 for metadata
return propertyTokenCounts.reduce((a, b) => a + b, 4);
}
/**
* Iterate through messages, building an array based on the parentMessageId.
* Each message has an id and a parentMessageId. The parentMessageId is the id of the message that this message is a reply to.
* @param messages
* @param parentMessageId
* @returns {*[]} An array containing the messages in the order they should be displayed, starting with the root message.
*/
static getMessagesForConversation(messages, parentMessageId) {
const orderedMessages = [];
let currentMessageId = parentMessageId;
while (currentMessageId) {
// eslint-disable-next-line no-loop-func
const message = messages.find((m) => m.messageId === currentMessageId);
if (!message) {
break;
}
orderedMessages.unshift(message);
currentMessageId = message.parentMessageId;
}
if (orderedMessages.length === 0) {
return [];
}
return orderedMessages.map((msg) => ({
messageId: msg.messageId,
parentMessageId: msg.parentMessageId,
role: msg.isCreatedByUser ? 'User' : 'ChatGPT',
text: msg.text
}));
}
/**
* Extracts the action tool values from the intermediate steps array.
* Each step object in the array contains an action object with a tool property.
* This function returns an array of tool values.
*
* @param {Object[]} intermediateSteps - An array of intermediate step objects.
* @returns {string} A string of action tool values from each step.
*/
extractToolValues(intermediateSteps) {
  const tools = intermediateSteps.map((step) => step.action.tool);
  if (tools.length === 0) {
    return '';
  }
  const uniqueTools = [...new Set(tools)];
  if (uniqueTools.length === 1) {
    return uniqueTools[0] + ' plugin';
  }
  return uniqueTools.map((tool) => `${tool} plugin`).join(', ');
}
}
module.exports = ChatAgent;
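The getTokenCountForMessage and buildPrompt methods above implement the cookbook-style accounting: every property value of a chat message is tokenized, a 'name' key gets a one-token discount, 4 tokens are added per message, and 2 more are added once for the whole payload. A small sketch of the arithmetic, assuming the @dqbd/tiktoken dependency is installed (exact counts depend on the cl100k_base tokenizer; no API call is made here, so the key may be empty):

const agent = new ChatAgent(process.env.OPENAI_API_KEY, {
  tools: [],
  modelOptions: { model: 'gpt-3.5-turbo' },
  agentOptions: { model: 'gpt-3.5-turbo' }
});
const instructionsPayload = {
  role: 'system',
  name: 'instructions',
  content: 'You are ChatGPT. Respond conversationally.'
};
// tokens(role) + tokens(name) - 1 + tokens(content) + 4 metadata tokens
const perMessage = agent.getTokenCountForMessage(instructionsPayload);
// buildPrompt() later adds 2 more tokens once all chat messages are counted,
// and sets max_tokens = min(maxContextTokens - promptTokens, maxResponseTokens) for the reply.
console.log(perMessage);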

View File

@@ -0,0 +1,92 @@
const mongoose = require('mongoose');
const ChatAgent = require('./ChatAgent');
const connectDb = require('../../lib/db/connectDb');
const Conversation = require('../../models/Conversation');
describe('ChatAgent', () => {
let TestAgent;
let options = {
tools: [],
modelOptions: {
model: 'gpt-3.5-turbo',
temperature: 0,
max_tokens: 2
},
agentOptions: {
model: 'gpt-3.5-turbo',
}
};
let parentMessageId;
let conversationId;
const userMessage = 'Hello, ChatGPT!';
const apiKey = process.env.OPENAI_API_KEY;
beforeAll(async () => {
await connectDb();
});
beforeEach(() => {
TestAgent = new ChatAgent(apiKey, options);
});
afterAll(async () => {
// Delete the messages and conversation created by the test
await Conversation.deleteConvos(null, { conversationId });
await mongoose.connection.close();
});
test('initializes ChatAgent without crashing', () => {
expect(TestAgent).toBeInstanceOf(ChatAgent);
});
test('check setOptions function', () => {
expect(TestAgent.agentIsGpt3).toBe(true);
});
describe('sendMessage', () => {
test('sendMessage should return a response message', async () => {
const expectedResult = expect.objectContaining({
sender: 'ChatGPT',
text: expect.any(String),
isCreatedByUser: false,
messageId: expect.any(String),
parentMessageId: expect.any(String),
conversationId: expect.any(String)
});
const response = await TestAgent.sendMessage(userMessage);
console.log(response);
parentMessageId = response.messageId;
conversationId = response.conversationId;
expect(response).toEqual(expectedResult);
});
test('sendMessage should work with provided conversationId and parentMessageId', async () => {
const userMessage = 'Second message in the conversation';
const opts = {
conversationId,
parentMessageId
};
const expectedResult = expect.objectContaining({
sender: 'ChatGPT',
text: expect.any(String),
isCreatedByUser: false,
messageId: expect.any(String),
parentMessageId: expect.any(String),
conversationId: opts.conversationId
});
const response = await TestAgent.sendMessage(userMessage, opts);
parentMessageId = response.messageId;
expect(response.conversationId).toEqual(conversationId);
expect(response).toEqual(expectedResult);
});
test('should return chat history', async () => {
const chatMessages = await TestAgent.loadHistory(conversationId, parentMessageId);
expect(TestAgent.currentMessages).toHaveLength(4);
expect(chatMessages[0].text).toEqual(userMessage);
});
});
});

View File

@@ -0,0 +1,50 @@
const { ZeroShotAgent } = require('langchain/agents');
const { PromptTemplate, renderTemplate } = require('langchain/prompts');
const { gpt3, gpt4 } = require('./instructions');
class CustomAgent extends ZeroShotAgent {
constructor(input) {
super(input);
}
_stop() {
return [`\nObservation:`, `\nObservation 1:`];
}
static createPrompt(tools, opts = {}) {
const { currentDateString, model } = opts;
const inputVariables = ['input', 'chat_history', 'agent_scratchpad'];
let prefix, instructions, suffix;
if (model.startsWith('gpt-3')) {
prefix = gpt3.prefix;
instructions = gpt3.instructions;
suffix = gpt3.suffix;
} else if (model.startsWith('gpt-4')) {
prefix = gpt4.prefix;
instructions = gpt4.instructions;
suffix = gpt4.suffix;
}
const toolStrings = tools
.filter((tool) => tool.name !== 'self-reflection')
.map((tool) => `${tool.name}: ${tool.description}`)
.join('\n');
const toolNames = tools.map((tool) => tool.name);
const formatInstructions = renderTemplate(instructions, 'f-string', {
tool_names: toolNames
});
const template = [
`Date: ${currentDateString}\n${prefix}`,
toolStrings,
formatInstructions,
suffix
].join('\n\n');
return new PromptTemplate({
template,
inputVariables
});
}
}
module.exports = CustomAgent;
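createPrompt stitches the date header, the model-specific prefix/instructions/suffix from ./instructions, and the tool descriptions into a single ZeroShot-style template. A minimal sketch of calling it directly; the tool objects are hypothetical stand-ins for whatever loadTools returns:

const tools = [
  { name: 'google', description: 'Search the web with the Google Custom Search API.' },
  { name: 'calculator', description: 'Evaluate math expressions.' },
  { name: 'self-reflection', description: 'Internal reflection tool.' }
];
const prompt = CustomAgent.createPrompt(tools, {
  currentDateString: new Date().toLocaleDateString('en-us', {
    year: 'numeric',
    month: 'long',
    day: 'numeric'
  }),
  model: 'gpt-4'
});
// prompt.inputVariables -> ['input', 'chat_history', 'agent_scratchpad']
// The tool-description section omits 'self-reflection', but its name is still
// interpolated into the {tool_names} list inside the format instructions.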

View File

@@ -0,0 +1,56 @@
const CustomAgent = require('./CustomAgent');
const { CustomOutputParser } = require('./outputParser');
const { AgentExecutor } = require('langchain/agents');
const { LLMChain } = require('langchain/chains');
const { BufferMemory, ChatMessageHistory } = require('langchain/memory');
const {
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate
} = require('langchain/prompts');
const initializeCustomAgent = async ({
tools,
model,
pastMessages,
currentDateString,
...rest
}) => {
let prompt = CustomAgent.createPrompt(tools, { currentDateString, model: model.modelName });
const chatPrompt = ChatPromptTemplate.fromPromptMessages([
new SystemMessagePromptTemplate(prompt),
HumanMessagePromptTemplate.fromTemplate(`{chat_history}
Query: {input}
{agent_scratchpad}`)
]);
const outputParser = new CustomOutputParser({ tools });
const memory = new BufferMemory({
chatHistory: new ChatMessageHistory(pastMessages),
// returnMessages: true, // commenting this out retains memory
memoryKey: 'chat_history',
humanPrefix: 'User',
aiPrefix: 'Assistant',
inputKey: 'input',
outputKey: 'output'
});
const llmChain = new LLMChain({
prompt: chatPrompt,
llm: model
});
const agent = new CustomAgent({
llmChain,
outputParser,
allowedTools: tools.map((tool) => tool.name)
});
return AgentExecutor.fromAgentAndTools({ agent, tools, memory, ...rest });
};
module.exports = {
initializeCustomAgent
};
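This helper is what ChatAgent.initialize() ultimately calls once tools are loaded. A rough usage sketch outside of ChatAgent, with hypothetical inputs (the empty tools array and the two seed messages are placeholders, not values from this diff):

const { ChatOpenAI } = require('langchain/chat_models/openai');
const { HumanChatMessage, AIChatMessage } = require('langchain/schema');

(async () => {
  const model = new ChatOpenAI({
    openAIApiKey: process.env.OPENAI_API_KEY,
    modelName: 'gpt-4',
    temperature: 0
  });
  const executor = await initializeCustomAgent({
    model,
    tools: [], // normally the tools returned by loadTools() plus SelfReflectionTool
    pastMessages: [new HumanChatMessage('Hi'), new AIChatMessage('Hello! How can I help?')],
    currentDateString: new Date().toLocaleDateString(),
    verbose: true,
    returnIntermediateSteps: true
  });
  const result = await executor.call({ input: 'What did I just say to you?' });
  console.log(result.output);
})();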

View File

@@ -0,0 +1,203 @@
/*
module.exports = `You are ChatGPT, a Large Language model with useful tools.
Talk to the human and provide meaningful answers when questions are asked.
Use the tools when you need them, but use your own knowledge if you are confident of the answer. Keep answers short and concise.
A tool is not usually needed for creative requests, so do your best to answer them without tools.
Avoid repeating identical answers if it appears before. Only fulfill the human's requests, do not create extra steps beyond what the human has asked for.
Your input for 'Action' should be the name of tool used only.
Be honest. If you can't answer something, or a tool is not appropriate, say you don't know or answer to the best of your ability.
Attempt to fulfill the human's requests in as few actions as possible`;
*/
// module.exports = `You are ChatGPT, a highly knowledgeable and versatile large language model.
// Engage with the Human conversationally, providing concise and meaningful answers to questions. Utilize built-in tools when necessary, except for creative requests, where relying on your own knowledge is preferred. Aim for variety and avoid repetitive answers.
// For your 'Action' input, state the name of the tool used only, and honor user requests without adding extra steps. Always be honest; if you cannot provide an appropriate answer or tool, admit that or do your best.
// Strive to meet the user's needs efficiently with minimal actions.`;
// import {
// BasePromptTemplate,
// BaseStringPromptTemplate,
// SerializedBasePromptTemplate,
// renderTemplate,
// } from "langchain/prompts";
// prefix: `You are ChatGPT, a highly knowledgeable and versatile large language model.
// Your objective is to help users by understanding their intent and choosing the best action. Prioritize direct, specific responses. Use concise, varied answers and rely on your knowledge for creative tasks. Utilize tools when needed, and structure results for machine compatibility.
// prefix: `Objective: to comprehend human intentions based on user input and available tools. Goal: identify the best action to directly address the human's query. In your subsequent steps, you will utilize the chosen action. You may select multiple actions and list them in a meaningful order. Prioritize actions that directly relate to the user's query over general ones. Ensure that the generated thought is highly specific and explicit to best match the user's expectations. Construct the result in a manner that an online open-API would most likely expect. Provide concise and meaningful answers to human queries. Utilize tools when necessary. Relying on your own knowledge is preferred for creative requests. Aim for variety and avoid repetitive answers.
// # Available Actions & Tools:
// N/A: no suitable action, use your own knowledge.`,
// suffix: `Remember, all your responses MUST adhere to the described format and only respond if the format is followed. Output exactly with the requested format, avoiding any other text as this will be parsed by a machine. Following 'Action:', provide only one of the actions listed above. If a tool is not necessary, deduce this quickly and finish your response. Honor the human's requests without adding extra steps. Carry out tasks in the sequence written by the human. Always be honest; if you cannot provide an appropriate answer or tool, do your best with your own knowledge. Strive to meet the user's needs efficiently with minimal actions.`;
module.exports = {
'gpt3-v1': {
prefix: `Objective: Understand human intentions using user input and available tools. Goal: Identify the most suitable actions to directly address user queries.
When responding:
- Choose actions relevant to the user's query, using multiple actions in a logical order if needed.
- Prioritize direct and specific thoughts to meet user expectations.
- Format results in a way compatible with open-API expectations.
- Offer concise, meaningful answers to user queries.
- Use tools when necessary but rely on your own knowledge for creative requests.
- Strive for variety, avoiding repetitive responses.
# Available Actions & Tools:
N/A: No suitable action; use your own knowledge.`,
instructions: `Always adhere to the following format in your response to indicate actions taken:
Thought: Summarize your thought process.
Action: Select an action from [{tool_names}].
Action Input: Define the action's input.
Observation: Report the action's result.
Repeat steps 1-4 as needed, in order. When not using a tool, use N/A for Action, provide the result as Action Input, and include an Observation.
Upon reaching the final answer, use this format after completing all necessary actions:
Thought: Indicate that you've determined the final answer.
Final Answer: Present the answer to the user's query.`,
suffix: `Keep these guidelines in mind when crafting your response:
- Strictly adhere to the Action format for all responses, as they will be machine-parsed.
- If a tool is unnecessary, quickly move to the Thought/Final Answer format.
- Follow the logical sequence provided by the user without adding extra steps.
- Be honest; if you can't provide an appropriate answer using the given tools, use your own knowledge.
- Aim for efficiency and minimal actions to meet the user's needs effectively.`,
},
'gpt3-v2': {
prefix: `Objective: Understand the human's query with available actions & tools. Let's work this out in a step by step way to be sure we fulfill the query.
When responding:
- Choose actions relevant to the user's query, using multiple actions in a logical order if needed.
- Prioritize direct and specific thoughts to meet user expectations.
- Format results in a way compatible with open-API expectations.
- Offer concise, meaningful answers to user queries.
- Use tools when necessary but rely on your own knowledge for creative requests.
- Strive for variety, avoiding repetitive responses.
# Available Actions & Tools:
N/A: No suitable action; use your own knowledge.`,
instructions: `I want you to respond with this format and this format only, without comments or explanations, to indicate actions taken:
\`\`\`
Thought: Summarize your thought process.
Action: Select an action from [{tool_names}].
Action Input: Define the action's input.
Observation: Report the action's result.
\`\`\`
Repeat the format for each action as needed. When not using a tool, use N/A for Action, provide the result as Action Input, and include an Observation.
Upon reaching the final answer, use this format after completing all necessary actions:
\`\`\`
Thought: Indicate that you've determined the final answer.
Final Answer: A conversational reply to the user's query as if you were answering them directly.
\`\`\``,
suffix: `Keep these guidelines in mind when crafting your response:
- Strictly adhere to the Action format for all responses, as they will be machine-parsed.
- If a tool is unnecessary, quickly move to the Thought/Final Answer format.
- Follow the logical sequence provided by the user without adding extra steps.
- Be honest; if you can't provide an appropriate answer using the given tools, use your own knowledge.
- Aim for efficiency and minimal actions to meet the user's needs effectively.`,
},
gpt3: {
prefix: `Objective: Understand the human's query with available actions & tools. Let's work this out in a step by step way to be sure we fulfill the query.
Use available actions and tools judiciously.
# Available Actions & Tools:
N/A: No suitable action; use your own knowledge.`,
instructions: `I want you to respond with this format and this format only, without comments or explanations, to indicate actions taken:
\`\`\`
Thought: Your thought process.
Action: Action from [{tool_names}].
Action Input: Action's input.
Observation: Action's result.
\`\`\`
For each action, repeat the format. If no tool is used, use N/A for Action, and provide the result as Action Input.
Finally, complete with:
\`\`\`
Thought: Convey final answer determination.
Final Answer: Reply to user's query conversationally.
\`\`\``,
suffix: `Remember:
- Adhere to the Action format strictly for parsing.
- Transition quickly to Thought/Final Answer format when a tool isn't needed.
- Follow user's logic without superfluous steps.
- If unable to use tools for a fitting answer, use your knowledge.
- Strive for efficient, minimal actions.`,
},
'gpt4-v1': {
prefix: `Objective: Understand the human's query with available actions & tools. Let's work this out in a step by step way to be sure we fulfill the query.
When responding:
- Choose actions relevant to the query, using multiple actions in a step by step way.
- Prioritize direct and specific thoughts to meet user expectations.
- Be precise and offer meaningful answers to user queries.
- Use tools when necessary but rely on your own knowledge for creative requests.
- Strive for variety, avoiding repetitive responses.
# Available Actions & Tools:
N/A: No suitable action; use your own knowledge.`,
instructions: `I want you to respond with this format and this format only, without comments or explanations, to indicate actions taken:
\`\`\`
Thought: Summarize your thought process.
Action: Select an action from [{tool_names}].
Action Input: Define the action's input.
Observation: Report the action's result.
\`\`\`
Repeat the format for each action as needed. When not using a tool, use N/A for Action, provide the result as Action Input, and include an Observation.
Upon reaching the final answer, use this format after completing all necessary actions:
\`\`\`
Thought: Indicate that you've determined the final answer.
Final Answer: A conversational reply to the user's query as if you were answering them directly.
\`\`\``,
suffix: `Keep these guidelines in mind when crafting your final response:
- Strictly adhere to the Action format for all responses.
- If a tool is unnecessary, quickly move to the Thought/Final Answer format, only if no further actions are possible or necessary.
- Follow the logical sequence provided by the user without adding extra steps.
- Be honest: if you can't provide an appropriate answer using the given tools, use your own knowledge.
- Aim for efficiency and minimal actions to meet the user's needs effectively.`,
},
gpt4: {
prefix: `Objective: Understand the human's query with available actions & tools. Let's work this out in a step by step way to be sure we fulfill the query.
Use available actions and tools judiciously.
# Available Actions & Tools:
N/A: No suitable action; use your own knowledge.`,
instructions: `Respond in this specific format without extraneous comments:
\`\`\`
Thought: Your thought process.
Action: Action from [{tool_names}].
Action Input: Action's input.
Observation: Action's result.
\`\`\`
For each action, repeat the format. If no tool is used, use N/A for Action, and provide the result as Action Input.
Finally, complete with:
\`\`\`
Thought: Indicate that you've determined the final answer.
Final Answer: A conversational reply to the user's query, including your full answer.
\`\`\``,
suffix: `Remember:
- Adhere to the Action format strictly for parsing.
- Transition quickly to Thought/Final Answer format when a tool isn't needed.
- Follow user's logic without superfluous steps.
- If unable to use tools for a fitting answer, use your knowledge.
- Strive for efficient, minimal actions.`,
},
};

View File

@@ -0,0 +1,218 @@
const { ZeroShotAgentOutputParser } = require('langchain/agents');
class CustomOutputParser extends ZeroShotAgentOutputParser {
constructor(fields) {
super(fields);
this.tools = fields.tools;
this.longestToolName = '';
for (const tool of this.tools) {
if (tool.name.length > this.longestToolName.length) {
this.longestToolName = tool.name;
}
}
this.finishToolNameRegex = /(?:the\s+)?final\s+answer:\s*/i;
this.actionValues =
/(?:Action(?: [1-9])?:) ([\s\S]*?)(?:\n(?:Action Input(?: [1-9])?:) ([\s\S]*?))?$/i;
this.actionInputRegex = /(?:Action Input(?: *\d*):) ?([\s\S]*?)$/i;
this.thoughtRegex = /(?:Thought(?: *\d*):) ?([\s\S]*?)$/i;
}
getValidTool(text) {
let result = false;
for (const tool of this.tools) {
const { name } = tool;
const toolIndex = text.indexOf(name);
if (toolIndex !== -1) {
result = name;
break;
}
}
return result;
}
checkIfValidTool(text) {
let isValidTool = false;
for (const tool of this.tools) {
const { name } = tool;
if (text === name) {
isValidTool = true;
break;
}
}
return isValidTool;
}
async parse(text) {
const finalMatch = text.match(this.finishToolNameRegex);
// if (text.includes(this.finishToolName)) {
// const parts = text.split(this.finishToolName);
// const output = parts[parts.length - 1].trim();
// return {
// returnValues: { output },
// log: text
// };
// }
if (finalMatch) {
const output = text.substring(finalMatch.index + finalMatch[0].length).trim();
return {
returnValues: { output },
log: text
};
}
const match = this.actionValues.exec(text); // old v2
if (!match) {
console.log(
'\n\n<----------------------HIT NO MATCH PARSING ERROR---------------------->\n\n',
match
);
const thoughts = text.replace(/[tT]hought:/, '').split('\n');
// return {
// tool: 'self-reflection',
// toolInput: thoughts[0],
// log: thoughts.slice(1).join('\n')
// };
return {
returnValues: { output: thoughts[0] },
log: thoughts.slice(1).join('\n')
};
}
let selectedTool = match?.[1].trim().toLowerCase();
if (match && selectedTool === 'n/a') {
console.log(
'\n\n<----------------------HIT N/A PARSING ERROR---------------------->\n\n',
match
);
return {
tool: 'self-reflection',
toolInput: match[2]?.trim().replace(/^"+|"+$/g, '') ?? '',
log: text
};
}
let toolIsValid = this.checkIfValidTool(selectedTool);
if (match && !toolIsValid) {
console.log(
'\n\n<----------------Tool invalid: Re-assigning Selected Tool---------------->\n\n',
match
);
selectedTool = this.getValidTool(selectedTool);
}
if (match && !selectedTool) {
console.log(
'\n\n<----------------------HIT INVALID TOOL PARSING ERROR---------------------->\n\n',
match
);
selectedTool = 'self-reflection';
}
if (match && !match[2]) {
console.log(
'\n\n<----------------------HIT NO ACTION INPUT PARSING ERROR---------------------->\n\n',
match
);
// In case there is no action input, let's double-check if there is an action input in 'text' variable
const actionInputMatch = this.actionInputRegex.exec(text);
const thoughtMatch = this.thoughtRegex.exec(text);
if (actionInputMatch) {
return {
tool: selectedTool,
toolInput: actionInputMatch[1].trim(),
log: text
};
}
if (thoughtMatch && !actionInputMatch) {
return {
tool: selectedTool,
toolInput: thoughtMatch[1].trim(),
log: text
};
}
}
if (match && selectedTool.length > this.longestToolName.length) {
console.log('\n\n<----------------------HIT LONG PARSING ERROR---------------------->\n\n');
let action, input, thought;
let firstIndex = Infinity;
for (const tool of this.tools) {
const { name } = tool;
const toolIndex = text.indexOf(name);
if (toolIndex !== -1 && toolIndex < firstIndex) {
firstIndex = toolIndex;
action = name;
}
}
// In case there is no action input, let's double-check if there is an action input in 'text' variable
const actionInputMatch = this.actionInputRegex.exec(text);
if (action && actionInputMatch) {
console.log(
'\n\n<------Matched Action Input in Long Parsing Error------>\n\n',
actionInputMatch
);
return {
tool: action,
toolInput: actionInputMatch[1].trim().replaceAll('"', ''),
log: text
};
}
if (action) {
const actionEndIndex = text.indexOf('Action:', firstIndex + action.length);
const inputText = text
.slice(firstIndex + action.length, actionEndIndex !== -1 ? actionEndIndex : undefined)
.trim();
const inputLines = inputText.split('\n');
input = inputLines[0];
if (inputLines.length > 1) {
thought = inputLines.slice(1).join('\n');
}
const returnValues = {
tool: action,
toolInput: input,
log: thought || inputText
};
const inputMatch = this.actionValues.exec(returnValues.log); //new
if (inputMatch) {
console.log('inputMatch');
console.dir(inputMatch, { depth: null });
returnValues.toolInput = inputMatch[1].replaceAll('"', '').trim();
returnValues.log = returnValues.log.replace(this.actionValues, '');
}
return returnValues;
} else {
console.log('No valid tool mentioned.', this.tools, text);
return {
tool: 'self-reflection',
toolInput: 'Hypothetical actions: \n"' + text + '"\n',
log: 'Thought: I need to look at my hypothetical actions and try one'
};
}
// if (action && input) {
// console.log('Action:', action);
// console.log('Input:', input);
// }
}
return {
tool: selectedTool,
toolInput: match[2]?.trim()?.replace(/^"+|"+$/g, '') ?? '',
log: text
};
}
}
module.exports = { CustomOutputParser };
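The parser's happy path is a Thought/Action/Action Input block or a Final Answer line; everything else falls through to the error-recovery branches above. A minimal sketch of both paths, with a hypothetical two-tool setup:

const parser = new CustomOutputParser({ tools: [{ name: 'google' }, { name: 'calculator' }] });

(async () => {
  const action = await parser.parse(
    'Thought: I should look this up.\nAction: google\nAction Input: "current weather in Austin"'
  );
  // -> { tool: 'google', toolInput: 'current weather in Austin', log: <full text> }

  const finish = await parser.parse('Thought: I have the answer.\nFinal Answer: It is sunny and 85F.');
  // -> { returnValues: { output: 'It is sunny and 85F.' }, log: <full text> }
  console.log(action, finish);
})();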

View File

@@ -0,0 +1,77 @@
const {
ChainStepExecutor,
LLMPlanner,
PlanOutputParser,
PlanAndExecuteAgentExecutor
} = require('langchain/experimental/plan_and_execute');
const { LLMChain } = require('langchain/chains');
const { ChatAgent, AgentExecutor } = require('langchain/agents');
const { BufferMemory, ChatMessageHistory } = require('langchain/memory');
const {
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate
} = require('langchain/prompts');
const DEFAULT_STEP_EXECUTOR_HUMAN_CHAT_MESSAGE_TEMPLATE = `{chat_history}
Previous steps: {previous_steps}
Current objective: {current_step}
{agent_scratchpad}
You may extract and combine relevant data from your previous steps when responding to me.`;
const PLANNER_SYSTEM_PROMPT_MESSAGE_TEMPLATE = [
`Let's first understand the problem and devise a plan to solve the problem.`,
`Please output the plan starting with the header "Plan:"`,
`and then followed by a numbered list of steps.`,
`Please make the plan the minimum number of steps required`,
`to answer the query or complete the task accurately and precisely.`,
`Your steps should be general, and should not require a specific method to solve a step. If the task is a question,`,
`the final step in the plan must be the following: "Given the above steps taken,`,
`please respond to the original query."`,
`At the end of your plan, say "<END_OF_PLAN>"`
].join(' ');
const PLANNER_CHAT_PROMPT = /* #__PURE__ */ ChatPromptTemplate.fromPromptMessages([
/* #__PURE__ */ SystemMessagePromptTemplate.fromTemplate(PLANNER_SYSTEM_PROMPT_MESSAGE_TEMPLATE),
/* #__PURE__ */ HumanMessagePromptTemplate.fromTemplate(`{input}`)
]);
const initializePAEAgent = async ({ tools: _tools, model: llm, pastMessages, ...rest }) => {
//removed currentDateString
const tools = _tools.filter((tool) => tool.name !== 'self-reflection');
const memory = new BufferMemory({
chatHistory: new ChatMessageHistory(pastMessages),
// returnMessages: true, // commenting this out retains memory
memoryKey: 'chat_history',
humanPrefix: 'User',
aiPrefix: 'Assistant',
inputKey: 'input',
outputKey: 'output'
});
const plannerLlmChain = new LLMChain({
llm,
prompt: PLANNER_CHAT_PROMPT,
memory
});
const planner = new LLMPlanner(plannerLlmChain, new PlanOutputParser());
const agent = ChatAgent.fromLLMAndTools(llm, tools, {
humanMessageTemplate: DEFAULT_STEP_EXECUTOR_HUMAN_CHAT_MESSAGE_TEMPLATE
});
const stepExecutor = new ChainStepExecutor(
AgentExecutor.fromAgentAndTools({ agent, tools, memory, ...rest })
);
return new PlanAndExecuteAgentExecutor({
planner,
stepExecutor
});
};
module.exports = {
initializePAEAgent
};

View File

@@ -0,0 +1,31 @@
require('dotenv').config();
const { ChatOpenAI } = require( "langchain/chat_models/openai");
const { initializeAgentExecutorWithOptions } = require( "langchain/agents");
const HttpRequestTool = require('../tools/HttpRequestTool');
const AIPluginTool = require('../tools/AIPluginTool');
const run = async () => {
const openAIApiKey = process.env.OPENAI_API_KEY;
const tools = [
new HttpRequestTool(),
await AIPluginTool.fromPluginUrl(
"https://www.klarna.com/.well-known/ai-plugin.json", new ChatOpenAI({ temperature: 0, openAIApiKey })
),
];
const agent = await initializeAgentExecutorWithOptions(
tools,
new ChatOpenAI({ temperature: 0, openAIApiKey }),
{ agentType: "chat-zero-shot-react-description", verbose: true }
);
const result = await agent.call({
input: "what t shirts are available in klarna?",
});
console.log({ result });
};
(async () => {
await run();
})();

View File

@@ -0,0 +1,47 @@
require('dotenv').config();
const fs = require( "fs");
const yaml = require( "js-yaml");
const { OpenAI } = require( "langchain/llms/openai");
const { JsonSpec } = require( "langchain/tools");
const { createOpenApiAgent, OpenApiToolkit } = require( "langchain/agents");
const run = async () => {
let data;
try {
const yamlFile = fs.readFileSync("./app/langchain/demos/klarna.yaml", "utf8");
data = yaml.load(yamlFile);
if (!data) {
throw new Error("Failed to load OpenAPI spec");
}
} catch (e) {
console.error(e);
return;
}
const headers = {
"Content-Type": "application/json",
// Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
};
const model = new OpenAI({ temperature: 0 });
const toolkit = new OpenApiToolkit(new JsonSpec(data), model, headers);
const executor = createOpenApiAgent(model, toolkit, { verbose: true });
const input = `Find me some medium sized blue shirts`;
console.log(`Executing with input "${input}"...`);
const result = await executor.call({ input });
console.log(`Got output ${result.output}`);
console.log(
`Got intermediate steps ${JSON.stringify(
result.intermediateSteps,
null,
2
)}`
);
};
(async () => {
await run();
})();

View File

@@ -0,0 +1,79 @@
openapi: 3.0.1
servers:
- url: https://www.klarna.com/us/shopping
info:
title: Open AI Klarna product Api
version: v0
x-apisguru-categories:
- ecommerce
x-logo:
url: https://www.klarna.com/static/img/social-prod-imagery-blinds-beauty-default.jpg
x-origin:
- format: openapi
url: https://www.klarna.com/us/shopping/public/openai/v0/api-docs/
version: "3.0"
x-providerName: klarna.com
x-serviceName: openai
tags:
- description: Open AI Product Endpoint. Query for products.
name: open-ai-product-endpoint
paths:
/public/openai/v0/products:
get:
deprecated: false
operationId: productsUsingGET
parameters:
- description: A precise query that matches one very small category or product that needs to be searched for to find the products the user is looking for. If the user explicitly stated what they want, use that as a query. The query is as specific as possible to the product name or category mentioned by the user in its singular form, and don't contain any clarifiers like latest, newest, cheapest, budget, premium, expensive or similar. The query is always taken from the latest topic, if there is a new topic a new query is started.
in: query
name: q
required: true
schema:
type: string
- description: number of products returned
in: query
name: size
required: false
schema:
type: integer
- description: maximum price of the matching product in local currency, filters results
in: query
name: budget
required: false
schema:
type: integer
responses:
"200":
content:
application/json:
schema:
$ref: "#/components/schemas/ProductResponse"
description: Products found
"503":
description: one or more services are unavailable
summary: API for fetching Klarna product information
tags:
- open-ai-product-endpoint
components:
schemas:
Product:
properties:
attributes:
items:
type: string
type: array
name:
type: string
price:
type: string
url:
type: string
title: Product
type: object
ProductResponse:
properties:
products:
items:
$ref: "#/components/schemas/Product"
type: array
title: ProductResponse
type: object

View File

@@ -0,0 +1,32 @@
require('dotenv').config();
const { Calculator } = require('langchain/tools/calculator');
const { SerpAPI } = require('langchain/tools');
const { ChatOpenAI } = require('langchain/chat_models/openai');
const { PlanAndExecuteAgentExecutor } = require('langchain/experimental/plan_and_execute');
const tools = [
new Calculator(),
new SerpAPI(process.env.SERPAPI_API_KEY || '', {
location: 'Austin,Texas,United States',
hl: 'en',
gl: 'us'
})
];
const model = new ChatOpenAI({
temperature: 0,
modelName: 'gpt-3.5-turbo',
verbose: true,
openAIApiKey: process.env.OPENAI_API_KEY
});
const executor = PlanAndExecuteAgentExecutor.fromLLMAndTools({
llm: model,
tools
});
(async () => {
const result = await executor.call({
input: `Who is the current president of the United States? What is their current age raised to the second power?`
});
console.log({ result });
})();

File diff suppressed because it is too large

View File

@@ -0,0 +1,6 @@
module.exports = {
instructions: `Remember, all your responses MUST be in the format described. Do not respond unless it's in the format described, using the structure of Action, Action Input, etc.`,
errorInstructions: `\nYou encountered an error in attempting a response. The user is not aware of the error so you shouldn't mention it.\nReview the actions taken carefully in case there is a partial or complete answer within them.\nError Message:`,
imageInstructions: 'You must include the exact image paths from above, formatted in Markdown syntax: ![alt-text](URL)',
completionInstructions: `Instructions:\nYou are ChatGPT, a large language model trained by OpenAI. Respond conversationally.\nCurrent date:`,
};

View File

@@ -0,0 +1,237 @@
const { Tool } = require('langchain/tools');
const yaml = require('js-yaml');
/*
export interface AIPluginToolParams {
name: string;
description: string;
apiSpec: string;
openaiSpec: string;
model: BaseLanguageModel;
}
export interface PathParameter {
name: string;
description: string;
}
export interface Info {
title: string;
description: string;
version: string;
}
export interface PathMethod {
summary: string;
operationId: string;
parameters?: PathParameter[];
}
interface ApiSpec {
openapi: string;
info: Info;
paths: { [key: string]: { [key: string]: PathMethod } };
}
*/
function isJson(str) {
try {
JSON.parse(str);
} catch (e) {
return false;
}
return true;
}
function convertJsonToYamlIfApplicable(spec) {
if (isJson(spec)) {
const jsonData = JSON.parse(spec);
return yaml.dump(jsonData);
}
return spec;
}
function extractShortVersion(openapiSpec) {
openapiSpec = convertJsonToYamlIfApplicable(openapiSpec);
try {
const fullApiSpec = yaml.load(openapiSpec);
const shortApiSpec = {
openapi: fullApiSpec.openapi,
info: fullApiSpec.info,
paths: {}
};
for (let path in fullApiSpec.paths) {
shortApiSpec.paths[path] = {};
for (let method in fullApiSpec.paths[path]) {
shortApiSpec.paths[path][method] = {
summary: fullApiSpec.paths[path][method].summary,
operationId: fullApiSpec.paths[path][method].operationId,
parameters: fullApiSpec.paths[path][method].parameters?.map((parameter) => ({
name: parameter.name,
description: parameter.description
}))
};
}
}
return yaml.dump(shortApiSpec);
} catch (e) {
console.log(e);
return '';
}
}
function printOperationDetails(operationId, openapiSpec) {
openapiSpec = convertJsonToYamlIfApplicable(openapiSpec);
let returnText = '';
try {
let doc = yaml.load(openapiSpec);
let servers = doc.servers;
let paths = doc.paths;
let components = doc.components;
for (let path in paths) {
for (let method in paths[path]) {
let operation = paths[path][method];
if (operation.operationId === operationId) {
returnText += `The API request to do for operationId "${operationId}" is:\n`;
returnText += `Method: ${method.toUpperCase()}\n`;
let url = servers[0].url + path;
returnText += `Path: ${url}\n`;
returnText += 'Parameters:\n';
if (operation.parameters) {
for (let param of operation.parameters) {
let required = param.required ? '' : ' (optional),';
returnText += `- ${param.name} (${param.in},${required} ${param.schema.type}): ${param.description}\n`;
}
} else {
returnText += ' None\n';
}
returnText += '\n';
let responseSchema = operation.responses['200'].content['application/json'].schema;
// Check if schema is a reference
if (responseSchema.$ref) {
// Extract schema name from reference
let schemaName = responseSchema.$ref.split('/').pop();
// Look up schema in components
responseSchema = components.schemas[schemaName];
}
returnText += 'Response schema:\n';
returnText += '- Type: ' + responseSchema.type + '\n';
returnText += '- Additional properties:\n';
returnText += ' - Type: ' + responseSchema.additionalProperties?.type + '\n';
if (responseSchema.additionalProperties?.properties) {
returnText += ' - Properties:\n';
for (let prop in responseSchema.additionalProperties.properties) {
returnText += ` - ${prop} (${responseSchema.additionalProperties.properties[prop].type}): Description not provided in OpenAPI spec\n`;
}
}
}
}
}
if (returnText === '') {
returnText += `No operation with operationId "${operationId}" found.`;
}
return returnText;
} catch (e) {
console.log(e);
return '';
}
}
class AIPluginTool extends Tool {
/*
private _name: string;
private _description: string;
apiSpec: string;
openaiSpec: string;
model: BaseLanguageModel;
*/
get name() {
return this._name;
}
get description() {
return this._description;
}
constructor(params) {
super();
this._name = params.name;
this._description = params.description;
this.apiSpec = params.apiSpec;
this.openaiSpec = params.openaiSpec;
this.model = params.model;
}
async _call(input) {
let date = new Date();
let fullDate = `Date: ${date.getDate()}/${
date.getMonth() + 1
}/${date.getFullYear()}, Time: ${date.getHours()}:${date.getMinutes()}:${date.getSeconds()}`;
const prompt = `${fullDate}\nQuestion: ${input} \n${this.apiSpec}.`;
console.log(prompt);
const gptResponse = await this.model.predict(prompt);
let operationId = gptResponse.match(/operationId: (.*)/)?.[1];
if (!operationId) {
return 'No operationId found in the response';
}
if (operationId == 'No API path found to answer the question') {
return 'No API path found to answer the question';
}
let openApiData = printOperationDetails(operationId, this.openaiSpec);
return openApiData;
}
static async fromPluginUrl(url, model) {
const aiPluginRes = await fetch(url, {});
if (!aiPluginRes.ok) {
throw new Error(`Failed to fetch plugin from ${url} with status ${aiPluginRes.status}`);
}
const aiPluginJson = await aiPluginRes.json();
const apiUrlRes = await fetch(aiPluginJson.api.url, {});
if (!apiUrlRes.ok) {
throw new Error(
`Failed to fetch API spec from ${aiPluginJson.api.url} with status ${apiUrlRes.status}`
);
}
const apiUrlJson = await apiUrlRes.text();
const shortApiSpec = extractShortVersion(apiUrlJson);
return new AIPluginTool({
name: aiPluginJson.name_for_model.toLowerCase(),
description: `A \`tool\` to learn the API documentation for ${aiPluginJson.name_for_model.toLowerCase()}, after which you can use 'http_request' to make the actual API call. Short description of how to use the API's results: ${aiPluginJson.description_for_model})`,
apiSpec: `
As an AI, your task is to identify the operationId of the relevant API path based on the condensed OpenAPI specifications provided.
Please note:
1. Do not imagine URLs. Only use the information provided in the condensed OpenAPI specifications.
2. Do not guess the operationId. Identify it strictly based on the API paths and their descriptions.
Your output should only include:
- operationId: The operationId of the relevant API path
If you cannot find a suitable API path based on the OpenAPI specifications, please answer only "operationId: No API path found to answer the question".
Now, based on the question above and the condensed OpenAPI specifications given below, identify the operationId:
\`\`\`
${shortApiSpec}
\`\`\`
`,
openaiSpec: apiUrlJson,
model: model
});
}
}
module.exports = AIPluginTool;
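A hedged usage sketch, not part of the diff: the plugin tool is meant to be paired with 'http_request', so the agent first learns the operation details and then performs the HTTP call itself. The agent type and initialization options below are assumptions for illustration, not necessarily what the repo's ChatAgent uses.
const { ChatOpenAI } = require('langchain/chat_models/openai');
const { initializeAgentExecutorWithOptions } = require('langchain/agents');
const HttpRequestTool = require('./HttpRequestTool');
const AIPluginTool = require('./AIPluginTool');
(async () => {
  const model = new ChatOpenAI({ temperature: 0, openAIApiKey: process.env.OPENAI_API_KEY });
  const tools = [
    new HttpRequestTool(),
    await AIPluginTool.fromPluginUrl('https://www.klarna.com/.well-known/ai-plugin.json', model)
  ];
  // 'chat-zero-shot-react-description' is an assumed agent type for this sketch.
  const executor = await initializeAgentExecutorWithOptions(tools, model, {
    agentType: 'chat-zero-shot-react-description'
  });
  const result = await executor.call({ input: 'what t-shirts are available on klarna?' });
  console.log(result.output);
})();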

View File

@@ -0,0 +1,111 @@
// From https://platform.openai.com/docs/api-reference/images/create
// To use this tool, you must pass in a configured OpenAIApi object.
const fs = require('fs');
const { Configuration, OpenAIApi } = require('openai');
const { genAzureEndpoint } = require('../../../utils/genAzureEndpoints');
const { Tool } = require('langchain/tools');
const saveImageFromUrl = require('./saveImageFromUrl');
const path = require('path');
class OpenAICreateImage extends Tool {
constructor(fields = {}) {
super();
let apiKey = fields.OPENAI_API_KEY || process.env.OPENAI_API_KEY;
let azureKey = fields.AZURE_OPENAI_API_KEY || process.env.AZURE_OPENAI_API_KEY;
let config = { apiKey };
if (azureKey) {
apiKey = azureKey;
const azureConfig = {
apiKey,
azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME || fields.azureOpenAIApiInstanceName,
azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME || fields.azureOpenAIApiDeploymentName,
azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION || fields.azureOpenAIApiVersion
};
config = {
apiKey,
basePath: genAzureEndpoint({
...azureConfig,
}),
baseOptions: {
headers: { 'api-key': apiKey },
params: {
'api-version': azureConfig.azureOpenAIApiVersion // this might change. I got the current value from the sample code at https://oai.azure.com/portal/chat
}
}
};
}
this.openaiApi = new OpenAIApi(new Configuration(config));
this.name = 'dall-e';
this.description = `You can generate images with 'dall-e'. This tool is exclusively for visual content.
Guidelines:
- Visually describe the moods, details, structures, styles, and/or proportions of the image. Remember, the focus is on visual attributes.
- Craft your input by "showing" and not "telling" the imagery. Think in terms of what you'd want to see in a photograph or a painting.
- It's best to follow this format for image creation. Come up with the optional inputs yourself if none are given:
"Subject: [subject], Style: [style], Color: [color], Details: [details], Emotion: [emotion]"
- Generate images only once per human query unless explicitly requested by the user`;
}
// "Subject": "Mona Lisa",
// "Style": "Chinese traditional painting",
// "Color": "Mainly wash tones of ink, with small color blocks in some parts",
// "Details": "Mona Lisa should have long hair, a silk dress, holding a fan. The background should have mountains and trees.",
// "Emotion": "Serene and elegant"
replaceUnwantedChars(inputString) {
return inputString.replace(/\r\n|\r|\n/g, ' ').replace('"', '').trim();
}
getMarkdownImageUrl(imageName) {
const imageUrl = path.join(this.relativeImageUrl, imageName).replace(/\\/g, '/').replace('public/', '');
return `![generated image](/${imageUrl})`;
}
async _call(input) {
const resp = await this.openaiApi.createImage({
prompt: this.replaceUnwantedChars(input),
// TODO: Future idea -- could we ask an LLM to extract these arguments from an input that might contain them?
n: 1,
// size: '1024x1024'
size: '512x512'
});
const theImageUrl = resp.data.data[0].url;
if (!theImageUrl) {
throw new Error(`No image URL returned from OpenAI API.`);
}
const regex = /img-[\w\d]+.png/;
const match = theImageUrl.match(regex);
let imageName = '1.png';
if (match) {
imageName = match[0];
console.log(imageName); // Output: img-lgCf7ppcbhqQrz6a5ear6FOb.png
} else {
console.log('No image name found in the string.');
}
this.outputPath = path.resolve(__dirname, '..', '..', '..', '..', 'client', 'public', 'images');
const appRoot = path.resolve(__dirname, '..', '..', '..', '..', 'client');
this.relativeImageUrl = path.relative(appRoot, this.outputPath);
// Check if directory exists, if not create it
if (!fs.existsSync(this.outputPath)) {
fs.mkdirSync(this.outputPath, { recursive: true });
}
try {
await saveImageFromUrl(theImageUrl, this.outputPath, imageName);
this.result = this.getMarkdownImageUrl(imageName);
} catch (error) {
console.error('Error while saving the image:', error);
this.result = theImageUrl;
}
return this.result;
}
}
module.exports = OpenAICreateImage;
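A hedged sketch of calling the tool directly, outside an agent; it assumes OPENAI_API_KEY is set and follows the "Subject/Style/Color/Details/Emotion" input format suggested in the description.
const OpenAICreateImage = require('./DALL-E');
(async () => {
  const dalle = new OpenAICreateImage();
  const markdown = await dalle._call(
    'Subject: a lighthouse at dusk, Style: oil painting, Color: warm oranges, Details: crashing waves, Emotion: calm'
  );
  console.log(markdown); // e.g. ![generated image](/images/img-....png)
})();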

View File

@@ -0,0 +1,117 @@
const { Tool } = require('langchain/tools');
const { google } = require('googleapis');
/**
* Represents a tool that allows an agent to use the Google Custom Search API.
* @extends Tool
*/
class GoogleSearchAPI extends Tool {
constructor(fields = {}) {
super();
this.cx = fields.GOOGLE_CSE_ID || this.getCx();
this.apiKey = fields.GOOGLE_API_KEY || this.getApiKey();
this.customSearch = undefined;
}
/**
* The name of the tool.
* @type {string}
*/
name = 'google';
/**
* A description for the agent to use
* @type {string}
*/
description = `Use the 'google' tool to retrieve internet search results relevant to your input. The results will return links and snippets of text from the webpages`;
getCx() {
const cx = process.env.GOOGLE_CSE_ID || '';
if (!cx) {
throw new Error('Missing GOOGLE_CSE_ID environment variable.');
}
return cx;
}
getApiKey() {
const apiKey = process.env.GOOGLE_API_KEY || '';
if (!apiKey) {
throw new Error('Missing GOOGLE_API_KEY environment variable.');
}
return apiKey;
}
getCustomSearch() {
if (!this.customSearch) {
const version = 'v1';
this.customSearch = google.customsearch(version);
}
return this.customSearch;
}
resultsToReadableFormat(results) {
let output = 'Results:\n';
results.forEach((resultObj, index) => {
output += `Title: ${resultObj.title}\n`;
output += `Link: ${resultObj.link}\n`;
if (resultObj.snippet) {
output += `Snippet: ${resultObj.snippet}\n`;
}
if (index < results.length - 1) {
output += '\n';
}
});
return output;
}
/**
* Calls the tool with the provided input and returns a promise that resolves with a response from the Google Custom Search API.
* @param {string} input - The input to provide to the API.
* @returns {Promise<String>} A promise that resolves with a response from the Google Custom Search API.
*/
async _call(input) {
try {
const metadataResults = [];
const response = await this.getCustomSearch().cse.list({
q: input,
cx: this.cx,
auth: this.apiKey,
num: 5 // Limit the number of results to 5
});
// return response.data;
// console.log(response.data);
if (!response.data.items || response.data.items.length === 0) {
return this.resultsToReadableFormat([
{ title: 'No good Google Search Result was found', link: '' }
]);
}
// const results = response.items.slice(0, numResults);
const results = response.data.items;
for (const result of results) {
const metadataResult = {
title: result.title || '',
link: result.link || ''
};
if (result.snippet) {
metadataResult.snippet = result.snippet;
}
metadataResults.push(metadataResult);
}
return this.resultsToReadableFormat(metadataResults);
} catch (error) {
console.log(`Error searching Google: ${error}`);
// throw error;
return 'There was an error searching Google.';
}
}
}
module.exports = GoogleSearchAPI;
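A hedged sketch of a direct call with credentials passed in as fields rather than environment variables; the key values are placeholders.
const GoogleSearchAPI = require('./GoogleSearch');
(async () => {
  const search = new GoogleSearchAPI({
    GOOGLE_CSE_ID: 'your-cse-id',   // placeholder
    GOOGLE_API_KEY: 'your-api-key'  // placeholder
  });
  console.log(await search._call('langchain javascript tools'));
})();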

View File

@@ -0,0 +1,107 @@
const { Tool } = require('langchain/tools');
// class RequestsGetTool extends Tool {
// constructor(headers = {}, { maxOutputLength } = {}) {
// super();
// this.name = 'requests_get';
// this.headers = headers;
// this.maxOutputLength = maxOutputLength || 2000;
// this.description = `A portal to the internet. Use this when you need to get specific content from a website.
// - Input should be a url (i.e. https://www.google.com). The output will be the text response of the GET request.`;
// }
// async _call(input) {
// const res = await fetch(input, {
// headers: this.headers
// });
// const text = await res.text();
// return text.slice(0, this.maxOutputLength);
// }
// }
// class RequestsPostTool extends Tool {
// constructor(headers = {}, { maxOutputLength } = {}) {
// super();
// this.name = 'requests_post';
// this.headers = headers;
// this.maxOutputLength = maxOutputLength || Infinity;
// this.description = `Use this when you want to POST to a website.
// - Input should be a json string with two keys: "url" and "data".
// - The value of "url" should be a string, and the value of "data" should be a dictionary of
// - key-value pairs you want to POST to the url as a JSON body.
// - Be careful to always use double quotes for strings in the json string
// - The output will be the text response of the POST request.`;
// }
// async _call(input) {
// try {
// const { url, data } = JSON.parse(input);
// const res = await fetch(url, {
// method: 'POST',
// headers: this.headers,
// body: JSON.stringify(data)
// });
// const text = await res.text();
// return text.slice(0, this.maxOutputLength);
// } catch (error) {
// return `${error}`;
// }
// }
// }
class HttpRequestTool extends Tool {
constructor(headers = {}, { maxOutputLength = Infinity } = {}) {
super();
this.headers = headers;
this.name = 'http_request';
this.maxOutputLength = maxOutputLength;
this.description = `Executes HTTP methods (GET, POST, PUT, DELETE, etc.). The input is an object with three keys: "url", "method", and "data". Even for GET or DELETE, include "data" key as an empty string. "method" is the HTTP method, and "url" is the desired endpoint. If POST or PUT, "data" should contain a stringified JSON representing the body to send. Only one url per use.`;
}
async _call(input) {
try {
const urlPattern = /"url":\s*"([^"]*)"/;
const methodPattern = /"method":\s*"([^"]*)"/;
const dataPattern = /"data":\s*"([^"]*)"/;
const url = input.match(urlPattern)[1];
const method = input.match(methodPattern)[1];
let data = input.match(dataPattern)[1];
// Parse 'data' back to JSON if possible
try {
data = JSON.parse(data);
} catch (e) {
// If it's not a JSON string, keep it as is
}
let options = {
method: method,
headers: this.headers
};
if (['POST', 'PUT', 'PATCH'].includes(method.toUpperCase()) && data) {
if (typeof data === 'object') {
options.body = JSON.stringify(data);
} else {
options.body = data;
}
options.headers['Content-Type'] = 'application/json';
}
const res = await fetch(url, options);
const text = await res.text();
if (text.includes('<html')) {
return 'This tool is not designed to browse web pages. Only use it for API calls.';
}
return text.slice(0, this.maxOutputLength);
} catch (error) {
console.log(error);
return `${error}`;
}
}
}
module.exports = HttpRequestTool;
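A hedged sketch of the single-string input the regex parsing above expects: one JSON-like string with "url", "method" and "data" keys, where "data" stays an empty string for GET or DELETE.
const HttpRequestTool = require('./HttpRequestTool');
(async () => {
  const http = new HttpRequestTool({ 'User-Agent': 'chatgpt-clone' });
  // GET request; "data" must still be present, as an empty string.
  const input = '{"url": "https://api.github.com/zen", "method": "GET", "data": ""}';
  console.log(await http._call(input));
})();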

View File

@@ -0,0 +1,30 @@
const { Tool } = require('langchain/tools');
/**
* Represents a tool that allows an agent to ask a human for guidance when they are stuck
* or unsure of what to do next.
* @extends Tool
*/
class HumanTool extends Tool {
/**
* The name of the tool.
* @type {string}
*/
name = 'Human';
/**
* A description for the agent to use
* @type {string}
*/
description = `You can ask a human for guidance when you think you
got stuck or you are not sure what to do next.
The input should be a question for the human.`;
/**
* Calls the tool with the provided input and returns a promise that resolves with a response from the human.
* @param {string} input - The input to provide to the human.
* @returns {Promise<string>} A promise that resolves with a response from the human.
*/
_call(input) {
return Promise.resolve(`${input}`);
}
}
module.exports = HumanTool;

View File

@@ -0,0 +1,27 @@
const { Tool } = require('langchain/tools');
class SelfReflectionTool extends Tool {
constructor({ message, isGpt3 }) {
super();
this.reminders = 0;
this.name = 'self-reflection';
this.description = `Take this action to reflect on your thoughts & actions. For your input, provide answers for self-evaluation as part of one input, using this space as a canvas to explore and organize your ideas in response to the user's message. You can use multiple lines for your input. Perform this action sparingly and only when you are stuck.`;
this.message = message;
this.isGpt3 = isGpt3;
// this.returnDirect = true;
}
async _call(input) {
return this.selfReflect(input);
}
async selfReflect() {
if (this.isGpt3) {
return `I should finalize my reply as soon as I have satisfied the user's query.`;
} else {
return ``;
}
}
}
module.exports = SelfReflectionTool;

View File

@@ -0,0 +1,85 @@
// Generates image using stable diffusion webui's api (automatic1111)
const fs = require('fs');
const { Tool } = require('langchain/tools');
const path = require('path');
const axios = require('axios');
const sharp = require('sharp');
class StableDiffusionAPI extends Tool {
constructor(fields) {
super();
this.name = 'stable-diffusion';
this.url = fields.SD_WEBUI_URL || this.getServerURL();
this.description = `You can generate images with 'stable-diffusion'. This tool is exclusively for visual content.
Guidelines:
- Visually describe the moods, details, structures, styles, and/or proportions of the image. Remember, the focus is on visual attributes.
- Craft your input by "showing" and not "telling" the imagery. Think in terms of what you'd want to see in a photograph or a painting.
- It's best to follow this format for image creation:
"detailed keywords to describe the subject, separated by comma | keywords we want to exclude from the final image"
- Here's an example prompt for generating a realistic portrait photo of a man:
"photo of a man in black clothes, half body, high detailed skin, coastline, overcast weather, wind, waves, 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 | semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, out of frame, low quality, ugly, mutation, deformed"
- Generate images only once per human query unless explicitly requested by the user`;
}
replaceNewLinesWithSpaces(inputString) {
return inputString.replace(/\r\n|\r|\n/g, ' ');
}
getMarkdownImageUrl(imageName) {
const imageUrl = path.join(this.relativeImageUrl, imageName).replace(/\\/g, '/').replace('public/', '');
return `![generated image](/${imageUrl})`;
}
getServerURL() {
const url = process.env.SD_WEBUI_URL || '';
if (!url) {
throw new Error('Missing SD_WEBUI_URL environment variable.');
}
return url;
}
async _call(input) {
const url = this.url;
const payload = {
prompt: input.split('|')[0],
negative_prompt: input.split('|')[1],
steps: 20
};
const response = await axios.post(`${url}/sdapi/v1/txt2img`, payload);
const image = response.data.images[0];
const pngPayload = { image: `data:image/png;base64,${image}` };
const response2 = await axios.post(`${url}/sdapi/v1/png-info`, pngPayload);
const info = response2.data.info;
// Generate unique name
const imageName = `${Date.now()}.png`;
this.outputPath = path.resolve(__dirname, '..', '..', '..', '..', 'client', 'public', 'images');
const appRoot = path.resolve(__dirname, '..', '..', '..', '..', 'client');
this.relativeImageUrl = path.relative(appRoot, this.outputPath);
// Check if directory exists, if not create it
if (!fs.existsSync(this.outputPath)) {
fs.mkdirSync(this.outputPath, { recursive: true });
}
try {
const buffer = Buffer.from(image.split(',', 1)[0], 'base64');
await sharp(buffer)
.withMetadata({
iptcpng: {
parameters: info
}
})
.toFile(this.outputPath + '/' + imageName);
this.result = this.getMarkdownImageUrl(imageName);
} catch (error) {
console.error('Error while saving the image:', error);
// this.result = theImageUrl;
}
return this.result;
}
}
module.exports = StableDiffusionAPI;
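A hedged sketch of a direct call using the "prompt | negative prompt" format described in the tool description; it assumes a local automatic1111 server at the given URL.
const StableDiffusionAPI = require('./StableDiffusion');
(async () => {
  const sd = new StableDiffusionAPI({ SD_WEBUI_URL: 'http://127.0.0.1:7860' }); // assumed local server
  const markdown = await sd._call(
    'photo of a red fox in snow, high detail, 8k uhd, dslr | cartoon, low quality, blurry'
  );
  console.log(markdown);
})();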

View File

@@ -0,0 +1,82 @@
/* eslint-disable no-useless-escape */
const axios = require('axios');
const { Tool } = require('langchain/tools');
class WolframAlphaAPI extends Tool {
constructor(fields) {
super();
this.name = 'wolfram';
this.apiKey = fields.WOLFRAM_APP_ID || this.getAppId();
this.description = `Access computation, math, curated knowledge & real-time data through wolframAlpha.
- Understands natural language queries about entities in chemistry, physics, geography, history, art, astronomy, and more.
- Performs mathematical calculations, date and unit conversions, formula solving, etc.
General guidelines:
- Make natural-language queries in English; translate non-English queries before sending, then respond in the original language.
- Inform users if information is not from wolfram.
- ALWAYS use this exponent notation: "6*10^14", NEVER "6e14".
- Your input must ONLY be a single-line string.
- ALWAYS use proper Markdown formatting for all math, scientific, and chemical formulas, symbols, etc.: '$$\n[expression]\n$$' for standalone cases and '\( [expression] \)' when inline.
- Format inline wolfram Language code with Markdown code formatting.
- Convert inputs to simplified keyword queries whenever possible (e.g. convert "how many people live in France" to "France population").
- Use ONLY single-letter variable names, with or without integer subscript (e.g., n, n1, n_1).
- Use named physical constants (e.g., 'speed of light') without numerical substitution.
- Include a space between compound units (e.g., "Ω m" for "ohm*meter").
- To solve for a variable in an equation with units, consider solving a corresponding equation without units; exclude counting units (e.g., books), include genuine units (e.g., kg).
- If data for multiple properties is needed, make separate calls for each property.
- If a wolfram Alpha result is not relevant to the query:
-- If wolfram provides multiple 'Assumptions' for a query, choose the more relevant one(s) without explaining the initial result. If you are unsure, ask the user to choose.
- Performs complex calculations, data analysis, plotting, data import, and information retrieval.`;
// - Please ensure your input is properly formatted for wolfram Alpha.
// -- Re-send the exact same 'input' with NO modifications, and add the 'assumption' parameter, formatted as a list, with the relevant values.
// -- ONLY simplify or rephrase the initial query if a more relevant 'Assumption' or other input suggestions are not provided.
// -- Do not explain each step unless user input is needed. Proceed directly to making a better input based on the available assumptions.
// - wolfram Language code is accepted, but accepts only syntactically correct wolfram Language code.
}
async fetchRawText(url) {
try {
const response = await axios.get(url, { responseType: 'text' });
return response.data;
} catch (error) {
console.error(`Error fetching raw text: ${error}`);
throw error;
}
}
getAppId() {
const appId = process.env.WOLFRAM_APP_ID || '';
if (!appId) {
throw new Error('Missing WOLFRAM_APP_ID environment variable.');
}
return appId;
}
createWolframAlphaURL(query) {
// Clean up query
const formattedQuery = query.replaceAll(/`/g, '').replaceAll(/\n/g, ' ');
const baseURL = 'https://www.wolframalpha.com/api/v1/llm-api';
const encodedQuery = encodeURIComponent(formattedQuery);
const appId = this.apiKey || this.getAppId();
const url = `${baseURL}?input=${encodedQuery}&appid=${appId}`;
return url;
}
async _call(input) {
try {
const url = this.createWolframAlphaURL(input);
const response = await this.fetchRawText(url);
return response;
} catch (error) {
if (error.response && error.response.data) {
console.log('Error data:', error.response.data);
return error.response.data;
} else {
console.log(`Error querying Wolfram Alpha`, error.message);
// throw error;
return 'There was an error querying Wolfram Alpha.';
}
}
}
}
module.exports = WolframAlphaAPI;
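A hedged sketch of a single query against the Wolfram|Alpha LLM API endpoint used above; it assumes WOLFRAM_APP_ID is set.
const WolframAlphaAPI = require('./Wolfram');
(async () => {
  const wolfram = new WolframAlphaAPI({ WOLFRAM_APP_ID: process.env.WOLFRAM_APP_ID });
  console.log(await wolfram._call('population of France'));
})();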

View File

@@ -0,0 +1,158 @@
const { OpenAIEmbeddings } = require('langchain/embeddings/openai');
const { ZapierToolKit } = require('langchain/agents');
const {
SerpAPI,
ZapierNLAWrapper
} = require('langchain/tools');
const { ChatOpenAI } = require('langchain/chat_models/openai');
const { Calculator } = require('langchain/tools/calculator');
const { WebBrowser } = require('langchain/tools/webbrowser');
const GoogleSearchAPI = require('./GoogleSearch');
const HttpRequestTool = require('./HttpRequestTool');
const AIPluginTool = require('./AIPluginTool');
const OpenAICreateImage = require('./DALL-E');
const StableDiffusionAPI = require('./StableDiffusion');
const WolframAlphaAPI = require('./Wolfram');
const availableTools = require('./manifest.json');
const { getUserPluginAuthValue } = require('../../../server/services/PluginService');
const validateTools = async (user, tools = []) => {
try {
const validToolsSet = new Set(tools);
const availableToolsToValidate = availableTools.filter((tool) =>
validToolsSet.has(tool.pluginKey)
);
const validateCredentials = async (authField, toolName) => {
const adminAuth = process.env[authField];
if (adminAuth && adminAuth.length > 0) {
return;
}
const userAuth = await getUserPluginAuthValue(user, authField);
if (userAuth && userAuth.length > 0) {
return;
}
validToolsSet.delete(toolName);
};
for (const tool of availableToolsToValidate) {
if (!tool.authConfig || tool.authConfig.length === 0) {
continue;
}
for (const auth of tool.authConfig) {
await validateCredentials(auth.authField, tool.pluginKey);
}
}
return Array.from(validToolsSet.values());
} catch (err) {
console.log('There was a problem validating tools', err);
throw new Error(err);
}
};
const loadToolWithAuth = async (user, authFields, ToolConstructor, options = {}) => {
return async function () {
let authValues = {};
for (const authField of authFields) {
let authValue = process.env[authField];
if (!authValue) {
authValue = await getUserPluginAuthValue(user, authField);
}
authValues[authField] = authValue;
}
return new ToolConstructor({ ...options, ...authValues });
};
};
const loadTools = async ({ user, model, tools = [], options = {} }) => {
const toolConstructors = {
calculator: Calculator,
google: GoogleSearchAPI,
wolfram: WolframAlphaAPI,
'dall-e': OpenAICreateImage,
'stable-diffusion': StableDiffusionAPI
};
const customConstructors = {
browser: async () => {
let openAIApiKey = process.env.OPENAI_API_KEY;
if (!openAIApiKey) {
openAIApiKey = await getUserPluginAuthValue(user, 'OPENAI_API_KEY');
}
return new WebBrowser({ model, embeddings: new OpenAIEmbeddings({ openAIApiKey }) });
},
serpapi: async () => {
let apiKey = process.env.SERPAPI_API_KEY;
if (!apiKey) {
apiKey = await getUserPluginAuthValue(user, 'SERPAPI_API_KEY');
}
return new SerpAPI(apiKey, {
location: 'Austin,Texas,United States',
hl: 'en',
gl: 'us'
});
},
zapier: async () => {
let apiKey = process.env.ZAPIER_NLA_API_KEY;
if (!apiKey) {
apiKey = await getUserPluginAuthValue(user, 'ZAPIER_NLA_API_KEY');
}
const zapier = new ZapierNLAWrapper({ apiKey });
return ZapierToolKit.fromZapierNLAWrapper(zapier);
},
plugins: async () => {
return [
new HttpRequestTool(),
await AIPluginTool.fromPluginUrl(
"https://www.klarna.com/.well-known/ai-plugin.json", new ChatOpenAI({ openAIApiKey: options.openAIApiKey, temperature: 0 })
),
]
}
};
const requestedTools = {};
const toolOptions = {
serpapi: { location: 'Austin,Texas,United States', hl: 'en', gl: 'us' }
};
const toolAuthFields = {};
availableTools.forEach((tool) => {
if (customConstructors[tool.pluginKey]) {
return;
}
toolAuthFields[tool.pluginKey] = tool.authConfig.map((auth) => auth.authField);
});
for (const tool of tools) {
if (customConstructors[tool]) {
requestedTools[tool] = customConstructors[tool];
continue;
}
if (toolConstructors[tool]) {
const options = toolOptions[tool] || {};
const toolInstance = await loadToolWithAuth(
user,
toolAuthFields[tool],
toolConstructors[tool],
options
);
requestedTools[tool] = toolInstance;
}
}
return requestedTools;
};
module.exports = {
validateTools,
loadTools
};
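A hedged sketch of the intended flow: validate the requested plugins first, then lazily instantiate only the credentialed tools. The user id is a placeholder, and a model is only needed for tools such as 'browser'.
const { validateTools, loadTools } = require('./handleTools');
(async () => {
  const userId = 'mongo-user-id';                       // placeholder
  const requested = ['calculator', 'google', 'dall-e'];
  const valid = await validateTools(userId, requested);
  const loaders = await loadTools({ user: userId, model: null, tools: valid });
  const tools = await Promise.all(Object.values(loaders).map((load) => load()));
  console.log(tools.map((tool) => tool.name));
})();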

View File

@@ -0,0 +1,10 @@
const SelfReflectionTool = require('./SelfReflection');
const availableTools = require('./manifest.json');
const { validateTools, loadTools } = require('./handleTools');
module.exports = {
validateTools,
loadTools,
availableTools,
SelfReflectionTool
};

View File

@@ -0,0 +1,158 @@
/* eslint-disable jest/no-conditional-expect */
require('dotenv').config({ path: '../../../.env' });
const mongoose = require('mongoose');
const User = require('../../../models/User');
const connectDb = require('../../../lib/db/connectDb');
const { validateTools, loadTools, availableTools } = require('./index');
const PluginService = require('../../../server/services/PluginService');
const { BaseChatModel } = require('langchain/chat_models/openai');
const { Calculator } = require('langchain/tools/calculator');
const OpenAICreateImage = require('./DALL-E');
const GoogleSearchAPI = require('./GoogleSearch');
describe('Tool Handlers', () => {
let fakeUser;
let pluginKey = 'dall-e';
let pluginKey2 = 'wolfram';
let sampleTools = [pluginKey, pluginKey2];
let ToolClass = OpenAICreateImage;
let mockCredential = 'mock-credential';
const mainPlugin = availableTools.find((tool) => tool.pluginKey === pluginKey);
const authConfigs = mainPlugin.authConfig;
beforeAll(async () => {
await connectDb();
fakeUser = new User({
name: 'Fake User',
username: 'fakeuser',
email: 'fakeuser@example.com',
emailVerified: false,
password: 'fakepassword123',
avatar: '',
provider: 'local',
role: 'USER',
googleId: null,
plugins: [],
refreshToken: []
});
await fakeUser.save();
for (const authConfig of authConfigs) {
await PluginService.updateUserPluginAuth(fakeUser._id, authConfig.authField, pluginKey, mockCredential);
}
});
// afterEach(async () => {
// // Clean up any test-specific data.
// });
afterAll(async () => {
// Delete the fake user & plugin auth
await User.findByIdAndDelete(fakeUser._id);
for (const authConfig of authConfigs) {
await PluginService.deleteUserPluginAuth(fakeUser._id, authConfig.authField);
}
await mongoose.connection.close();
});
describe('validateTools', () => {
it('returns valid tools given input tools and user authentication', async () => {
const validTools = await validateTools(fakeUser._id, sampleTools);
expect(validTools).toBeDefined();
console.log('validateTools: validTools', validTools);
expect(validTools.some((tool) => tool === pluginKey)).toBeTruthy();
expect(validTools.length).toBeGreaterThan(0);
});
it('removes tools without valid credentials from the validTools array', async () => {
const validTools = await validateTools(fakeUser._id, sampleTools);
expect(validTools.some((tool) => tool.pluginKey === pluginKey2)).toBeFalsy();
});
it('returns an empty array when no authenticated tools are provided', async () => {
const validTools = await validateTools(fakeUser._id, []);
expect(validTools).toEqual([]);
});
it('should validate a tool from an Environment Variable', async () => {
const plugin = availableTools.find((tool) => tool.pluginKey === pluginKey2);
const authConfigs = plugin.authConfig;
for (const authConfig of authConfigs) {
process.env[authConfig.authField] = mockCredential;
}
const validTools = await validateTools(fakeUser._id, [pluginKey2]);
expect(validTools.length).toEqual(1);
for (const authConfig of authConfigs) {
delete process.env[authConfig.authField];
}
});
});
describe('loadTools', () => {
let toolFunctions;
let loadTool1;
let loadTool2;
let loadTool3;
sampleTools = [...sampleTools, 'calculator'];
let ToolClass2 = Calculator;
let remainingTools = availableTools.filter(
(tool) => sampleTools.indexOf(tool.pluginKey) === -1
);
beforeAll(async () => {
toolFunctions = await loadTools({
user: fakeUser._id,
model: BaseChatModel,
tools: sampleTools
});
loadTool1 = toolFunctions[sampleTools[0]];
loadTool2 = toolFunctions[sampleTools[1]];
loadTool3 = toolFunctions[sampleTools[2]];
});
it('returns the expected load functions for requested tools', async () => {
expect(loadTool1).toBeDefined();
expect(loadTool2).toBeDefined();
expect(loadTool3).toBeDefined();
for (const tool of remainingTools) {
expect(toolFunctions[tool.pluginKey]).toBeUndefined();
}
});
it('should initialize an authenticated tool or one without authentication', async () => {
const authTool = await loadTool1();
const tool = await loadTool3();
expect(authTool).toBeInstanceOf(ToolClass);
expect(tool).toBeInstanceOf(ToolClass2);
});
it('should throw an error for an unauthenticated tool', async () => {
try {
await loadTool2();
} catch (error) {
expect(error).toBeDefined();
}
});
it('should initialize an authenticated tool through Environment Variables', async () => {
let testPluginKey = 'google';
let TestClass = GoogleSearchAPI;
const plugin = availableTools.find((tool) => tool.pluginKey === testPluginKey);
const authConfigs = plugin.authConfig;
for (const authConfig of authConfigs) {
process.env[authConfig.authField] = mockCredential;
}
toolFunctions = await loadTools({
user: fakeUser._id,
model: BaseChatModel,
tools: [testPluginKey]
});
const Tool = await toolFunctions[testPluginKey]();
expect(Tool).toBeInstanceOf(TestClass);
});
it('returns an empty object when no tools are requested', async () => {
toolFunctions = await loadTools({
user: fakeUser._id,
model: BaseChatModel
});
expect(toolFunctions).toEqual({});
});
});
});

View File

@@ -0,0 +1,106 @@
[
{
"name": "Google",
"pluginKey": "google",
"description": "Use Google Search to find information about the weather, news, sports, and more.",
"icon": "https://i.imgur.com/SMmVkNB.png",
"authConfig": [
{
"authField": "GOOGLE_CSE_ID",
"label": "Google CSE ID",
"description": "This is your Google Custom Search Engine ID. For instructions on how to obtain this, see <a href='https://github.com/danny-avila/chatgpt-clone/blob/main/guides/GOOGLE_SEARCH.md'>Our Docs</a>."
},
{
"authField": "GOOGLE_API_KEY",
"label": "Google API Key",
"description": "This is your Google Custom Search API Key. For instructions on how to obtain this, see <a href='https://github.com/danny-avila/chatgpt-clone/blob/main/guides/GOOGLE_SEARCH.md'>Our Docs</a>."
}
]
},
{
"name": "Wolfram",
"pluginKey": "wolfram",
"description": "Access computation, math, curated knowledge & real-time data through Wolfram|Alpha and Wolfram Language.",
"icon": "https://www.wolframcdn.com/images/icons/Wolfram.png",
"authConfig": [
{
"authField": "WOLFRAM_APP_ID",
"label": "Wolfram App ID",
"description": "An AppID must be supplied in all calls to the Wolfram|Alpha API. You can get one by registering at <a href='http://products.wolframalpha.com/api/'>Wolfram|Alpha</a> and going to the <a href='https://developer.wolframalpha.com/portal/myapps/'>Developer Portal</a>."
}
]
},
{
"name": "Browser",
"pluginKey": "browser",
"description": "Scrape and summarize webpage data",
"icon": "/assets/web-browser.png",
"authConfig": [
{
"authField": "OPENAI_API_KEY",
"label": "OpenAI API Key",
"description": "Browser makes use of OpenAI embeddings"
}
]
},
{
"name": "Serpapi",
"pluginKey": "serpapi",
"description": "SerpApi is a real-time API to access search engine results.",
"icon": "https://i.imgur.com/5yQHUz4.png",
"authConfig": [
{
"authField": "SERPAPI_API_KEY",
"label": "Serpapi Private API Key",
"description": "Private Key for Serpapi. Register at <a href='https://serpapi.com/'>Serpapi</a> to obtain a private key."
}
]
},
{
"name": "DALL-E",
"pluginKey": "dall-e",
"description": "Create realistic images and art from a description in natural language",
"icon": "https://i.imgur.com/u2TzXzH.png",
"authConfig": [
{
"authField": "DALLE_API_KEY",
"label": "OpenAI API Key",
"description": "You can use DALL-E with your API Key from OpenAI."
}
]
},
{
"name": "Calculator",
"pluginKey": "calculator",
"description": "Perform simple and complex mathematical calculations.",
"icon": "https://i.imgur.com/RHsSG5h.png",
"isAuthRequired": "false",
"authConfig": []
},
{
"name": "Stable Diffusion",
"pluginKey": "stable-diffusion",
"description": "Generate photo-realistic images given any text input.",
"icon": "https://i.imgur.com/Yr466dp.png",
"authConfig": [
{
"authField": "SD_WEBUI_URL",
"label": "Your Stable Diffusion WebUI API URL",
"description": "You need to provide the URL of your Stable Diffusion WebUI API. For instructions on how to obtain this, see <a href='url'>Our Docs</a>."
}
]
},
{
"name": "Zapier",
"pluginKey": "zapier",
"description": "Interact with over 5,000+ apps like Google Sheets, Gmail, HubSpot, Salesforce, and thousands more.",
"icon": "https://cdn.zappy.app/8f853364f9b383d65b44e184e04689ed.png",
"authConfig": [
{
"authField": "ZAPIER_NLA_API_KEY",
"label": "Zapier API Key",
"description": "You can use Zapier with your API Key from Zapier."
}
]
}
]

View File

@@ -0,0 +1,39 @@
const axios = require('axios');
const fs = require('fs');
const path = require('path');
async function saveImageFromUrl(url, outputPath, outputFilename) {
try {
// Fetch the image from the URL
const response = await axios({
url,
responseType: 'stream'
});
// Check if the output directory exists, if not, create it
if (!fs.existsSync(outputPath)) {
fs.mkdirSync(outputPath, { recursive: true });
}
// Ensure the output filename has a '.png' extension
const filenameWithPngExt = outputFilename.endsWith('.png')
? outputFilename
: `${outputFilename}.png`;
// Create a writable stream for the output path
const outputFilePath = path.join(outputPath, filenameWithPngExt);
const writer = fs.createWriteStream(outputFilePath);
// Pipe the response data to the output file
response.data.pipe(writer);
return new Promise((resolve, reject) => {
writer.on('finish', resolve);
writer.on('error', reject);
});
} catch (error) {
console.error('Error while saving the image:', error);
}
}
module.exports = saveImageFromUrl;
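A hedged sketch of saving a generated image to a local directory; the URL and output directory are placeholders.
const path = require('path');
const saveImageFromUrl = require('./saveImageFromUrl');
(async () => {
  const outputPath = path.resolve(__dirname, 'images');                                   // placeholder directory
  await saveImageFromUrl('https://example.com/img-abc123.png', outputPath, 'img-abc123'); // '.png' is appended
  console.log('saved to', path.join(outputPath, 'img-abc123.png'));
})();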

View File

@@ -0,0 +1,60 @@
Certainly! Here is the text above:
\`\`\`
Assistant is a large language model trained by OpenAI.
Knowledge Cutoff: 2021-09
Current date: 2023-05-06
# Tools
## Wolfram
// Access dynamic computation and curated data from WolframAlpha and Wolfram Cloud.
General guidelines:
- Use only getWolframAlphaResults or getWolframCloudResults endpoints.
- Prefer getWolframAlphaResults unless Wolfram Language code should be evaluated.
- Use getWolframAlphaResults for natural-language queries in English; translate non-English queries before sending, then respond in the original language.
- Use getWolframCloudResults for problems solvable with Wolfram Language code.
- Suggest only Wolfram Language for external computation.
- Inform users if information is not from Wolfram endpoints.
- Display image URLs with Markdown syntax: ![URL]
- ALWAYS use this exponent notation: \`6*10^14\`, NEVER \`6e14\`.
- ALWAYS use {"input": query} structure for queries to Wolfram endpoints; \`query\` must ONLY be a single-line string.
- ALWAYS use proper Markdown formatting for all math, scientific, and chemical formulas, symbols, etc.: '$$\n[expression]\n$$' for standalone cases and '\( [expression] \)' when inline.
- Format inline Wolfram Language code with Markdown code formatting.
- Never mention your knowledge cutoff date; Wolfram may return more recent data.
getWolframAlphaResults guidelines:
- Understands natural language queries about entities in chemistry, physics, geography, history, art, astronomy, and more.
- Performs mathematical calculations, date and unit conversions, formula solving, etc.
- Convert inputs to simplified keyword queries whenever possible (e.g. convert "how many people live in France" to "France population").
- Use ONLY single-letter variable names, with or without integer subscript (e.g., n, n1, n_1).
- Use named physical constants (e.g., 'speed of light') without numerical substitution.
- Include a space between compound units (e.g., "Ω m" for "ohm*meter").
- To solve for a variable in an equation with units, consider solving a corresponding equation without units; exclude counting units (e.g., books), include genuine units (e.g., kg).
- If data for multiple properties is needed, make separate calls for each property.
- If a Wolfram Alpha result is not relevant to the query:
-- If Wolfram provides multiple 'Assumptions' for a query, choose the more relevant one(s) without explaining the initial result. If you are unsure, ask the user to choose.
-- Re-send the exact same 'input' with NO modifications, and add the 'assumption' parameter, formatted as a list, with the relevant values.
-- ONLY simplify or rephrase the initial query if a more relevant 'Assumption' or other input suggestions are not provided.
-- Do not explain each step unless user input is needed. Proceed directly to making a better API call based on the available assumptions.
- Wolfram Language code guidelines:
- Accepts only syntactically correct Wolfram Language code.
- Performs complex calculations, data analysis, plotting, data import, and information retrieval.
- Before writing code that uses Entity, EntityProperty, EntityClass, etc. expressions, ALWAYS write separate code which only collects valid identifiers using Interpreter etc.; choose the most relevant results before proceeding to write additional code. Examples:
-- Find the EntityType that represents countries: \`Interpreter["EntityType",AmbiguityFunction->All]["countries"]\`.
-- Find the Entity for the Empire State Building: \`Interpreter["Building",AmbiguityFunction->All]["empire state"]\`.
-- EntityClasses: Find the "Movie" entity class for Star Trek movies: \`Interpreter["MovieClass",AmbiguityFunction->All]["star trek"]\`.
-- Find EntityProperties associated with "weight" of "Element" entities: \`Interpreter[Restricted["EntityProperty", "Element"],AmbiguityFunction->All]["weight"]\`.
-- If all else fails, try to find any valid Wolfram Language representation of a given input: \`SemanticInterpretation["skyscrapers",_,Hold,AmbiguityFunction->All]\`.
-- Prefer direct use of entities of a given type to their corresponding typeData function (e.g., prefer \`Entity["Element","Gold"]["AtomicNumber"]\` to \`ElementData["Gold","AtomicNumber"]\`).
- When composing code:
-- Use batching techniques to retrieve data for multiple entities in a single call, if applicable.
-- Use Association to organize and manipulate data when appropriate.
-- Optimize code for performance and minimize the number of calls to external sources (e.g., the Wolfram Knowledgebase)
-- Use only camel case for variable names (e.g., variableName).
-- Use ONLY double quotes around all strings, including plot labels, etc. (e.g., \`PlotLegends -> {"sin(x)", "cos(x)", "tan(x)"}\`).
-- Avoid use of QuantityMagnitude.
-- If unevaluated Wolfram Language symbols appear in API results, use \`EntityValue[Entity["WolframLanguageSymbol",symbol],{"PlaintextUsage","Options"}]\` to validate or retrieve usage information for relevant symbols; \`symbol\` may be a list of symbols.
-- Apply Evaluate to complex expressions like integrals before plotting (e.g., \`Plot[Evaluate[Integrate[...]]]\`).
- Remove all comments and formatting from code passed to the "input" parameter; for example: instead of \`square[x_] := Module[{result},\n result = x^2 (* Calculate the square *)\n]\`, send \`square[x_]:=Module[{result},result=x^2]\`.
- In ALL responses that involve code, write ALL code in Wolfram Language; create Wolfram Language functions even if an implementation is already well known in another language.

api/app/stream.js
View File

@@ -0,0 +1,59 @@
const { Readable } = require('stream');
class TextStream extends Readable {
constructor(text, options = {}) {
super(options);
this.text = text;
this.currentIndex = 0;
this.delay = options.delay || 20; // Time in milliseconds
}
_read() {
const minChunkSize = 2;
const maxChunkSize = 4;
const { delay } = this;
if (this.currentIndex < this.text.length) {
setTimeout(() => {
const remainingChars = this.text.length - this.currentIndex;
const chunkSize = Math.min(this.randomInt(minChunkSize, maxChunkSize + 1), remainingChars);
const chunk = this.text.slice(this.currentIndex, this.currentIndex + chunkSize);
this.push(chunk);
this.currentIndex += chunkSize;
}, delay);
} else {
this.push(null); // signal end of data
}
}
randomInt(min, max) {
return Math.floor(Math.random() * (max - min)) + min;
}
async processTextStream(onProgressCallback) {
const streamPromise = new Promise((resolve, reject) => {
this.on('data', (chunk) => {
onProgressCallback(chunk.toString());
});
this.on('end', () => {
console.log('Stream ended');
resolve();
});
this.on('error', (err) => {
reject(err);
});
});
try {
await streamPromise;
} catch (err) {
console.error('Error processing text stream:', err);
// Handle the error appropriately, e.g., return an error message or throw an error
}
}
}
module.exports = TextStream;
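A hedged sketch of drip-feeding a finished reply to a progress callback, the way agent responses are streamed back to the client.
const TextStream = require('./stream');
(async () => {
  const stream = new TextStream('Hello from the plugins endpoint!', { delay: 10 });
  await stream.processTextStream((chunk) => process.stdout.write(chunk));
})();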

View File

@@ -1,23 +1,23 @@
const { Configuration, OpenAIApi } = require('openai');
// const { Configuration, OpenAIApi } = require('openai');
const _ = require('lodash');
const { genAzureEndpoint } = require('../utils/genAzureEndpoints');
const { genAzureChatCompletion } = require('../utils/genAzureEndpoints');
const proxyEnvToAxiosProxy = (proxyString) => {
if (!proxyString) return null;
// const proxyEnvToAxiosProxy = (proxyString) => {
// if (!proxyString) return null;
const regex = /^([^:]+):\/\/(?:([^:@]*):?([^:@]*)@)?([^:]+)(?::(\d+))?/;
const [, protocol, username, password, host, port] = proxyString.match(regex);
const proxyConfig = {
protocol,
host,
port: port ? parseInt(port) : undefined,
auth: username && password ? { username, password } : undefined
};
// const regex = /^([^:]+):\/\/(?:([^:@]*):?([^:@]*)@)?([^:]+)(?::(\d+))?/;
// const [, protocol, username, password, host, port] = proxyString.match(regex);
// const proxyConfig = {
// protocol,
// host,
// port: port ? parseInt(port) : undefined,
// auth: username && password ? { username, password } : undefined
// };
return proxyConfig;
};
// return proxyConfig;
// };
const titleConvo = async ({ endpoint, text, response }) => {
const titleConvo = async ({ text, response, oaiApiKey }) => {
let title = 'New Chat';
const ChatGPTClient = (await import('@waylaidwanderer/chatgpt-api')).default;
@@ -50,11 +50,11 @@ const titleConvo = async ({ endpoint, text, response }) => {
frequency_penalty: 0
};
let apiKey = process.env.OPENAI_KEY;
let apiKey = oaiApiKey || process.env.OPENAI_API_KEY;
if (azure) {
apiKey = process.env.AZURE_OPENAI_API_KEY;
titleGenClientOptions.reverseProxyUrl = genAzureEndpoint({
titleGenClientOptions.reverseProxyUrl = genAzureChatCompletion({
azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION

View File

@@ -3,7 +3,7 @@ const mongoose = require('mongoose');
const MONGO_URI = process.env.MONGO_URI;
if (!MONGO_URI) {
throw new Error('Please define the MONGO_URI environment variable inside .env.local');
throw new Error('Please define the MONGO_URI environment variable');
}
/**

View File

@@ -110,7 +110,7 @@ async function migrateDb() {
ret[0] = await migrateToStrictFollowParentMessageIdChain();
ret[1] = await migrateToSupportBetterCustomization();
const isMigrated = !!ret.find(element => !element?.noNeed);
const isMigrated = !!ret.find((element) => !element?.noNeed);
if (!isMigrated) console.log('[Migrate] Nothing to migrate');
}

View File

@@ -9,7 +9,7 @@ const citeText = (res, noLinks = false) => {
citations.forEach((citation) => {
const digit = citation.match(/\d+?/g)[0];
// result = result.replaceAll(citation, `<sup>[${digit}](#) </sup>`);
result = result.replaceAll(citation, `<sup>[${digit}](#) </sup>`);
result = result.replaceAll(citation, `[^${digit}^](#)`);
});
return result;
@@ -21,7 +21,7 @@ const citeText = (res, noLinks = false) => {
citations.forEach((citation) => {
const digit = citation.match(/\d+?/g)[0];
result = result.replaceAll(citation, `<sup>[${digit}](${sources[digit - 1]}) </sup>`);
result = result.replaceAll(citation, `[^${digit}^](${sources[digit - 1]})`);
// result = result.replaceAll(citation, `<sup>[${digit}](${sources[digit - 1]}) </sup>`);
});

View File

@@ -8,7 +8,7 @@ const getCitations = (res) => {
let links = textBlocks[textBlocks.length - 1]?.text.match(regex);
if (links?.length === 0 || !links) return '';
links = links.map((link) => link.trim());
return links.join('\n');
return links.join('\n - ');
};
module.exports = getCitations;
module.exports = getCitations;

View File

@@ -2,11 +2,11 @@ function mergeSort(arr, compareFn) {
if (arr.length <= 1) {
return arr;
}
const mid = Math.floor(arr.length / 2);
const leftArr = arr.slice(0, mid);
const rightArr = arr.slice(mid);
return merge(mergeSort(leftArr, compareFn), mergeSort(rightArr, compareFn), compareFn);
}
@@ -14,7 +14,7 @@ function merge(leftArr, rightArr, compareFn) {
const result = [];
let leftIndex = 0;
let rightIndex = 0;
while (leftIndex < leftArr.length && rightIndex < rightArr.length) {
if (compareFn(leftArr[leftIndex], rightArr[rightIndex]) < 0) {
result.push(leftArr[leftIndex++]);
@@ -22,8 +22,8 @@ function merge(leftArr, rightArr, compareFn) {
result.push(rightArr[rightIndex++]);
}
}
return result.concat(leftArr.slice(leftIndex)).concat(rightArr.slice(rightIndex));
}
module.exports = mergeSort;
module.exports = mergeSort;

View File

@@ -19,7 +19,7 @@ const requireLocalAuth = (req, res, next) => {
}
if (!user) {
log({
title: '(requireLocalAuth) Error: No user',
title: '(requireLocalAuth) Error: No user'
});
return res.status(422).send(info);
}

View File

@@ -30,7 +30,7 @@ module.exports = {
return { message: 'Error saving conversation' };
}
},
getConvosByPage: async (user, pageNumber = 1, pageSize = 12) => {
getConvosByPage: async (user, pageNumber = 1, pageSize = 14) => {
try {
const totalConvos = (await Conversation.countDocuments({ user })) || 1;
const totalPages = Math.ceil(totalConvos / pageSize);
@@ -45,7 +45,7 @@ module.exports = {
return { message: 'Error getting conversations' };
}
},
getConvosQueried: async (user, convoIds, pageNumber = 1, pageSize = 12) => {
getConvosQueried: async (user, convoIds, pageNumber = 1, pageSize = 14) => {
try {
if (!convoIds || convoIds.length === 0) {
return { conversations: [], pages: 1, pageNumber, pageSize };

View File

@@ -2,7 +2,7 @@ const Message = require('./schema/messageSchema');
module.exports = {
Message,
async saveMessage({
messageId,
newMessageId,
@@ -13,7 +13,9 @@ module.exports = {
isCreatedByUser = false,
error,
unfinished,
cancelled
cancelled,
plugin = null,
model = null,
}) {
try {
// may also need to update the conversation here
@@ -28,11 +30,13 @@ module.exports = {
isCreatedByUser,
error,
unfinished,
cancelled
cancelled,
plugin,
model
},
{ upsert: true, new: true }
);
return {
messageId,
conversationId,
@@ -41,13 +45,12 @@ module.exports = {
text,
isCreatedByUser
};
} catch (err) {
console.error(`Error saving message: ${err}`);
throw new Error('Failed to save message.');
}
},
async deleteMessagesSince({ messageId, conversationId }) {
try {
const message = await Message.findOne({ messageId }).exec();
@@ -57,27 +60,24 @@ module.exports = {
.deleteMany({ createdAt: { $gt: message.createdAt } })
.exec();
}
} catch (err) {
console.error(`Error deleting messages: ${err}`);
throw new Error('Failed to delete messages.');
}
},
async getMessages(filter) {
try {
return await Message.find(filter).sort({ createdAt: 1 }).exec();
} catch (err) {
console.error(`Error getting messages: ${err}`);
throw new Error('Failed to get messages.');
}
},
async deleteMessages(filter) {
try {
return await Message.deleteMany(filter).exec();
} catch (err) {
console.error(`Error deleting messages: ${err}`);
throw new Error('Failed to delete messages.');

View File

@@ -38,8 +38,8 @@ module.exports = {
}
},
deletePresets: async (user, filter) => {
let toRemove = await Preset.find({ ...filter, user }).select('presetId');
const ids = toRemove.map(instance => instance.presetId);
// let toRemove = await Preset.find({ ...filter, user }).select('presetId');
// const ids = toRemove.map((instance) => instance.presetId);
let deleteCount = await Preset.deleteMany({ ...filter, user }).exec();
return deleteCount;
}

View File

@@ -1,18 +1,21 @@
const mongoose = require('mongoose');
const promptSchema = mongoose.Schema({
title: {
type: String,
required: true
const promptSchema = mongoose.Schema(
{
title: {
type: String,
required: true
},
prompt: {
type: String,
required: true
},
category: {
type: String
}
},
prompt: {
type: String,
required: true
},
category: {
type: String,
},
}, { timestamps: true });
{ timestamps: true }
);
const Prompt = mongoose.models.Prompt || mongoose.model('Prompt', promptSchema);
@@ -31,7 +34,7 @@ module.exports = {
},
getPrompts: async (filter) => {
try {
return await Prompt.find(filter).exec()
return await Prompt.find(filter).exec();
} catch (error) {
console.error(error);
return { prompt: 'Error getting prompts' };
@@ -39,10 +42,10 @@ module.exports = {
},
deletePrompts: async (filter) => {
try {
return await Prompt.deleteMany(filter).exec()
return await Prompt.deleteMany(filter).exec();
} catch (error) {
console.error(error);
return { prompt: 'Error deleting prompts' };
}
}
}
};

View File

@@ -65,10 +65,9 @@ const userSchema = mongoose.Schema(
unique: true,
sparse: true
},
facebookId: {
type: String,
unique: true,
sparse: true
plugins: {
type: Array,
default: []
},
refreshToken: {
type: [Session]
@@ -79,7 +78,7 @@ const userSchema = mongoose.Schema(
//Remove refreshToken from the response
userSchema.set('toJSON', {
transform: function (doc, ret, options) {
transform: function (_doc, ret) {
delete ret.refreshToken;
return ret;
}
@@ -95,17 +94,12 @@ userSchema.methods.toJSON = function () {
avatar: this.avatar,
role: this.role,
emailVerified: this.emailVerified,
plugins: this.plugins,
createdAt: this.createdAt,
updatedAt: this.updatedAt
};
};
const isProduction = process.env.NODE_ENV === 'production';
const secretOrKey = isProduction ? process.env.JWT_SECRET_PROD : process.env.JWT_SECRET_DEV;
const refreshSecret = isProduction
? process.env.REFRESH_TOKEN_SECRET_PROD
: process.env.REFRESH_TOKEN_SECRET_DEV;
userSchema.methods.generateToken = function () {
const token = jwt.sign(
{
@@ -114,7 +108,7 @@ userSchema.methods.generateToken = function () {
provider: this.provider,
email: this.email
},
secretOrKey,
process.env.JWT_SECRET,
{ expiresIn: eval(process.env.SESSION_EXPIRY) }
);
return token;
@@ -128,7 +122,7 @@ userSchema.methods.generateRefreshToken = function () {
provider: this.provider,
email: this.email
},
refreshSecret,
process.env.JWT_REFRESH_SECRET,
{ expiresIn: eval(process.env.REFRESH_TOKEN_EXPIRY) }
);
return refreshToken;
@@ -142,7 +136,6 @@ userSchema.methods.comparePassword = function (candidatePassword, callback) {
};
module.exports.hashPassword = async (password) => {
const hashedPassword = await new Promise((resolve, reject) => {
bcrypt.hash(password, 10, function (err, hash) {
if (err) reject(err);
@@ -169,7 +162,7 @@ module.exports.validateUser = (user) => {
password: Joi.string().min(8).max(60).allow('').allow(null)
};
return Joi.validate(user, schema);
return schema.validate(user);
};
const User = mongoose.model('User', userSchema);

View File

@@ -19,10 +19,7 @@ const createMeiliMongooseModel = function ({ index, indexName, client, attribute
static async clearMeiliIndex() {
await index.delete();
// await index.deleteAllDocuments();
await this.collection.updateMany(
{ _meiliIndex: true },
{ $set: { _meiliIndex: false } }
);
await this.collection.updateMany({ _meiliIndex: true }, { $set: { _meiliIndex: false } });
}
static async resetIndex() {
@@ -57,7 +54,7 @@ const createMeiliMongooseModel = function ({ index, indexName, client, attribute
// Find objects into mongodb matching `objectID` from Meili search
const query = {};
// query[primaryKey] = { $in: _.map(data.hits, primaryKey) };
query[primaryKey] = _.map(data.hits, hit => cleanUpPrimaryKeyValue(hit[primaryKey]));
query[primaryKey] = _.map(data.hits, (hit) => cleanUpPrimaryKeyValue(hit[primaryKey]));
// console.log('query', query);
const hitsFromMongoose = await this.find(
query,
@@ -67,7 +64,7 @@ const createMeiliMongooseModel = function ({ index, indexName, client, attribute
return { ...results, [key]: 1 };
},
{ _id: 1 }
),
)
);
// Add additional data from mongodb into Meili search hits
@@ -198,8 +195,8 @@ module.exports = function mongoMeili(schema, options) {
if (Object.prototype.hasOwnProperty.call(schema.obj, 'messages')) {
console.log('Syncing convos...');
mongoose.model('Conversation').syncWithMeili();
}
}
if (Object.prototype.hasOwnProperty.call(schema.obj, 'messageId')) {
console.log('Syncing messages...');
mongoose.model('Message').syncWithMeili();

View File

@@ -1,62 +0,0 @@
module.exports = {
// endpoint: [azureOpenAI, openAI, bingAI, chatGPTBrowser]
endpoint: {
type: String,
default: null,
required: true
},
// for azureOpenAI, openAI, chatGPTBrowser only
model: {
type: String,
default: null,
required: false
},
// for azureOpenAI, openAI only
chatGptLabel: {
type: String,
default: null,
required: false
},
promptPrefix: {
type: String,
default: null,
required: false
},
temperature: {
type: Number,
default: 1,
required: false
},
top_p: {
type: Number,
default: 1,
required: false
},
presence_penalty: {
type: Number,
default: 0,
required: false
},
frequency_penalty: {
type: Number,
default: 0,
required: false
},
// for bingai only
jailbreak: {
type: Boolean,
default: false
},
context: {
type: String,
default: null
},
systemMessage: {
type: String,
default: null
},
toneStyle: {
type: String,
default: null
}
};

View File

@@ -1,6 +1,6 @@
const mongoose = require('mongoose');
const mongoMeili = require('../plugins/mongoMeili');
const conversationPreset = require('./conversationPreset');
const { conversationPreset } = require('./defaults');
const convoSchema = mongoose.Schema(
{
conversationId: {
@@ -20,8 +20,18 @@ const convoSchema = mongoose.Schema(
default: null
},
messages: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Message' }],
// google only
examples: [{ type: mongoose.Schema.Types.Mixed }],
agentOptions: {
type: mongoose.Schema.Types.Mixed,
default: null
},
...conversationPreset,
// for bingAI only
bingConversationId: {
type: String,
default: null
},
jailbreakConversationId: {
type: String,
default: null

View File

@@ -0,0 +1,158 @@
const conversationPreset = {
// endpoint: [azureOpenAI, openAI, bingAI, chatGPTBrowser]
endpoint: {
type: String,
default: null,
required: true
},
// for azureOpenAI, openAI, chatGPTBrowser only
model: {
type: String,
default: null,
required: false
},
// for azureOpenAI, openAI only
chatGptLabel: {
type: String,
default: null,
required: false
},
// for google only
modelLabel: {
type: String,
default: null,
required: false
},
promptPrefix: {
type: String,
default: null,
required: false
},
temperature: {
type: Number,
default: 1,
required: false
},
top_p: {
type: Number,
default: 1,
required: false
},
// for google only
topP: {
type: Number,
default: 0.95,
required: false
},
topK: {
type: Number,
default: 40,
required: false
},
maxOutputTokens: {
type: Number,
default: 1024,
required: false
},
presence_penalty: {
type: Number,
default: 0,
required: false
},
frequency_penalty: {
type: Number,
default: 0,
required: false
},
// for bingai only
jailbreak: {
type: Boolean,
default: false
},
context: {
type: String,
default: null
},
systemMessage: {
type: String,
default: null
},
toneStyle: {
type: String,
default: null
}
};
const agentOptions = {
model: {
type: String,
default: null,
required: false
},
// for azureOpenAI, openAI only
chatGptLabel: {
type: String,
default: null,
required: false
},
// for google only
modelLabel: {
type: String,
default: null,
required: false
},
promptPrefix: {
type: String,
default: null,
required: false
},
temperature: {
type: Number,
default: 1,
required: false
},
top_p: {
type: Number,
default: 1,
required: false
},
// for google only
topP: {
type: Number,
default: 0.95,
required: false
},
topK: {
type: Number,
default: 40,
required: false
},
maxOutputTokens: {
type: Number,
default: 1024,
required: false
},
presence_penalty: {
type: Number,
default: 0,
required: false
},
frequency_penalty: {
type: Number,
default: 0,
required: false
},
context: {
type: String,
default: null
},
systemMessage: {
type: String,
default: null
}
};
module.exports = {
conversationPreset,
agentOptions
};

View File

@@ -14,6 +14,9 @@ const messageSchema = mongoose.Schema(
required: true,
meiliIndex: true
},
model: {
type: String
},
conversationSignature: {
type: String
// required: true
@@ -60,6 +63,20 @@ const messageSchema = mongoose.Schema(
required: false,
select: false,
default: false
},
plugin: {
latest: {
type: String,
required: false
},
inputs: {
type: [mongoose.Schema.Types.Mixed],
required: false
},
outputs: {
type: String,
required: false
}
}
},
{ timestamps: true }

View File

@@ -0,0 +1,26 @@
const mongoose = require('mongoose');
const pluginAuthSchema = mongoose.Schema(
{
authField: {
type: String,
required: true,
},
value: {
type: String,
required: true
},
userId: {
type: String,
required: true
},
pluginKey: {
type: String,
}
},
{ timestamps: true }
);
const PluginAuth = mongoose.models.Plugin || mongoose.model('PluginAuth', pluginAuthSchema);
module.exports = PluginAuth;

View File

@@ -1,5 +1,5 @@
const mongoose = require('mongoose');
const conversationPreset = require('./conversationPreset');
const { conversationPreset } = require('./defaults');
const presetSchema = mongoose.Schema(
{
presetId: {
@@ -17,7 +17,13 @@ const presetSchema = mongoose.Schema(
type: String,
default: null
},
...conversationPreset
// google only
examples: [{ type: mongoose.Schema.Types.Mixed }],
...conversationPreset,
agentOptions: {
type: mongoose.Schema.Types.Mixed,
default: null
}
},
{ timestamps: true }
);

View File

@@ -1,22 +1,22 @@
const mongoose = require("mongoose");
const mongoose = require('mongoose');
const Schema = mongoose.Schema;
const tokenSchema = new Schema({
userId: {
type: Schema.Types.ObjectId,
required: true,
ref: "user",
ref: 'user'
},
token: {
type: String,
required: true,
required: true
},
createdAt: {
type: Date,
required: true,
default: Date.now,
expires: 900,
},
expires: 900
}
});
module.exports = mongoose.model("Token", tokenSchema);
module.exports = mongoose.model('Token', tokenSchema);

10968 api/package-lock.json (generated)

File diff suppressed because it is too large

View File

@@ -1,56 +1,63 @@
{
"name": "chatgpt-clone",
"version": "0.4.2",
"name": "chat-backend",
"version": "0.4.8",
"description": "",
"main": "server/index.js",
"scripts": {
"start": "node server/index.js",
"server-dev": "npx nodemon server/index.js"
"start": "echo 'please run this from the root directory'",
"server-dev": "echo 'please run this from the root directory'",
"test2": "node --inspect app/langchain/test2.js",
"test3": "node --inspect app/langchain/test3.js",
"test4": "node --inspect app/langchain/test4.js",
"test5": "node --inspect app/langchain/test5.js",
"test8": "node --inspect app/langchain/test8.js",
"langchain": "node app/langchain/test2.js"
},
"repository": {
"type": "git",
"url": "git+https://github.com/danny-avila/chatgpt-clone.git"
"url": "git+https://github.com/danny-avila/LibreChat.git"
},
"keywords": [],
"author": "",
"license": "ISC",
"bugs": {
"url": "https://github.com/danny-avila/chatgpt-clone/issues"
"url": "https://github.com/danny-avila/LibreChat/issues"
},
"homepage": "https://github.com/danny-avila/chatgpt-clone#readme",
"homepage": "https://github.com/danny-avila/LibreChat#readme",
"dependencies": {
"@dqbd/tiktoken": "^1.0.2",
"@keyv/mongo": "^2.1.8",
"@waylaidwanderer/chatgpt-api": "github:danny-avila/node-chatgpt-api",
"@waylaidwanderer/chatgpt-api": "^1.37.0",
"axios": "^1.3.4",
"bcrypt": "^5.1.0",
"bcryptjs": "^2.4.3",
"cheerio": "^1.0.0-rc.12",
"cookie": "^0.5.0",
"cookie-parser": "^1.4.6",
"cors": "^2.8.5",
"crypto": "^1.0.1",
"dotenv": "^16.0.3",
"eslint": "^8.36.0",
"eslint": "^8.41.0",
"express": "^4.18.2",
"googleapis": "^118.0.0",
"handlebars": "^4.7.7",
"html": "^1.0.0",
"joi": "^14.3.1",
"joi": "^17.9.2",
"js-yaml": "^4.1.0",
"jsonwebtoken": "^9.0.0",
"keyv": "^4.5.2",
"keyv-file": "^0.2.0",
"langchain": "^0.0.91",
"lodash": "^4.17.21",
"meilisearch": "^0.31.1",
"mongoose": "^6.9.0",
"meilisearch": "^0.33.0",
"mongoose": "^7.1.1",
"nodemailer": "^6.9.1",
"og-chatgpt-api": "npm:@waylaidwanderer/chatgpt-api@^1.35.0",
"openai": "^3.1.0",
"openai": "^3.2.1",
"passport": "^0.6.0",
"passport-facebook": "^3.0.0",
"passport-google-oauth20": "^2.0.0",
"passport-jwt": "^4.0.1",
"passport-local": "^1.0.0",
"pino": "^8.12.1",
"sanitize": "^2.1.2"
"sanitize": "^2.1.2",
"sharp": "^0.32.1"
},
"devDependencies": {
"nodemon": "^2.0.20",

View File

@@ -0,0 +1,54 @@
// const { getAvailableToolsService } = require('../services/PluginService');
const fs = require('fs');
const path = require('path');
const filterUniquePlugins = (plugins) => {
const seen = new Set();
return plugins.filter((plugin) => {
const duplicate = seen.has(plugin.pluginKey);
seen.add(plugin.pluginKey);
return !duplicate;
});
};
const isPluginAuthenticated = (plugin) => {
if (!plugin.authConfig || plugin.authConfig.length === 0) {
return false;
}
return plugin.authConfig.every((authFieldObj) => {
const envValue = process.env[authFieldObj.authField];
return envValue && envValue.trim() !== '';
});
};
const getAvailablePluginsController = async (req, res) => {
try {
fs.readFile(
path.join(__dirname, '..', '..', 'app', 'langchain', 'tools', 'manifest.json'),
'utf8',
(err, data) => {
if (err) {
res.status(500).json({ message: err.message });
} else {
const jsonData = JSON.parse(data);
const uniquePlugins = filterUniquePlugins(jsonData);
const authenticatedPlugins = uniquePlugins.map((plugin) => {
if (isPluginAuthenticated(plugin)) {
return { ...plugin, authenticated: true };
} else {
return plugin;
}
});
res.status(200).json(authenticatedPlugins);
}
}
);
} catch (error) {
res.status(500).json({ message: error.message });
}
};
module.exports = {
getAvailablePluginsController
};
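
For reference, a hypothetical manifest.json entry shaped to what this controller actually reads: only pluginKey (used for de-duplication) and authConfig[].authField (checked against process.env) matter here; the other fields are illustrative.

const examplePluginEntry = {
  name: 'Google Search',                 // illustrative
  pluginKey: 'google',                   // used by filterUniquePlugins
  description: 'Search the web via the Google Custom Search API', // illustrative
  authConfig: [
    { authField: 'GOOGLE_SEARCH_API_KEY', label: 'Google Search API Key' },
    { authField: 'GOOGLE_CSE_ID', label: 'Custom Search Engine ID' }
  ]
};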

View File

@@ -0,0 +1,55 @@
const { updateUserPluginsService } = require('../services/UserService');
const { updateUserPluginAuth, deleteUserPluginAuth } = require('../services/PluginService');
const getUserController = async (req, res) => {
res.status(200).send(req.user);
};
const updateUserPluginsController = async (req, res) => {
const { user } = req;
const { pluginKey, action, auth } = req.body;
let authService;
try {
const userPluginsService = await updateUserPluginsService(user, pluginKey, action);
if (userPluginsService instanceof Error) {
console.log(userPluginsService);
const { status, message } = userPluginsService;
res.status(status).send({ message });
}
if (auth) {
const keys = Object.keys(auth);
const values = Object.values(auth);
if (action === 'install' && keys.length > 0) {
for (let i = 0; i < keys.length; i++) {
authService = await updateUserPluginAuth(user.id, keys[i], pluginKey, values[i]);
if (authService instanceof Error) {
console.log(authService);
const { status, message } = authService;
res.status(status).send({ message });
}
}
}
if (action === 'uninstall' && keys.length > 0) {
for (let i = 0; i < keys.length; i++) {
authService = await deleteUserPluginAuth(user.id, keys[i]);
if (authService instanceof Error) {
console.log(authService);
const { status, message } = authService;
res.status(status).send({ message });
}
}
}
}
res.status(200).send();
} catch (err) {
console.log(err);
res.status(500).json({ message: err.message });
}
};
module.exports = {
getUserController,
updateUserPluginsController
};
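
A minimal usage sketch for this controller (mounted later at POST /api/user/plugins). The body shape mirrors the destructuring above; sending the JWT as a Bearer header is an assumption, and the app may instead read the token from the cookie set at login.

const installPlugin = async (token) => {
  const res = await fetch('/api/user/plugins', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
    body: JSON.stringify({
      pluginKey: 'google',   // which plugin to toggle
      action: 'install',     // or 'uninstall'
      auth: {                // authField -> value pairs, stored encrypted per user
        GOOGLE_SEARCH_API_KEY: 'xxxx',
        GOOGLE_CSE_ID: 'yyyy'
      }
    })
  });
  return res.status;
};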

View File

@@ -1,57 +1,11 @@
const {
loginUser,
logoutUser,
registerUser,
requestPasswordReset,
resetPassword,
} = require("../services/auth.service");
resetPassword
} = require('../services/auth.service');
const isProduction = process.env.NODE_ENV === 'production';
const loginController = async (req, res) => {
try {
const token = req.user.generateToken();
const user = await loginUser(req.user)
if(user) {
res.cookie('token', token, {
expires: new Date(Date.now() + eval(process.env.SESSION_EXPIRY)),
httpOnly: false,
secure: isProduction
});
res.status(200).send({ token, user });
}
else {
return res.status(400).json({ message: 'Invalid credentials' });
}
}
catch (err) {
console.log(err);
return res.status(500).json({ message: err.message });
}
};
const logoutController = async (req, res) => {
const { signedCookies = {} } = req;
const { refreshToken } = signedCookies;
try {
const logout = await logoutUser(req.user, refreshToken);
console.log(logout)
const { status, message } = logout;
if (status === 200) {
res.clearCookie('token');
res.clearCookie('refreshToken');
res.status(status).send({ message });
}
else {
res.status(status).send({ message });
}
}
catch (err) {
console.log(err);
return res.status(500).json({ message: err.message });
}
}
const registrationController = async (req, res) => {
try {
const response = await registerUser(req.body);
@@ -65,13 +19,11 @@ const registrationController = async (req, res) => {
secure: isProduction
});
res.status(status).send({ user });
}
else {
} else {
const { status, message } = response;
res.status(status).send({ message });
}
}
catch (err) {
} catch (err) {
console.log(err);
return res.status(500).json({ message: err.message });
}
@@ -83,17 +35,13 @@ const getUserController = async (req, res) => {
const resetPasswordRequestController = async (req, res) => {
try {
const resetService = await requestPasswordReset(
req.body.email
);
const resetService = await requestPasswordReset(req.body.email);
if (resetService.link) {
return res.status(200).json(resetService);
}
else {
} else {
return res.status(400).json(resetService);
}
}
catch (e) {
} catch (e) {
console.log(e);
return res.status(400).json({ message: e.message });
}
@@ -106,14 +54,12 @@ const resetPasswordController = async (req, res) => {
req.body.token,
req.body.password
);
if(resetPasswordService instanceof Error) {
if (resetPasswordService instanceof Error) {
return res.status(400).json(resetPasswordService);
}
else {
} else {
return res.status(200).json(resetPasswordService);
}
}
catch (e) {
} catch (e) {
console.log(e);
return res.status(400).json({ message: e.message });
}
@@ -171,10 +117,8 @@ const refreshController = async (req, res, next) => {
module.exports = {
getUserController,
loginController,
logoutController,
refreshController,
registrationController,
resetPasswordRequestController,
resetPasswordController,
};
resetPasswordController
};

View File

@@ -0,0 +1,39 @@
const User = require('../../../models/User');
const loginController = async (req, res) => {
try {
const user = await User.findById(
req.user._id
);
// If user doesn't exist, return error
if (!user) { // typeof user !== User) { // this doesn't seem to resolve the User type ??
return res.status(400).json({ message: 'Invalid credentials' });
}
const token = req.user.generateToken();
const expires = eval(process.env.SESSION_EXPIRY);
// Add token to cookie
res.cookie(
'token',
token,
{
expires: new Date(Date.now() + expires),
httpOnly: false,
secure: process.env.NODE_ENV === 'production'
}
);
return res.status(200).send({ token, user });
} catch (err) {
console.log(err);
}
// Generic error messages are safer
return res.status(500).json({ message: 'Something went wrong' });
};
module.exports = {
loginController
};

View File

@@ -0,0 +1,21 @@
const logoutUser = require('../../services/auth.service');
const logoutController = async (req, res) => {
const { signedCookies = {} } = req;
const { refreshToken } = signedCookies;
try {
const logout = await logoutUser(req.user, refreshToken);
const { status, message } = logout;
res.clearCookie('token');
res.clearCookie('refreshToken');
return res.status(status).send({ message });
} catch (err) {
console.log(err);
return res.status(500).json({ message: err.message });
}
};
module.exports = {
logoutController
};

View File

@@ -10,8 +10,8 @@ const handleDuplicateKeyError = (err, res) => {
//handle validation errors
const handleValidationError = (err, res) => {
console.log('congrats you hit the validation middleware');
let errors = Object.values(err.errors).map(el => el.message);
let fields = Object.values(err.errors).map(el => el.path);
let errors = Object.values(err.errors).map((el) => el.message);
let fields = Object.values(err.errors).map((el) => el.path);
let code = 400;
if (errors.length > 1) {
const formattedErrors = errors.join(' ');

View File

@@ -7,11 +7,14 @@ const cors = require('cors');
const routes = require('./routes');
const errorController = require('./controllers/error.controller');
const passport = require('passport');
const port = process.env.PORT || 3080;
const host = process.env.HOST || 'localhost';
const projectPath = path.join(__dirname, '..', '..', 'client');
// Init the config and validate it
const config = require('../../config/loader');
config.validate(); // Validate the config
(async () => {
await connectDb();
console.log('Connected to MongoDB');
@@ -23,6 +26,8 @@ const projectPath = path.join(__dirname, '..', '..', 'client');
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
app.use(express.static(path.join(projectPath, 'dist')));
app.use(express.static(path.join(projectPath, 'public')));
app.set('trust proxy', 1); // trust first proxy
app.use(cors());
@@ -30,15 +35,16 @@ const projectPath = path.join(__dirname, '..', '..', 'client');
app.use(passport.initialize());
require('../strategies/jwtStrategy');
require('../strategies/localStrategy');
if(process.env.GOOGLE_CLIENT_ID && process.env.GOOGLE_CLIENT_SECRET) {
if (process.env.GOOGLE_CLIENT_ID && process.env.GOOGLE_CLIENT_SECRET) {
require('../strategies/googleStrategy');
}
if(process.env.FACEBOOK_CLIENT_ID && process.env.FACEBOOK_CLIENT_SECRET) {
if (process.env.FACEBOOK_CLIENT_ID && process.env.FACEBOOK_CLIENT_SECRET) {
require('../strategies/facebookStrategy');
}
app.use('/oauth', routes.oauth)
app.use('/oauth', routes.oauth);
// api endpoint
app.use('/api/auth', routes.auth);
app.use('/api/user', routes.user);
app.use('/api/search', routes.search);
app.use('/api/ask', routes.ask);
app.use('/api/messages', routes.messages);
@@ -47,8 +53,7 @@ const projectPath = path.join(__dirname, '..', '..', 'client');
app.use('/api/prompts', routes.prompts);
app.use('/api/tokenizer', routes.tokenizer);
app.use('/api/endpoints', routes.endpoints);
app.use('/api/plugins', routes.plugins);
// static files
app.get('/*', function (req, res) {
@@ -60,14 +65,16 @@ const projectPath = path.join(__dirname, '..', '..', 'client');
console.log(
`Server listening on all interface at port ${port}. Use http://localhost:${port} to access it`
);
else console.log(`Server listening at http://${host == '0.0.0.0' ? 'localhost' : host}:${port}`);
else
console.log(`Server listening at http://${host == '0.0.0.0' ? 'localhost' : host}:${port}`);
});
})();
let messageCount = 0;
process.on('uncaughtException', (err) => {
if (!err.message.includes('fetch failed')) {
console.error('There was an uncaught error:', err.message);
console.error('There was an uncaught error:');
console.error(err);
}
if (err.message.includes('fetch failed')) {

View File

@@ -40,7 +40,7 @@ router.post('/', requireJwtAuth, async (req, res) => {
jailbreakConversationId: req.body?.jailbreakConversationId ?? null,
systemMessage: req.body?.systemMessage ?? null,
context: req.body?.context ?? null,
toneStyle: req.body?.toneStyle ?? 'fast',
toneStyle: req.body?.toneStyle ?? 'creative',
token: req.body?.token ?? null
};
else
@@ -51,7 +51,7 @@ router.post('/', requireJwtAuth, async (req, res) => {
conversationSignature: req.body?.conversationSignature ?? null,
clientId: req.body?.clientId ?? null,
invocationId: req.body?.invocationId ?? null,
toneStyle: req.body?.toneStyle ?? 'fast',
toneStyle: req.body?.toneStyle ?? 'creative',
token: req.body?.token ?? null
};
@@ -110,7 +110,7 @@ const ask = async ({
try {
let lastSavedTimestamp = 0;
const { onProgress: progressCallback, getPartialText } = createOnProgress({
const { onProgress: progressCallback } = createOnProgress({
onProgress: ({ text }) => {
const currentTimestamp = Date.now();
if (currentTimestamp - lastSavedTimestamp > 500) {
@@ -129,10 +129,15 @@ const ask = async ({
}
});
const abortController = new AbortController();
let bingConversationId = null;
if (!isNewConversation) {
const convo = await getConvo(req.user.id, conversationId);
bingConversationId = convo.bingConversationId;
}
let response = await askBing({
text,
parentMessageId: userParentMessageId,
conversationId,
conversationId: bingConversationId ?? conversationId,
...endpointOption,
onProgress: progressCallback.call(null, {
res,
@@ -147,21 +152,25 @@ const ask = async ({
const newConversationId = endpointOption?.jailbreak
? response.jailbreakConversationId
: response.conversationId || conversationId;
const newUserMassageId = response.parentMessageId || response.details.requestId || userMessageId;
const newUserMessageId =
response.parentMessageId || response.details.requestId || userMessageId;
const newResponseMessageId = response.messageId || response.details.messageId;
// STEP1 generate response message
response.text = response.response || response.details.spokenText || '**Bing refused to answer.**';
response.text =
response.response || response.details.spokenText || '**Bing refused to answer.**';
let responseMessage = {
conversationId: newConversationId,
conversationId,
bingConversationId: newConversationId,
messageId: responseMessageId,
newMessageId: newResponseMessageId,
parentMessageId: overrideParentMessageId || newUserMassageId,
parentMessageId: overrideParentMessageId || newUserMessageId,
sender: endpointOption?.jailbreak ? 'Sydney' : 'BingAI',
text: await handleText(response, true),
suggestions:
response.details.suggestedResponses && response.details.suggestedResponses.map((s) => s.text),
response.details.suggestedResponses &&
response.details.suggestedResponses.map((s) => s.text),
unfinished: false,
cancelled: false,
error: false
@@ -170,31 +179,7 @@ const ask = async ({
await saveMessage(responseMessage);
responseMessage.messageId = newResponseMessageId;
// STEP2 update the convosation.
// First update conversationId if needed
// Note!
// Bing API will not use our conversationId at the first time,
// so change the placeholder conversationId to the real one.
// Attition: the api will also create new conversationId while using invalid userMessage.parentMessageId,
// but in this situation, don't change the conversationId, but create new convo.
let conversationUpdate = { conversationId: newConversationId, endpoint: 'bingAI' };
if (conversationId != newConversationId)
if (isNewConversation) {
// change the conversationId to new one
conversationUpdate = {
...conversationUpdate,
conversationId: conversationId,
newConversationId: newConversationId
};
} else {
// create new conversation
conversationUpdate = {
...conversationUpdate,
...endpointOption
};
}
let conversationUpdate = { conversationId, bingConversationId: newConversationId, endpoint: 'bingAI' };
if (endpointOption?.jailbreak) {
conversationUpdate.jailbreak = true;
@@ -207,16 +192,16 @@ const ask = async ({
}
await saveConvo(req.user.id, conversationUpdate);
conversationId = newConversationId;
// STEP3 update the user message
userMessage.conversationId = newConversationId;
userMessage.messageId = newUserMassageId;
userMessage.messageId = newUserMessageId;
// If response has parentMessageId, the fake userMessage.messageId should be updated to the real one.
if (!overrideParentMessageId)
await saveMessage({ ...userMessage, messageId: userMessageId, newMessageId: newUserMassageId });
userMessageId = newUserMassageId;
await saveMessage({
...userMessage,
messageId: userMessageId,
newMessageId: newUserMessageId
});
userMessageId = newUserMessageId;
sendMessage(res, {
title: await getConvoTitle(req.user.id, conversationId),
@@ -228,7 +213,11 @@ const ask = async ({
res.end();
if (userParentMessageId == '00000000-0000-0000-0000-000000000000') {
const title = await titleConvo({ endpoint: endpointOption?.endpoint, text, response: responseMessage });
const title = await titleConvo({
endpoint: endpointOption?.endpoint,
text,
response: responseMessage
});
await saveConvo(req.user.id, {
conversationId: conversationId,

View File

@@ -76,7 +76,6 @@ const ask = async ({
userMessage,
endpointOption,
conversationId,
preSendRequest = true,
overrideParentMessageId = null,
req,
res
@@ -92,10 +91,8 @@ const ask = async ({
'X-Accel-Buffering': 'no'
});
if (preSendRequest) sendMessage(res, { message: userMessage, created: true });
let responseMessageId = crypto.randomUUID();
let getPartialMessage = null;
try {
let lastSavedTimestamp = 0;
const { onProgress: progressCallback, getPartialText } = createOnProgress({
@@ -116,15 +113,30 @@ const ask = async ({
}
}
});
getPartialMessage = getPartialText;
const abortController = new AbortController();
let response = await browserClient({
text,
parentMessageId: userParentMessageId,
conversationId,
...endpointOption,
onProgress: progressCallback.call(null, { res, text }),
abortController,
userId
userId,
onProgress: progressCallback.call(null, { res, text }),
onEventMessage: (eventMessage) => {
let data = null;
try {
data = JSON.parse(eventMessage.data);
} catch (e) {
return;
}
sendMessage(res, {
message: { ...userMessage, conversationId: data.conversation_id },
created: true
});
}
});
console.log('CLIENT RESPONSE', response);
@@ -180,7 +192,11 @@ const ask = async ({
// If response has parentMessageId, the fake userMessage.messageId should be updated to the real one.
if (!overrideParentMessageId)
await saveMessage({ ...userMessage, messageId: userMessageId, newMessageId: newUserMassageId });
await saveMessage({
...userMessage,
messageId: userMessageId,
newMessageId: newUserMassageId
});
userMessageId = newUserMassageId;
sendMessage(res, {
@@ -208,8 +224,8 @@ const ask = async ({
parentMessageId: overrideParentMessageId || userMessageId,
unfinished: false,
cancelled: false,
error: true,
text: error.message
// error: true,
text: `${getPartialMessage() ?? ''}\n\nError message: "${error.message}"`
};
await saveMessage(errorMessage);
handleError(res, errorMessage);

View File

@@ -0,0 +1,279 @@
const express = require('express');
const router = express.Router();
const { titleConvo } = require('../../../app/');
const { getOpenAIModels } = require('../endpoints');
const ChatAgent = require('../../../app/langchain/ChatAgent');
const { validateTools } = require('../../../app/langchain/tools');
const { saveMessage, getConvoTitle, saveConvo, getConvo } = require('../../../models');
const {
handleError,
sendMessage,
createOnProgress,
formatSteps,
formatAction
} = require('./handlers');
const requireJwtAuth = require('../../../middleware/requireJwtAuth');
const abortControllers = new Map();
router.post('/abort', requireJwtAuth, async (req, res) => {
const { abortKey } = req.body;
console.log(`req.body`, req.body);
if (!abortControllers.has(abortKey)) {
return res.status(404).send('Request not found');
}
const { abortController } = abortControllers.get(abortKey);
abortControllers.delete(abortKey);
const ret = await abortController.abortAsk();
console.log('Aborted request', abortKey);
console.log('Aborted message:', ret);
res.send(JSON.stringify(ret));
});
router.post('/', requireJwtAuth, async (req, res) => {
const { endpoint, text, parentMessageId, conversationId } = req.body;
if (text.length === 0) return handleError(res, { text: 'Prompt empty or too short' });
if (endpoint !== 'gptPlugins') return handleError(res, { text: 'Illegal request' });
const agentOptions = req.body?.agentOptions ?? {
model: 'gpt-3.5-turbo',
// model: 'gpt-4', // for agent model
temperature: 0,
// top_p: 1,
// presence_penalty: 0,
// frequency_penalty: 0
};
const tools = req.body?.tools.map((tool) => tool.pluginKey) ?? [];
// build endpoint option
const endpointOption = {
chatGptLabel: tools.length === 0 ? req.body?.chatGptLabel ?? null : null,
promptPrefix: tools.length === 0 ? req.body?.promptPrefix ?? null : null,
tools,
modelOptions: {
model: req.body?.model ?? 'gpt-4',
temperature: req.body?.temperature ?? 0,
top_p: req.body?.top_p ?? 1,
presence_penalty: req.body?.presence_penalty ?? 0,
frequency_penalty: req.body?.frequency_penalty ?? 0
},
agentOptions
};
const availableModels = getOpenAIModels();
if (availableModels.find((model) => model === endpointOption.modelOptions.model) === undefined) {
return handleError(res, { text: `Illegal request: model` });
}
// console.log('ask log', {
// text,
// conversationId,
// endpointOption
// });
console.log('ask log');
console.dir({ text, conversationId, endpointOption }, { depth: null });
// eslint-disable-next-line no-use-before-define
return await ask({
text,
endpointOption,
conversationId,
parentMessageId,
req,
res
});
});
const ask = async ({ text, endpointOption, parentMessageId = null, conversationId, req, res }) => {
res.writeHead(200, {
Connection: 'keep-alive',
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache, no-transform',
'Access-Control-Allow-Origin': '*',
'X-Accel-Buffering': 'no'
});
let userMessage;
let userMessageId;
let responseMessageId;
let lastSavedTimestamp = 0;
const newConvo = !conversationId;
const { overrideParentMessageId = null } = req.body;
const user = req.user.id;
const plugin = {
loading: true,
inputs: [],
latest: null,
outputs: null
};
try {
const getIds = (data) => {
userMessage = data.userMessage;
userMessageId = userMessage.messageId;
responseMessageId = data.responseMessageId;
if (!conversationId) {
conversationId = data.conversationId;
}
};
const { onProgress: progressCallback, sendIntermediateMessage, getPartialText } = createOnProgress({
onProgress: ({ text: partialText }) => {
const currentTimestamp = Date.now();
if (plugin.loading === true) {
plugin.loading = false;
}
if (currentTimestamp - lastSavedTimestamp > 500) {
lastSavedTimestamp = currentTimestamp;
saveMessage({
messageId: responseMessageId,
sender: 'ChatGPT',
conversationId,
parentMessageId: overrideParentMessageId || userMessageId,
text: partialText,
model: endpointOption.modelOptions.model,
unfinished: false,
cancelled: true,
error: false
});
}
}
});
const abortController = new AbortController();
abortController.abortAsk = async function () {
this.abort();
const responseMessage = {
messageId: responseMessageId,
sender: endpointOption?.chatGptLabel || 'ChatGPT',
conversationId,
parentMessageId: overrideParentMessageId || userMessageId,
text: getPartialText(),
plugin: { ...plugin, loading: false },
model: endpointOption.modelOptions.model,
unfinished: false,
cancelled: true,
error: false,
};
saveMessage(responseMessage);
return {
title: await getConvoTitle(req.user.id, conversationId),
final: true,
conversation: await getConvo(req.user.id, conversationId),
requestMessage: userMessage,
responseMessage: responseMessage
};
};
const onStart = (userMessage) => {
sendMessage(res, { message: userMessage, created: true });
abortControllers.set(userMessage.conversationId, { abortController, ...endpointOption });
}
endpointOption.tools = await validateTools(user, endpointOption.tools);
const clientOptions = {
debug: true,
reverseProxyUrl: process.env.OPENAI_REVERSE_PROXY || null,
proxy: process.env.PROXY || null,
...endpointOption
};
if (process.env.AZURE_OPENAI_API_KEY) {
clientOptions.azure = {
azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY,
azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION
};
}
const chatAgent = new ChatAgent(process.env.OPENAI_API_KEY, clientOptions);
const onAgentAction = (action) => {
const formattedAction = formatAction(action);
plugin.inputs.push(formattedAction);
plugin.latest = formattedAction.plugin;
saveMessage(userMessage);
sendIntermediateMessage(res, { plugin });
// console.log('PLUGIN ACTION', formattedAction);
};
const onChainEnd = (data) => {
let { intermediateSteps: steps } = data;
plugin.outputs = steps && steps[0].action ? formatSteps(steps) : 'An error occurred.';
plugin.loading = false;
saveMessage(userMessage);
sendIntermediateMessage(res, { plugin });
// console.log('CHAIN END', plugin.outputs);
};
let response = await chatAgent.sendMessage(text, {
getIds,
user,
parentMessageId,
conversationId,
overrideParentMessageId,
onAgentAction,
onChainEnd,
onStart,
onProgress: progressCallback.call(null, {
res,
text,
plugin,
parentMessageId: overrideParentMessageId || userMessageId
}),
abortController
});
if (overrideParentMessageId) {
response.parentMessageId = overrideParentMessageId;
}
// console.log('CLIENT RESPONSE');
// console.dir(response, { depth: null });
response.plugin = { ...plugin, loading: false };
await saveMessage(response);
sendMessage(res, {
title: await getConvoTitle(req.user.id, conversationId),
final: true,
conversation: await getConvo(req.user.id, conversationId),
requestMessage: userMessage,
responseMessage: response
});
res.end();
if (parentMessageId == '00000000-0000-0000-0000-000000000000' && newConvo) {
const title = await titleConvo({ text, response });
await saveConvo(req.user.id, {
conversationId: conversationId,
title
});
}
} catch (error) {
console.error(error);
const errorMessage = {
messageId: responseMessageId,
sender: 'ChatGPT',
conversationId,
parentMessageId: userMessageId,
unfinished: false,
cancelled: false,
error: true,
text: error.message
};
await saveMessage(errorMessage);
handleError(res, errorMessage);
}
};
module.exports = router;
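
For context, a sketch of the request body this route accepts, based on the fields it destructures and defaults above (all values illustrative):

const exampleRequestBody = {
  endpoint: 'gptPlugins',
  text: 'What is the weather in Berlin?',
  conversationId: null,                                     // null starts a new conversation
  parentMessageId: '00000000-0000-0000-0000-000000000000',  // all-zero id marks the first message of a new conversation
  model: 'gpt-4',
  temperature: 0,
  chatGptLabel: null,
  promptPrefix: null,
  tools: [{ pluginKey: 'google' }],                         // mapped to pluginKey strings above
  agentOptions: { model: 'gpt-3.5-turbo', temperature: 0 }
};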

View File

@@ -0,0 +1,178 @@
const express = require('express');
const router = express.Router();
const crypto = require('crypto');
const { titleConvo } = require('../../../app/');
const GoogleClient = require('../../../app/google/GoogleClient');
const { saveMessage, getConvoTitle, saveConvo, getConvo } = require('../../../models');
const { handleError, sendMessage, createOnProgress } = require('./handlers');
const requireJwtAuth = require('../../../middleware/requireJwtAuth');
router.post('/', requireJwtAuth, async (req, res) => {
const { endpoint, text, parentMessageId, conversationId: oldConversationId } = req.body;
if (text.length === 0) return handleError(res, { text: 'Prompt empty or too short' });
if (endpoint !== 'google') return handleError(res, { text: 'Illegal request' });
// build endpoint option
const endpointOption = {
examples: req.body?.examples ?? [{ input: { content: '' }, output: { content: '' } }],
promptPrefix: req.body?.promptPrefix ?? null,
token: req.body?.token ?? null,
modelOptions: {
model: req.body?.model ?? 'chat-bison',
modelLabel: req.body?.modelLabel ?? null,
temperature: req.body?.temperature ?? 0.2,
maxOutputTokens: req.body?.maxOutputTokens ?? 1024,
topP: req.body?.topP ?? 0.95,
topK: req.body?.topK ?? 40
}
};
const availableModels = ['chat-bison', 'text-bison'];
if (availableModels.find((model) => model === endpointOption.modelOptions.model) === undefined) {
return handleError(res, { text: `Illegal request: model` });
}
const conversationId = oldConversationId || crypto.randomUUID();
// eslint-disable-next-line no-use-before-define
return await ask({
text,
endpointOption,
conversationId,
parentMessageId,
req,
res
});
});
const ask = async ({ text, endpointOption, parentMessageId = null, conversationId, req, res }) => {
res.writeHead(200, {
Connection: 'keep-alive',
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache, no-transform',
'Access-Control-Allow-Origin': '*',
'X-Accel-Buffering': 'no'
});
let userMessage;
let userMessageId;
let responseMessageId;
let lastSavedTimestamp = 0;
const { overrideParentMessageId = null } = req.body;
try {
const getIds = (data) => {
userMessage = data.userMessage;
userMessageId = userMessage.messageId;
responseMessageId = data.responseMessageId;
if (!conversationId) {
conversationId = data.conversationId;
}
sendMessage(res, { message: userMessage, created: true });
};
const { onProgress: progressCallback } = createOnProgress({
onProgress: ({ text: partialText }) => {
const currentTimestamp = Date.now();
if (currentTimestamp - lastSavedTimestamp > 500) {
lastSavedTimestamp = currentTimestamp;
saveMessage({
messageId: responseMessageId,
sender: 'PaLM2',
conversationId,
parentMessageId: overrideParentMessageId || userMessageId,
text: partialText,
unfinished: true,
cancelled: false,
error: false
});
}
}
});
const abortController = new AbortController();
let key;
if (endpointOption.token) {
key = JSON.parse(endpointOption.token);
delete endpointOption.token;
console.log('Using service account key provided by User for PaLM models');
}
try {
if (!key) {
key = require('../../../data/auth.json');
}
} catch (e) {
console.log("No 'auth.json' file (service account key) found in /api/data/ for PaLM models");
}
const clientOptions = {
// debug: true, // for testing
reverseProxyUrl: process.env.GOOGLE_REVERSE_PROXY || null,
proxy: process.env.PROXY || null,
...endpointOption
};
const client = new GoogleClient(key, clientOptions);
let response = await client.sendMessage(text, {
getIds,
user: req.user.id,
conversationId,
parentMessageId,
overrideParentMessageId,
onProgress: progressCallback.call(null, {
res,
text,
parentMessageId: overrideParentMessageId || userMessageId
}),
abortController
});
if (overrideParentMessageId) {
response.parentMessageId = overrideParentMessageId;
}
await saveConvo(req.user.id, {
...endpointOption,
...endpointOption.modelOptions,
conversationId,
endpoint: 'google'
});
await saveMessage(response);
sendMessage(res, {
title: await getConvoTitle(req.user.id, conversationId),
final: true,
conversation: await getConvo(req.user.id, conversationId),
requestMessage: userMessage,
responseMessage: response
});
res.end();
if (parentMessageId == '00000000-0000-0000-0000-000000000000') {
const title = await titleConvo({ text, response });
await saveConvo(req.user.id, {
conversationId,
title
});
}
} catch (error) {
console.error(error);
const errorMessage = {
messageId: responseMessageId,
sender: 'PaLM2',
conversationId,
parentMessageId,
unfinished: false,
cancelled: false,
error: true,
text: error.message
};
await saveMessage(errorMessage);
handleError(res, errorMessage);
}
};
module.exports = router;

View File

@@ -64,7 +64,7 @@ router.post('/', requireJwtAuth, async (req, res) => {
};
const availableModels = getOpenAIModels();
if (availableModels.find(model => model === endpointOption.model) === undefined)
if (availableModels.find((model) => model === endpointOption.model) === undefined)
return handleError(res, { text: 'Illegal request: model' });
console.log('ask log', {
@@ -169,11 +169,13 @@ const ask = async ({
};
const abortKey = conversationId;
abortControllers.set(abortKey, { abortController, ...endpointOption });
const oaiApiKey = req.body?.token ?? null;
let response = await askClient({
text,
parentMessageId: userParentMessageId,
conversationId,
oaiApiKey,
...endpointOption,
onProgress: progressCallback.call(null, {
res,
@@ -188,7 +190,7 @@ const ask = async ({
console.log('CLIENT RESPONSE', response);
const newConversationId = response.conversationId || conversationId;
const newUserMassageId = response.parentMessageId || userMessageId;
const newUserMessageId = response.parentMessageId || userMessageId;
const newResponseMessageId = response.messageId;
// STEP1 generate response message
@@ -198,7 +200,7 @@ const ask = async ({
conversationId: newConversationId,
messageId: responseMessageId,
newMessageId: newResponseMessageId,
parentMessageId: overrideParentMessageId || newUserMassageId,
parentMessageId: overrideParentMessageId || newUserMessageId,
text: await handleText(response),
sender: endpointOption?.chatGptLabel || 'ChatGPT',
unfinished: false,
@@ -232,12 +234,16 @@ const ask = async ({
// STEP3 update the user message
userMessage.conversationId = newConversationId;
userMessage.messageId = newUserMassageId;
userMessage.messageId = newUserMessageId;
// If response has parentMessageId, the fake userMessage.messageId should be updated to the real one.
if (!overrideParentMessageId)
await saveMessage({ ...userMessage, messageId: userMessageId, newMessageId: newUserMassageId });
userMessageId = newUserMassageId;
await saveMessage({
...userMessage,
messageId: userMessageId,
newMessageId: newUserMessageId
});
userMessageId = newUserMessageId;
sendMessage(res, {
title: await getConvoTitle(req.user.id, conversationId),
@@ -249,7 +255,12 @@ const ask = async ({
res.end();
if (userParentMessageId == '00000000-0000-0000-0000-000000000000') {
const title = await titleConvo({ endpoint: endpointOption?.endpoint, text, response: responseMessage });
const title = await titleConvo({
endpoint: endpointOption?.endpoint,
text,
response: responseMessage,
oaiApiKey
});
await saveConvo(req.user.id, {
conversationId: conversationId,
title

View File

@@ -1,20 +1,18 @@
const _ = require('lodash');
const citationRegex = /\[\^\d+?\^]/g;
const backtick = /(?<!`)[`](?!`)/g;
// const singleBacktick = /(?<!`)[`](?!`)/;
const cursorDefault = '<span className="result-streaming">█</span>';
const { getCitations, citeText } = require('../../../app');
const cursor = '<span className="result-streaming">█</span>';
const handleError = (res, message) => {
res.write(`event: error\ndata: ${JSON.stringify(message)}\n\n`);
res.end();
};
const sendMessage = (res, message) => {
const sendMessage = (res, message, event = 'message') => {
if (message.length === 0) {
return;
}
res.write(`event: message\ndata: ${JSON.stringify(message)}\n\n`);
res.write(`event: ${event}\ndata: ${JSON.stringify(message)}\n\n`);
};
const createOnProgress = ({ onProgress: _onProgress }) => {
@@ -22,11 +20,9 @@ const createOnProgress = ({ onProgress: _onProgress }) => {
let code = '';
let tokens = '';
let precode = '';
let blockCount = 0;
let codeBlock = false;
let cursor = cursorDefault;
const progressCallback = async (partial, { res, text, bing = false, ...rest }) => {
const progressCallback = async (partial, { res, text, plugin, bing = false, ...rest }) => {
let chunk = partial === text ? '' : partial;
tokens += chunk;
precode += chunk;
@@ -38,7 +34,6 @@ const createOnProgress = ({ onProgress: _onProgress }) => {
if (precode.includes('```') && codeBlock) {
codeBlock = false;
cursor = cursorDefault;
precode = precode.replace(/```/g, '');
code = '';
}
@@ -46,14 +41,6 @@ const createOnProgress = ({ onProgress: _onProgress }) => {
if (precode.includes('```') && code === '') {
precode = precode.replace(/```/g, '');
codeBlock = true;
blockCount++;
cursor = blockCount > 1 ? '█\n\n```' : '█\n\n';
}
const backticks = precode.match(backtick);
if (backticks && !codeBlock && cursor === cursorDefault) {
precode = precode.replace(backtick, '');
cursor = '█';
}
if (tokens.match(/^\n/)) {
@@ -64,10 +51,17 @@ const createOnProgress = ({ onProgress: _onProgress }) => {
tokens = citeText(tokens, true);
}
sendMessage(res, { text: tokens + cursor, message: true, initial: i === 0, ...rest });
_onProgress && _onProgress({ text: tokens, message: true, initial: i === 0, ...rest });
const payload = { text: tokens, message: true, initial: i === 0, ...rest };
if (plugin) {
payload.plugin = plugin;
}
sendMessage(res, { ...payload, text: tokens });
_onProgress && _onProgress(payload);
i++;
};
const sendIntermediateMessage = (res, payload) => {
sendMessage(res, { text: tokens?.length === 0 ? cursor : tokens, message: true, initial: i === 0, ...payload });
i++;
};
@@ -79,24 +73,86 @@ const createOnProgress = ({ onProgress: _onProgress }) => {
return tokens;
};
return { onProgress, getPartialText };
return { onProgress, getPartialText, sendIntermediateMessage };
};
const handleText = async (response, bing = false) => {
let { text } = response;
// text = await detectCode(text);
response.text = text;
if (bing) {
// const hasCitations = response.response.match(citationRegex)?.length > 0;
const links = getCitations(response);
if (response.text.match(citationRegex)?.length > 0) {
text = citeText(response);
}
text += links?.length > 0 ? `\n<small>${links}</small>` : '';
text += links?.length > 0 ? `\n- ${links}` : '';
}
return text;
};
module.exports = { handleError, sendMessage, createOnProgress, handleText };
function formatSteps(steps) {
let output = '';
for (let i = 0; i < steps.length; i++) {
const step = steps[i];
const actionInput = step.action.toolInput;
const observation = step.observation;
if (actionInput === 'N/A' || observation?.trim()?.length === 0) {
continue;
}
output += `Input: ${actionInput}\nOutput: ${observation}`;
if (steps.length > 1 && i !== steps.length - 1) {
output += '\n---\n';
}
}
return output;
}
function formatAction(action) {
const capitalizeWords = (input) => {
if (input === 'dall-e') {
return 'DALL-E';
}
return input
.replace(/-/g, ' ')
.split(' ')
.map((word) => word.charAt(0).toUpperCase() + word.slice(1))
.join(' ');
};
const formattedAction = {
plugin: capitalizeWords(action.tool) || action.tool,
input: action.toolInput,
thought: action.log.includes('Thought: ')
? action.log.split('\n')[0].replace('Thought: ', '')
: action.log.split('\n')[0]
};
if (action.tool.toLowerCase() === 'self-reflection' || formattedAction.plugin === 'N/A') {
formattedAction.inputStr = `{\n\tthought: ${formattedAction.input}${
!formattedAction.thought.includes(formattedAction.input)
? ' - ' + formattedAction.thought
: ''
}\n}`;
formattedAction.inputStr = formattedAction.inputStr.replace('N/A - ', '');
} else {
formattedAction.inputStr = `{\n\tplugin: ${formattedAction.plugin}\n\tinput: ${formattedAction.input}\n\tthought: ${formattedAction.thought}\n}`;
}
return formattedAction;
}
module.exports = {
handleError,
sendMessage,
createOnProgress,
handleText,
formatSteps,
formatAction
};
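
A worked example of formatAction, using illustrative values for a LangChain agent action:

const exampleAction = {
  tool: 'google-search',
  toolInput: 'LibreChat plugins',
  log: 'Thought: I should search the web for this.\nAction: google-search'
};
// formatAction(exampleAction) returns:
// {
//   plugin: 'Google Search',
//   input: 'LibreChat plugins',
//   thought: 'I should search the web for this.',
//   inputStr: '{\n\tplugin: Google Search\n\tinput: LibreChat plugins\n\tthought: I should search the web for this.\n}'
// }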

View File

@@ -2,12 +2,16 @@ const express = require('express');
const router = express.Router();
// const askAzureOpenAI = require('./askAzureOpenAI';)
const askOpenAI = require('./askOpenAI');
const askGoogle = require('./askGoogle');
const askBingAI = require('./askBingAI');
const askChatGPTBrowser = require('./askChatGPTBrowser');
const askGPTPlugins = require('./askGPTPlugins');
// router.use('/azureOpenAI', askAzureOpenAI);
router.use('/openAI', askOpenAI);
router.use('/google', askGoogle);
router.use('/bingAI', askBingAI);
router.use('/chatGPTBrowser', askChatGPTBrowser);
router.use('/gptPlugins', askGPTPlugins);
module.exports = router;

View File

@@ -3,22 +3,23 @@ const {
resetPasswordRequestController,
resetPasswordController,
getUserController,
loginController,
logoutController,
refreshController,
registrationController,
registrationController
} = require('../controllers/auth.controller');
const { loginController } = require('../controllers/auth/login.controller');
const { logoutController } = require('../controllers/auth/logout.controller');
const requireJwtAuth = require('../../middleware/requireJwtAuth');
const requireLocalAuth = require('../../middleware/requireLocalAuth');
const router = express.Router();
//Local
router.get('/user', requireJwtAuth, getUserController);
router.post('/logout', requireJwtAuth, logoutController);
router.post('/login', requireLocalAuth, loginController);
router.post('/refresh', requireJwtAuth, refreshController);
router.post('/register', registrationController);
if (process.env.ALLOW_REGISTRATION) {
router.post('/register', registrationController);
}
router.post('/requestPasswordReset', resetPasswordRequestController);
router.post('/resetPassword', resetPasswordController);

View File

@@ -1,5 +1,6 @@
const express = require('express');
const router = express.Router();
const { availableTools } = require('../../app/langchain/tools');
const getOpenAIModels = () => {
let models = ['gpt-4', 'text-davinci-003', 'gpt-3.5-turbo', 'gpt-3.5-turbo-0301'];
@@ -9,15 +10,44 @@ const getOpenAIModels = () => {
};
const getChatGPTBrowserModels = () => {
let models = ['text-davinci-002-render-sha', 'text-davinci-002-render-paid', 'gpt-4'];
let models = ['text-davinci-002-render-sha', 'gpt-4'];
if (process.env.CHATGPT_MODELS) models = String(process.env.CHATGPT_MODELS).split(',');
return models;
};
router.get('/', function (req, res) {
const azureOpenAI = !!process.env.AZURE_OPENAI_KEY;
const openAI = process.env.OPENAI_KEY || process.env.AZURE_OPENAI_API_KEY ? { availableModels: getOpenAIModels() } : false;
let i = 0;
router.get('/', async function (req, res) {
let key, palmUser;
try {
key = require('../../data/auth.json');
} catch (e) {
if (i === 0) {
console.log("No 'auth.json' file (service account key) found in /api/data/ for PaLM models");
i++;
}
}
if (process.env.PALM_KEY === 'user_provided') {
palmUser = true;
if (i <= 1) {
console.log('User will provide key for PaLM models');
i++;
}
}
const google =
key || palmUser
? { userProvide: palmUser, availableModels: ['chat-bison', 'text-bison'] }
: false;
const azureOpenAI = !!process.env.AZURE_OPENAI_API_KEY;
const apiKey = process.env.OPENAI_API_KEY || process.env.AZURE_OPENAI_API_KEY;
const openAI = apiKey
? { availableModels: getOpenAIModels(), userProvide: apiKey === 'user_provided' }
: false;
const gptPlugins = apiKey
? { availableModels: ['gpt-4', 'gpt-3.5-turbo', 'gpt-3.5-turbo-0301'], availableTools }
: false;
const bingAI = process.env.BINGAI_TOKEN
? { userProvide: process.env.BINGAI_TOKEN == 'user_provided' }
: false;
@@ -28,7 +58,7 @@ router.get('/', function (req, res) {
}
: false;
res.send(JSON.stringify({ azureOpenAI, openAI, bingAI, chatGPTBrowser }));
res.send(JSON.stringify({ azureOpenAI, openAI, google, bingAI, chatGPTBrowser, gptPlugins }));
});
module.exports = { router, getOpenAIModels, getChatGPTBrowserModels };
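
An illustrative shape of the JSON this route now returns; each key is false when that endpoint is not configured, and the actual values depend on environment variables:

const exampleResponse = {
  azureOpenAI: false,
  openAI: {
    availableModels: ['gpt-4', 'text-davinci-003', 'gpt-3.5-turbo', 'gpt-3.5-turbo-0301'],
    userProvide: false
  },
  google: { userProvide: true, availableModels: ['chat-bison', 'text-bison'] },
  bingAI: { userProvide: true },
  chatGPTBrowser: false,
  gptPlugins: {
    availableModels: ['gpt-4', 'gpt-3.5-turbo', 'gpt-3.5-turbo-0301'],
    availableTools: [] // list of available tool definitions
  }
};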

View File

@@ -8,6 +8,8 @@ const tokenizer = require('./tokenizer');
const auth = require('./auth');
const oauth = require('./oauth');
const { router: endpoints } = require('./endpoints');
const plugins = require('./plugins');
const user = require('./user');
module.exports = {
search,
@@ -18,6 +20,8 @@ module.exports = {
prompts,
auth,
oauth,
user,
tokenizer,
endpoints,
plugins
};

View File

@@ -1,12 +1,13 @@
const passport = require('passport');
const express = require('express');
const router = express.Router();
const config = require('../../../config/loader');
const domains = config.domains;
const isProduction = config.isProduction;
const isProduction = process.env.NODE_ENV === 'production';
const clientUrl = isProduction ? process.env.CLIENT_URL_PROD : process.env.CLIENT_URL_DEV;
// Social
/**
* Google Routes
*/
router.get(
'/google',
passport.authenticate('google', {
@@ -18,7 +19,7 @@ router.get(
router.get(
'/google/callback',
passport.authenticate('google', {
failureRedirect: `${clientUrl}/login`,
failureRedirect: `${domains.client}/login`,
failureMessage: true,
session: false,
scope: ['openid', 'profile', 'email']
@@ -30,7 +31,7 @@ router.get(
httpOnly: false,
secure: isProduction
});
res.redirect(clientUrl);
res.redirect(domains.client);
}
);
@@ -45,7 +46,7 @@ router.get(
router.get(
'/facebook/callback',
passport.authenticate('facebook', {
failureRedirect: `${clientUrl}/login`,
failureRedirect: `${domains.client}/login`,
failureMessage: true,
session: false,
scope: ['public_profile', 'email']
@@ -57,8 +58,8 @@ router.get(
httpOnly: false,
secure: isProduction
});
res.redirect(clientUrl);
res.redirect(domains.client);
}
);
module.exports = router;
module.exports = router;

View File

@@ -0,0 +1,9 @@
const express = require('express');
const { getAvailablePluginsController } = require('../controllers/PluginController');
const requireJwtAuth = require('../../middleware/requireJwtAuth');
const router = express.Router();
router.get('/', requireJwtAuth, getAvailablePluginsController);
module.exports = router;

View File

@@ -40,7 +40,7 @@ router.post('/delete', requireJwtAuth, async (req, res) => {
try {
await deletePresets(req.user.id, filter);
const presets = (await getPresets(req.user.id)).map(preset => preset.toObject());
const presets = (await getPresets(req.user.id)).map((preset) => preset.toObject());
// console.log('delete preset response', presets);
res.status(201).send(presets);

View File

@@ -27,7 +27,9 @@ router.get('/', requireJwtAuth, async function (req, res) {
console.log('cache hit', key);
const cached = cache.get(key);
const { pages, pageSize, messages } = cached;
res.status(200).send({ conversations: cached[pageNumber], pages, pageNumber, pageSize, messages });
res
.status(200)
.send({ conversations: cached[pageNumber], pages, pageNumber, pageSize, messages });
return;
} else {
cache.clear();
@@ -44,7 +46,7 @@ router.get('/', requireJwtAuth, async function (req, res) {
},
true
)
).hits.map(message => {
).hits.map((message) => {
const { _formatted, ...rest } = message;
return {
...rest,
@@ -95,12 +97,12 @@ router.get('/clear', async function (req, res) {
router.get('/test', async function (req, res) {
const { q } = req.query;
const messages = (await Message.meiliSearch(q, { attributesToHighlight: ['text'] }, true)).hits.map(
message => {
const { _formatted, ...rest } = message;
return { ...rest, searchResult: true, text: _formatted.text };
}
);
const messages = (
await Message.meiliSearch(q, { attributesToHighlight: ['text'] }, true)
).hits.map((message) => {
const { _formatted, ...rest } = message;
return { ...rest, searchResult: true, text: _formatted.text };
});
res.send(messages);
});

10 api/server/routes/user.js (new file)
View File

@@ -0,0 +1,10 @@
const express = require('express');
const requireJwtAuth = require('../../middleware/requireJwtAuth');
const { getUserController, updateUserPluginsController } = require('../controllers/UserController');
const router = express.Router();
router.get('/', requireJwtAuth, getUserController);
router.post('/plugins', requireJwtAuth, updateUserPluginsController);
module.exports = router;

View File

@@ -0,0 +1,84 @@
const PluginAuth = require('../../models/schema/pluginAuthSchema');
const { encrypt, decrypt } = require('../../utils/crypto');
const getUserPluginAuthValue = async (user, authField) => {
try {
const pluginAuth = await PluginAuth.findOne({ user, authField });
if (!pluginAuth) {
return null;
}
const decryptedValue = decrypt(pluginAuth.value);
return decryptedValue;
} catch (err) {
console.log(err);
return err;
}
};
// const updateUserPluginAuth = async (userId, authField, pluginKey, value) => {
// try {
// const encryptedValue = encrypt(value);
// const pluginAuth = await PluginAuth.findOneAndUpdate(
// { userId, authField },
// {
// $set: {
// value: encryptedValue,
// pluginKey
// }
// },
// {
// new: true,
// upsert: true
// }
// );
// return pluginAuth;
// } catch (err) {
// console.log(err);
// return err;
// }
// };
const updateUserPluginAuth = async (userId, authField, pluginKey, value) => {
try {
const encryptedValue = encrypt(value);
const pluginAuth = await PluginAuth.findOne({ userId, authField });
if (pluginAuth) {
const pluginAuth = await PluginAuth.updateOne(
{ userId, authField },
{ $set: { value: encryptedValue } }
);
return pluginAuth;
} else {
const newPluginAuth = await new PluginAuth({
userId,
authField,
value: encryptedValue,
pluginKey
});
newPluginAuth.save();
return newPluginAuth;
}
} catch (err) {
console.log(err);
return err;
}
};
const deleteUserPluginAuth = async (userId, authField) => {
try {
const response = await PluginAuth.deleteOne({ userId, authField });
return response;
} catch (err) {
console.log(err);
return err;
}
};
module.exports = {
getUserPluginAuthValue,
updateUserPluginAuth,
deleteUserPluginAuth
};
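
Illustrative only (this helper is not part of the diff): a tool loader could resolve credentials by preferring the user's stored value and falling back to a server-wide environment variable. The require path is assumed relative to the services directory.

const { getUserPluginAuthValue } = require('./PluginService');

const resolveAuthValue = async (userId, authField) => {
  const stored = await getUserPluginAuthValue(userId, authField);
  // getUserPluginAuthValue returns the decrypted string, null, or an Error
  return typeof stored === 'string' ? stored : (process.env[authField] ?? null);
};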

View File

@@ -0,0 +1,24 @@
const User = require('../../models/User');
const updateUserPluginsService = async (user, pluginKey, action) => {
try {
if (action === 'install') {
const response = await User.updateOne(
{ _id: user._id },
{ $set: { plugins: [...user.plugins, pluginKey] } }
);
return response;
} else if (action === 'uninstall') {
const response = await User.updateOne(
{ _id: user._id },
{ $set: { plugins: user.plugins.filter((plugin) => plugin !== pluginKey) } }
);
return response;
}
} catch (err) {
console.log(err);
return err;
}
};
module.exports = { updateUserPluginsService };

Some files were not shown because too many files have changed in this diff.