Commit graph

28 commits

Author SHA1 Message Date
Ruben Talstra
4cbab86b45
📈 feat: Chat rating for feedback (#5878)
* feat: initial work started on the feedback implementation.

TODO:
- needs some refactoring.
- needs some UI animations.

* feat: working rating functionality

* feat: now also reads already-rated responses from the server.

* feat: added the option to give feedback in text (optional)

* feat: added Dismiss option `x` to the `FeedbackTagOptions`

* feat: Add rating and ratingContent fields to message schema

* 🔧 chore: Bump version to 0.0.3 in package.json

* feat: Enhance feedback localization and update UI elements

* 🚀 feat: Implement feedback tagging system with thumbs up/down options

* 🚀 feat: Add data-provider package to unused i18n keys detection

* 🎨 style: update HoverButtons' style

* 🎨 style: Update HoverButtons and Fork components for improved styling and visibility

* 🔧 feat: Implement feedback system with rating and content options

* 🔧 feat: Enhance feedback handling with improved rating toggle and tag options

* 🔧 feat: Integrate toast notifications for feedback submission and clean up unused state

* 🔧 feat: Remove unused feedback tag options from translation file

* refactor: clean up Feedback component and improve HoverButtons structure

* refactor: remove unused settings switches for auto scroll, hide side panel, and user message markdown

* refactor: reorganize import order

* refactor: enhance HoverButtons and Fork components with improved styles and animations

* refactor: update feedback response phrases for improved user engagement

* refactor: add CheckboxOption component and streamline fork options rendering

* Refactor feedback components and logic

- Consolidated feedback handling into a single Feedback component, removing FeedbackButtons and FeedbackTagOptions.
- Introduced new feedback tagging system with detailed tags for both thumbs up and thumbs down ratings.
- Updated feedback schema to include new tags and improved type definitions.
- Enhanced user interface for feedback collection, including a dialog for additional comments.
- Removed obsolete files and adjusted imports accordingly.
- Updated translations for new feedback tags and placeholders.

* refactor: update feedback handling by replacing rating fields with feedback in message updates

* fix: add missing validateMessageReq middleware to feedback route and refactor feedback system

* 🗑️ chore: Remove redundant fork option explanations from translation file

* 🔧 refactor: Remove unused dependency from feedback callback

* 🔧 refactor: Simplify message update response structure and improve error logging

* chore: removed unused tests.

---------

Co-authored-by: Marco Beretta <81851188+berry-13@users.noreply.github.com>
2025-05-30 12:16:34 -04:00
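To make the message-schema change described in #5878 concrete, here is a minimal sketch of a rating/feedback shape and its toggle behavior. The field and type names are illustrative assumptions, not the exact code that was merged:

```typescript
// Hypothetical shape of the feedback payload attached to a message (illustrative only).
type FeedbackRating = 'thumbsUp' | 'thumbsDown';

interface MessageFeedback {
  rating: FeedbackRating;
  tag?: string;  // e.g. "inaccurate" or "helpful"; the commit log describes separate tag sets per rating
  text?: string; // optional free-form comment
}

// Toggle behavior sketched from the commit titles: clicking the same rating again
// clears the feedback, while clicking the other rating replaces it.
function toggleFeedback(
  current: MessageFeedback | undefined,
  next: FeedbackRating,
): MessageFeedback | undefined {
  if (current?.rating === next) {
    return undefined; // dismiss
  }
  return { rating: next };
}

// Example usage:
let feedback = toggleFeedback(undefined, 'thumbsUp'); // { rating: 'thumbsUp' }
feedback = toggleFeedback(feedback, 'thumbsUp');      // undefined (cleared)
console.log(feedback);
```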
Danny Avila
64bd373bc8
🔧 fix: Keyv and Proxy Issues, and More Memory Optimizations (#6867)
* chore: update @librechat/agents dependency to version 2.4.15

* refactor: Prevent memory leaks by nullifying boundModel.client in disposeClient function

* fix: use of proxy, use undici

* chore: update @librechat/agents dependency to version 2.4.16

* Revert "fix: use of proxy, use undici"

This reverts commit 83153cd582.

* fix: ensure fetch is imported for HTTP requests

* fix: replace direct OpenAI import with CustomOpenAIClient from @librechat/agents

* fix: update keyv peer dependency to version 5.3.2

* fix: update keyv dependency to version 5.3.2

* refactor: replace KeyvMongo with custom implementation and update flow state manager usage

* fix: update @librechat/agents dependency to version 2.4.17

* ci: update OpenAIClient tests to use CustomOpenAIClient from @librechat/agents

* refactor: remove KeyvMongo mock and related dependencies
2025-04-13 23:01:55 -04:00
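For context on "replace KeyvMongo with custom implementation" in #6867: Keyv accepts any store object exposing an async Map-like interface (get/set/delete/clear). A minimal sketch of such a Mongo-backed store follows; the collection name, TTL handling, and document shape are assumptions, not the project's actual code:

```typescript
import { Collection, MongoClient } from 'mongodb';

// Minimal Keyv-compatible store backed by a Mongo collection (illustrative sketch).
class SimpleMongoStore {
  constructor(private collection: Collection) {}

  async get(key: string): Promise<string | undefined> {
    const doc = await this.collection.findOne({ key });
    if (!doc) return undefined;
    if (doc.expiresAt && doc.expiresAt < Date.now()) {
      await this.collection.deleteOne({ key }); // lazily drop expired entries
      return undefined;
    }
    return doc.value as string;
  }

  async set(key: string, value: string, ttl?: number): Promise<void> {
    const expiresAt = ttl ? Date.now() + ttl : undefined;
    await this.collection.updateOne(
      { key },
      { $set: { key, value, expiresAt } },
      { upsert: true },
    );
  }

  async delete(key: string): Promise<boolean> {
    const res = await this.collection.deleteOne({ key });
    return res.deletedCount > 0;
  }

  async clear(): Promise<void> {
    await this.collection.deleteMany({});
  }
}

// Usage: new Keyv({ store: new SimpleMongoStore(collection) })
async function createStore(uri: string) {
  const client = await new MongoClient(uri).connect();
  return new SimpleMongoStore(client.db('librechat').collection('keyv'));
}
```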
Ruben Talstra
3a62a2633d
💵 feat: Add Automatic Balance Refill (#6452)
* 🚀 feat: Add automatic refill settings to balance schema

* 🚀 feat: Refactor balance feature to use global interface configuration

* 🚀 feat: Implement auto-refill functionality for balance management

* 🚀 feat: Enhance auto-refill logic and configuration for balance management

* 🚀 chore: Bump version to 0.7.74 in package.json and package-lock.json

* 🚀 chore: Bump version to 0.0.5 in package.json and package-lock.json

* 🚀 docs: Update comment for balance settings in librechat.example.yaml

* chore: space in `.env.example`

* 🚀 feat: Implement balance configuration loading and refactor related components

* 🚀 test: Refactor tests to use custom config for balance feature

* 🚀 fix: Update balance response handling in Transaction.js to use Balance model

* 🚀 test: Update AppService tests to include balance configuration in mock setup

* 🚀 test: Enhance AppService tests with complete balance configuration scenarios

* 🚀 refactor: Rename balanceConfig to balance and update related tests for clarity

* 🚀 refactor: Remove loadDefaultBalance and update balance handling in AppService

* 🚀 test: Update AppService tests to reflect new balance structure and defaults

* 🚀 test: Mock getCustomConfig in BaseClient tests to control balance configuration

* 🚀 test: Add get method to mockCache in OpenAIClient tests for improved cache handling

* 🚀 test: Mock getCustomConfig in OpenAIClient tests to control balance configuration

* 🚀 test: Remove mock for getCustomConfig in OpenAIClient tests to streamline configuration handling

* 🚀 fix: Update balance configuration reference in config.js for consistency

* refactor: Add getBalanceConfig function to retrieve balance configuration

* chore: Comment out example balance settings in librechat.example.yaml

* refactor: Replace getCustomConfig with getBalanceConfig for balance handling

* fix: tests

* refactor: Replace getBalanceConfig call with balance from request locals

* refactor: Update balance handling to use environment variables for configuration

* refactor: Replace getBalanceConfig calls with balance from request locals

* refactor: Simplify balance configuration logic in getBalanceConfig

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2025-03-21 17:48:11 -04:00
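A rough picture of the auto-refill configuration and check that #6452 describes; the field names follow the commit titles but should be read as assumptions rather than the exact schema:

```typescript
// Illustrative balance configuration with automatic refill (field names assumed).
interface BalanceConfig {
  enabled: boolean;
  startBalance: number;
  autoRefillEnabled: boolean;
  refillIntervalValue: number;
  refillIntervalUnit: 'seconds' | 'minutes' | 'hours' | 'days';
  refillAmount: number;
}

const unitMs: Record<BalanceConfig['refillIntervalUnit'], number> = {
  seconds: 1_000,
  minutes: 60_000,
  hours: 3_600_000,
  days: 86_400_000,
};

// Decide whether a user is due for a refill, given the last refill timestamp.
function shouldRefill(config: BalanceConfig, lastRefill: Date, now = new Date()): boolean {
  if (!config.enabled || !config.autoRefillEnabled) return false;
  const interval = config.refillIntervalValue * unitMs[config.refillIntervalUnit];
  return now.getTime() - lastRefill.getTime() >= interval;
}

// Example: refill 10,000 token credits every 24 hours.
const config: BalanceConfig = {
  enabled: true,
  startBalance: 20_000,
  autoRefillEnabled: true,
  refillIntervalValue: 24,
  refillIntervalUnit: 'hours',
  refillAmount: 10_000,
};
console.log(shouldRefill(config, new Date(Date.now() - 25 * 3_600_000))); // true
```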
Danny Avila
be280004cf
🔧 refactor: Improve Params Handling, Remove Legacy Items, & Update Configs (#6074)
* chore: include all assets for service worker, remove unused tsconfig.node.json, eslint ignore vite config

* chore: exclude image files from service worker caching

* refactor: simplify googleSchema transformation and error handling

* fix: max output tokens cap for 3.7 models

* fix: skip index fixing in CI, development, and test environments

* ci: add maxOutputTokens handling tests for Claude models

* refactor: drop top_k and top_p parameters for claude-3.7 in AnthropicClient and add tests for new behavior

* refactor: conditionally include top_k and top_p parameters for non-claude-3.7 models

* ci: add unit tests for getLLMConfig function with various model options

* chore: remove all OPENROUTER_API_KEY legacy logic

* refactor: optimize stream chunk handling

* feat: reset model parameters button

* refactor: remove unused examples field from convoSchema and presetSchema

* chore: update librechat-data-provider version to 0.7.6993

* refactor: move excludedKeys set to data-provider for better reusability

* feat: enhance saveMessageToDatabase to handle unset fields and fetched conversation state

* feat: add 'iconURL' and 'greeting' to excludedKeys in data provider config

* fix: add optional chaining to user ID retrieval in getConvo call
2025-02-26 15:02:03 -05:00
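A hedged sketch of the parameter handling #6074 describes for Claude models, i.e. capping max output tokens and dropping top_k/top_p for 3.7 models. The cap value and the model-matching regex are placeholders, not the repository's actual logic:

```typescript
// Sketch of conditional parameter handling for Claude models (values/logic assumed).
interface AnthropicOptions {
  model: string;
  maxOutputTokens?: number;
  topK?: number;
  topP?: number;
  temperature?: number;
}

const ASSUMED_MAX_OUTPUT = 8192; // placeholder cap, not necessarily the real limit

function getLLMConfig(opts: AnthropicOptions): Record<string, unknown> {
  const isClaude37 = /claude-3[.-]7/.test(opts.model);
  const config: Record<string, unknown> = {
    model: opts.model,
    temperature: opts.temperature,
    max_tokens: Math.min(opts.maxOutputTokens ?? ASSUMED_MAX_OUTPUT, ASSUMED_MAX_OUTPUT),
  };
  // top_k / top_p are only forwarded for non-3.7 models.
  if (!isClaude37) {
    if (opts.topK !== undefined) config.top_k = opts.topK;
    if (opts.topP !== undefined) config.top_p = opts.topP;
  }
  return config;
}

console.log(getLLMConfig({ model: 'claude-3-7-sonnet', topK: 40, topP: 0.9 }));
```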
Danny Avila
c26b54c74d
🔄 refactor: Consolidate Tokenizer; Fix Jest Open Handles (#5175)
* refactor: consolidate tokenizer to singleton

* fix: remove legacy tokenizer code, add Tokenizer singleton tests

* ci: fix jest open handles
2025-01-03 18:11:14 -05:00
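The "consolidate tokenizer to singleton" idea in #5175 can be illustrated with a small cache of tiktoken encoders; the class and method names here are assumed for the sketch:

```typescript
import { get_encoding, Tiktoken, TiktokenEncoding } from 'tiktoken';

// A singleton wrapper that caches tiktoken encoders (illustrative names).
class Tokenizer {
  private static instance: Tokenizer;
  private encoders = new Map<TiktokenEncoding, Tiktoken>();

  static getInstance(): Tokenizer {
    if (!Tokenizer.instance) {
      Tokenizer.instance = new Tokenizer();
    }
    return Tokenizer.instance;
  }

  getTokenCount(text: string, encoding: TiktokenEncoding = 'cl100k_base'): number {
    let encoder = this.encoders.get(encoding);
    if (!encoder) {
      encoder = get_encoding(encoding);
      this.encoders.set(encoding, encoder);
    }
    return encoder.encode(text).length;
  }

  // Free cached encoders (tiktoken uses WASM memory that is not garbage collected).
  reset(): void {
    for (const encoder of this.encoders.values()) {
      encoder.free();
    }
    this.encoders.clear();
  }
}

console.log(Tokenizer.getInstance().getTokenCount('hello world'));
```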
Danny Avila
6ef05dd2e6
🔧 fix: Add modelLabel to OpenAIClient and PluginsClient options (#4995) 2024-12-14 15:31:50 -05:00
Danny Avila
1a815f5e19
🎉 feat: Code Interpreter API and Agents Release (#4860)
* feat: Code Interpreter API & File Search Agent Uploads

chore: add back code files

wip: first pass, abstract key dialog

refactor: influence checkbox on key changes

refactor: update localization keys for 'execute code' to 'run code'

wip: run code button

refactor: add throwError parameter to loadAuthValues and getUserPluginAuthValue functions

feat: first pass, API tool calling

fix: handle missing toolId in callTool function and return 404 for non-existent tools

feat: show code outputs

fix: improve error handling in callTool function and log errors

fix: handle potential null value for filepath in attachment destructuring

fix: normalize language before rendering and prevent null return

fix: add loading indicator in RunCode component while executing code

feat: add support for conditional code execution in Markdown components

feat: attachments

refactor: remove bash

fix: pass abort signal to graph/run

refactor: debounce and rate limit tool call

refactor: increase debounce delay for execute function

feat: set code output attachments

feat: image attachments

refactor: apply message context

refactor: pass `partIndex`

feat: toolCall schema/model/methods

feat: block indexing

feat: get tool calls

chore: imports

chore: typing

chore: condense type imports

feat: get tool calls

fix: block indexing

chore: typing

refactor: update tool calls mapping to support multiple results

fix: add unique key to nav link for rendering

wip: first pass, tool call results

refactor: update query cache from successful tool call mutation

style: improve result switcher styling

chore: note on using `.toObject()`

feat: add agent_id field to conversation schema

chore: typing

refactor: rename agentMap to agentsMap for consistency

feat: Agent Name as chat input placeholder

chore: bump agents

📦 chore: update @langchain dependencies to latest versions to match agents package

📦 chore: update @librechat/agents dependency to version 1.8.0

fix: Aborting agent stream removes sender; fix(bedrock): completion removes preset name label

refactor: remove direct file parameter to use req.file, add `processAgentFileUpload` for image uploads

feat: upload menu

feat: prime message_file resources

feat: implement conversation access validation in chat route

refactor: remove file parameter from processFileUpload and use req.file instead

feat: add savedMessageIds set to track saved message IDs in BaseClient, to prevent unnecessary double-write to db

feat: prevent duplicate message saves by checking savedMessageIds in AgentController

refactor: skip legacy RAG API handling for agents

feat: add files field to convoSchema

refactor: update request type annotations from Express.Request to ServerRequest in file processing functions

feat: track conversation files

fix: resendFiles, addPreviousAttachments handling

feat: add ID validation for session_id and file_id in download route

feat: entity_id for code file uploads/downloads

fix: code file edge cases

feat: delete related tool calls

feat: add stream rate handling for LLM configuration

feat: enhance system content with attached file information

fix: improve error logging in resource priming function

* WIP: PoC, sequential agents

WIP: PoC Sequential Agents, first pass content data + bump agents package

fix: package-lock

WIP: PoC, o1 support, refactor bufferString

feat: convertJsonSchemaToZod

fix: form issues and schema defining erroneous model

fix: max length issue on agent form instructions, limit conversation messages to sequential agents

feat: add abort signal support to createRun function and AgentClient

feat: PoC, hide prior sequential agent steps

fix: update parameter naming from config to metadata in event handlers for clarity, add model to usage data

refactor: use only last contentData, track model for usage data

chore: bump agents package

fix: content parts issue

refactor: filter contentParts to include tool calls and relevant indices

feat: show function calls

refactor: filter context messages to exclude tool calls when no tools are available to the agent

fix: ensure tool call content is not undefined in formatMessages

feat: add agent_id field to conversationPreset schema

feat: hide sequential agents

feat: increase upload toast duration to 10 seconds

* refactor: tool context handling & update Code API Key Dialog

feat: toolContextMap

chore: skipSpecs -> useSpecs

ci: fix handleTools tests

feat: API Key Dialog

* feat: Agent Permissions Admin Controls

feat: replace label with button for prompt permission toggle

feat: update agent permissions

feat: enable experimental agents and streamline capability configuration

feat: implement access control for agents and enhance endpoint menu items

feat: add welcome message for agent selection in localization

feat: add agents permission to access control and update version to 0.7.57

* fix: update types in useAssistantListMap and useMentions hooks for better null handling

* feat: mention agents

* fix: agent tool resource race conditions when deleting agent tool resource files

* feat: add error handling for code execution with user feedback

* refactor: rename AdminControls to AdminSettings for clarity

* style: add gap to button in AdminSettings for improved layout

* refactor: separate agent query hooks and check access to enable fetching

* fix: remove unused provider from agent initialization options, creates issue with custom endpoints

* refactor: remove redundant/deprecated modelOptions from AgentClient processes

* chore: update @librechat/agents to version 1.8.5 in package.json and package-lock.json

* fix: minor styling issues + agent panel uniformity

* fix: agent edge cases when set endpoint is no longer defined

* refactor: remove unused cleanup function call from AppService

* fix: update link in ApiKeyDialog to point to pricing page

* fix: improve type handling and layout calculations in SidePanel component

* fix: add missing localization string for agent selection in SidePanel

* chore: form styling and localizations for upload filesearch/code interpreter

* fix: model selection placeholder logic in AgentConfig component

* style: agent capabilities

* fix: add localization for provider selection and improve dropdown styling in ModelPanel

* refactor: use gpt-4o-mini > gpt-3.5-turbo

* fix: agents configuration for loadDefaultInterface and update related tests

* feat: DALLE Agents support
2024-12-04 15:48:13 -05:00
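One detail from #4860 worth illustrating is the savedMessageIds set used to avoid double-writing messages to the database. A minimal sketch, with an assumed save-function signature:

```typescript
// Sketch of the "savedMessageIds" idea from the commit log: remember which message
// IDs were already persisted so the controller does not write them a second time.
type SaveFn = (message: { messageId: string; text: string }) => Promise<void>;

class MessageSaver {
  private savedMessageIds = new Set<string>();

  constructor(private saveMessage: SaveFn) {}

  async saveOnce(message: { messageId: string; text: string }): Promise<boolean> {
    if (this.savedMessageIds.has(message.messageId)) {
      return false; // already written elsewhere; skip the duplicate write
    }
    await this.saveMessage(message);
    this.savedMessageIds.add(message.messageId);
    return true;
  }
}

// Example: the second call with the same ID becomes a no-op.
void (async () => {
  const saver = new MessageSaver(async (m) => console.log('persisted', m.messageId));
  await saver.saveOnce({ messageId: 'abc', text: 'hi' });
  await saver.saveOnce({ messageId: 'abc', text: 'hi' }); // skipped
})();
```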
Danny Avila
95201908e9
📦 fix: npm warnings; chore: bump deprecated packages (#4707)
* chore: bump langchain deps to address vulnerability warnings

* chore: bump community package and install textsplitters package

* fix: update expected result in tokenSplit tests for accuracy

* chore: remove CodeSherpa tools

* chore: remove E2B tools and loadToolSuite

* chore: remove CodeBrew tool and update related references

* chore: remove HumanTool and ChatTool, update tool references

* chore: remove Zapier tool from manifest.json and update SerpAPI

* chore: remove basic tools

* chore: update import path for RecursiveCharacterTextSplitter

* chore: update import path for DynamicStructuredTool

* chore: remove extractionChain.js and update tool filtering logic

* chore: npm audit fix

* chore: bump google packages

* chore: update DALL-E tool to DALL-E-3 and adjust authentication logic

* ci: update message classes

* chore: elliptic npm audit fix

* chore: update CallbackManager import and remove deprecated tool handling logic

* chore: imports order

* chore: remove unused code

---------

Co-authored-by: Max Sanna <max@maxsanna.com>
2024-11-12 18:51:32 -05:00
Danny Avila
3428c3c647
feat: Known Endpoint, xAI (#4632)
* feat: Known Endpoint, xAI

* chore: update librechat-data-provider version to 0.7.53

* ci: name property removal

* feat: add XAI_API_KEY to example environment variables
2024-11-04 16:27:54 -05:00
Danny Avila
17e59349ff
📎 feat: Attachment Handling for v1/completions (#4205)
* refactor: add handling of attachments in v1/completions method

* ci: update OpenAIClient.test.js
2024-09-23 11:03:28 -04:00
Danny Avila
3ea2d908e0
🛠️ fix: getStreamUsage Method in OpenAIClient (#4133) 2024-09-19 18:07:49 -04:00
Danny Avila
1a452121fa
🤖 feat: OpenAI Assistants v2 (initial support) (#2781)
* 🤖 Assistants V2 Support: Part 1

- Separated Azure Assistants to its own endpoint
- File Search / Vector Store integration is incomplete, but can toggle and use storage from playground
- Code Interpreter resource files can be added but not deleted
- GPT-4o is supported
- Many improvements to the Assistants Endpoint overall

data-provider v2 changes

copy existing route as v1

chore: rename new endpoint to reduce comparison operations and add new azure filesource

api: add azureAssistants part 1

force use of version for assistants/assistantsAzure

chore: switch name back to azureAssistants

refactor type version: string | number

Ensure assistants endpoints have version set

fix: isArchived type issue in ConversationListParams

refactor: update assistants mutations/queries with endpoint/version definitions, update Assistants Map structure

chore: FilePreview component ExtendedFile type assertion

feat: isAssistantsEndpoint helper

chore: remove unused useGenerations

chore(buildTree): type issue

chore(Advanced): type issue (unused component, maybe in future)

first pass for multi-assistant endpoint rewrite

fix(listAssistants): pass params correctly

feat: list separate assistants by endpoint

fix(useTextarea): access assistantMap correctly

fix: assistant endpoint switching, resetting ID

fix: broken during rewrite, selecting assistant mention

fix: set/invalidate assistants endpoint query data correctly

feat: Fix issue with assistant ID not being reset correctly

getOpenAIClient helper function

feat: add toast for assistant deletion

fix: assistants delete right after create issue for azure

fix: assistant patching

refactor: actions to use getOpenAIClient

refactor: consolidate logic into helpers file

fix: issue where conversation data was not initially available

v1 chat support

refactor(spendTokens): only early return if completionTokens isNaN

fix(OpenAIClient): ensure spendTokens has all necessary params

refactor: route/controller logic

fix(assistants/initializeClient): use defaultHeaders field

fix: sanitize default operation id

chore: bump openai package

first pass v2 action service

feat: retroactive domain parsing for actions added via v1

feat: delete db records of actions/assistants on openai assistant deletion

chore: remove vision tools from v2 assistants

feat: v2 upload and delete assistant vision images

WIP first pass, thread attachments

fix: show assistant vision files (save local/firebase copy)

v2 image continue

fix: annotations

fix: refine annotations

show analyze as error if it is no longer submitting before progress reaches 1, and show file_search as a retrieval tool

fix: abort run, undefined endpoint issue

refactor: consolidate capabilities logic and anticipate versioning

frontend version 2 changes

fix: query selection and filter

add endpoint to unknown filepath

add file ids to resource, deleting in progress

enable/disable file search

remove version log

* 🤖 Assistants V2 Support: Part 2

🎹 fix: Autocompletion Chrome Bug on Action API Key Input

chore: remove `useOriginNavigate`

chore: set correct OpenAI Storage Source

fix: azure file deletions, instantiate clients by source for deletion

update code interpret files info

feat: deleteResourceFileId

chore: increase poll interval as azure easily rate limits

fix: openai file deletions, TODO: evaluate rejected deletion settled promises to determine which to delete from db records

file source icons

update table file filters

chore: file search info and versioning

fix: retrieval update with necessary tool_resources if specified

fix(useMentions): add optional chaining in case listMap value is undefined

fix: force assistant avatar roundedness

fix: azure assistants, check correct flag

chore: bump data-provider

* fix: merge conflict

* ci: fix backend tests due to new updates

* chore: update .env.example

* meilisearch improvements

* localization updates

* chore: update comparisons

* feat: add additional metadata: endpoint, author ID

* chore: azureAssistants ENDPOINTS exclusion warning
2024-05-19 12:56:55 -04:00
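The endpoint/version bookkeeping that #2781 introduces (separate assistants and azureAssistants endpoints, each with a required version) might look roughly like this; the default version values and helper names are assumptions:

```typescript
// Sketch of endpoint/version helpers suggested by the commit log (names assumed).
type AssistantsEndpoint = 'assistants' | 'azureAssistants';

const defaultAssistantsVersion: Record<AssistantsEndpoint, string | number> = {
  assistants: 2,
  azureAssistants: 1,
};

function isAssistantsEndpoint(endpoint?: string): endpoint is AssistantsEndpoint {
  return endpoint === 'assistants' || endpoint === 'azureAssistants';
}

function getAssistantsVersion(endpoint: string, override?: string | number): string | number {
  if (!isAssistantsEndpoint(endpoint)) {
    throw new Error(`Not an assistants endpoint: ${endpoint}`);
  }
  return override ?? defaultAssistantsVersion[endpoint];
}

console.log(getAssistantsVersion('assistants'));      // 2
console.log(getAssistantsVersion('azureAssistants')); // 1
```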
Danny Avila
c94278be85
🦙 feat: Ollama Vision Support (#2643)
* refactor: checkVisionRequest, search availableModels for valid vision model instead of using default

* feat: install ollama-js, add typedefs

* feat: Ollama Vision Support

* ci: fix test
2024-05-08 20:24:40 -04:00
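The checkVisionRequest refactor in #2643 searches the available models for a vision-capable one instead of falling back to a hard-coded default. A simplified sketch, with an assumed list of vision-model name fragments:

```typescript
// Rough sketch: when a request carries image attachments, prefer a model from the
// available list that is vision-capable (the fragment list is an assumption).
const visionModelFragments = ['llava', 'bakllava', 'gpt-4o', 'gpt-4-vision'];

function isVisionModel(model: string): boolean {
  return visionModelFragments.some((fragment) => model.toLowerCase().includes(fragment));
}

function checkVisionRequest(
  currentModel: string,
  availableModels: string[],
  hasImageAttachments: boolean,
): string {
  if (!hasImageAttachments || isVisionModel(currentModel)) {
    return currentModel;
  }
  // Fall back to the first available vision-capable model, if any.
  return availableModels.find(isVisionModel) ?? currentModel;
}

console.log(checkVisionRequest('llama3', ['llama3', 'llava:13b'], true)); // "llava:13b"
```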
Danny Avila
d20970f5c5
🚀 Feat: Streamline File Strategies & GPT-4-Vision Settings (#1535)
* chore: fix `endpoint` typescript issues and typo in console info message

* feat(api): files GET endpoint and save only file_id references to messages

* refactor(client): `useGetFiles` query hook, update file types, optimistic update of filesQuery on file upload

* refactor(buildTree): update to use params object and accept fileMap

* feat: map files to messages; refactor(ChatView): messages only available after files are fetched

* fix: fetch files only when authenticated

* feat(api): AppService
- rename app.locals.configs to app.locals.paths
- load custom config use fileStrategy from yaml config in app.locals

* refactor: separate Firebase and Local strategies, call based on config

* refactor: modularize file strategies and employ with use of DALL-E

* refactor(librechat.yaml): add fileStrategy field

* feat: add source to MongoFile schema, as well as BatchFile, and ExtendedFile types

* feat: employ file strategies for upload/delete files

* refactor(deleteFirebaseFile): add user id validation for firebase file deletion

* chore(deleteFirebaseFile): update jsdocs

* feat: employ strategies for vision requests

* fix(client): handle messages with deleted files

* fix(client): ensure `filesToDelete` always saves/sends `file.source`

* feat(openAI): configurable `resendImages` and `imageDetail`

* refactor(getTokenCountForMessage): recursive process only when array of Objects and only their values (not keys) aside from `image_url` types

* feat(OpenAIClient): calculateImageTokenCost

* chore: remove comment

* refactor(uploadAvatar): employ fileStrategy for avatars, from social logins or user upload

* docs: update docs on how to configure fileStrategy

* fix(ci): mock winston and winston related modules, update DALLE3.spec.js with changes made

* refactor(redis): change terminal message to reflect current development state

* fix(DALL-E-2): pass fileStrategy to dall-e
2024-01-11 11:37:54 -05:00
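The file-strategy refactor in #1535 routes uploads and deletions through an implementation chosen by the configured fileStrategy (local or Firebase). A minimal sketch of that strategy selection, with stubbed bodies:

```typescript
// Sketch of the file-strategy idea: pick an upload/delete implementation based on
// the configured fileStrategy. Function bodies are stubs, not the real handlers.
interface FileStrategy {
  saveFile(buffer: Buffer, filename: string): Promise<string>; // returns filepath/URL
  deleteFile(filepath: string, userId: string): Promise<void>;
}

const localStrategy: FileStrategy = {
  async saveFile(_buffer, filename) {
    // e.g. write the buffer under ./uploads/<filename> (omitted here)
    return `/uploads/${filename}`;
  },
  async deleteFile(_filepath) {
    // e.g. fs.promises.unlink(...) (omitted here)
  },
};

const firebaseStrategy: FileStrategy = {
  async saveFile(_buffer, filename) {
    // e.g. upload to a Firebase Storage bucket (omitted here)
    return `https://firebasestorage.example/${filename}`;
  },
  async deleteFile(_filepath, userId) {
    // the commit log notes user-id validation before Firebase deletions
    if (!userId) throw new Error('user id required for firebase deletion');
  },
};

function getStrategy(fileStrategy: 'local' | 'firebase'): FileStrategy {
  return fileStrategy === 'firebase' ? firebaseStrategy : localStrategy;
}

// Usage: getStrategy(config.fileStrategy).saveFile(buffer, filename)
```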
Danny Avila
5b28362282
Release v0.6.5 (#1391)
* Release v0.6.5

* fix(ci): use dynamic currentDateString
2023-12-19 01:09:42 -05:00
Danny Avila
8d563d61f1
feat: Azure Vision Support & Docs Update (#1389)
* feat(AzureOpenAI): Vision Support

* chore(ci/OpenAIClient.test): update test to reflect Azure now uses chatCompletion method as opposed to getCompletion, while still testing the latter method

* docs: update documentation mainly revolving around Azure setup, but also reformatting the 'Tokens and API' section completely

* docs: add images and links to ai_setup.md

* docs: ai setup reference
2023-12-18 18:43:50 -05:00
Danny Avila
0958db3825
fix: Enhance Test Coverage and Fix Compatibility Issues 👷‍♂️ (#1363)
* refactor: only remove conversation states from localStorage on login/logout but not on refresh

* chore: add debugging log for azure completion url

* chore: add api-key to redact regex

* fix: do not show endpoint selector if endpoint is falsy

* chore: remove logger from genAzureChatCompletion

* feat(ci): mock fetchEventSource

* refactor(ci): mock all model methods in BaseClient.test, as well as mock the implementation for getCompletion in FakeClient

* fix(OpenAIClient): consider chatCompletion if model name includes `gpt` as opposed to `gpt-`

* fix(ChatGPTClient/azureOpenAI): Remove 'model' option for Azure compatibility (cannot be sent in payload body)

* feat(ci): write new test suite that significantly increase test coverage for OpenAIClient and BaseClient by covering most of the real implementation of the `sendMessage` method
- test for the azure edge case where model option is appended to modelOptions, ensuring removal before sent to the azure endpoint
- test for expected azure url being passed to SSE POST request
- test for AZURE_OPENAI_DEFAULT_MODEL being set, but is not included in the URL deployment name as expected
- test getCompletion method to have correct payload
fix(ci/OpenAIClient.test.js): correctly mock hanging/async methods

* refactor(addTitle): allow azure to title as it aborts signal on completion
2023-12-15 13:27:13 -05:00
Danny Avila
c64970525b
feat: allow any reverse proxy URLs, add proxy support to model fetching (#1192)
* feat: allow any reverse proxy URLs

* feat: add proxy support to model fetching
2023-11-16 18:56:09 -05:00
Danny Avila
d5259e1525
feat(OpenAIClient): AZURE_USE_MODEL_AS_DEPLOYMENT_NAME, AZURE_OPENAI_DEFAULT_MODEL (#1165)
* feat(OpenAIClient): AZURE_USE_MODEL_AS_DEPLOYMENT_NAME, AZURE_OPENAI_DEFAULT_MODEL

* ci: fix initializeClient test
2023-11-10 09:58:17 -05:00
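A sketch of what AZURE_USE_MODEL_AS_DEPLOYMENT_NAME from #1165 implies: derive the Azure deployment name from the requested model when the flag is set. The URL shape follows Azure OpenAI's documented pattern; the helper name and the dot-stripping rule are assumptions:

```typescript
// Sketch of deriving an Azure deployment name from the model (helper name assumed).
interface AzureOptions {
  instanceName: string;
  deploymentName?: string;
  apiVersion: string;
}

function genAzureChatCompletionUrl(azure: AzureOptions, model?: string): string {
  const useModelAsDeployment = process.env.AZURE_USE_MODEL_AS_DEPLOYMENT_NAME === 'true';
  // Deployment names typically omit dots, e.g. "gpt-3.5-turbo" -> "gpt-35-turbo".
  const deployment =
    useModelAsDeployment && model ? model.replace(/\./g, '') : azure.deploymentName;
  if (!deployment) {
    throw new Error('No Azure deployment name available');
  }
  return (
    `https://${azure.instanceName}.openai.azure.com/openai/deployments/` +
    `${deployment}/chat/completions?api-version=${azure.apiVersion}`
  );
}

console.log(
  genAzureChatCompletionUrl(
    { instanceName: 'my-instance', apiVersion: '2023-07-01-preview' },
    'gpt-3.5-turbo',
  ),
);
```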
Danny Avila
5ab9802aa9
fix(OpenAIClient): use official SDK to identify client and avoid false Rate Limit Error (#1161)
* chore: add eslint ignore unused var pattern

* feat: add extractBaseURL helper for valid OpenAI reverse proxies, with tests

* feat(OpenAIClient): add new chatCompletion using official OpenAI node SDK

* fix(ci): revert change to FORCE_PROMPT condition
2023-11-09 14:04:36 -05:00
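The extractBaseURL helper added in #1161 can be pictured as trimming a reverse-proxy URL down to its /v1 base so the official SDK can append paths itself. A simplified sketch with assumed edge-case handling:

```typescript
// Rough sketch of an extractBaseURL-style helper (edge cases simplified).
function extractBaseURL(url: string): string {
  const trimmed = url.replace(/\/+$/, ''); // drop trailing slashes
  const v1Index = trimmed.indexOf('/v1');
  if (v1Index !== -1) {
    return trimmed.slice(0, v1Index + '/v1'.length);
  }
  return trimmed;
}

console.log(extractBaseURL('https://proxy.example.com/v1/chat/completions'));
// -> https://proxy.example.com/v1
console.log(extractBaseURL('https://proxy.example.com/openai/'));
// -> https://proxy.example.com/openai
```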
Danny Avila
abbc57a49a
fix(formatMessages): Conform Name Property to OpenAI Expected Regex (#1076)
* fix(formatMessages): conform name property to OpenAI expected regex

* fix(ci): prior test was expecting non-sanitized name input
2023-10-19 10:02:20 -04:00
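For #1076, conforming the name property means replacing characters OpenAI rejects for that field. A small sketch, treating the allowed character set (letters, digits, underscores, hyphens, at most 64 characters) as an assumption about the constraint:

```typescript
// Sketch of sanitizing a message "name" to a conservative character set (assumed).
function sanitizeName(name: string): string {
  return name.replace(/[^a-zA-Z0-9_-]/g, '_').slice(0, 64);
}

console.log(sanitizeName('Danny Avila'));   // "Danny_Avila"
console.log(sanitizeName('user@example!')); // "user_example_"
```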
Danny Avila
2dd545eaa4
fix(OpenAIClient/PluginsClient): allow non-v1 reverse proxy, handle "v1/completions" reverse proxy (#1029)
* fix(OpenAIClient): handle completions request in reverse proxy, also force prompt by env var

* fix(reverseProxyUrl): allow url without /v1/ but add server warning as it will not be compatible with plugins

* fix(ModelService): handle reverse proxy without v1

* refactor: make changes cleaner

* ci(OpenAIClient): add tests for OPENROUTER_API_KEY, FORCE_PROMPT, and reverseProxyUrl handling in setOptions
2023-10-08 16:57:25 -04:00
Danny Avila
317a1bd8da
feat: ConversationSummaryBufferMemory (#973)
* refactor: pass model in message edit payload, use encoder in standalone util function

* feat: add summaryBuffer helper

* refactor(api/messages): use new countTokens helper and add auth middleware at top

* wip: ConversationSummaryBufferMemory

* refactor: move pre-generation helpers to prompts dir

* chore: remove console log

* chore: remove test as payload will no longer carry tokenCount

* chore: update getMessagesWithinTokenLimit JSDoc

* refactor: optimize getMessagesForConversation and also break on summary, feat(ci): getMessagesForConversation tests

* refactor(getMessagesForConvo): count '00000000-0000-0000-0000-000000000000' as root message

* chore: add newer model to token map

* fix: condition was pointing to a prop of the array instead of the message prop

* refactor(BaseClient): use object for refineMessages param, rename 'summary' to 'summaryMessage', add previous_summary
refactor(getMessagesWithinTokenLimit): replace text and tokenCount if should summarize, summary, and summaryTokenCount are present
fix/refactor(handleContextStrategy): use the right comparison length for context diff, and replace payload first message when a summary is present

* chore: log previous_summary if debugging

* refactor(formatMessage): assume if role is defined that it's a valid value

* refactor(getMessagesWithinTokenLimit): remove summary logic
refactor(handleContextStrategy): add usePrevSummary logic in case only summary was pruned
refactor(loadHistory): initial message query will return all ordered messages but keep track of the latest summary
refactor(getMessagesForConversation): use object for single param, edit jsdoc, edit all files using the method
refactor(ChatGPTClient): order messages before buildPrompt is called, TODO: add convoSumBuffMemory logic

* fix: undefined handling and summarizing only when shouldRefineContext is true

* chore(BaseClient): fix test results omitting system role for summaries and test edge case

* chore: export summaryBuffer from index file

* refactor(OpenAIClient/BaseClient): move refineMessages to subclass, implement LLM initialization for summaryBuffer

* feat: add OPENAI_SUMMARIZE to enable summarizing, refactor: rename client prop 'shouldRefineContext' to 'shouldSummarize', change contextStrategy value to 'summarize' from 'refine'

* refactor: rename refineMessages method to summarizeMessages for clarity

* chore: clarify summary future intent in .env.example

* refactor(initializeLLM): handle case for either 'model' or 'modelName' being passed

* feat(gptPlugins): enable summarization for plugins

* refactor(gptPlugins): utilize new initializeLLM method and formatting methods for messages, use payload array for currentMessages and assign pastMessages sooner

* refactor(agents): use ConversationSummaryBufferMemory for both agent types

* refactor(formatMessage): optimize original method for langchain, add helper function for langchain messages, add JSDocs and tests

* refactor(summaryBuffer): add helper to createSummaryBufferMemory, and use new formatting helpers

* fix: forgot to spread formatMessages also took opportunity to pluralize filename

* refactor: pass memory to tools, namely openapi specs. not used and may never be used by new method but added for testing

* ci(formatMessages): add more exhaustive checks for langchain messages

* feat: add debug env var for OpenAI

* chore: delete unnecessary comments

* chore: add extra note about summary feature

* fix: remove tokenCount from payload instructions

* fix: test fail

* fix: only pass instructions to payload when defined or not empty object

* refactor: fromPromptMessages is deprecated, use renamed method fromMessages

* refactor: use 'includes' instead of 'startsWith' for extended OpenRouter compatibility

* fix(PluginsClient.buildPromptBody): handle undefined message strings

* chore: log langchain titling error

* feat: getModelMaxTokens helper

* feat: tokenSplit helper

* feat: summary prompts updated

* fix: optimize _CUT_OFF_SUMMARIZER prompt

* refactor(summaryBuffer): use custom summary prompt, allow prompt to be passed, pass humanPrefix and aiPrefix to memory, along with any future variables, rename messagesToRefine to context

* fix(summaryBuffer): handle edge case where messagesToRefine exceeds summary context,
refactor(BaseClient): allow custom maxContextTokens to be passed to getMessagesWithinTokenLimit, add defined check before unshifting summaryMessage, update shouldSummarize based on this
refactor(OpenAIClient): use getModelMaxTokens, use cut-off message method for summary if no messages were left after pruning

* fix(handleContextStrategy): handle case where incoming prompt is bigger than model context

* chore: rename refinedContent to splitText

* chore: remove unnecessary debug log
2023-09-26 21:02:28 -04:00
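The pruning logic at the heart of #973 (getMessagesWithinTokenLimit feeding messages into the summary buffer) can be sketched as a newest-first walk over token-counted messages; the shapes and return fields below are assumptions:

```typescript
// Simplified sketch: keep as many recent messages as fit the token budget and hand
// the overflow back as candidates for summarization.
interface CountedMessage {
  text: string;
  tokenCount: number;
}

function getMessagesWithinTokenLimit(
  messages: CountedMessage[], // ordered oldest -> newest
  maxContextTokens: number,
): { context: CountedMessage[]; messagesToSummarize: CountedMessage[]; remaining: number } {
  const context: CountedMessage[] = [];
  let remaining = maxContextTokens;
  let index = messages.length - 1;

  while (index >= 0 && messages[index].tokenCount <= remaining) {
    remaining -= messages[index].tokenCount;
    context.unshift(messages[index]);
    index -= 1;
  }

  // Everything older than the cutoff could not fit and may be summarized instead.
  return { context, messagesToSummarize: messages.slice(0, index + 1), remaining };
}

const result = getMessagesWithinTokenLimit(
  [
    { text: 'old', tokenCount: 600 },
    { text: 'mid', tokenCount: 300 },
    { text: 'new', tokenCount: 200 },
  ],
  512,
);
console.log(result.context.map((m) => m.text));             // ["mid", "new"]
console.log(result.messagesToSummarize.map((m) => m.text)); // ["old"]
```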
Danny Avila
9491b753c3
fix: Match OpenAI Token Counting Strategy 🪙 (#945)
* wip token fix

* fix: complete token count refactor to match OpenAI example

* chore: add back sendPayload method (accidentally deleted)

* chore: revise JSDoc for getTokenCountForMessage
2023-09-14 19:40:21 -04:00
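The token-counting strategy referenced in #945 follows OpenAI's published cookbook example: a fixed per-message overhead, an extra token for the name field, and a few tokens priming each reply. A sketch using tiktoken (the constants are the cookbook's, not necessarily the repository's exact values):

```typescript
import { encoding_for_model, TiktokenModel } from 'tiktoken';

interface ChatMessage {
  role: string;
  content: string;
  name?: string;
}

function getTokenCountForMessages(messages: ChatMessage[], model: TiktokenModel = 'gpt-4'): number {
  const tokensPerMessage = 3; // per-message overhead from the cookbook example
  const tokensPerName = 1;    // extra token when a "name" field is present
  const encoder = encoding_for_model(model);

  let total = 0;
  for (const message of messages) {
    total += tokensPerMessage;
    total += encoder.encode(message.content).length;
    total += encoder.encode(message.role).length;
    if (message.name) {
      total += encoder.encode(message.name).length + tokensPerName;
    }
  }
  total += 3; // every reply is primed with <|start|>assistant<|message|>
  encoder.free();
  return total;
}

console.log(getTokenCountForMessages([{ role: 'user', content: 'Hello there!' }]));
```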
Danny Avila
afd43afb60
feat(GPT/Anthropic): Continue Regenerating & Generation Buttons (#808)
* feat(useMessageHandler.js/ts): Refactor and add features to handle user messages, support multiple endpoints/models, generate placeholder responses, regeneration, and stopGeneration function

fix(conversation.ts, buildTree.ts): Import TMessage type, handle null parentMessageId

feat(schemas.ts): Update and add schemas for various AI services, add default values, optional fields, and endpoint-to-schema mapping, create parseConvo function

chore(useMessageHandler.js, schemas.ts): Remove unused imports, variables, and chatGPT enum

* wip: add generation buttons

* refactor(cleanupPreset.ts): simplify cleanupPreset function
refactor(getDefaultConversation.js): remove unused code and simplify getDefaultConversation function

feat(utils): add getDefaultConversation function

This commit adds a new utility function called `getDefaultConversation` to the `client/src/utils/getDefaultConversation.ts` file. This function is responsible for generating a default conversation object based on the provided parameters.

The `getDefaultConversation` function takes in an object with the following properties:
- `conversation`: The conversation object to be used as a base.
- `endpointsConfig`: The configuration object containing information about the available endpoints.
- `preset`: An optional preset object that can be used to override the default behavior.

The function first tries to determine the target endpoint based on the preset object. If a valid endpoint is found, it is used as the target endpoint. If not, the function tries to retrieve the last conversation setup from the local storage and uses its endpoint if it is valid. If neither the preset nor the local storage contains a valid endpoint, the function falls back to a default endpoint.

Once the target endpoint is determined,

* fix(utils): remove console.error statement in buildDefaultConversation function
fix(schemas): add default values for catch blocks in openAISchema, googleSchema, bingAISchema, anthropicSchema, chatGPTBrowserSchema, and gptPluginsSchema

* fix: endpoint not changing on change of preset from other endpoint, wip: refactor

* refactor: preset items to TSX

* refactor: convert resetConvo to TS

* refactor(getDefaultConversation.ts): move defaultEndpoints array to the top of the file for better readability
refactor(getDefaultConversation.ts): extract getDefaultEndpoint function for better code organization and reusability

* feat(svg): add ContinueIcon component
feat(svg): add RegenerateIcon component
feat(svg): add ContinueIcon and RegenerateIcon components to index.ts

* feat(Button.tsx): add onClick and className props to Button component
feat(GenerationButtons.tsx): add logic to display Regenerate or StopGenerating button based on isSubmitting and messages
feat(Regenerate.tsx): create Regenerate component with RegenerateIcon and handleRegenerate function
feat(StopGenerating.tsx): create StopGenerating component with StopGeneratingIcon and handleStopGenerating function

* fix(TextChat.jsx): reorder imports and variables for better readability
fix(TextChat.jsx): fix typo in condition for isNotAppendable variable
fix(TextChat.jsx): remove unused handleStopGenerating function
fix(ContinueIcon.tsx): remove unnecessary closing tags for polygon elements
fix(useMessageHandler.ts): add missing type annotations for handleStopGenerating and handleRegenerate functions
fix(useMessageHandler.ts): remove unused variables in return statement

* fix(getDefaultConversation.ts): refactor code to use getLocalStorageItems function
feat(getLocalStorageItems.ts): add utility function to retrieve items from local storage

* fix(OpenAIClient.js): add support for streaming result in sendCompletion method
feat(OpenAIClient.js): add finish_reason metadata to opts in sendCompletion method
feat(Message.js): add finish_reason field to Message model
feat(messageSchema.js): add finish_reason field to messageSchema
feat(openAI.js): parse chatGptLabel and promptPrefix from req.body and pass rest of the modelOptions to endpointOption
feat(openAI.js): add addMetadata function to store metadata in ask function
feat(openAI.js): add metadata to response if available
feat(schemas.ts): add finish_reason field to tMessageSchema

* feat(types.ts): add TOnClick and TGenButtonProps types for button components
feat(Continue.tsx): create Continue component for generating button
feat(GenerationButtons.tsx): update GenerationButtons component to use Continue component
feat(Regenerate.tsx): create Regenerate component for regenerating button
feat(Stop.tsx): create Stop component for stop generating button

* feat(MessageHandler.jsx): add MessageHandler component to handle messages and conversations
fix(Root.jsx): fix import paths for Nav and MessageHandler components

* feat(useMessageHandler.ts): add support for generation parameter in ask function
feat(useMessageHandler.ts): add support for isEdited parameter in ask function
feat(useMessageHandler.ts): add support for continueGeneration function
fix(createPayload.ts): replace endpoint URL when isEdited parameter is true

* chore(client): set skipLibCheck to true in tsconfig.json

* fix(useMessageHandler.ts): remove unused clientId variable
fix(schemas.ts): make clientId field in tMessageSchema nullable and optional

* wip: edit route for continue generation

* refactor(api): move handlers to root of routes dir

* fix(useMessageHandler.ts): initialize currentMessages to an empty array if messages is null
fix(useMessageHandler.ts): update initialResponse text to use responseText variable
fix(useMessageHandler.ts): update setMessages logic for isRegenerate case
fix(MessageHandler.jsx): update setMessages logic for cancelHandler, createdHandler, and finalHandler

* fix(schemas.ts): make createdAt and updatedAt fields optional and set default values using new Date().toISOString()
fix(schemas.ts): change type annotation of TMessage from infer to input

* refactor(useMessageHandler.ts): rename AskProps type to TAskProps
refactor(useMessageHandler.ts): remove generation property from ask function arguments
refactor(useMessageHandler.ts): use nullish coalescing operator (??) instead of logical OR (||)
refactor(useMessageHandler.ts): pass the responseMessageId to message prop of submission

* fix(BaseClient.js): use nullish coalescing operator (??) instead of logical OR (||) for default values

* fix(BaseClient.js): fix responseMessageId assignment in handleStartMethods method
feat(BaseClient.js): add support for isEdited flag in sendMessage method
feat(BaseClient.js): add generation to responseMessage text in sendMessage method

* fix(openAI.js): remove unused imports and commented out code
feat(openAI.js): add support for generation parameter in request body
fix(openAI.js): remove console.log statement
fix(openAI.js): remove unused variables and parameters
fix(openAI.js): update response text in case of error
fix(openAI.js): handle error and abort message in case of error
fix(handlers.js): add generation parameter to createOnProgress function
fix(useMessageHandler.ts): update responseText variable to use generation parameter

* refactor(api/middleware): move inside server dir

* refactor: add endpoint specific, modular functions to build options and initialize clients, create server/utils, move middleware, separate utils into api general utils and server specific utils

* fix(abortMiddleware.js): import getConvo and getConvoTitle functions from models
feat(abortMiddleware.js): add abortAsk function to abortController to handle aborting of requests
fix(openAI.js): import buildOptions and initializeClient functions from endpoints/openAI
refactor(openAI.js): use getAbortData function to get data for abortAsk function

* refactor: move endpoint specific logic to an endpoints dir

* refactor(PluginService.js): fix import path for encrypt and decrypt functions in PluginService.js

* feat(openAI): add new endpoint for adding a title to a conversation

- Added a new file `addTitle.js` in the `api/server/routes/endpoints/openAI` directory.
- The `addTitle.js` file exports a function `addTitle` that takes in request parameters and performs the following actions:
  - If the `parentMessageId` is `'00000000-0000-0000-0000-000000000000'` and `newConvo` is true, it proceeds with the following steps:
    - Calls the `titleConvo` function from the `titleConvo` module, passing in the necessary parameters.
    - Calls the `saveConvo` function from the `saveConvo` module, passing in the user ID and conversation details.
- Updated the `index.js` file in the `api/server/routes/endpoints/openAI` directory to export the `addTitle` function.
- This change adds

* fix(abortMiddleware.js): remove console.log statement
refactor(gptPlugins.js): update imports and function parameters
feat(gptPlugins.js): add support for abortController and getAbortData
refactor(openAI.js): update imports and function parameters
feat(openAI.js): add support for abortController and getAbortData

fix(openAI.js): refactor code to use modularized functions and middleware
fix(buildOptions.js): refactor code to use destructuring and update variable names

* refactor(askChatGPTBrowser.js, bingAI.js, google.js): remove duplicate code for setting response headers
feat(askChatGPTBrowser.js, bingAI.js, google.js): add setHeaders middleware to set response headers

* feat(middleware): validateEndpoint, refactor buildOption to only be concerned of endpointOption

* fix(abortMiddleware.js): add 'finish_reason' property with value 'incomplete' to responseMessage object
fix(abortMessage.js): remove console.log statement for aborted message
fix(handlers.js): modify tokens assignment to handle empty generation string and trailing space

* fix(BaseClient.js): import addSpaceIfNeeded function from server/utils
fix(BaseClient.js): add space before generation in text property
fix(index.js): remove getCitations and citeText exports
feat(buildEndpointOption.js): add buildEndpointOption middleware
fix(index.js): import buildEndpointOption middleware
fix(anthropic.js): remove buildOptions function and use endpointOption from req.body
fix(gptPlugins.js): remove buildOptions function and use endpointOption from req.body
fix(openAI.js): remove buildOptions function and use endpointOption from req.body

feat(utils): add citations.js and handleText.js modules
fix(utils): fix import statements in index.js module

* refactor(gptPlugins.js): use getResponseSender function from librechat-data-provider

* feat(gptPlugins): complete 'continue generating'

* wip: anthropic continue regen

* feat(middleware): add validateRegistration middleware

A new middleware function called `validateRegistration` has been added to the list of exported middleware functions in `index.js`. This middleware is responsible for validating registration data before allowing the registration process to proceed.

* feat(Anthropic): complete continue regen

* chore: add librechat-data-provider to api/package.json

* fix(ci): backend-review will mock meilisearch, also installs data-provider as now needed

* chore(ci): remove unneeded SEARCH env var

* style(GenerationButtons): make text shorter for sake of space economy, even though this diverges from chat.openai.com

* style(GenerationButtons/ScrollToBottom): adjust visibility/position based on screen size

* chore(client): 'Editting' typo

* feat(GenerationButtons.tsx): add support for endpoint prop in GenerationButtons component
feat(OptionsBar.tsx): pass endpoint prop to GenerationButtons component
feat(useGenerations.ts): create useGenerations hook to handle generation logic
fix(schemas.ts): add searchResult field to tMessageSchema

* refactor(HoverButtons): convert to TSX and utilize new useGenerations hook

* fix(abortMiddleware): handle error with res headers set, or abortController not found, to ensure proper API error is sent to the client, chore(BaseClient): remove console log for onStart message meant for debugging

* refactor(api): remove librechat-data-provider dep for now as it complicates deployed docker build stage, re-use code in CJS, located in server/endpoints/schemas

* chore: remove console.logs from test files

* ci: add backend tests for AnthropicClient, focusing on new buildMessages logic

* refactor(FakeClient): use actual BaseClient sendMessage method for testing

* test(BaseClient.test.js): add test for loading chat history
test(BaseClient.test.js): add test for sendMessage logic with isEdited flag

* fix(buildEndpointOption.js): add support for azureOpenAI in buildFunction object
wip(endpoints.js): fetch Azure models from Azure OpenAI API if opts.azure is true

* fix(Button.tsx): add data-testid attribute to button component
fix(SelectDropDown.tsx): add data-testid attribute to Listbox.Button component
fix(messages.spec.ts): add waitForServerStream function to consolidate logic for awaiting the server response
feat(messages.spec.ts): add test for stopping and continuing message and improve browser/page context order and closing

* refactor(onProgress): speed up time to save initial message for editable routes

* chore: disable AI message editing (for now), was accidentally allowed

* refactor: ensure continue is only supported for latest message; style: improve styling in dark mode and across all hover buttons/icons, including making edit icon for AI invisible (for now)

* fix: add test id to generation buttons so they never resolve to 2+ items

* chore(package.json): add 'packages/' to the list of ignored directories
chore(data-provider/package.json): bump version to 0.1.5
2023-08-17 12:50:05 -04:00
Youngwook Kim
197307d514
fix(OpenAIClient): resolve null pointer exception in tokenizer management (#689) 2023-07-23 11:59:11 -04:00
Danny Avila
e5336039fc
ci(backend-review.yml): add linter step to the backend review workflow (#625)
* ci(backend-review.yml): add linter step to the backend review workflow

* chore(backend-review.yml): remove prettier from lint-action configuration

* chore: apply new linting workflow

* chore(lint-staged.config.js): reorder lint-staged tasks for JavaScript and TypeScript files

* chore(eslint): update ignorePatterns in .eslintrc.js
chore(lint-action): remove prettier option in backend-review.yml
chore(package.json): add lint and lint:fix scripts

* chore(lint-staged.config.js): remove prettier --write command for js, jsx, ts, tsx files

* chore(titleConvo.js): remove unnecessary console.log statement
chore(titleConvo.js): add missing comma in options object

* chore: apply linting to all files

* chore(lint-staged.config.js): update lint-staged configuration to include prettier formatting
2023-07-14 09:36:49 -04:00
Danny Avila
8819e83d2c
refactor: Client Classes & Azure OpenAI as a separate Endpoint (#532)
* refactor: start new client classes, test localAi support

* feat: create base class, extend chatgpt from base

* refactor(BaseClient.js): change userId parameter to user
refactor(BaseClient.js): change userId parameter to user
feat(OpenAIClient.js): add sendMessage method
refactor(OpenAIClient.js): change getConversation method to use user parameter instead of userId
refactor(OpenAIClient.js): change saveMessageToDatabase method to use user parameter instead of userId
refactor(OpenAIClient.js): change buildPrompt method to use messages parameter instead of orderedMessages
feat(index.js): export client classes
refactor(askGPTPlugins.js): use req.body.token or process.env.OPENAI_API_KEY as OpenAI API key
refactor(index.js): comment out askOpenAI route
feat(index.js): add openAI route

feat(openAI.js): add new route for OpenAI API requests with support for progress updates and aborting requests.

* refactor(BaseClient.js): use optional chaining operator to access messageId property
refactor(OpenAIClient.js): use orderedMessages instead of messages to build prompt
refactor(OpenAIClient.js): use optional chaining operator to access messageId property
refactor(fetch-polyfill.js): remove fetch polyfill
refactor(openAI.js): comment out debug option in clientOptions

* refactor: update import statements and remove unused imports in several files
feat: add getAzureCredentials function to azureUtils module
docs: update comments in azureUtils module

* refactor(utils): rename migrateConversations to migrateDataToFirstUser for clarity and consistency

* feat(chatgpt-client.js): add getAzureCredentials function to retrieve Azure credentials
feat(chatgpt-client.js): use getAzureCredentials function to generate reverseProxyUrl
feat(OpenAIClient.js): add isChatCompletion property to determine if chat completion model is used
feat(OpenAIClient.js): add saveOptions parameter to sendMessage and buildPrompt methods
feat(OpenAIClient.js): modify buildPrompt method to handle chat completion model
feat(openAI.js): modify endpointOption to include modelOptions instead of individual options
refactor(OpenAIClient.js): modify getDelta property to use isChatCompletion property instead of isChatGptModel property
refactor(OpenAIClient.js): modify sendMessage method to use saveOptions parameter instead of modelOptions parameter
refactor(OpenAIClient.js): modify buildPrompt method to use saveOptions parameter instead of modelOptions parameter
refactor(OpenAIClient.js): modify ask method to include endpointOption parameter

* chore: delete draft file

* refactor(OpenAIClient.js): extract sendCompletion method from sendMessage method for reusability

* refactor(BaseClient.js): move sendMessage method to BaseClient class
feat(OpenAIClient.js): inherit from BaseClient class and implement necessary methods and properties for OpenAIClient class.

* refactor(BaseClient.js): rename getBuildPromptOptions to getBuildMessagesOptions
feat(BaseClient.js): add buildMessages method to BaseClient class
fix(ChatGPTClient.js): use message.text instead of message.message
refactor(ChatGPTClient.js): rename buildPromptBody to buildMessagesBody
refactor(ChatGPTClient.js): remove console.debug statement and add debug log for prompt variable

refactor(OpenAIClient.js): move setOptions method to the bottom of the class
feat(OpenAIClient.js): add support for cl100k_base encoding
feat(OpenAIClient.js): add support for unofficial chat GPT models
feat(OpenAIClient.js): add support for custom modelOptions
feat(OpenAIClient.js): add caching for tokenizers
feat(OpenAIClient.js): add freeAndInitializeEncoder method to free and reinitialize tokenizers
refactor(OpenAIClient.js): rename getBuildPromptOptions to getBuildMessagesOptions
refactor(OpenAIClient.js): rename buildPrompt to buildMessages
refactor(OpenAIClient.js): remove endpointOption from ask function arguments in openAI.js

* refactor(ChatGPTClient.js, OpenAIClient.js): improve code readability and consistency

- In ChatGPTClient.js, update the roleLabel and messageString variables to handle cases where the message object does not have an isCreatedByUser property or a role property with a value of 'user'.
- In OpenAIClient.js, rename the freeAndInitializeEncoder method to freeAndResetEncoder to better reflect its functionality. Also, update the method calls to reflect the new name. Additionally, update the getTokenCount method to handle errors by calling the freeAndResetEncoder method instead of the now-renamed freeAndInitializeEncoder method.

* refactor(OpenAIClient.js): extract instructions object to a separate variable and add it to payload after formatted messages
fix(OpenAIClient.js): handle cases where progressMessage.choices is undefined or empty

* refactor(BaseClient.js): extract addInstructions method from sendMessage method
feat(OpenAIClient.js): add maxTokensMap object to map maximum tokens for each model
refactor(OpenAIClient.js): use addInstructions method in buildMessages method instead of manually building the payload list

* refactor(OpenAIClient.js): remove unnecessary condition for modelOptions.model property in buildMessages method

* feat(BaseClient.js): add support for token count tracking and context strategy
feat(OpenAIClient.js): add support for token count tracking and context strategy
feat(Message.js): add tokenCount field to Message schema and updateMessage function

* refactor(BaseClient.js): add support for refining messages based on token limit
feat(OpenAIClient.js): add support for context refinement strategy
refactor(OpenAIClient.js): use context refinement strategy in message sending
refactor(server/index.js): improve code readability by breaking long lines

* refactor(BaseClient.js): change `remainingContext` to `remainingContextTokens` for clarity
feat(BaseClient.js): add `refinePrompt` and `refinePromptTemplate` to handle message refinement
feat(BaseClient.js): add `refineMessages` method to refine messages
feat(BaseClient.js): add `handleContextStrategy` method to handle context strategy
feat(OpenAIClient.js): add `abortController` to `buildPrompt` method options
refactor(OpenAIClient.js): change `payload` and `tokenCountMap` to let variables in `handleContextStrategy` method
refactor(BaseClient.js): change `remainingContext` to `remainingContextTokens` in `handleContextStrategy` method for consistency
refactor(BaseClient.js): change `remainingContext` to `remainingContextTokens` in `getMessagesWithinTokenLimit` method for consistency
refactor(BaseClient.js): change `remainingContext` to `remainingContextTokens`

* chore(openAI.js): comment out contextStrategy option in clientOptions

* chore(openAI.js): comment out debug option in clientOptions object

* test: BaseClient tests in progress

* test: Complete OpenAIClient & BaseClient tests

* fix(OpenAIClient.js): remove unnecessary whitespace
fix(OpenAIClient.js): remove unused variables and comments
fix(OpenAIClient.test.js): combine getTokenCount and freeAndResetEncoder tests

* chore(.eslintrc.js): add rule for maximum of 1 empty line
feat(ask/openAI.js): add abortMessage utility function
fix(ask/openAI.js): handle error and abort message if partial text is less than 2 characters
feat(utils/index.js): export abortMessage utility function

* test: complete additional tests

* feat: Azure OpenAI as a separate endpoint

* chore: remove extraneous console logs

* fix(azureOpenAI): use chatCompletion endpoint

* chore(initializeClient.js): delete initializeClient.js file

chore(askOpenAI.js): delete old OpenAI route handler

chore(handlers.js): remove trailing whitespace in thought variable assignment

* chore(chatgpt-client.js): remove unused chatgpt-client.js file
refactor(index.js): remove askClient import and export from index.js

* chore(chatgpt-client.tokens.js): update test script for memory usage and encoding performance

The test script in `chatgpt-client.tokens.js` has been updated to measure the memory usage and encoding performance of the client. The script now includes information about the initial memory usage, peak memory usage, final memory usage, and memory usage after a timeout. It also provides insights into the number of encoding requests that can be processed per second.

The script has been modified to use the `OpenAIClient` class instead of the `ChatGPTClient` class. Additionally, the number of iterations for the encoding loop has been reduced to 10,000.

A timeout function has been added to simulate a delay of 15 seconds. After the timeout, the memory usage is measured again.

The script now handles uncaught exceptions and logs any errors that occur, except for errors related to failed fetch requests.

Note: This is a test script and should not be used in production

* feat(FakeClient.js): add a new class `FakeClient` that extends `BaseClient` and implements methods for a fake client
feat(FakeClient.js): implement the `setOptions` method to handle options for the fake client
feat(FakeClient.js): implement the `initializeFakeClient` function to initialize a fake client with options and fake messages
fix(OpenAIClient.js): remove duplicate `maxTokensMap` import and use the one from utils
feat(BaseClient): return promptTokens and completionTokens

* refactor(gptPlugins): refactor ChatAgent to PluginsClient, which extends OpenAIClient

* refactor: client paths

* chore(jest.config.js): remove jest.config.js file

* fix(PluginController.js): update file path to manifest.json
feat(gptPlugins.js): add support for aborting messages

refactor(ask/index.js): rename askGPTPlugins to gptPlugins for consistency

* fix(BaseClient.js): fix spacing in generateTextStream function signature
refactor(BaseClient.js): remove unnecessary push to currentMessages in generateUserMessage function
refactor(BaseClient.js): remove unnecessary push to currentMessages in handleStartMethods function
refactor(PluginsClient.js): remove unused variables and date formatting in constructor
refactor(PluginsClient.js): simplify mapping of pastMessages in getCompletionPayload function

* refactor(GoogleClient): GoogleClient now extends BaseClient

* chore(.env.example): add AZURE_OPENAI_MODELS variable
fix(api/routes/ask/gptPlugins.js): enable Azure integration if PLUGINS_USE_AZURE is true
fix(api/routes/endpoints.js): getOpenAIModels function now accepts options, use AZURE_OPENAI_MODELS if PLUGINS_USE_AZURE is true
fix(client/components/Endpoints/OpenAI/Settings.jsx): remove console.log statement
docs(features/azure.md): add documentation for Azure OpenAI integration and environment variables

* fix(e2e:popup): includes the icon + endpoint names in role, name property
2023-07-03 16:51:12 -04:00
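Finally, the maxTokensMap mentioned in #532 (mapping models to their context limits) can be sketched as a prefix lookup with a fallback; the listed limits are illustrative, not an authoritative table:

```typescript
// Sketch of a maxTokensMap / getModelMaxTokens-style helper (values illustrative).
const maxTokensMap: Record<string, number> = {
  'gpt-4-32k': 32_768,
  'gpt-4': 8_192,
  'gpt-3.5-turbo-16k': 16_384,
  'gpt-3.5-turbo': 4_096,
};

function getModelMaxTokens(model: string, fallback = 4_096): number {
  // Check longer keys first so "gpt-4-32k-0613" matches "gpt-4-32k" before "gpt-4".
  const key = Object.keys(maxTokensMap)
    .sort((a, b) => b.length - a.length)
    .find((prefix) => model.startsWith(prefix));
  return key ? maxTokensMap[key] : fallback;
}

console.log(getModelMaxTokens('gpt-4-0613'));        // 8192
console.log(getModelMaxTokens('gpt-3.5-turbo-16k')); // 16384
```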