LibreChat/api/app/clients/specs/BaseClient.test.js

const { initializeFakeClient } = require('./FakeClient');
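// The database connection and the models module are mocked so the persistence
// methods BaseClient calls (saveMessage, saveConvo, getMessages, etc.) can be
// asserted against without a live database.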
jest.mock('../../../lib/db/connectDb');
jest.mock('../../../models', () => {
  return function () {
    return {
      save: jest.fn(),
      deleteConvos: jest.fn(),
      getConvo: jest.fn(),
      getMessages: jest.fn(),
      saveMessage: jest.fn(),
      updateMessage: jest.fn(),
      saveConvo: jest.fn(),
    };
  };
});
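// LangChain's ChatOpenAI constructor is stubbed out so instantiating a chat
// model never triggers real network calls during these tests.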
jest.mock('langchain/chat_models/openai', () => {
  return {
    ChatOpenAI: jest.fn().mockImplementation(() => {
      return {};
    }),
  };
});
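// Shared test state; fakeMessages is the in-memory message list handed to
// initializeFakeClient in place of stored conversation records.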
let parentMessageId;
let conversationId;
const fakeMessages = [];
const userMessage = 'Hello, ChatGPT!';
const apiKey = 'fake-api-key';
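// A minimal three-message thread; each message references its parent through
// parentMessageId, which is how BaseClient reconstructs conversation order.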
const messageHistory = [
  { role: 'user', isCreatedByUser: true, text: 'Hello', messageId: '1' },
  { role: 'assistant', isCreatedByUser: false, text: 'Hi', messageId: '2', parentMessageId: '1' },
  {
    role: 'user',
    isCreatedByUser: true,
    text: 'What\'s up',
    messageId: '3',
    parentMessageId: '2',
  },
];
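// FakeClient extends BaseClient and runs the real sendMessage implementation,
// so these specs exercise the shared client logic rather than a provider API.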
describe('BaseClient', () => {
  let TestClient;
  const options = {
    // debug: true,
    modelOptions: {
      model: 'gpt-3.5-turbo',
      temperature: 0,
    },
  };

  beforeEach(() => {
    TestClient = initializeFakeClient(apiKey, options, fakeMessages);
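    // Stub the summarization step so no real LLM call is made;
    // it resolves with a fixed summary message and token count.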
    TestClient.summarizeMessages = jest.fn().mockResolvedValue({
      summaryMessage: {
        role: 'system',
        content: 'Refined answer',
      },
      summaryTokenCount: 5,
    });
  });

  test('returns the input messages without instructions when addInstructions() is called with empty instructions', () => {
    const messages = [{ content: 'Hello' }, { content: 'How are you?' }, { content: 'Goodbye' }];
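    // Empty instructions are a no-op: the payload is returned unchanged.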
    const instructions = '';
    const result = TestClient.addInstructions(messages, instructions);
    expect(result).toEqual(messages);
  });

  test('returns the input messages with instructions properly added when addInstructions() is called with non-empty instructions', () => {
    const messages = [{ content: 'Hello' }, { content: 'How are you?' }, { content: 'Goodbye' }];
    const instructions = { content: 'Please respond to the question.' };
    const result = TestClient.addInstructions(messages, instructions);
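    // The instructions entry is inserted just before the latest message,
    // keeping the rest of the conversation order intact.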
    const expected = [
      { content: 'Hello' },
      { content: 'How are you?' },
      { content: 'Please respond to the question.' },
      { content: 'Goodbye' },
    ];
    expect(result).toEqual(expected);
  });
  test('concats messages correctly in concatenateMessages()', () => {
    const messages = [
      { name: 'User', content: 'Hello' },
      { name: 'Assistant', content: 'How can I help you?' },
      { name: 'User', content: 'I have a question.' },
    ];
    const result = TestClient.concatenateMessages(messages);
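    // Each message is rendered as "Name:\ncontent" followed by a blank line.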
    const expected =
      'User:\nHello\n\nAssistant:\nHow can I help you?\n\nUser:\nI have a question.\n\n';
    expect(result).toBe(expected);
  });
  test('summarizes messages correctly in summarizeMessages()', async () => {
  const messagesToRefine = [
    { role: 'user', content: 'Hello', tokenCount: 10 },
    { role: 'assistant', content: 'How can I help you?', tokenCount: 20 },
  ];
  const remainingContextTokens = 100;
  const expectedRefinedMessage = {
    role: 'system',
    content: 'Refined answer',
  };
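  // The summarizer is expected to return the summary as a single system-role
  // message (result.summaryMessage), which can then stand in for the messages
  // that were pruned from the payload.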
  const result = await TestClient.summarizeMessages({ messagesToRefine, remainingContextTokens });
  expect(result.summaryMessage).toEqual(expectedRefinedMessage);
});
test('gets messages within token limit (under limit) correctly in getMessagesWithinTokenLimit()', async () => {
  TestClient.maxContextTokens = 100;
  TestClient.shouldSummarize = true;
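  // Setup: summarization is enabled, but with a 100-token context window the
  // messages in this test should fit under the limit, so none should be pruned.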
const messages = [
{ role: 'user', content: 'Hello', tokenCount: 5 },
{ role: 'assistant', content: 'How can I help you?', tokenCount: 19 },
{ role: 'user', content: 'I have a question.', tokenCount: 18 },
];
const expectedContext = [
{ role: 'user', content: 'Hello', tokenCount: 5 }, // 'Hello'.length
{ role: 'assistant', content: 'How can I help you?', tokenCount: 19 },
{ role: 'user', content: 'I have a question.', tokenCount: 18 },
];
// Subtract 3 tokens for Assistant Label priming after all messages have been counted.
const expectedRemainingContextTokens = 58 - 3; // (100 - 5 - 19 - 18) - 3
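// The three messages total 5 + 19 + 18 = 42 tokens, well under the 100-token limit,
// so the full conversation fits in context and nothing should be pruned for summarization.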
const expectedMessagesToRefine = [];
const lastExpectedMessage =
expectedMessagesToRefine?.[expectedMessagesToRefine.length - 1] ?? {};
const expectedIndex = messages.findIndex((msg) => msg.content === lastExpectedMessage?.content);
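// expectedMessagesToRefine is empty, so lastExpectedMessage falls back to {} and
// findIndex matches no message content, making the expected summary index -1.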
const result = await TestClient.getMessagesWithinTokenLimit(messages);
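// The result presumably carries { context, remainingContextTokens, messagesToRefine, summaryIndex },
// matching the expected values prepared above.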
expect(result.context).toEqual(expectedContext);
expect(result.summaryIndex).toEqual(expectedIndex);
expect(result.remainingContextTokens).toBe(expectedRemainingContextTokens);
expect(result.messagesToRefine).toEqual(expectedMessagesToRefine);
});
test('gets result over token limit correctly in getMessagesWithinTokenLimit()', async () => {
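// Verifies pruning when the messages' combined tokenCount exceeds maxContextTokens:
// only the newest messages that fit should remain in context.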
TestClient.maxContextTokens = 50; // Lower than the messages' total tokenCount (102) to force pruning
TestClient.shouldSummarize = true;
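// shouldSummarize corresponds to the 'summarize' contextStrategy: messages
// pruned from context are expected back as messagesToRefine instead of
// being dropped outright.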
const messages = [
{ role: 'user', content: 'Hello', tokenCount: 30 },
{ role: 'assistant', content: 'How can I help you?', tokenCount: 30 },
{ role: 'user', content: 'I have a question.', tokenCount: 5 },
{ role: 'user', content: 'I need a coffee, stat!', tokenCount: 19 },
{ role: 'assistant', content: 'Sure, I can help with that.', tokenCount: 18 },
];
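// Walking from the newest message backwards: 18 + 19 + 5 = 42 tokens fit
// within the 50-token limit, but including the next message (30 tokens)
// would overflow, so the two oldest 30-token messages are left over for
// summarization.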
// Subtract 3 tokens for Assistant Label priming after all messages have been counted.
const expectedRemainingContextTokens = 5; // token limit (50) - context tokenCounts (18 + 19 + 5) - 3 for assistant label priming
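// Older messages expected to be pruned from the context (the candidates for refinement/summarization).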
const expectedMessagesToRefine = [
{ role: 'user', content: 'Hello', tokenCount: 30 },
{ role: 'assistant', content: 'How can I help you?', tokenCount: 30 },
];
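// Newest messages expected to remain within the token limit, in conversation order.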
const expectedContext = [
{ role: 'user', content: 'I have a question.', tokenCount: 5 },
{ role: 'user', content: 'I need a coffee, stat!', tokenCount: 19 },
{ role: 'assistant', content: 'Sure, I can help with that.', tokenCount: 18 },
];
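// Index of the last pruned message within the original messages array, marking the split point.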
const lastExpectedMessage = expectedMessagesToRefine[expectedMessagesToRefine.length - 1] ?? {};
const expectedIndex = messages.findIndex((msg) => msg.content === lastExpectedMessage.content);
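// Run the method under test: prune the ordered messages down to those that fit within the token limit.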
const result = await TestClient.getMessagesWithinTokenLimit(messages);
feat: ConversationSummaryBufferMemory (#973) * refactor: pass model in message edit payload, use encoder in standalone util function * feat: add summaryBuffer helper * refactor(api/messages): use new countTokens helper and add auth middleware at top * wip: ConversationSummaryBufferMemory * refactor: move pre-generation helpers to prompts dir * chore: remove console log * chore: remove test as payload will no longer carry tokenCount * chore: update getMessagesWithinTokenLimit JSDoc * refactor: optimize getMessagesForConversation and also break on summary, feat(ci): getMessagesForConversation tests * refactor(getMessagesForConvo): count '00000000-0000-0000-0000-000000000000' as root message * chore: add newer model to token map * fix: condition was point to prop of array instead of message prop * refactor(BaseClient): use object for refineMessages param, rename 'summary' to 'summaryMessage', add previous_summary refactor(getMessagesWithinTokenLimit): replace text and tokenCount if should summarize, summary, and summaryTokenCount are present fix/refactor(handleContextStrategy): use the right comparison length for context diff, and replace payload first message when a summary is present * chore: log previous_summary if debugging * refactor(formatMessage): assume if role is defined that it's a valid value * refactor(getMessagesWithinTokenLimit): remove summary logic refactor(handleContextStrategy): add usePrevSummary logic in case only summary was pruned refactor(loadHistory): initial message query will return all ordered messages but keep track of the latest summary refactor(getMessagesForConversation): use object for single param, edit jsdoc, edit all files using the method refactor(ChatGPTClient): order messages before buildPrompt is called, TODO: add convoSumBuffMemory logic * fix: undefined handling and summarizing only when shouldRefineContext is true * chore(BaseClient): fix test results omitting system role for summaries and test edge case * chore: export summaryBuffer from index file * refactor(OpenAIClient/BaseClient): move refineMessages to subclass, implement LLM initialization for summaryBuffer * feat: add OPENAI_SUMMARIZE to enable summarizing, refactor: rename client prop 'shouldRefineContext' to 'shouldSummarize', change contextStrategy value to 'summarize' from 'refine' * refactor: rename refineMessages method to summarizeMessages for clarity * chore: clarify summary future intent in .env.example * refactor(initializeLLM): handle case for either 'model' or 'modelName' being passed * feat(gptPlugins): enable summarization for plugins * refactor(gptPlugins): utilize new initializeLLM method and formatting methods for messages, use payload array for currentMessages and assign pastMessages sooner * refactor(agents): use ConversationSummaryBufferMemory for both agent types * refactor(formatMessage): optimize original method for langchain, add helper function for langchain messages, add JSDocs and tests * refactor(summaryBuffer): add helper to createSummaryBufferMemory, and use new formatting helpers * fix: forgot to spread formatMessages also took opportunity to pluralize filename * refactor: pass memory to tools, namely openapi specs. 
not used and may never be used by new method but added for testing * ci(formatMessages): add more exhaustive checks for langchain messages * feat: add debug env var for OpenAI * chore: delete unnecessary comments * chore: add extra note about summary feature * fix: remove tokenCount from payload instructions * fix: test fail * fix: only pass instructions to payload when defined or not empty object * refactor: fromPromptMessages is deprecated, use renamed method fromMessages * refactor: use 'includes' instead of 'startsWith' for extended OpenRouter compatibility * fix(PluginsClient.buildPromptBody): handle undefined message strings * chore: log langchain titling error * feat: getModelMaxTokens helper * feat: tokenSplit helper * feat: summary prompts updated * fix: optimize _CUT_OFF_SUMMARIZER prompt * refactor(summaryBuffer): use custom summary prompt, allow prompt to be passed, pass humanPrefix and aiPrefix to memory, along with any future variables, rename messagesToRefine to context * fix(summaryBuffer): handle edge case where messagesToRefine exceeds summary context, refactor(BaseClient): allow custom maxContextTokens to be passed to getMessagesWithinTokenLimit, add defined check before unshifting summaryMessage, update shouldSummarize based on this refactor(OpenAIClient): use getModelMaxTokens, use cut-off message method for summary if no messages were left after pruning * fix(handleContextStrategy): handle case where incoming prompt is bigger than model context * chore: rename refinedContent to splitText * chore: remove unnecessary debug log
2023-09-26 21:02:28 -04:00
refactor: Client Classes & Azure OpenAI as a separate Endpoint (#532) * refactor: start new client classes, test localAi support * feat: create base class, extend chatgpt from base * refactor(BaseClient.js): change userId parameter to user refactor(BaseClient.js): change userId parameter to user feat(OpenAIClient.js): add sendMessage method refactor(OpenAIClient.js): change getConversation method to use user parameter instead of userId refactor(OpenAIClient.js): change saveMessageToDatabase method to use user parameter instead of userId refactor(OpenAIClient.js): change buildPrompt method to use messages parameter instead of orderedMessages feat(index.js): export client classes refactor(askGPTPlugins.js): use req.body.token or process.env.OPENAI_API_KEY as OpenAI API key refactor(index.js): comment out askOpenAI route feat(index.js): add openAI route feat(openAI.js): add new route for OpenAI API requests with support for progress updates and aborting requests. * refactor(BaseClient.js): use optional chaining operator to access messageId property refactor(OpenAIClient.js): use orderedMessages instead of messages to build prompt refactor(OpenAIClient.js): use optional chaining operator to access messageId property refactor(fetch-polyfill.js): remove fetch polyfill refactor(openAI.js): comment out debug option in clientOptions * refactor: update import statements and remove unused imports in several files feat: add getAzureCredentials function to azureUtils module docs: update comments in azureUtils module * refactor(utils): rename migrateConversations to migrateDataToFirstUser for clarity and consistency * feat(chatgpt-client.js): add getAzureCredentials function to retrieve Azure credentials feat(chatgpt-client.js): use getAzureCredentials function to generate reverseProxyUrl feat(OpenAIClient.js): add isChatCompletion property to determine if chat completion model is used feat(OpenAIClient.js): add saveOptions parameter to sendMessage and buildPrompt methods feat(OpenAIClient.js): modify buildPrompt method to handle chat completion model feat(openAI.js): modify endpointOption to include modelOptions instead of individual options refactor(OpenAIClient.js): modify getDelta property to use isChatCompletion property instead of isChatGptModel property refactor(OpenAIClient.js): modify sendMessage method to use saveOptions parameter instead of modelOptions parameter refactor(OpenAIClient.js): modify buildPrompt method to use saveOptions parameter instead of modelOptions parameter refactor(OpenAIClient.js): modify ask method to include endpointOption parameter * chore: delete draft file * refactor(OpenAIClient.js): extract sendCompletion method from sendMessage method for reusability * refactor(BaseClient.js): move sendMessage method to BaseClient class feat(OpenAIClient.js): inherit from BaseClient class and implement necessary methods and properties for OpenAIClient class. 
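    // The context-strategy result is asserted field by field: the pruned
    // context, the summary split index, the remaining token budget, and the
    // messages handed off for summarization.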
expect(result.context).toEqual(expectedContext);
expect(result.summaryIndex).toEqual(expectedIndex);
expect(result.remainingContextTokens).toBe(expectedRemainingContextTokens);
expect(result.messagesToRefine).toEqual(expectedMessagesToRefine);
});
test('handles context strategy correctly in handleContextStrategy()', async () => {
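    // Stub addInstructions to return a fixed, ordered payload so the test
    // controls exactly which messages enter the context strategy.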
TestClient.addInstructions = jest
.fn()
.mockReturnValue([
{ content: 'Hello' },
{ content: 'How can I help you?' },
{ content: 'Please provide more details.' },
{ content: 'I can assist you with that.' },
]);
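    // Stub the pruning step: the three most recent messages fit within the
    // token limit, 'Hello' is returned as a candidate to refine (summarize),
    // and 80 context tokens remain.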
TestClient.getMessagesWithinTokenLimit = jest.fn().mockReturnValue({
context: [
{ content: 'How can I help you?' },
{ content: 'Please provide more details.' },
{ content: 'I can assist you with that.' },
],
remainingContextTokens: 80,
messagesToRefine: [{ content: 'Hello' }],
summaryIndex: 3,
});
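    // Minimal sketch of how these mocks are consumed (assumed call shape; the
    // real invocation and assertions follow later in this spec):
    //   const result = await TestClient.handleContextStrategy({ instructions, orderedMessages, formattedMessages });
    //   expect(result.remainingContextTokens).toBe(80);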
TestClient.getTokenCountForResponse = jest.fn().mockReturnValue(40);
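// Fixture data: note the instructions content also appears verbatim among the ordered messages.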
const instructions = { content: 'Please provide more details.' };
const orderedMessages = [
{ content: 'Hello' },
{ content: 'How can I help you?' },
{ content: 'Please provide more details.' },
{ content: 'I can assist you with that.' },
];
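// The formatted list mirrors the ordered list one-to-one in this scenario.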
const formattedMessages = [
{ content: 'Hello' },
{ content: 'How can I help you?' },
{ content: 'Please provide more details.' },
{ content: 'I can assist you with that.' },
];
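// Expected result: the pruned first message ('Hello') is replaced by a
// system-role summary ('Refined answer'), presumably produced by the mocked
// summarization step, while the remaining messages pass through unchanged.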
const expectedResult = {
payload: [
{
role: 'system',
content: 'Refined answer',
},
{ content: 'How can I help you?' },
{ content: 'Please provide more details.' },
{ content: 'I can assist you with that.' },
      ],
      promptTokens: expect.any(Number),
      tokenCountMap: {},
      messages: expect.any(Array),
    };
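    // Enable summarization so handleContextStrategy exercises the 'summarize' context strategy.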
    TestClient.shouldSummarize = true;
    const result = await TestClient.handleContextStrategy({
      instructions,
      orderedMessages,
      formattedMessages,
    });
    expect(result).toEqual(expectedResult);
  });
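  // getMessagesForConversation walks parentMessageId links from a given leaf message
  // back to the root, returning the thread in chronological order.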
  describe('getMessagesForConversation', () => {
    it('should return an empty array if the parentMessageId does not exist', () => {
      const result = TestClient.constructor.getMessagesForConversation({
        messages: unorderedMessages,
        parentMessageId: '999',
      });
      expect(result).toEqual([]);
    });
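    // Fixtures keyed by `messageId` (rather than `id`) should also be handled.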
    it('should handle messages with messageId property', () => {
      const messagesWithMessageId = [
        { messageId: '1', parentMessageId: null, text: 'Message 1' },
        { messageId: '2', parentMessageId: '1', text: 'Message 2' },
      ];
      const result = TestClient.constructor.getMessagesForConversation({
        messages: messagesWithMessageId,
        parentMessageId: '2',
      });
      expect(result).toEqual([
        { messageId: '1', parentMessageId: null, text: 'Message 1' },
        { messageId: '2', parentMessageId: '1', text: 'Message 2' },
      ]);
    });
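    // A null parentMessageId on a message that is not the root should not pull in
    // unrelated root-level messages; only the requested message is returned.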
    const messagesWithNullParent = [
      { id: '1', parentMessageId: null, text: 'Message 1' },
      { id: '2', parentMessageId: null, text: 'Message 2' },
    ];
    it('should handle messages with null parentMessageId that are not root', () => {
      const result = TestClient.constructor.getMessagesForConversation({
        messages: messagesWithNullParent,
        parentMessageId: '2',
      });
      expect(result).toEqual([{ id: '2', parentMessageId: null, text: 'Message 2' }]);
    });
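    // Parent links form the cycle 3 -> 2 -> 1 -> 3; traversal must terminate.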
    const cyclicMessages = [
      { id: '3', parentMessageId: '2', text: 'Message 3' },
      { id: '1', parentMessageId: '3', text: 'Message 1' },
      { id: '2', parentMessageId: '1', text: 'Message 2' },
    ];
    it('should handle cyclic references without going into an infinite loop', () => {
      const result = TestClient.constructor.getMessagesForConversation({
        messages: cyclicMessages,
        parentMessageId: '3',
      });
      expect(result).toEqual([
        { id: '1', parentMessageId: '3', text: 'Message 1' },
        { id: '2', parentMessageId: '1', text: 'Message 2' },
        { id: '3', parentMessageId: '2', text: 'Message 3' },
      ]);
    });
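    // A linear chain stored out of order; the all-zero UUID is counted as the root parent id.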
    const unorderedMessages = [
      { id: '3', parentMessageId: '2', text: 'Message 3' },
      { id: '2', parentMessageId: '1', text: 'Message 2' },
      { id: '1', parentMessageId: '00000000-0000-0000-0000-000000000000', text: 'Message 1' },
    ];
    it('should return ordered messages based on parentMessageId', () => {
      const result = TestClient.constructor.getMessagesForConversation({
        messages: unorderedMessages,
        parentMessageId: '3',
      });
      expect(result).toEqual([
        { id: '1', parentMessageId: '00000000-0000-0000-0000-000000000000', text: 'Message 1' },
        { id: '2', parentMessageId: '1', text: 'Message 2' },
        { id: '3', parentMessageId: '2', text: 'Message 3' },
      ]);
    });
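    // Deliberately messy branched fixture: out of order, with duplicate ids and
    // repeated texts, to stress traversal from a leaf across branches.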
    const unorderedBranchedMessages = [
      { id: '4', parentMessageId: '2', text: 'Message 4', summary: 'Summary for Message 4' },
      { id: '10', parentMessageId: '7', text: 'Message 10' },
      { id: '1', parentMessageId: null, text: 'Message 1' },
      { id: '6', parentMessageId: '5', text: 'Message 7' },
      { id: '7', parentMessageId: '5', text: 'Message 7' },
      { id: '2', parentMessageId: '1', text: 'Message 2' },
      { id: '8', parentMessageId: '6', text: 'Message 8' },
      { id: '5', parentMessageId: '3', text: 'Message 5' },
      { id: '3', parentMessageId: '1', text: 'Message 3' },
      { id: '6', parentMessageId: '4', text: 'Message 6' },
      { id: '8', parentMessageId: '7', text: 'Message 9' },
      { id: '9', parentMessageId: '7', text: 'Message 9' },
      { id: '11', parentMessageId: '2', text: 'Message 11', summary: 'Summary for Message 11' },
    ];
    it('should return ordered messages from a branched array based on parentMessageId', () => {
      const result = TestClient.constructor.getMessagesForConversation({
        messages: unorderedBranchedMessages,
        parentMessageId: '10',
        summary: true,
      });
      expect(result).toEqual([
        { id: '1', parentMessageId: null, text: 'Message 1' },
        { id: '3', parentMessageId: '1', text: 'Message 3' },
        { id: '5', parentMessageId: '3', text: 'Message 5' },
        { id: '7', parentMessageId: '5', text: 'Message 7' },
        { id: '10', parentMessageId: '7', text: 'Message 10' },
      ]);
    });
    it('should return an empty array if no messages are provided', () => {
      const result = TestClient.constructor.getMessagesForConversation({
        messages: [],
        parentMessageId: '3',
      });
      expect(result).toEqual([]);
    });
    it('should map over the ordered messages if mapMethod is provided', () => {
      const mapMethod = (msg) => msg.text;
      const result = TestClient.constructor.getMessagesForConversation({
        messages: unorderedMessages,
        parentMessageId: '3',
        mapMethod,
      });
      expect(result).toEqual(['Message 1', 'Message 2', 'Message 3']);
    });
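    // With summary: true, traversal starts from the latest summarized message,
    // substituting its summary text as a system message, rather than from the root.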
    let unorderedMessagesWithSummary = [
      { id: '4', parentMessageId: '3', text: 'Message 4' },
      { id: '2', parentMessageId: '1', text: 'Message 2', summary: 'Summary for Message 2' },
      { id: '3', parentMessageId: '2', text: 'Message 3', summary: 'Summary for Message 3' },
      { id: '1', parentMessageId: null, text: 'Message 1' },
    ];
    it('should start with the message that has a summary property and continue until the specified parentMessageId', () => {
      const result = TestClient.constructor.getMessagesForConversation({
        messages: unorderedMessagesWithSummary,
        parentMessageId: '4',
        summary: true,
      });
      expect(result).toEqual([
        {
          id: '3',
          parentMessageId: '2',
          role: 'system',
          text: 'Summary for Message 3',
          summary: 'Summary for Message 3',
        },
        { id: '4', parentMessageId: '3', text: 'Message 4' },
      ]);
    });
    it('should handle multiple summaries and return the branch from the latest to the parentMessageId', () => {
      unorderedMessagesWithSummary = [
        { id: '5', parentMessageId: '4', text: 'Message 5' },
        { id: '2', parentMessageId: '1', text: 'Message 2', summary: 'Summary for Message 2' },
        { id: '3', parentMessageId: '2', text: 'Message 3', summary: 'Summary for Message 3' },
        { id: '4', parentMessageId: '3', text: 'Message 4', summary: 'Summary for Message 4' },
        { id: '1', parentMessageId: null, text: 'Message 1' },
      ];
      const result = TestClient.constructor.getMessagesForConversation({
        messages: unorderedMessagesWithSummary,
        parentMessageId: '5',
        summary: true,
      });
      expect(result).toEqual([
        {
          id: '4',
          parentMessageId: '3',
          role: 'system',
          text: 'Summary for Message 4',
          summary: 'Summary for Message 4',
        },
        { id: '5', parentMessageId: '4', text: 'Message 5' },
      ]);
    });
    it('should handle summary at root edge case and continue until the parentMessageId', () => {
      unorderedMessagesWithSummary = [
        { id: '5', parentMessageId: '4', text: 'Message 5' },
        { id: '1', parentMessageId: null, text: 'Message 1', summary: 'Summary for Message 1' },
        { id: '4', parentMessageId: '3', text: 'Message 4', summary: 'Summary for Message 4' },
        { id: '2', parentMessageId: '1', text: 'Message 2', summary: 'Summary for Message 2' },
        { id: '3', parentMessageId: '2', text: 'Message 3', summary: 'Summary for Message 3' },
      ];
      const result = TestClient.constructor.getMessagesForConversation({
        messages: unorderedMessagesWithSummary,
        parentMessageId: '5',
        summary: true,
      });
      expect(result).toEqual([
        {
          id: '4',
          parentMessageId: '3',
          role: 'system',
          text: 'Summary for Message 4',
          summary: 'Summary for Message 4',
        },
        { id: '5', parentMessageId: '4', text: 'Message 5' },
      ]);
    });
  });
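  /*
   * For reference, a minimal sketch of the traversal these tests imply — an
   * assumption for illustration, not BaseClient's actual implementation, and it
   * omits the summary handling exercised above: follow parentMessageId links from
   * the requested leaf, guard against cycles with a visited set, and build the
   * chronological order by unshifting.
   *
   * const orderMessages = (messages, parentMessageId) => {
   *   const orderedMessages = [];
   *   const visited = new Set();
   *   let currentId = parentMessageId;
   *   while (currentId && !visited.has(currentId)) {
   *     visited.add(currentId);
   *     // Fixtures may be keyed by either `messageId` or `id`
   *     const message = messages.find((msg) => (msg.messageId ?? msg.id) === currentId);
   *     if (!message) {
   *       break;
   *     }
   *     orderedMessages.unshift(message);
   *     currentId = message.parentMessageId;
   *   }
   *   return orderedMessages;
   * };
   */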
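  // sendMessage lives on BaseClient; these tests assert the shape of the response message.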
  describe('sendMessage', () => {
    test('sendMessage should return a response message', async () => {
      const expectedResult = expect.objectContaining({
        sender: TestClient.sender,
        text: expect.any(String),
        isCreatedByUser: false,
        messageId: expect.any(String),
        parentMessageId: expect.any(String),
        conversationId: expect.any(String),
});
const response = await TestClient.sendMessage(userMessage);
parentMessageId = response.messageId;
conversationId = response.conversationId;
expect(response).toEqual(expectedResult);
});
test('sendMessage should work with provided conversationId and parentMessageId', async () => {
const userMessage = 'Second message in the conversation';
const opts = {
conversationId,
parentMessageId,
getIds: jest.fn(),
onStart: jest.fn(),
};
const expectedResult = expect.objectContaining({
sender: TestClient.sender,
text: expect.any(String),
isCreatedByUser: false,
messageId: expect.any(String),
parentMessageId: expect.any(String),
conversationId: opts.conversationId,
});
const response = await TestClient.sendMessage(userMessage, opts);
parentMessageId = response.messageId;
expect(response.conversationId).toEqual(conversationId);
expect(response).toEqual(expectedResult);
expect(opts.getIds).toHaveBeenCalled();
expect(opts.onStart).toHaveBeenCalled();
expect(TestClient.getBuildMessagesOptions).toHaveBeenCalled();
expect(TestClient.getSaveOptions).toHaveBeenCalled();
});
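// A minimal sketch (an assumption about the hooks exercised above, not the
// verbatim BaseClient source): `getIds` and `onStart` are treated as optional
// fire-and-forget callbacks invoked once the client has resolved the ids for
// the new exchange. The payload shape here is hypothetical, for illustration only.
const sketchHandleStart = (opts, userMessage) => {
  if (typeof opts.getIds === 'function') {
    opts.getIds({ userMessage });
  }
  if (typeof opts.onStart === 'function') {
    opts.onStart(userMessage);
  }
};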
test('should return chat history', async () => {
TestClient = initializeFakeClient(apiKey, options, messageHistory);
const chatMessages = await TestClient.loadHistory(conversationId, '2');
expect(TestClient.currentMessages).toHaveLength(2);
expect(chatMessages[0].text).toEqual('Hello');
const chatMessages2 = await TestClient.loadHistory(conversationId, '3');
expect(TestClient.currentMessages).toHaveLength(3);
expect(chatMessages2[chatMessages2.length - 1].text).toEqual('What\'s up');
});
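// For reference, a sketch of the truncation the assertions above expect from the
// fake client's loadHistory (initializeFakeClient is a test fixture; this helper
// is illustrative only, not the fixture's actual code): keep the ordered history
// up to and including the message whose id matches the given parentMessageId.
const sketchLoadHistory = (history, parentMessageId) => {
  const index = history.findIndex((message) => message.messageId === parentMessageId);
  return index === -1 ? history.slice() : history.slice(0, index + 1);
};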
/* Most of the new sendMessage logic revolving around edited/continued AI messages
 * can be summarized by the following test. When `isEdited` is true, the client loads
 * the entire history up to the message being edited, which triggers the AI API to
 * 'continue' the response. The 'userMessage' is only passed by convention and is
 * not necessary for the generation.
 */
it('should not push userMessage to currentMessages when isEdited is true and vice versa', async () => {
const overrideParentMessageId = 'user-message-id';
const responseMessageId = 'response-message-id';
const newHistory = messageHistory.slice();
newHistory.push({
role: 'assistant',
isCreatedByUser: false,
text: 'test message',
messageId: responseMessageId,
parentMessageId: '3',
});
TestClient = initializeFakeClient(apiKey, options, newHistory);
const sendMessageOptions = {
isEdited: true,
overrideParentMessageId,
parentMessageId: '3',
responseMessageId,
};
await TestClient.sendMessage('test message', sendMessageOptions);
const currentMessages = TestClient.currentMessages;
expect(currentMessages[currentMessages.length - 1].messageId).not.toEqual(
overrideParentMessageId,
);
// Test the opposite case
sendMessageOptions.isEdited = false;
await TestClient.sendMessage('test message', sendMessageOptions);
const currentMessages2 = TestClient.currentMessages;
expect(currentMessages2[currentMessages2.length - 1].messageId).toEqual(
overrideParentMessageId,
);
});
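// The branching under test, as a minimal sketch (an assumption distilled from the
// assertions above, not the actual sendMessage source): the user message only
// joins currentMessages when the request is not an edit/continue request.
const sketchTrackUserMessage = (client, userMessage, { isEdited }) => {
  if (!isEdited) {
    client.currentMessages.push(userMessage);
  }
};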
test('setOptions is called with the correct arguments', async () => {
TestClient.setOptions = jest.fn();
const opts = { conversationId: '123', parentMessageId: '456' };
await TestClient.sendMessage('Hello, world!', opts);
expect(TestClient.setOptions).toHaveBeenCalledWith(opts);
TestClient.setOptions.mockClear();
});
test('loadHistory is called with the correct arguments', async () => {
const opts = { conversationId: '123', parentMessageId: '456' };
await TestClient.sendMessage('Hello, world!', opts);
expect(TestClient.loadHistory).toHaveBeenCalledWith(
opts.conversationId,
opts.parentMessageId,
);
});
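  // getIds is an optional callback that exposes the generated ids to the
  // caller as soon as they are known, before the completion resolves.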
test('getIds is called with the correct arguments', async () => {
const getIds = jest.fn();
const opts = { getIds };
const response = await TestClient.sendMessage('Hello, world!', opts);
expect(getIds).toHaveBeenCalledWith({
userMessage: expect.objectContaining({ text: 'Hello, world!' }),
conversationId: response.conversationId,
responseMessageId: response.messageId,
});
});
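  // A minimal usage sketch (hypothetical caller, not part of this suite): a
  // route handler could use getIds to stream the generated ids back to the
  // client while the completion is still in flight. `res` here is an assumed
  // Express-style response object:
  //
  //   await client.sendMessage(text, {
  //     getIds: ({ userMessage, conversationId, responseMessageId }) => {
  //       res.write(JSON.stringify({ conversationId, responseMessageId }));
  //     },
  //   });

  // onStart should be invoked with the fully-formed user message as soon as
  // the request begins.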
test('onStart is called with the correct arguments', async () => {
const onStart = jest.fn();
const opts = { onStart };
await TestClient.sendMessage('Hello, world!', opts);
expect(onStart).toHaveBeenCalledWith(expect.objectContaining({ text: 'Hello, world!' }));
});
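  // Messages are persisted via saveMessageToDatabase, which should receive
  // the message, the client's save options, and the requesting user.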
test('saveMessageToDatabase is called with the correct arguments', async () => {
const saveOptions = TestClient.getSaveOptions();
const user = {}; // Mock user
const opts = { user };
await TestClient.sendMessage('Hello, world!', opts);
expect(TestClient.saveMessageToDatabase).toHaveBeenCalledWith(
expect.objectContaining({
sender: expect.any(String),
text: expect.any(String),
isCreatedByUser: expect.any(Boolean),
messageId: expect.any(String),
parentMessageId: expect.any(String),
conversationId: expect.any(String),
}),
saveOptions,
user,
);
});
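  // buildMessages is mocked so the test can assert that whatever prompt it
  // returns is handed to sendCompletion along with the original options.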
test('sendCompletion is called with the correct arguments', async () => {
const payload = {}; // Mock payload
TestClient.buildMessages.mockReturnValue({ prompt: payload, tokenCountMap: null });
const opts = {};
await TestClient.sendMessage('Hello, world!', opts);
expect(TestClient.sendCompletion).toHaveBeenCalledWith(payload, opts);
});
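  // A non-null tokenCountMap from buildMessages signals that the context
  // strategy is active, so the response message should be token-counted.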
test('getTokenCountForResponse is called with the correct arguments', async () => {
const tokenCountMap = {}; // Mock tokenCountMap
TestClient.buildMessages.mockReturnValue({ prompt: [], tokenCountMap });
TestClient.getTokenCountForResponse = jest.fn();
const response = await TestClient.sendMessage('Hello, world!', {});
expect(TestClient.getTokenCountForResponse).toHaveBeenCalledWith(response);
});
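  // sendMessage resolves to the response message itself, so the resolved
  // value must carry the full message shape.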
test('returns an object with the correct shape', async () => {
const response = await TestClient.sendMessage('Hello, world!', {});
expect(response).toEqual(
expect.objectContaining({
sender: expect.any(String),
text: expect.any(String),
isCreatedByUser: expect.any(Boolean),
messageId: expect.any(String),
parentMessageId: expect.any(String),
conversationId: expect.any(String),
}),
);
});
});
});