2025-02-26 15:02:03 -05:00
|
|
|
const { SplitStreamHandler } = require('@librechat/agents');
|
2024-07-20 08:53:16 -04:00
|
|
|
const { anthropicSettings } = require('librechat-data-provider');
|
|
|
|
|
const AnthropicClient = require('~/app/clients/AnthropicClient');
|
|
|
|
|
|
feat(GPT/Anthropic): Continue Regenerating & Generation Buttons (#808)
* feat(useMessageHandler.js/ts): Refactor and add features to handle user messages, support multiple endpoints/models, generate placeholder responses, regeneration, and stopGeneration function
fix(conversation.ts, buildTree.ts): Import TMessage type, handle null parentMessageId
feat(schemas.ts): Update and add schemas for various AI services, add default values, optional fields, and endpoint-to-schema mapping, create parseConvo function
chore(useMessageHandler.js, schemas.ts): Remove unused imports, variables, and chatGPT enum
* wip: add generation buttons
* refactor(cleanupPreset.ts): simplify cleanupPreset function
refactor(getDefaultConversation.js): remove unused code and simplify getDefaultConversation function
feat(utils): add getDefaultConversation function
This commit adds a new utility function called `getDefaultConversation` to the `client/src/utils/getDefaultConversation.ts` file. This function is responsible for generating a default conversation object based on the provided parameters.
The `getDefaultConversation` function takes in an object with the following properties:
- `conversation`: The conversation object to be used as a base.
- `endpointsConfig`: The configuration object containing information about the available endpoints.
- `preset`: An optional preset object that can be used to override the default behavior.
The function first tries to determine the target endpoint based on the preset object. If a valid endpoint is found, it is used as the target endpoint. If not, the function tries to retrieve the last conversation setup from the local storage and uses its endpoint if it is valid. If neither the preset nor the local storage contains a valid endpoint, the function falls back to a default endpoint.
Once the target endpoint is determined,
* fix(utils): remove console.error statement in buildDefaultConversation function
fix(schemas): add default values for catch blocks in openAISchema, googleSchema, bingAISchema, anthropicSchema, chatGPTBrowserSchema, and gptPluginsSchema
* fix: endpoint not changing on change of preset from other endpoint, wip: refactor
* refactor: preset items to TSX
* refactor: convert resetConvo to TS
* refactor(getDefaultConversation.ts): move defaultEndpoints array to the top of the file for better readability
refactor(getDefaultConversation.ts): extract getDefaultEndpoint function for better code organization and reusability
* feat(svg): add ContinueIcon component
feat(svg): add RegenerateIcon component
feat(svg): add ContinueIcon and RegenerateIcon components to index.ts
* feat(Button.tsx): add onClick and className props to Button component
feat(GenerationButtons.tsx): add logic to display Regenerate or StopGenerating button based on isSubmitting and messages
feat(Regenerate.tsx): create Regenerate component with RegenerateIcon and handleRegenerate function
feat(StopGenerating.tsx): create StopGenerating component with StopGeneratingIcon and handleStopGenerating function
* fix(TextChat.jsx): reorder imports and variables for better readability
fix(TextChat.jsx): fix typo in condition for isNotAppendable variable
fix(TextChat.jsx): remove unused handleStopGenerating function
fix(ContinueIcon.tsx): remove unnecessary closing tags for polygon elements
fix(useMessageHandler.ts): add missing type annotations for handleStopGenerating and handleRegenerate functions
fix(useMessageHandler.ts): remove unused variables in return statement
* fix(getDefaultConversation.ts): refactor code to use getLocalStorageItems function
feat(getLocalStorageItems.ts): add utility function to retrieve items from local storage
* fix(OpenAIClient.js): add support for streaming result in sendCompletion method
feat(OpenAIClient.js): add finish_reason metadata to opts in sendCompletion method
feat(Message.js): add finish_reason field to Message model
feat(messageSchema.js): add finish_reason field to messageSchema
feat(openAI.js): parse chatGptLabel and promptPrefix from req.body and pass rest of the modelOptions to endpointOption
feat(openAI.js): add addMetadata function to store metadata in ask function
feat(openAI.js): add metadata to response if available
feat(schemas.ts): add finish_reason field to tMessageSchema
* feat(types.ts): add TOnClick and TGenButtonProps types for button components
feat(Continue.tsx): create Continue component for generating button
feat(GenerationButtons.tsx): update GenerationButtons component to use Continue component
feat(Regenerate.tsx): create Regenerate component for regenerating button
feat(Stop.tsx): create Stop component for stop generating button
* feat(MessageHandler.jsx): add MessageHandler component to handle messages and conversations
fix(Root.jsx): fix import paths for Nav and MessageHandler components
* feat(useMessageHandler.ts): add support for generation parameter in ask function
feat(useMessageHandler.ts): add support for isEdited parameter in ask function
feat(useMessageHandler.ts): add support for continueGeneration function
fix(createPayload.ts): replace endpoint URL when isEdited parameter is true
* chore(client): set skipLibCheck to true in tsconfig.json
* fix(useMessageHandler.ts): remove unused clientId variable
fix(schemas.ts): make clientId field in tMessageSchema nullable and optional
* wip: edit route for continue generation
* refactor(api): move handlers to root of routes dir
* fix(useMessageHandler.ts): initialize currentMessages to an empty array if messages is null
fix(useMessageHandler.ts): update initialResponse text to use responseText variable
fix(useMessageHandler.ts): update setMessages logic for isRegenerate case
fix(MessageHandler.jsx): update setMessages logic for cancelHandler, createdHandler, and finalHandler
* fix(schemas.ts): make createdAt and updatedAt fields optional and set default values using new Date().toISOString()
fix(schemas.ts): change type annotation of TMessage from infer to input
* refactor(useMessageHandler.ts): rename AskProps type to TAskProps
refactor(useMessageHandler.ts): remove generation property from ask function arguments
refactor(useMessageHandler.ts): use nullish coalescing operator (??) instead of logical OR (||)
refactor(useMessageHandler.ts): pass the responseMessageId to message prop of submission
* fix(BaseClient.js): use nullish coalescing operator (??) instead of logical OR (||) for default values
* fix(BaseClient.js): fix responseMessageId assignment in handleStartMethods method
feat(BaseClient.js): add support for isEdited flag in sendMessage method
feat(BaseClient.js): add generation to responseMessage text in sendMessage method
* fix(openAI.js): remove unused imports and commented out code
feat(openAI.js): add support for generation parameter in request body
fix(openAI.js): remove console.log statement
fix(openAI.js): remove unused variables and parameters
fix(openAI.js): update response text in case of error
fix(openAI.js): handle error and abort message in case of error
fix(handlers.js): add generation parameter to createOnProgress function
fix(useMessageHandler.ts): update responseText variable to use generation parameter
* refactor(api/middleware): move inside server dir
* refactor: add endpoint specific, modular functions to build options and initialize clients, create server/utils, move middleware, separate utils into api general utils and server specific utils
* fix(abortMiddleware.js): import getConvo and getConvoTitle functions from models
feat(abortMiddleware.js): add abortAsk function to abortController to handle aborting of requests
fix(openAI.js): import buildOptions and initializeClient functions from endpoints/openAI
refactor(openAI.js): use getAbortData function to get data for abortAsk function
* refactor: move endpoint specific logic to an endpoints dir
* refactor(PluginService.js): fix import path for encrypt and decrypt functions in PluginService.js
* feat(openAI): add new endpoint for adding a title to a conversation
- Added a new file `addTitle.js` in the `api/server/routes/endpoints/openAI` directory.
- The `addTitle.js` file exports a function `addTitle` that takes in request parameters and performs the following actions:
- If the `parentMessageId` is `'00000000-0000-0000-0000-000000000000'` and `newConvo` is true, it proceeds with the following steps:
- Calls the `titleConvo` function from the `titleConvo` module, passing in the necessary parameters.
- Calls the `saveConvo` function from the `saveConvo` module, passing in the user ID and conversation details.
- Updated the `index.js` file in the `api/server/routes/endpoints/openAI` directory to export the `addTitle` function.
- This change adds
* fix(abortMiddleware.js): remove console.log statement
refactor(gptPlugins.js): update imports and function parameters
feat(gptPlugins.js): add support for abortController and getAbortData
refactor(openAI.js): update imports and function parameters
feat(openAI.js): add support for abortController and getAbortData
fix(openAI.js): refactor code to use modularized functions and middleware
fix(buildOptions.js): refactor code to use destructuring and update variable names
* refactor(askChatGPTBrowser.js, bingAI.js, google.js): remove duplicate code for setting response headers
feat(askChatGPTBrowser.js, bingAI.js, google.js): add setHeaders middleware to set response headers
* feat(middleware): validateEndpoint, refactor buildOption to only be concerned of endpointOption
* fix(abortMiddleware.js): add 'finish_reason' property with value 'incomplete' to responseMessage object
fix(abortMessage.js): remove console.log statement for aborted message
fix(handlers.js): modify tokens assignment to handle empty generation string and trailing space
* fix(BaseClient.js): import addSpaceIfNeeded function from server/utils
fix(BaseClient.js): add space before generation in text property
fix(index.js): remove getCitations and citeText exports
feat(buildEndpointOption.js): add buildEndpointOption middleware
fix(index.js): import buildEndpointOption middleware
fix(anthropic.js): remove buildOptions function and use endpointOption from req.body
fix(gptPlugins.js): remove buildOptions function and use endpointOption from req.body
fix(openAI.js): remove buildOptions function and use endpointOption from req.body
feat(utils): add citations.js and handleText.js modules
fix(utils): fix import statements in index.js module
* refactor(gptPlugins.js): use getResponseSender function from librechat-data-provider
* feat(gptPlugins): complete 'continue generating'
* wip: anthropic continue regen
* feat(middleware): add validateRegistration middleware
A new middleware function called `validateRegistration` has been added to the list of exported middleware functions in `index.js`. This middleware is responsible for validating registration data before allowing the registration process to proceed.
* feat(Anthropic): complete continue regen
* chore: add librechat-data-provider to api/package.json
* fix(ci): backend-review will mock meilisearch, also installs data-provider as now needed
* chore(ci): remove unneeded SEARCH env var
* style(GenerationButtons): make text shorter for sake of space economy, even though this diverges from chat.openai.com
* style(GenerationButtons/ScrollToBottom): adjust visibility/position based on screen size
* chore(client): 'Editting' typo
* feat(GenerationButtons.tsx): add support for endpoint prop in GenerationButtons component
feat(OptionsBar.tsx): pass endpoint prop to GenerationButtons component
feat(useGenerations.ts): create useGenerations hook to handle generation logic
fix(schemas.ts): add searchResult field to tMessageSchema
* refactor(HoverButtons): convert to TSX and utilize new useGenerations hook
* fix(abortMiddleware): handle error with res headers set, or abortController not found, to ensure proper API error is sent to the client, chore(BaseClient): remove console log for onStart message meant for debugging
* refactor(api): remove librechat-data-provider dep for now as it complicates deployed docker build stage, re-use code in CJS, located in server/endpoints/schemas
* chore: remove console.logs from test files
* ci: add backend tests for AnthropicClient, focusing on new buildMessages logic
* refactor(FakeClient): use actual BaseClient sendMessage method for testing
* test(BaseClient.test.js): add test for loading chat history
test(BaseClient.test.js): add test for sendMessage logic with isEdited flag
* fix(buildEndpointOption.js): add support for azureOpenAI in buildFunction object
wip(endpoints.js): fetch Azure models from Azure OpenAI API if opts.azure is true
* fix(Button.tsx): add data-testid attribute to button component
fix(SelectDropDown.tsx): add data-testid attribute to Listbox.Button component
fix(messages.spec.ts): add waitForServerStream function to consolidate logic for awaiting the server response
feat(messages.spec.ts): add test for stopping and continuing message and improve browser/page context order and closing
* refactor(onProgress): speed up time to save initial message for editable routes
* chore: disable AI message editing (for now), was accidentally allowed
* refactor: ensure continue is only supported for latest message style: improve styling in dark mode and across all hover buttons/icons, including making edit icon for AI invisible (for now)
* fix: add test id to generation buttons so they never resolve to 2+ items
* chore(package.json): add 'packages/' to the list of ignored directories
chore(data-provider/package.json): bump version to 0.1.5
2023-08-17 12:50:05 -04:00
|
|
|
const HUMAN_PROMPT = '\n\nHuman:';
|
|
|
|
|
const AI_PROMPT = '\n\nAssistant:';
|
|
|
|
|
|
|
|
|
|
describe('AnthropicClient', () => {
|
|
|
|
|
let client;
|
|
|
|
|
const model = 'claude-2';
|
|
|
|
|
const parentMessageId = '1';
|
|
|
|
|
const messages = [
|
|
|
|
|
{ role: 'user', isCreatedByUser: true, text: 'Hello', messageId: parentMessageId },
|
|
|
|
|
{ role: 'assistant', isCreatedByUser: false, text: 'Hi', messageId: '2', parentMessageId },
|
|
|
|
|
{
|
|
|
|
|
role: 'user',
|
|
|
|
|
isCreatedByUser: true,
|
2025-05-22 15:00:44 -04:00
|
|
|
text: "What's up",
|
feat(GPT/Anthropic): Continue Regenerating & Generation Buttons (#808)
* feat(useMessageHandler.js/ts): Refactor and add features to handle user messages, support multiple endpoints/models, generate placeholder responses, regeneration, and stopGeneration function
fix(conversation.ts, buildTree.ts): Import TMessage type, handle null parentMessageId
feat(schemas.ts): Update and add schemas for various AI services, add default values, optional fields, and endpoint-to-schema mapping, create parseConvo function
chore(useMessageHandler.js, schemas.ts): Remove unused imports, variables, and chatGPT enum
* wip: add generation buttons
* refactor(cleanupPreset.ts): simplify cleanupPreset function
refactor(getDefaultConversation.js): remove unused code and simplify getDefaultConversation function
feat(utils): add getDefaultConversation function
This commit adds a new utility function called `getDefaultConversation` to the `client/src/utils/getDefaultConversation.ts` file. This function is responsible for generating a default conversation object based on the provided parameters.
The `getDefaultConversation` function takes in an object with the following properties:
- `conversation`: The conversation object to be used as a base.
- `endpointsConfig`: The configuration object containing information about the available endpoints.
- `preset`: An optional preset object that can be used to override the default behavior.
The function first tries to determine the target endpoint based on the preset object. If a valid endpoint is found, it is used as the target endpoint. If not, the function tries to retrieve the last conversation setup from the local storage and uses its endpoint if it is valid. If neither the preset nor the local storage contains a valid endpoint, the function falls back to a default endpoint.
Once the target endpoint is determined,
* fix(utils): remove console.error statement in buildDefaultConversation function
fix(schemas): add default values for catch blocks in openAISchema, googleSchema, bingAISchema, anthropicSchema, chatGPTBrowserSchema, and gptPluginsSchema
* fix: endpoint not changing on change of preset from other endpoint, wip: refactor
* refactor: preset items to TSX
* refactor: convert resetConvo to TS
* refactor(getDefaultConversation.ts): move defaultEndpoints array to the top of the file for better readability
refactor(getDefaultConversation.ts): extract getDefaultEndpoint function for better code organization and reusability
* feat(svg): add ContinueIcon component
feat(svg): add RegenerateIcon component
feat(svg): add ContinueIcon and RegenerateIcon components to index.ts
* feat(Button.tsx): add onClick and className props to Button component
feat(GenerationButtons.tsx): add logic to display Regenerate or StopGenerating button based on isSubmitting and messages
feat(Regenerate.tsx): create Regenerate component with RegenerateIcon and handleRegenerate function
feat(StopGenerating.tsx): create StopGenerating component with StopGeneratingIcon and handleStopGenerating function
* fix(TextChat.jsx): reorder imports and variables for better readability
fix(TextChat.jsx): fix typo in condition for isNotAppendable variable
fix(TextChat.jsx): remove unused handleStopGenerating function
fix(ContinueIcon.tsx): remove unnecessary closing tags for polygon elements
fix(useMessageHandler.ts): add missing type annotations for handleStopGenerating and handleRegenerate functions
fix(useMessageHandler.ts): remove unused variables in return statement
* fix(getDefaultConversation.ts): refactor code to use getLocalStorageItems function
feat(getLocalStorageItems.ts): add utility function to retrieve items from local storage
* fix(OpenAIClient.js): add support for streaming result in sendCompletion method
feat(OpenAIClient.js): add finish_reason metadata to opts in sendCompletion method
feat(Message.js): add finish_reason field to Message model
feat(messageSchema.js): add finish_reason field to messageSchema
feat(openAI.js): parse chatGptLabel and promptPrefix from req.body and pass rest of the modelOptions to endpointOption
feat(openAI.js): add addMetadata function to store metadata in ask function
feat(openAI.js): add metadata to response if available
feat(schemas.ts): add finish_reason field to tMessageSchema
* feat(types.ts): add TOnClick and TGenButtonProps types for button components
feat(Continue.tsx): create Continue component for generating button
feat(GenerationButtons.tsx): update GenerationButtons component to use Continue component
feat(Regenerate.tsx): create Regenerate component for regenerating button
feat(Stop.tsx): create Stop component for stop generating button
* feat(MessageHandler.jsx): add MessageHandler component to handle messages and conversations
fix(Root.jsx): fix import paths for Nav and MessageHandler components
* feat(useMessageHandler.ts): add support for generation parameter in ask function
feat(useMessageHandler.ts): add support for isEdited parameter in ask function
feat(useMessageHandler.ts): add support for continueGeneration function
fix(createPayload.ts): replace endpoint URL when isEdited parameter is true
* chore(client): set skipLibCheck to true in tsconfig.json
* fix(useMessageHandler.ts): remove unused clientId variable
fix(schemas.ts): make clientId field in tMessageSchema nullable and optional
* wip: edit route for continue generation
* refactor(api): move handlers to root of routes dir
* fix(useMessageHandler.ts): initialize currentMessages to an empty array if messages is null
fix(useMessageHandler.ts): update initialResponse text to use responseText variable
fix(useMessageHandler.ts): update setMessages logic for isRegenerate case
fix(MessageHandler.jsx): update setMessages logic for cancelHandler, createdHandler, and finalHandler
* fix(schemas.ts): make createdAt and updatedAt fields optional and set default values using new Date().toISOString()
fix(schemas.ts): change type annotation of TMessage from infer to input
* refactor(useMessageHandler.ts): rename AskProps type to TAskProps
refactor(useMessageHandler.ts): remove generation property from ask function arguments
refactor(useMessageHandler.ts): use nullish coalescing operator (??) instead of logical OR (||)
refactor(useMessageHandler.ts): pass the responseMessageId to message prop of submission
* fix(BaseClient.js): use nullish coalescing operator (??) instead of logical OR (||) for default values
* fix(BaseClient.js): fix responseMessageId assignment in handleStartMethods method
feat(BaseClient.js): add support for isEdited flag in sendMessage method
feat(BaseClient.js): add generation to responseMessage text in sendMessage method
* fix(openAI.js): remove unused imports and commented out code
feat(openAI.js): add support for generation parameter in request body
fix(openAI.js): remove console.log statement
fix(openAI.js): remove unused variables and parameters
fix(openAI.js): update response text in case of error
fix(openAI.js): handle error and abort message in case of error
fix(handlers.js): add generation parameter to createOnProgress function
fix(useMessageHandler.ts): update responseText variable to use generation parameter
* refactor(api/middleware): move inside server dir
* refactor: add endpoint specific, modular functions to build options and initialize clients, create server/utils, move middleware, separate utils into api general utils and server specific utils
* fix(abortMiddleware.js): import getConvo and getConvoTitle functions from models
feat(abortMiddleware.js): add abortAsk function to abortController to handle aborting of requests
fix(openAI.js): import buildOptions and initializeClient functions from endpoints/openAI
refactor(openAI.js): use getAbortData function to get data for abortAsk function
* refactor: move endpoint specific logic to an endpoints dir
* refactor(PluginService.js): fix import path for encrypt and decrypt functions in PluginService.js
* feat(openAI): add new endpoint for adding a title to a conversation
- Added a new file `addTitle.js` in the `api/server/routes/endpoints/openAI` directory.
- The `addTitle.js` file exports a function `addTitle` that takes in request parameters and performs the following actions:
- If the `parentMessageId` is `'00000000-0000-0000-0000-000000000000'` and `newConvo` is true, it proceeds with the following steps:
- Calls the `titleConvo` function from the `titleConvo` module, passing in the necessary parameters.
- Calls the `saveConvo` function from the `saveConvo` module, passing in the user ID and conversation details.
- Updated the `index.js` file in the `api/server/routes/endpoints/openAI` directory to export the `addTitle` function.
- This change adds
* fix(abortMiddleware.js): remove console.log statement
refactor(gptPlugins.js): update imports and function parameters
feat(gptPlugins.js): add support for abortController and getAbortData
refactor(openAI.js): update imports and function parameters
feat(openAI.js): add support for abortController and getAbortData
fix(openAI.js): refactor code to use modularized functions and middleware
fix(buildOptions.js): refactor code to use destructuring and update variable names
* refactor(askChatGPTBrowser.js, bingAI.js, google.js): remove duplicate code for setting response headers
feat(askChatGPTBrowser.js, bingAI.js, google.js): add setHeaders middleware to set response headers
* feat(middleware): validateEndpoint, refactor buildOption to only be concerned of endpointOption
* fix(abortMiddleware.js): add 'finish_reason' property with value 'incomplete' to responseMessage object
fix(abortMessage.js): remove console.log statement for aborted message
fix(handlers.js): modify tokens assignment to handle empty generation string and trailing space
* fix(BaseClient.js): import addSpaceIfNeeded function from server/utils
fix(BaseClient.js): add space before generation in text property
fix(index.js): remove getCitations and citeText exports
feat(buildEndpointOption.js): add buildEndpointOption middleware
fix(index.js): import buildEndpointOption middleware
fix(anthropic.js): remove buildOptions function and use endpointOption from req.body
fix(gptPlugins.js): remove buildOptions function and use endpointOption from req.body
fix(openAI.js): remove buildOptions function and use endpointOption from req.body
feat(utils): add citations.js and handleText.js modules
fix(utils): fix import statements in index.js module
* refactor(gptPlugins.js): use getResponseSender function from librechat-data-provider
* feat(gptPlugins): complete 'continue generating'
* wip: anthropic continue regen
* feat(middleware): add validateRegistration middleware
A new middleware function called `validateRegistration` has been added to the list of exported middleware functions in `index.js`. This middleware is responsible for validating registration data before allowing the registration process to proceed.
* feat(Anthropic): complete continue regen
* chore: add librechat-data-provider to api/package.json
* fix(ci): backend-review will mock meilisearch, also installs data-provider as now needed
* chore(ci): remove unneeded SEARCH env var
* style(GenerationButtons): make text shorter for sake of space economy, even though this diverges from chat.openai.com
* style(GenerationButtons/ScrollToBottom): adjust visibility/position based on screen size
* chore(client): 'Editting' typo
* feat(GenerationButtons.tsx): add support for endpoint prop in GenerationButtons component
feat(OptionsBar.tsx): pass endpoint prop to GenerationButtons component
feat(useGenerations.ts): create useGenerations hook to handle generation logic
fix(schemas.ts): add searchResult field to tMessageSchema
* refactor(HoverButtons): convert to TSX and utilize new useGenerations hook
* fix(abortMiddleware): handle error with res headers set, or abortController not found, to ensure proper API error is sent to the client, chore(BaseClient): remove console log for onStart message meant for debugging
* refactor(api): remove librechat-data-provider dep for now as it complicates deployed docker build stage, re-use code in CJS, located in server/endpoints/schemas
* chore: remove console.logs from test files
* ci: add backend tests for AnthropicClient, focusing on new buildMessages logic
* refactor(FakeClient): use actual BaseClient sendMessage method for testing
* test(BaseClient.test.js): add test for loading chat history
test(BaseClient.test.js): add test for sendMessage logic with isEdited flag
* fix(buildEndpointOption.js): add support for azureOpenAI in buildFunction object
wip(endpoints.js): fetch Azure models from Azure OpenAI API if opts.azure is true
* fix(Button.tsx): add data-testid attribute to button component
fix(SelectDropDown.tsx): add data-testid attribute to Listbox.Button component
fix(messages.spec.ts): add waitForServerStream function to consolidate logic for awaiting the server response
feat(messages.spec.ts): add test for stopping and continuing message and improve browser/page context order and closing
* refactor(onProgress): speed up time to save initial message for editable routes
* chore: disable AI message editing (for now), was accidentally allowed
* refactor: ensure continue is only supported for latest message style: improve styling in dark mode and across all hover buttons/icons, including making edit icon for AI invisible (for now)
* fix: add test id to generation buttons so they never resolve to 2+ items
* chore(package.json): add 'packages/' to the list of ignored directories
chore(data-provider/package.json): bump version to 0.1.5
2023-08-17 12:50:05 -04:00
|
|
|
messageId: '3',
|
|
|
|
|
parentMessageId: '2',
|
|
|
|
|
},
|
|
|
|
|
];
|
|
|
|
|
|
|
|
|
|
beforeEach(() => {
|
|
|
|
|
const options = {
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model,
|
2024-07-20 08:53:16 -04:00
|
|
|
temperature: anthropicSettings.temperature.default,
|
feat(GPT/Anthropic): Continue Regenerating & Generation Buttons (#808)
* feat(useMessageHandler.js/ts): Refactor and add features to handle user messages, support multiple endpoints/models, generate placeholder responses, regeneration, and stopGeneration function
fix(conversation.ts, buildTree.ts): Import TMessage type, handle null parentMessageId
feat(schemas.ts): Update and add schemas for various AI services, add default values, optional fields, and endpoint-to-schema mapping, create parseConvo function
chore(useMessageHandler.js, schemas.ts): Remove unused imports, variables, and chatGPT enum
* wip: add generation buttons
* refactor(cleanupPreset.ts): simplify cleanupPreset function
refactor(getDefaultConversation.js): remove unused code and simplify getDefaultConversation function
feat(utils): add getDefaultConversation function
This commit adds a new utility function called `getDefaultConversation` to the `client/src/utils/getDefaultConversation.ts` file. This function is responsible for generating a default conversation object based on the provided parameters.
The `getDefaultConversation` function takes in an object with the following properties:
- `conversation`: The conversation object to be used as a base.
- `endpointsConfig`: The configuration object containing information about the available endpoints.
- `preset`: An optional preset object that can be used to override the default behavior.
The function first tries to determine the target endpoint based on the preset object. If a valid endpoint is found, it is used as the target endpoint. If not, the function tries to retrieve the last conversation setup from the local storage and uses its endpoint if it is valid. If neither the preset nor the local storage contains a valid endpoint, the function falls back to a default endpoint.
Once the target endpoint is determined,
* fix(utils): remove console.error statement in buildDefaultConversation function
fix(schemas): add default values for catch blocks in openAISchema, googleSchema, bingAISchema, anthropicSchema, chatGPTBrowserSchema, and gptPluginsSchema
* fix: endpoint not changing on change of preset from other endpoint, wip: refactor
* refactor: preset items to TSX
* refactor: convert resetConvo to TS
* refactor(getDefaultConversation.ts): move defaultEndpoints array to the top of the file for better readability
refactor(getDefaultConversation.ts): extract getDefaultEndpoint function for better code organization and reusability
* feat(svg): add ContinueIcon component
feat(svg): add RegenerateIcon component
feat(svg): add ContinueIcon and RegenerateIcon components to index.ts
* feat(Button.tsx): add onClick and className props to Button component
feat(GenerationButtons.tsx): add logic to display Regenerate or StopGenerating button based on isSubmitting and messages
feat(Regenerate.tsx): create Regenerate component with RegenerateIcon and handleRegenerate function
feat(StopGenerating.tsx): create StopGenerating component with StopGeneratingIcon and handleStopGenerating function
* fix(TextChat.jsx): reorder imports and variables for better readability
fix(TextChat.jsx): fix typo in condition for isNotAppendable variable
fix(TextChat.jsx): remove unused handleStopGenerating function
fix(ContinueIcon.tsx): remove unnecessary closing tags for polygon elements
fix(useMessageHandler.ts): add missing type annotations for handleStopGenerating and handleRegenerate functions
fix(useMessageHandler.ts): remove unused variables in return statement
* fix(getDefaultConversation.ts): refactor code to use getLocalStorageItems function
feat(getLocalStorageItems.ts): add utility function to retrieve items from local storage
* fix(OpenAIClient.js): add support for streaming result in sendCompletion method
feat(OpenAIClient.js): add finish_reason metadata to opts in sendCompletion method
feat(Message.js): add finish_reason field to Message model
feat(messageSchema.js): add finish_reason field to messageSchema
feat(openAI.js): parse chatGptLabel and promptPrefix from req.body and pass rest of the modelOptions to endpointOption
feat(openAI.js): add addMetadata function to store metadata in ask function
feat(openAI.js): add metadata to response if available
feat(schemas.ts): add finish_reason field to tMessageSchema
* feat(types.ts): add TOnClick and TGenButtonProps types for button components
feat(Continue.tsx): create Continue component for generating button
feat(GenerationButtons.tsx): update GenerationButtons component to use Continue component
feat(Regenerate.tsx): create Regenerate component for regenerating button
feat(Stop.tsx): create Stop component for stop generating button
* feat(MessageHandler.jsx): add MessageHandler component to handle messages and conversations
fix(Root.jsx): fix import paths for Nav and MessageHandler components
* feat(useMessageHandler.ts): add support for generation parameter in ask function
feat(useMessageHandler.ts): add support for isEdited parameter in ask function
feat(useMessageHandler.ts): add support for continueGeneration function
fix(createPayload.ts): replace endpoint URL when isEdited parameter is true
* chore(client): set skipLibCheck to true in tsconfig.json
* fix(useMessageHandler.ts): remove unused clientId variable
fix(schemas.ts): make clientId field in tMessageSchema nullable and optional
* wip: edit route for continue generation
* refactor(api): move handlers to root of routes dir
* fix(useMessageHandler.ts): initialize currentMessages to an empty array if messages is null
fix(useMessageHandler.ts): update initialResponse text to use responseText variable
fix(useMessageHandler.ts): update setMessages logic for isRegenerate case
fix(MessageHandler.jsx): update setMessages logic for cancelHandler, createdHandler, and finalHandler
* fix(schemas.ts): make createdAt and updatedAt fields optional and set default values using new Date().toISOString()
fix(schemas.ts): change type annotation of TMessage from infer to input
* refactor(useMessageHandler.ts): rename AskProps type to TAskProps
refactor(useMessageHandler.ts): remove generation property from ask function arguments
refactor(useMessageHandler.ts): use nullish coalescing operator (??) instead of logical OR (||)
refactor(useMessageHandler.ts): pass the responseMessageId to message prop of submission
* fix(BaseClient.js): use nullish coalescing operator (??) instead of logical OR (||) for default values
* fix(BaseClient.js): fix responseMessageId assignment in handleStartMethods method
feat(BaseClient.js): add support for isEdited flag in sendMessage method
feat(BaseClient.js): add generation to responseMessage text in sendMessage method
* fix(openAI.js): remove unused imports and commented out code
feat(openAI.js): add support for generation parameter in request body
fix(openAI.js): remove console.log statement
fix(openAI.js): remove unused variables and parameters
fix(openAI.js): update response text in case of error
fix(openAI.js): handle error and abort message in case of error
fix(handlers.js): add generation parameter to createOnProgress function
fix(useMessageHandler.ts): update responseText variable to use generation parameter
* refactor(api/middleware): move inside server dir
* refactor: add endpoint specific, modular functions to build options and initialize clients, create server/utils, move middleware, separate utils into api general utils and server specific utils
* fix(abortMiddleware.js): import getConvo and getConvoTitle functions from models
feat(abortMiddleware.js): add abortAsk function to abortController to handle aborting of requests
fix(openAI.js): import buildOptions and initializeClient functions from endpoints/openAI
refactor(openAI.js): use getAbortData function to get data for abortAsk function
* refactor: move endpoint specific logic to an endpoints dir
* refactor(PluginService.js): fix import path for encrypt and decrypt functions in PluginService.js
* feat(openAI): add new endpoint for adding a title to a conversation
- Added a new file `addTitle.js` in the `api/server/routes/endpoints/openAI` directory.
- The `addTitle.js` file exports a function `addTitle` that takes in request parameters and performs the following actions:
- If the `parentMessageId` is `'00000000-0000-0000-0000-000000000000'` and `newConvo` is true, it proceeds with the following steps:
- Calls the `titleConvo` function from the `titleConvo` module, passing in the necessary parameters.
- Calls the `saveConvo` function from the `saveConvo` module, passing in the user ID and conversation details.
- Updated the `index.js` file in the `api/server/routes/endpoints/openAI` directory to export the `addTitle` function.
- This change adds
* fix(abortMiddleware.js): remove console.log statement
refactor(gptPlugins.js): update imports and function parameters
feat(gptPlugins.js): add support for abortController and getAbortData
refactor(openAI.js): update imports and function parameters
feat(openAI.js): add support for abortController and getAbortData
fix(openAI.js): refactor code to use modularized functions and middleware
fix(buildOptions.js): refactor code to use destructuring and update variable names
* refactor(askChatGPTBrowser.js, bingAI.js, google.js): remove duplicate code for setting response headers
feat(askChatGPTBrowser.js, bingAI.js, google.js): add setHeaders middleware to set response headers
* feat(middleware): validateEndpoint, refactor buildOption to only be concerned of endpointOption
* fix(abortMiddleware.js): add 'finish_reason' property with value 'incomplete' to responseMessage object
fix(abortMessage.js): remove console.log statement for aborted message
fix(handlers.js): modify tokens assignment to handle empty generation string and trailing space
* fix(BaseClient.js): import addSpaceIfNeeded function from server/utils
fix(BaseClient.js): add space before generation in text property
fix(index.js): remove getCitations and citeText exports
feat(buildEndpointOption.js): add buildEndpointOption middleware
fix(index.js): import buildEndpointOption middleware
fix(anthropic.js): remove buildOptions function and use endpointOption from req.body
fix(gptPlugins.js): remove buildOptions function and use endpointOption from req.body
fix(openAI.js): remove buildOptions function and use endpointOption from req.body
feat(utils): add citations.js and handleText.js modules
fix(utils): fix import statements in index.js module
* refactor(gptPlugins.js): use getResponseSender function from librechat-data-provider
* feat(gptPlugins): complete 'continue generating'
* wip: anthropic continue regen
* feat(middleware): add validateRegistration middleware
A new middleware function called `validateRegistration` has been added to the list of exported middleware functions in `index.js`. This middleware is responsible for validating registration data before allowing the registration process to proceed.
* feat(Anthropic): complete continue regen
* chore: add librechat-data-provider to api/package.json
* fix(ci): backend-review will mock meilisearch, also installs data-provider as now needed
* chore(ci): remove unneeded SEARCH env var
* style(GenerationButtons): make text shorter for sake of space economy, even though this diverges from chat.openai.com
* style(GenerationButtons/ScrollToBottom): adjust visibility/position based on screen size
* chore(client): 'Editting' typo
* feat(GenerationButtons.tsx): add support for endpoint prop in GenerationButtons component
feat(OptionsBar.tsx): pass endpoint prop to GenerationButtons component
feat(useGenerations.ts): create useGenerations hook to handle generation logic
fix(schemas.ts): add searchResult field to tMessageSchema
* refactor(HoverButtons): convert to TSX and utilize new useGenerations hook
* fix(abortMiddleware): handle error with res headers set, or abortController not found, to ensure proper API error is sent to the client, chore(BaseClient): remove console log for onStart message meant for debugging
* refactor(api): remove librechat-data-provider dep for now as it complicates deployed docker build stage, re-use code in CJS, located in server/endpoints/schemas
* chore: remove console.logs from test files
* ci: add backend tests for AnthropicClient, focusing on new buildMessages logic
* refactor(FakeClient): use actual BaseClient sendMessage method for testing
* test(BaseClient.test.js): add test for loading chat history
test(BaseClient.test.js): add test for sendMessage logic with isEdited flag
* fix(buildEndpointOption.js): add support for azureOpenAI in buildFunction object
wip(endpoints.js): fetch Azure models from Azure OpenAI API if opts.azure is true
* fix(Button.tsx): add data-testid attribute to button component
fix(SelectDropDown.tsx): add data-testid attribute to Listbox.Button component
fix(messages.spec.ts): add waitForServerStream function to consolidate logic for awaiting the server response
feat(messages.spec.ts): add test for stopping and continuing message and improve browser/page context order and closing
* refactor(onProgress): speed up time to save initial message for editable routes
* chore: disable AI message editing (for now), was accidentally allowed
* refactor: ensure continue is only supported for latest message style: improve styling in dark mode and across all hover buttons/icons, including making edit icon for AI invisible (for now)
* fix: add test id to generation buttons so they never resolve to 2+ items
* chore(package.json): add 'packages/' to the list of ignored directories
chore(data-provider/package.json): bump version to 0.1.5
2023-08-17 12:50:05 -04:00
|
|
|
},
|
|
|
|
|
};
|
|
|
|
|
client = new AnthropicClient('test-api-key');
|
|
|
|
|
client.setOptions(options);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
describe('setOptions', () => {
|
|
|
|
|
it('should set the options correctly', () => {
|
|
|
|
|
expect(client.apiKey).toBe('test-api-key');
|
|
|
|
|
expect(client.modelOptions.model).toBe(model);
|
2024-07-20 08:53:16 -04:00
|
|
|
expect(client.modelOptions.temperature).toBe(anthropicSettings.temperature.default);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should set legacy maxOutputTokens for non-Claude-3 models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-2',
|
|
|
|
|
maxOutputTokens: anthropicSettings.maxOutputTokens.default,
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
expect(client.modelOptions.maxOutputTokens).toBe(
|
|
|
|
|
anthropicSettings.legacy.maxOutputTokens.default,
|
|
|
|
|
);
|
|
|
|
|
});
|
|
|
|
|
it('should not set maxOutputTokens if not provided', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3',
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
expect(client.modelOptions.maxOutputTokens).toBeUndefined();
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should not set legacy maxOutputTokens for Claude-3 models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3-opus-20240229',
|
|
|
|
|
maxOutputTokens: anthropicSettings.legacy.maxOutputTokens.default,
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
expect(client.modelOptions.maxOutputTokens).toBe(
|
|
|
|
|
anthropicSettings.legacy.maxOutputTokens.default,
|
|
|
|
|
);
|
feat(GPT/Anthropic): Continue Regenerating & Generation Buttons (#808)
* feat(useMessageHandler.js/ts): Refactor and add features to handle user messages, support multiple endpoints/models, generate placeholder responses, regeneration, and stopGeneration function
fix(conversation.ts, buildTree.ts): Import TMessage type, handle null parentMessageId
feat(schemas.ts): Update and add schemas for various AI services, add default values, optional fields, and endpoint-to-schema mapping, create parseConvo function
chore(useMessageHandler.js, schemas.ts): Remove unused imports, variables, and chatGPT enum
* wip: add generation buttons
* refactor(cleanupPreset.ts): simplify cleanupPreset function
refactor(getDefaultConversation.js): remove unused code and simplify getDefaultConversation function
feat(utils): add getDefaultConversation function
This commit adds a new utility function called `getDefaultConversation` to the `client/src/utils/getDefaultConversation.ts` file. This function is responsible for generating a default conversation object based on the provided parameters.
The `getDefaultConversation` function takes in an object with the following properties:
- `conversation`: The conversation object to be used as a base.
- `endpointsConfig`: The configuration object containing information about the available endpoints.
- `preset`: An optional preset object that can be used to override the default behavior.
The function first tries to determine the target endpoint based on the preset object. If a valid endpoint is found, it is used as the target endpoint. If not, the function tries to retrieve the last conversation setup from the local storage and uses its endpoint if it is valid. If neither the preset nor the local storage contains a valid endpoint, the function falls back to a default endpoint.
Once the target endpoint is determined,
* fix(utils): remove console.error statement in buildDefaultConversation function
fix(schemas): add default values for catch blocks in openAISchema, googleSchema, bingAISchema, anthropicSchema, chatGPTBrowserSchema, and gptPluginsSchema
* fix: endpoint not changing on change of preset from other endpoint, wip: refactor
* refactor: preset items to TSX
* refactor: convert resetConvo to TS
* refactor(getDefaultConversation.ts): move defaultEndpoints array to the top of the file for better readability
refactor(getDefaultConversation.ts): extract getDefaultEndpoint function for better code organization and reusability
* feat(svg): add ContinueIcon component
feat(svg): add RegenerateIcon component
feat(svg): add ContinueIcon and RegenerateIcon components to index.ts
* feat(Button.tsx): add onClick and className props to Button component
feat(GenerationButtons.tsx): add logic to display Regenerate or StopGenerating button based on isSubmitting and messages
feat(Regenerate.tsx): create Regenerate component with RegenerateIcon and handleRegenerate function
feat(StopGenerating.tsx): create StopGenerating component with StopGeneratingIcon and handleStopGenerating function
* fix(TextChat.jsx): reorder imports and variables for better readability
fix(TextChat.jsx): fix typo in condition for isNotAppendable variable
fix(TextChat.jsx): remove unused handleStopGenerating function
fix(ContinueIcon.tsx): remove unnecessary closing tags for polygon elements
fix(useMessageHandler.ts): add missing type annotations for handleStopGenerating and handleRegenerate functions
fix(useMessageHandler.ts): remove unused variables in return statement
* fix(getDefaultConversation.ts): refactor code to use getLocalStorageItems function
feat(getLocalStorageItems.ts): add utility function to retrieve items from local storage
* fix(OpenAIClient.js): add support for streaming result in sendCompletion method
feat(OpenAIClient.js): add finish_reason metadata to opts in sendCompletion method
feat(Message.js): add finish_reason field to Message model
feat(messageSchema.js): add finish_reason field to messageSchema
feat(openAI.js): parse chatGptLabel and promptPrefix from req.body and pass rest of the modelOptions to endpointOption
feat(openAI.js): add addMetadata function to store metadata in ask function
feat(openAI.js): add metadata to response if available
feat(schemas.ts): add finish_reason field to tMessageSchema
* feat(types.ts): add TOnClick and TGenButtonProps types for button components
feat(Continue.tsx): create Continue component for generating button
feat(GenerationButtons.tsx): update GenerationButtons component to use Continue component
feat(Regenerate.tsx): create Regenerate component for regenerating button
feat(Stop.tsx): create Stop component for stop generating button
* feat(MessageHandler.jsx): add MessageHandler component to handle messages and conversations
fix(Root.jsx): fix import paths for Nav and MessageHandler components
* feat(useMessageHandler.ts): add support for generation parameter in ask function
feat(useMessageHandler.ts): add support for isEdited parameter in ask function
feat(useMessageHandler.ts): add support for continueGeneration function
fix(createPayload.ts): replace endpoint URL when isEdited parameter is true
* chore(client): set skipLibCheck to true in tsconfig.json
* fix(useMessageHandler.ts): remove unused clientId variable
fix(schemas.ts): make clientId field in tMessageSchema nullable and optional
* wip: edit route for continue generation
* refactor(api): move handlers to root of routes dir
* fix(useMessageHandler.ts): initialize currentMessages to an empty array if messages is null
fix(useMessageHandler.ts): update initialResponse text to use responseText variable
fix(useMessageHandler.ts): update setMessages logic for isRegenerate case
fix(MessageHandler.jsx): update setMessages logic for cancelHandler, createdHandler, and finalHandler
* fix(schemas.ts): make createdAt and updatedAt fields optional and set default values using new Date().toISOString()
fix(schemas.ts): change type annotation of TMessage from infer to input
* refactor(useMessageHandler.ts): rename AskProps type to TAskProps
refactor(useMessageHandler.ts): remove generation property from ask function arguments
refactor(useMessageHandler.ts): use nullish coalescing operator (??) instead of logical OR (||)
refactor(useMessageHandler.ts): pass the responseMessageId to message prop of submission
* fix(BaseClient.js): use nullish coalescing operator (??) instead of logical OR (||) for default values
* fix(BaseClient.js): fix responseMessageId assignment in handleStartMethods method
feat(BaseClient.js): add support for isEdited flag in sendMessage method
feat(BaseClient.js): add generation to responseMessage text in sendMessage method
* fix(openAI.js): remove unused imports and commented out code
feat(openAI.js): add support for generation parameter in request body
fix(openAI.js): remove console.log statement
fix(openAI.js): remove unused variables and parameters
fix(openAI.js): update response text in case of error
fix(openAI.js): handle error and abort message in case of error
fix(handlers.js): add generation parameter to createOnProgress function
fix(useMessageHandler.ts): update responseText variable to use generation parameter
* refactor(api/middleware): move inside server dir
* refactor: add endpoint specific, modular functions to build options and initialize clients, create server/utils, move middleware, separate utils into api general utils and server specific utils
* fix(abortMiddleware.js): import getConvo and getConvoTitle functions from models
feat(abortMiddleware.js): add abortAsk function to abortController to handle aborting of requests
fix(openAI.js): import buildOptions and initializeClient functions from endpoints/openAI
refactor(openAI.js): use getAbortData function to get data for abortAsk function
* refactor: move endpoint specific logic to an endpoints dir
* refactor(PluginService.js): fix import path for encrypt and decrypt functions in PluginService.js
* feat(openAI): add new endpoint for adding a title to a conversation
- Added a new file `addTitle.js` in the `api/server/routes/endpoints/openAI` directory.
- The `addTitle.js` file exports a function `addTitle` that takes in request parameters and performs the following actions:
- If the `parentMessageId` is `'00000000-0000-0000-0000-000000000000'` and `newConvo` is true, it proceeds with the following steps:
- Calls the `titleConvo` function from the `titleConvo` module, passing in the necessary parameters.
- Calls the `saveConvo` function from the `saveConvo` module, passing in the user ID and conversation details.
- Updated the `index.js` file in the `api/server/routes/endpoints/openAI` directory to export the `addTitle` function.
- This change adds
* fix(abortMiddleware.js): remove console.log statement
refactor(gptPlugins.js): update imports and function parameters
feat(gptPlugins.js): add support for abortController and getAbortData
refactor(openAI.js): update imports and function parameters
feat(openAI.js): add support for abortController and getAbortData
fix(openAI.js): refactor code to use modularized functions and middleware
fix(buildOptions.js): refactor code to use destructuring and update variable names
* refactor(askChatGPTBrowser.js, bingAI.js, google.js): remove duplicate code for setting response headers
feat(askChatGPTBrowser.js, bingAI.js, google.js): add setHeaders middleware to set response headers
* feat(middleware): validateEndpoint, refactor buildOption to only be concerned of endpointOption
* fix(abortMiddleware.js): add 'finish_reason' property with value 'incomplete' to responseMessage object
fix(abortMessage.js): remove console.log statement for aborted message
fix(handlers.js): modify tokens assignment to handle empty generation string and trailing space
* fix(BaseClient.js): import addSpaceIfNeeded function from server/utils
fix(BaseClient.js): add space before generation in text property
fix(index.js): remove getCitations and citeText exports
feat(buildEndpointOption.js): add buildEndpointOption middleware
fix(index.js): import buildEndpointOption middleware
fix(anthropic.js): remove buildOptions function and use endpointOption from req.body
fix(gptPlugins.js): remove buildOptions function and use endpointOption from req.body
fix(openAI.js): remove buildOptions function and use endpointOption from req.body
feat(utils): add citations.js and handleText.js modules
fix(utils): fix import statements in index.js module
* refactor(gptPlugins.js): use getResponseSender function from librechat-data-provider
* feat(gptPlugins): complete 'continue generating'
* wip: anthropic continue regen
* feat(middleware): add validateRegistration middleware
A new middleware function called `validateRegistration` has been added to the list of exported middleware functions in `index.js`. This middleware is responsible for validating registration data before allowing the registration process to proceed.
* feat(Anthropic): complete continue regen
* chore: add librechat-data-provider to api/package.json
* fix(ci): backend-review will mock meilisearch, also installs data-provider as now needed
* chore(ci): remove unneeded SEARCH env var
* style(GenerationButtons): make text shorter for sake of space economy, even though this diverges from chat.openai.com
* style(GenerationButtons/ScrollToBottom): adjust visibility/position based on screen size
* chore(client): 'Editting' typo
* feat(GenerationButtons.tsx): add support for endpoint prop in GenerationButtons component
feat(OptionsBar.tsx): pass endpoint prop to GenerationButtons component
feat(useGenerations.ts): create useGenerations hook to handle generation logic
fix(schemas.ts): add searchResult field to tMessageSchema
* refactor(HoverButtons): convert to TSX and utilize new useGenerations hook
* fix(abortMiddleware): handle error with res headers set, or abortController not found, to ensure proper API error is sent to the client, chore(BaseClient): remove console log for onStart message meant for debugging
* refactor(api): remove librechat-data-provider dep for now as it complicates deployed docker build stage, re-use code in CJS, located in server/endpoints/schemas
* chore: remove console.logs from test files
* ci: add backend tests for AnthropicClient, focusing on new buildMessages logic
* refactor(FakeClient): use actual BaseClient sendMessage method for testing
* test(BaseClient.test.js): add test for loading chat history
test(BaseClient.test.js): add test for sendMessage logic with isEdited flag
* fix(buildEndpointOption.js): add support for azureOpenAI in buildFunction object
wip(endpoints.js): fetch Azure models from Azure OpenAI API if opts.azure is true
* fix(Button.tsx): add data-testid attribute to button component
fix(SelectDropDown.tsx): add data-testid attribute to Listbox.Button component
fix(messages.spec.ts): add waitForServerStream function to consolidate logic for awaiting the server response
feat(messages.spec.ts): add test for stopping and continuing message and improve browser/page context order and closing
* refactor(onProgress): speed up time to save initial message for editable routes
* chore: disable AI message editing (for now), was accidentally allowed
* refactor: ensure continue is only supported for latest message style: improve styling in dark mode and across all hover buttons/icons, including making edit icon for AI invisible (for now)
* fix: add test id to generation buttons so they never resolve to 2+ items
* chore(package.json): add 'packages/' to the list of ignored directories
chore(data-provider/package.json): bump version to 0.1.5
2023-08-17 12:50:05 -04:00
|
|
|
});
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
describe('getSaveOptions', () => {
|
|
|
|
|
it('should return the correct save options', () => {
|
|
|
|
|
const options = client.getSaveOptions();
|
|
|
|
|
expect(options).toHaveProperty('modelLabel');
|
|
|
|
|
expect(options).toHaveProperty('promptPrefix');
|
|
|
|
|
});
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
describe('buildMessages', () => {
|
|
|
|
|
it('should handle promptPrefix from options when promptPrefix argument is not provided', async () => {
|
|
|
|
|
client.options.promptPrefix = 'Test Prefix from options';
|
|
|
|
|
const result = await client.buildMessages(messages, parentMessageId);
|
|
|
|
|
const { prompt } = result;
|
|
|
|
|
expect(prompt).toContain('Test Prefix from options');
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should build messages correctly for chat completion', async () => {
|
|
|
|
|
const result = await client.buildMessages(messages, '2');
|
|
|
|
|
expect(result).toHaveProperty('prompt');
|
|
|
|
|
expect(result.prompt).toContain(HUMAN_PROMPT);
|
|
|
|
|
expect(result.prompt).toContain('Hello');
|
|
|
|
|
expect(result.prompt).toContain(AI_PROMPT);
|
|
|
|
|
expect(result.prompt).toContain('Hi');
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should group messages by the same author', async () => {
|
|
|
|
|
const groupedMessages = messages.map((m) => ({ ...m, isCreatedByUser: true, role: 'user' }));
|
|
|
|
|
const result = await client.buildMessages(groupedMessages, '3');
|
|
|
|
|
expect(result.context).toHaveLength(1);
|
|
|
|
|
|
|
|
|
|
// Check that HUMAN_PROMPT appears only once in the prompt
|
|
|
|
|
const matches = result.prompt.match(new RegExp(HUMAN_PROMPT, 'g'));
|
|
|
|
|
expect(matches).toHaveLength(1);
|
|
|
|
|
|
|
|
|
|
groupedMessages.push({
|
|
|
|
|
role: 'assistant',
|
|
|
|
|
isCreatedByUser: false,
|
|
|
|
|
text: 'I heard you the first time',
|
|
|
|
|
messageId: '4',
|
|
|
|
|
parentMessageId: '3',
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
const result2 = await client.buildMessages(groupedMessages, '4');
|
|
|
|
|
expect(result2.context).toHaveLength(2);
|
|
|
|
|
|
|
|
|
|
// Check that HUMAN_PROMPT appears only once in the prompt
|
|
|
|
|
const human_matches = result2.prompt.match(new RegExp(HUMAN_PROMPT, 'g'));
|
|
|
|
|
const ai_matches = result2.prompt.match(new RegExp(AI_PROMPT, 'g'));
|
|
|
|
|
expect(human_matches).toHaveLength(1);
|
|
|
|
|
expect(ai_matches).toHaveLength(1);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should handle isEdited condition', async () => {
|
|
|
|
|
const editedMessages = [
|
|
|
|
|
{ role: 'user', isCreatedByUser: true, text: 'Hello', messageId: '1' },
|
|
|
|
|
{ role: 'assistant', isCreatedByUser: false, text: 'Hi', messageId: '2', parentMessageId },
|
|
|
|
|
];
|
|
|
|
|
|
|
|
|
|
const trimmedLabel = AI_PROMPT.trim();
|
|
|
|
|
const result = await client.buildMessages(editedMessages, '2');
|
|
|
|
|
expect(result.prompt.trim().endsWith(trimmedLabel)).toBeFalsy();
|
|
|
|
|
|
|
|
|
|
// Add a human message at the end to test the opposite
|
|
|
|
|
editedMessages.push({
|
|
|
|
|
role: 'user',
|
|
|
|
|
isCreatedByUser: true,
|
|
|
|
|
text: 'Hi again',
|
|
|
|
|
messageId: '3',
|
|
|
|
|
parentMessageId: '2',
|
|
|
|
|
});
|
|
|
|
|
const result2 = await client.buildMessages(editedMessages, '3');
|
|
|
|
|
expect(result2.prompt.trim().endsWith(trimmedLabel)).toBeTruthy();
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should build messages correctly with a promptPrefix', async () => {
|
|
|
|
|
const promptPrefix = 'Test Prefix';
|
|
|
|
|
client.options.promptPrefix = promptPrefix;
|
|
|
|
|
const result = await client.buildMessages(messages, parentMessageId);
|
|
|
|
|
const { prompt } = result;
|
|
|
|
|
expect(prompt).toBeDefined();
|
|
|
|
|
expect(prompt).toContain(promptPrefix);
|
|
|
|
|
const textAfterPrefix = prompt.split(promptPrefix)[1];
|
|
|
|
|
expect(textAfterPrefix).toContain(AI_PROMPT);
|
|
|
|
|
|
|
|
|
|
const editedMessages = messages.slice(0, -1);
|
|
|
|
|
const result2 = await client.buildMessages(editedMessages, parentMessageId);
|
|
|
|
|
const textAfterPrefix2 = result2.prompt.split(promptPrefix)[1];
|
|
|
|
|
expect(textAfterPrefix2).toContain(AI_PROMPT);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should handle identityPrefix from options', async () => {
|
|
|
|
|
client.options.userLabel = 'John';
|
|
|
|
|
client.options.modelLabel = 'Claude-2';
|
|
|
|
|
const result = await client.buildMessages(messages, parentMessageId);
|
|
|
|
|
const { prompt } = result;
|
2025-05-22 15:00:44 -04:00
|
|
|
expect(prompt).toContain("Human's name: John");
|
feat(GPT/Anthropic): Continue Regenerating & Generation Buttons (#808)
* feat(useMessageHandler.js/ts): Refactor and add features to handle user messages, support multiple endpoints/models, generate placeholder responses, regeneration, and stopGeneration function
fix(conversation.ts, buildTree.ts): Import TMessage type, handle null parentMessageId
feat(schemas.ts): Update and add schemas for various AI services, add default values, optional fields, and endpoint-to-schema mapping, create parseConvo function
chore(useMessageHandler.js, schemas.ts): Remove unused imports, variables, and chatGPT enum
* wip: add generation buttons
* refactor(cleanupPreset.ts): simplify cleanupPreset function
refactor(getDefaultConversation.js): remove unused code and simplify getDefaultConversation function
feat(utils): add getDefaultConversation function
This commit adds a new utility function called `getDefaultConversation` to the `client/src/utils/getDefaultConversation.ts` file. This function is responsible for generating a default conversation object based on the provided parameters.
The `getDefaultConversation` function takes in an object with the following properties:
- `conversation`: The conversation object to be used as a base.
- `endpointsConfig`: The configuration object containing information about the available endpoints.
- `preset`: An optional preset object that can be used to override the default behavior.
The function first tries to determine the target endpoint based on the preset object. If a valid endpoint is found, it is used as the target endpoint. If not, the function tries to retrieve the last conversation setup from the local storage and uses its endpoint if it is valid. If neither the preset nor the local storage contains a valid endpoint, the function falls back to a default endpoint.
Once the target endpoint is determined,
* fix(utils): remove console.error statement in buildDefaultConversation function
fix(schemas): add default values for catch blocks in openAISchema, googleSchema, bingAISchema, anthropicSchema, chatGPTBrowserSchema, and gptPluginsSchema
* fix: endpoint not changing on change of preset from other endpoint, wip: refactor
* refactor: preset items to TSX
* refactor: convert resetConvo to TS
* refactor(getDefaultConversation.ts): move defaultEndpoints array to the top of the file for better readability
refactor(getDefaultConversation.ts): extract getDefaultEndpoint function for better code organization and reusability
* feat(svg): add ContinueIcon component
feat(svg): add RegenerateIcon component
feat(svg): add ContinueIcon and RegenerateIcon components to index.ts
* feat(Button.tsx): add onClick and className props to Button component
feat(GenerationButtons.tsx): add logic to display Regenerate or StopGenerating button based on isSubmitting and messages
feat(Regenerate.tsx): create Regenerate component with RegenerateIcon and handleRegenerate function
feat(StopGenerating.tsx): create StopGenerating component with StopGeneratingIcon and handleStopGenerating function
* fix(TextChat.jsx): reorder imports and variables for better readability
fix(TextChat.jsx): fix typo in condition for isNotAppendable variable
fix(TextChat.jsx): remove unused handleStopGenerating function
fix(ContinueIcon.tsx): remove unnecessary closing tags for polygon elements
fix(useMessageHandler.ts): add missing type annotations for handleStopGenerating and handleRegenerate functions
fix(useMessageHandler.ts): remove unused variables in return statement
* fix(getDefaultConversation.ts): refactor code to use getLocalStorageItems function
feat(getLocalStorageItems.ts): add utility function to retrieve items from local storage
* fix(OpenAIClient.js): add support for streaming result in sendCompletion method
feat(OpenAIClient.js): add finish_reason metadata to opts in sendCompletion method
feat(Message.js): add finish_reason field to Message model
feat(messageSchema.js): add finish_reason field to messageSchema
feat(openAI.js): parse chatGptLabel and promptPrefix from req.body and pass rest of the modelOptions to endpointOption
feat(openAI.js): add addMetadata function to store metadata in ask function
feat(openAI.js): add metadata to response if available
feat(schemas.ts): add finish_reason field to tMessageSchema
* feat(types.ts): add TOnClick and TGenButtonProps types for button components
feat(Continue.tsx): create Continue component for generating button
feat(GenerationButtons.tsx): update GenerationButtons component to use Continue component
feat(Regenerate.tsx): create Regenerate component for regenerating button
feat(Stop.tsx): create Stop component for stop generating button
* feat(MessageHandler.jsx): add MessageHandler component to handle messages and conversations
fix(Root.jsx): fix import paths for Nav and MessageHandler components
* feat(useMessageHandler.ts): add support for generation parameter in ask function
feat(useMessageHandler.ts): add support for isEdited parameter in ask function
feat(useMessageHandler.ts): add support for continueGeneration function
fix(createPayload.ts): replace endpoint URL when isEdited parameter is true
* chore(client): set skipLibCheck to true in tsconfig.json
* fix(useMessageHandler.ts): remove unused clientId variable
fix(schemas.ts): make clientId field in tMessageSchema nullable and optional
* wip: edit route for continue generation
* refactor(api): move handlers to root of routes dir
* fix(useMessageHandler.ts): initialize currentMessages to an empty array if messages is null
fix(useMessageHandler.ts): update initialResponse text to use responseText variable
fix(useMessageHandler.ts): update setMessages logic for isRegenerate case
fix(MessageHandler.jsx): update setMessages logic for cancelHandler, createdHandler, and finalHandler
* fix(schemas.ts): make createdAt and updatedAt fields optional and set default values using new Date().toISOString()
fix(schemas.ts): change type annotation of TMessage from infer to input
* refactor(useMessageHandler.ts): rename AskProps type to TAskProps
refactor(useMessageHandler.ts): remove generation property from ask function arguments
refactor(useMessageHandler.ts): use nullish coalescing operator (??) instead of logical OR (||)
refactor(useMessageHandler.ts): pass the responseMessageId to message prop of submission
* fix(BaseClient.js): use nullish coalescing operator (??) instead of logical OR (||) for default values
* fix(BaseClient.js): fix responseMessageId assignment in handleStartMethods method
feat(BaseClient.js): add support for isEdited flag in sendMessage method
feat(BaseClient.js): add generation to responseMessage text in sendMessage method
* fix(openAI.js): remove unused imports and commented out code
feat(openAI.js): add support for generation parameter in request body
fix(openAI.js): remove console.log statement
fix(openAI.js): remove unused variables and parameters
fix(openAI.js): update response text in case of error
fix(openAI.js): handle error and abort message in case of error
fix(handlers.js): add generation parameter to createOnProgress function
fix(useMessageHandler.ts): update responseText variable to use generation parameter
* refactor(api/middleware): move inside server dir
* refactor: add endpoint specific, modular functions to build options and initialize clients, create server/utils, move middleware, separate utils into api general utils and server specific utils
* fix(abortMiddleware.js): import getConvo and getConvoTitle functions from models
feat(abortMiddleware.js): add abortAsk function to abortController to handle aborting of requests
fix(openAI.js): import buildOptions and initializeClient functions from endpoints/openAI
refactor(openAI.js): use getAbortData function to get data for abortAsk function
* refactor: move endpoint specific logic to an endpoints dir
* refactor(PluginService.js): fix import path for encrypt and decrypt functions in PluginService.js
* feat(openAI): add new endpoint for adding a title to a conversation
- Added a new file `addTitle.js` in the `api/server/routes/endpoints/openAI` directory.
- The `addTitle.js` file exports a function `addTitle` that takes in request parameters and performs the following actions:
- If the `parentMessageId` is `'00000000-0000-0000-0000-000000000000'` and `newConvo` is true, it proceeds with the following steps:
- Calls the `titleConvo` function from the `titleConvo` module, passing in the necessary parameters.
- Calls the `saveConvo` function from the `saveConvo` module, passing in the user ID and conversation details.
- Updated the `index.js` file in the `api/server/routes/endpoints/openAI` directory to export the `addTitle` function.
- This change adds
* fix(abortMiddleware.js): remove console.log statement
refactor(gptPlugins.js): update imports and function parameters
feat(gptPlugins.js): add support for abortController and getAbortData
refactor(openAI.js): update imports and function parameters
feat(openAI.js): add support for abortController and getAbortData
fix(openAI.js): refactor code to use modularized functions and middleware
fix(buildOptions.js): refactor code to use destructuring and update variable names
* refactor(askChatGPTBrowser.js, bingAI.js, google.js): remove duplicate code for setting response headers
feat(askChatGPTBrowser.js, bingAI.js, google.js): add setHeaders middleware to set response headers
* feat(middleware): validateEndpoint, refactor buildOption to only be concerned of endpointOption
* fix(abortMiddleware.js): add 'finish_reason' property with value 'incomplete' to responseMessage object
fix(abortMessage.js): remove console.log statement for aborted message
fix(handlers.js): modify tokens assignment to handle empty generation string and trailing space
* fix(BaseClient.js): import addSpaceIfNeeded function from server/utils
fix(BaseClient.js): add space before generation in text property
fix(index.js): remove getCitations and citeText exports
feat(buildEndpointOption.js): add buildEndpointOption middleware
fix(index.js): import buildEndpointOption middleware
fix(anthropic.js): remove buildOptions function and use endpointOption from req.body
fix(gptPlugins.js): remove buildOptions function and use endpointOption from req.body
fix(openAI.js): remove buildOptions function and use endpointOption from req.body
feat(utils): add citations.js and handleText.js modules
fix(utils): fix import statements in index.js module
* refactor(gptPlugins.js): use getResponseSender function from librechat-data-provider
* feat(gptPlugins): complete 'continue generating'
* wip: anthropic continue regen
* feat(middleware): add validateRegistration middleware
A new middleware function called `validateRegistration` has been added to the list of exported middleware functions in `index.js`. This middleware is responsible for validating registration data before allowing the registration process to proceed.
* feat(Anthropic): complete continue regen
* chore: add librechat-data-provider to api/package.json
* fix(ci): backend-review will mock meilisearch, also installs data-provider as now needed
* chore(ci): remove unneeded SEARCH env var
* style(GenerationButtons): make text shorter for sake of space economy, even though this diverges from chat.openai.com
* style(GenerationButtons/ScrollToBottom): adjust visibility/position based on screen size
* chore(client): 'Editting' typo
* feat(GenerationButtons.tsx): add support for endpoint prop in GenerationButtons component
feat(OptionsBar.tsx): pass endpoint prop to GenerationButtons component
feat(useGenerations.ts): create useGenerations hook to handle generation logic
fix(schemas.ts): add searchResult field to tMessageSchema
* refactor(HoverButtons): convert to TSX and utilize new useGenerations hook
* fix(abortMiddleware): handle error with res headers set, or abortController not found, to ensure proper API error is sent to the client, chore(BaseClient): remove console log for onStart message meant for debugging
* refactor(api): remove librechat-data-provider dep for now as it complicates deployed docker build stage, re-use code in CJS, located in server/endpoints/schemas
* chore: remove console.logs from test files
* ci: add backend tests for AnthropicClient, focusing on new buildMessages logic
* refactor(FakeClient): use actual BaseClient sendMessage method for testing
* test(BaseClient.test.js): add test for loading chat history
test(BaseClient.test.js): add test for sendMessage logic with isEdited flag
* fix(buildEndpointOption.js): add support for azureOpenAI in buildFunction object
wip(endpoints.js): fetch Azure models from Azure OpenAI API if opts.azure is true
* fix(Button.tsx): add data-testid attribute to button component
fix(SelectDropDown.tsx): add data-testid attribute to Listbox.Button component
fix(messages.spec.ts): add waitForServerStream function to consolidate logic for awaiting the server response
feat(messages.spec.ts): add test for stopping and continuing message and improve browser/page context order and closing
* refactor(onProgress): speed up time to save initial message for editable routes
* chore: disable AI message editing (for now), was accidentally allowed
* refactor: ensure continue is only supported for latest message style: improve styling in dark mode and across all hover buttons/icons, including making edit icon for AI invisible (for now)
* fix: add test id to generation buttons so they never resolve to 2+ items
* chore(package.json): add 'packages/' to the list of ignored directories
chore(data-provider/package.json): bump version to 0.1.5
2023-08-17 12:50:05 -04:00
|
|
|
expect(prompt).toContain('You are Claude-2');
|
|
|
|
|
});
|
|
|
|
|
});
|
2024-07-20 08:53:16 -04:00
|
|
|
|
|
|
|
|
describe('getClient', () => {
|
|
|
|
|
it('should set legacy maxOutputTokens for non-Claude-3 models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-2',
|
|
|
|
|
maxOutputTokens: anthropicSettings.legacy.maxOutputTokens.default,
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
expect(client.modelOptions.maxOutputTokens).toBe(
|
|
|
|
|
anthropicSettings.legacy.maxOutputTokens.default,
|
|
|
|
|
);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should not set legacy maxOutputTokens for Claude-3 models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3-opus-20240229',
|
|
|
|
|
maxOutputTokens: anthropicSettings.legacy.maxOutputTokens.default,
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
expect(client.modelOptions.maxOutputTokens).toBe(
|
|
|
|
|
anthropicSettings.legacy.maxOutputTokens.default,
|
|
|
|
|
);
|
|
|
|
|
});
|
|
|
|
|
|
2024-10-22 16:45:26 -04:00
|
|
|
it('should add "max-tokens" & "prompt-caching" beta header for claude-3-5-sonnet model', () => {
|
2024-07-20 08:53:16 -04:00
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
const modelOptions = {
|
2024-10-22 16:45:26 -04:00
|
|
|
model: 'claude-3-5-sonnet-20241022',
|
2024-07-20 08:53:16 -04:00
|
|
|
};
|
2024-08-26 15:34:46 -04:00
|
|
|
client.setOptions({ modelOptions, promptCache: true });
|
2024-07-20 08:53:16 -04:00
|
|
|
const anthropicClient = client.getClient(modelOptions);
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders).toBeDefined();
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
|
2024-08-17 03:24:09 -04:00
|
|
|
'max-tokens-3-5-sonnet-2024-07-15,prompt-caching-2024-07-31',
|
|
|
|
|
);
|
|
|
|
|
});
|
|
|
|
|
|
2024-10-22 16:45:26 -04:00
|
|
|
it('should add "prompt-caching" beta header for claude-3-haiku model', () => {
|
2024-08-17 03:24:09 -04:00
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
const modelOptions = {
|
|
|
|
|
model: 'claude-3-haiku-2028',
|
|
|
|
|
};
|
2024-08-26 15:34:46 -04:00
|
|
|
client.setOptions({ modelOptions, promptCache: true });
|
2024-08-17 03:24:09 -04:00
|
|
|
const anthropicClient = client.getClient(modelOptions);
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders).toBeDefined();
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
|
|
|
|
|
'prompt-caching-2024-07-31',
|
2024-07-20 08:53:16 -04:00
|
|
|
);
|
|
|
|
|
});
|
|
|
|
|
|
2024-10-22 16:45:26 -04:00
|
|
|
it('should add "prompt-caching" beta header for claude-3-opus model', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
const modelOptions = {
|
|
|
|
|
model: 'claude-3-opus-2028',
|
|
|
|
|
};
|
|
|
|
|
client.setOptions({ modelOptions, promptCache: true });
|
|
|
|
|
const anthropicClient = client.getClient(modelOptions);
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders).toBeDefined();
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
|
|
|
|
|
'prompt-caching-2024-07-31',
|
|
|
|
|
);
|
|
|
|
|
});
|
|
|
|
|
|
2025-05-22 15:00:44 -04:00
|
|
|
describe('Claude 4 model headers', () => {
|
|
|
|
|
it('should add "prompt-caching" beta header for claude-sonnet-4 model', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
const modelOptions = {
|
|
|
|
|
model: 'claude-sonnet-4-20250514',
|
|
|
|
|
};
|
|
|
|
|
client.setOptions({ modelOptions, promptCache: true });
|
|
|
|
|
const anthropicClient = client.getClient(modelOptions);
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders).toBeDefined();
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
|
|
|
|
|
'prompt-caching-2024-07-31',
|
|
|
|
|
);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should add "prompt-caching" beta header for claude-opus-4 model', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
const modelOptions = {
|
|
|
|
|
model: 'claude-opus-4-20250514',
|
|
|
|
|
};
|
|
|
|
|
client.setOptions({ modelOptions, promptCache: true });
|
|
|
|
|
const anthropicClient = client.getClient(modelOptions);
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders).toBeDefined();
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
|
|
|
|
|
'prompt-caching-2024-07-31',
|
|
|
|
|
);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should add "prompt-caching" beta header for claude-4-sonnet model', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
const modelOptions = {
|
|
|
|
|
model: 'claude-4-sonnet-20250514',
|
|
|
|
|
};
|
|
|
|
|
client.setOptions({ modelOptions, promptCache: true });
|
|
|
|
|
const anthropicClient = client.getClient(modelOptions);
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders).toBeDefined();
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
|
|
|
|
|
'prompt-caching-2024-07-31',
|
|
|
|
|
);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should add "prompt-caching" beta header for claude-4-opus model', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
const modelOptions = {
|
|
|
|
|
model: 'claude-4-opus-20250514',
|
|
|
|
|
};
|
|
|
|
|
client.setOptions({ modelOptions, promptCache: true });
|
|
|
|
|
const anthropicClient = client.getClient(modelOptions);
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders).toBeDefined();
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders).toHaveProperty('anthropic-beta');
|
|
|
|
|
expect(anthropicClient._options.defaultHeaders['anthropic-beta']).toBe(
|
|
|
|
|
'prompt-caching-2024-07-31',
|
|
|
|
|
);
|
|
|
|
|
});
|
|
|
|
|
});
|
|
|
|
|
|
2024-10-22 16:45:26 -04:00
|
|
|
it('should not add beta header for claude-3-5-sonnet-latest model', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
const modelOptions = {
|
|
|
|
|
model: 'anthropic/claude-3-5-sonnet-latest',
|
|
|
|
|
};
|
|
|
|
|
client.setOptions({ modelOptions, promptCache: true });
|
|
|
|
|
const anthropicClient = client.getClient(modelOptions);
|
2025-06-20 15:49:24 -04:00
|
|
|
expect(anthropicClient._options.defaultHeaders).toBeUndefined();
|
2024-10-22 16:45:26 -04:00
|
|
|
});
|
|
|
|
|
|
2024-07-20 08:53:16 -04:00
|
|
|
it('should not add beta header for other models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-2',
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
const anthropicClient = client.getClient();
|
2025-06-20 15:49:24 -04:00
|
|
|
expect(anthropicClient._options.defaultHeaders).toBeUndefined();
|
2024-07-20 08:53:16 -04:00
|
|
|
});
|
|
|
|
|
});
|
2024-08-17 03:24:09 -04:00
|
|
|
|
|
|
|
|
describe('calculateCurrentTokenCount', () => {
|
|
|
|
|
let client;
|
|
|
|
|
|
|
|
|
|
beforeEach(() => {
|
|
|
|
|
client = new AnthropicClient('test-api-key');
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should calculate correct token count when usage is provided', () => {
|
|
|
|
|
const tokenCountMap = {
|
|
|
|
|
msg1: 10,
|
|
|
|
|
msg2: 20,
|
|
|
|
|
currentMsg: 30,
|
|
|
|
|
};
|
|
|
|
|
const currentMessageId = 'currentMsg';
|
|
|
|
|
const usage = {
|
|
|
|
|
input_tokens: 70,
|
|
|
|
|
output_tokens: 50,
|
|
|
|
|
};
|
|
|
|
|
|
|
|
|
|
const result = client.calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage });
|
|
|
|
|
|
|
|
|
|
expect(result).toBe(40); // 70 - (10 + 20) = 40
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should return original estimate if calculation results in negative value', () => {
|
|
|
|
|
const tokenCountMap = {
|
|
|
|
|
msg1: 40,
|
|
|
|
|
msg2: 50,
|
|
|
|
|
currentMsg: 30,
|
|
|
|
|
};
|
|
|
|
|
const currentMessageId = 'currentMsg';
|
|
|
|
|
const usage = {
|
|
|
|
|
input_tokens: 80,
|
|
|
|
|
output_tokens: 50,
|
|
|
|
|
};
|
|
|
|
|
|
|
|
|
|
const result = client.calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage });
|
|
|
|
|
|
|
|
|
|
expect(result).toBe(30); // Original estimate
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should handle cache creation and read input tokens', () => {
|
|
|
|
|
const tokenCountMap = {
|
|
|
|
|
msg1: 10,
|
|
|
|
|
msg2: 20,
|
|
|
|
|
currentMsg: 30,
|
|
|
|
|
};
|
|
|
|
|
const currentMessageId = 'currentMsg';
|
|
|
|
|
const usage = {
|
|
|
|
|
input_tokens: 50,
|
|
|
|
|
cache_creation_input_tokens: 10,
|
|
|
|
|
cache_read_input_tokens: 20,
|
|
|
|
|
output_tokens: 40,
|
|
|
|
|
};
|
|
|
|
|
|
|
|
|
|
const result = client.calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage });
|
|
|
|
|
|
|
|
|
|
expect(result).toBe(50); // (50 + 10 + 20) - (10 + 20) = 50
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should handle missing usage properties', () => {
|
|
|
|
|
const tokenCountMap = {
|
|
|
|
|
msg1: 10,
|
|
|
|
|
msg2: 20,
|
|
|
|
|
currentMsg: 30,
|
|
|
|
|
};
|
|
|
|
|
const currentMessageId = 'currentMsg';
|
|
|
|
|
const usage = {
|
|
|
|
|
output_tokens: 40,
|
|
|
|
|
};
|
|
|
|
|
|
|
|
|
|
const result = client.calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage });
|
|
|
|
|
|
|
|
|
|
expect(result).toBe(30); // Original estimate
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should handle empty tokenCountMap', () => {
|
|
|
|
|
const tokenCountMap = {};
|
|
|
|
|
const currentMessageId = 'currentMsg';
|
|
|
|
|
const usage = {
|
|
|
|
|
input_tokens: 50,
|
|
|
|
|
output_tokens: 40,
|
|
|
|
|
};
|
|
|
|
|
|
|
|
|
|
const result = client.calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage });
|
|
|
|
|
|
|
|
|
|
expect(result).toBe(50);
|
|
|
|
|
expect(Number.isNaN(result)).toBe(false);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should handle zero values in usage', () => {
|
|
|
|
|
const tokenCountMap = {
|
|
|
|
|
msg1: 10,
|
|
|
|
|
currentMsg: 20,
|
|
|
|
|
};
|
|
|
|
|
const currentMessageId = 'currentMsg';
|
|
|
|
|
const usage = {
|
|
|
|
|
input_tokens: 0,
|
|
|
|
|
cache_creation_input_tokens: 0,
|
|
|
|
|
cache_read_input_tokens: 0,
|
|
|
|
|
output_tokens: 0,
|
|
|
|
|
};
|
|
|
|
|
|
|
|
|
|
const result = client.calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage });
|
|
|
|
|
|
|
|
|
|
expect(result).toBe(20); // Should return original estimate
|
|
|
|
|
expect(Number.isNaN(result)).toBe(false);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should handle undefined usage', () => {
|
|
|
|
|
const tokenCountMap = {
|
|
|
|
|
msg1: 10,
|
|
|
|
|
currentMsg: 20,
|
|
|
|
|
};
|
|
|
|
|
const currentMessageId = 'currentMsg';
|
|
|
|
|
const usage = undefined;
|
|
|
|
|
|
|
|
|
|
const result = client.calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage });
|
|
|
|
|
|
|
|
|
|
expect(result).toBe(20); // Should return original estimate
|
|
|
|
|
expect(Number.isNaN(result)).toBe(false);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should handle non-numeric values in tokenCountMap', () => {
|
|
|
|
|
const tokenCountMap = {
|
|
|
|
|
msg1: 'ten',
|
|
|
|
|
currentMsg: 20,
|
|
|
|
|
};
|
|
|
|
|
const currentMessageId = 'currentMsg';
|
|
|
|
|
const usage = {
|
|
|
|
|
input_tokens: 30,
|
|
|
|
|
output_tokens: 10,
|
|
|
|
|
};
|
|
|
|
|
|
|
|
|
|
const result = client.calculateCurrentTokenCount({ tokenCountMap, currentMessageId, usage });
|
|
|
|
|
|
|
|
|
|
expect(result).toBe(30); // Should return 30 (input_tokens) - 0 (ignored 'ten') = 30
|
|
|
|
|
expect(Number.isNaN(result)).toBe(false);
|
|
|
|
|
});
|
|
|
|
|
});
|
2025-02-26 15:02:03 -05:00
|
|
|
|
|
|
|
|
describe('maxOutputTokens handling for different models', () => {
|
|
|
|
|
it('should not cap maxOutputTokens for Claude 3.5 Sonnet models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
const highTokenValue = anthropicSettings.legacy.maxOutputTokens.default * 10;
|
|
|
|
|
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3-5-sonnet',
|
|
|
|
|
maxOutputTokens: highTokenValue,
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
expect(client.modelOptions.maxOutputTokens).toBe(highTokenValue);
|
|
|
|
|
|
|
|
|
|
// Test with decimal notation
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3.5-sonnet',
|
|
|
|
|
maxOutputTokens: highTokenValue,
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
expect(client.modelOptions.maxOutputTokens).toBe(highTokenValue);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should not cap maxOutputTokens for Claude 3.7 models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
const highTokenValue = anthropicSettings.legacy.maxOutputTokens.default * 2;
|
|
|
|
|
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3-7-sonnet',
|
|
|
|
|
maxOutputTokens: highTokenValue,
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
expect(client.modelOptions.maxOutputTokens).toBe(highTokenValue);
|
|
|
|
|
|
|
|
|
|
// Test with decimal notation
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3.7-sonnet',
|
|
|
|
|
maxOutputTokens: highTokenValue,
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
|
2025-05-25 23:40:37 -04:00
|
|
|
expect(client.modelOptions.maxOutputTokens).toBe(highTokenValue);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should not cap maxOutputTokens for Claude 4 Sonnet models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
const highTokenValue = anthropicSettings.legacy.maxOutputTokens.default * 10; // 40,960 tokens
|
|
|
|
|
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-sonnet-4-20250514',
|
|
|
|
|
maxOutputTokens: highTokenValue,
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
expect(client.modelOptions.maxOutputTokens).toBe(highTokenValue);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should not cap maxOutputTokens for Claude 4 Opus models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
const highTokenValue = anthropicSettings.legacy.maxOutputTokens.default * 6; // 24,576 tokens (under 32K limit)
|
|
|
|
|
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-opus-4-20250514',
|
|
|
|
|
maxOutputTokens: highTokenValue,
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
|
2025-02-26 15:02:03 -05:00
|
|
|
expect(client.modelOptions.maxOutputTokens).toBe(highTokenValue);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should cap maxOutputTokens for Claude 3.5 Haiku models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
const highTokenValue = anthropicSettings.legacy.maxOutputTokens.default * 2;
|
|
|
|
|
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3-5-haiku',
|
|
|
|
|
maxOutputTokens: highTokenValue,
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
expect(client.modelOptions.maxOutputTokens).toBe(
|
|
|
|
|
anthropicSettings.legacy.maxOutputTokens.default,
|
|
|
|
|
);
|
|
|
|
|
|
|
|
|
|
// Test with decimal notation
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3.5-haiku',
|
|
|
|
|
maxOutputTokens: highTokenValue,
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
expect(client.modelOptions.maxOutputTokens).toBe(
|
|
|
|
|
anthropicSettings.legacy.maxOutputTokens.default,
|
|
|
|
|
);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should cap maxOutputTokens for Claude 3 Haiku and Opus models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
const highTokenValue = anthropicSettings.legacy.maxOutputTokens.default * 2;
|
|
|
|
|
|
|
|
|
|
// Test haiku
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3-haiku',
|
|
|
|
|
maxOutputTokens: highTokenValue,
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
expect(client.modelOptions.maxOutputTokens).toBe(
|
|
|
|
|
anthropicSettings.legacy.maxOutputTokens.default,
|
|
|
|
|
);
|
|
|
|
|
|
|
|
|
|
// Test opus
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3-opus',
|
|
|
|
|
maxOutputTokens: highTokenValue,
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
expect(client.modelOptions.maxOutputTokens).toBe(
|
|
|
|
|
anthropicSettings.legacy.maxOutputTokens.default,
|
|
|
|
|
);
|
|
|
|
|
});
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
describe('topK/topP parameters for different models', () => {
|
|
|
|
|
beforeEach(() => {
|
|
|
|
|
// Mock the SplitStreamHandler
|
|
|
|
|
jest.spyOn(SplitStreamHandler.prototype, 'handle').mockImplementation(() => {});
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
afterEach(() => {
|
|
|
|
|
jest.restoreAllMocks();
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should include top_k and top_p parameters for non-claude-3.7 models', async () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
|
|
|
|
|
// Create a mock async generator function
|
|
|
|
|
async function* mockAsyncGenerator() {
|
|
|
|
|
yield { type: 'message_start', message: { usage: {} } };
|
|
|
|
|
yield { delta: { text: 'Test response' } };
|
|
|
|
|
yield { type: 'message_delta', usage: {} };
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// Mock createResponse to return the async generator
|
|
|
|
|
jest.spyOn(client, 'createResponse').mockImplementation(() => {
|
|
|
|
|
return mockAsyncGenerator();
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3-opus',
|
|
|
|
|
temperature: 0.7,
|
|
|
|
|
topK: 10,
|
|
|
|
|
topP: 0.9,
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
// Mock getClient to capture the request options
|
|
|
|
|
let capturedOptions = null;
|
|
|
|
|
jest.spyOn(client, 'getClient').mockImplementation((options) => {
|
|
|
|
|
capturedOptions = options;
|
|
|
|
|
return {};
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
const payload = [{ role: 'user', content: 'Test message' }];
|
|
|
|
|
await client.sendCompletion(payload, {});
|
|
|
|
|
|
|
|
|
|
// Check the options passed to getClient
|
|
|
|
|
expect(capturedOptions).toHaveProperty('top_k', 10);
|
|
|
|
|
expect(capturedOptions).toHaveProperty('top_p', 0.9);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should include top_k and top_p parameters for claude-3-5-sonnet models', async () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
|
|
|
|
|
// Create a mock async generator function
|
|
|
|
|
async function* mockAsyncGenerator() {
|
|
|
|
|
yield { type: 'message_start', message: { usage: {} } };
|
|
|
|
|
yield { delta: { text: 'Test response' } };
|
|
|
|
|
yield { type: 'message_delta', usage: {} };
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// Mock createResponse to return the async generator
|
|
|
|
|
jest.spyOn(client, 'createResponse').mockImplementation(() => {
|
|
|
|
|
return mockAsyncGenerator();
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3-5-sonnet',
|
|
|
|
|
temperature: 0.7,
|
|
|
|
|
topK: 10,
|
|
|
|
|
topP: 0.9,
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
// Mock getClient to capture the request options
|
|
|
|
|
let capturedOptions = null;
|
|
|
|
|
jest.spyOn(client, 'getClient').mockImplementation((options) => {
|
|
|
|
|
capturedOptions = options;
|
|
|
|
|
return {};
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
const payload = [{ role: 'user', content: 'Test message' }];
|
|
|
|
|
await client.sendCompletion(payload, {});
|
|
|
|
|
|
|
|
|
|
// Check the options passed to getClient
|
|
|
|
|
expect(capturedOptions).toHaveProperty('top_k', 10);
|
|
|
|
|
expect(capturedOptions).toHaveProperty('top_p', 0.9);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should not include top_k and top_p parameters for claude-3-7-sonnet models', async () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
|
|
|
|
|
// Create a mock async generator function
|
|
|
|
|
async function* mockAsyncGenerator() {
|
|
|
|
|
yield { type: 'message_start', message: { usage: {} } };
|
|
|
|
|
yield { delta: { text: 'Test response' } };
|
|
|
|
|
yield { type: 'message_delta', usage: {} };
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// Mock createResponse to return the async generator
|
|
|
|
|
jest.spyOn(client, 'createResponse').mockImplementation(() => {
|
|
|
|
|
return mockAsyncGenerator();
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3-7-sonnet',
|
|
|
|
|
temperature: 0.7,
|
|
|
|
|
topK: 10,
|
|
|
|
|
topP: 0.9,
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
// Mock getClient to capture the request options
|
|
|
|
|
let capturedOptions = null;
|
|
|
|
|
jest.spyOn(client, 'getClient').mockImplementation((options) => {
|
|
|
|
|
capturedOptions = options;
|
|
|
|
|
return {};
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
const payload = [{ role: 'user', content: 'Test message' }];
|
|
|
|
|
await client.sendCompletion(payload, {});
|
|
|
|
|
|
|
|
|
|
// Check the options passed to getClient
|
|
|
|
|
expect(capturedOptions).not.toHaveProperty('top_k');
|
|
|
|
|
expect(capturedOptions).not.toHaveProperty('top_p');
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should not include top_k and top_p parameters for models with decimal notation (claude-3.7)', async () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
|
|
|
|
|
// Create a mock async generator function
|
|
|
|
|
async function* mockAsyncGenerator() {
|
|
|
|
|
yield { type: 'message_start', message: { usage: {} } };
|
|
|
|
|
yield { delta: { text: 'Test response' } };
|
|
|
|
|
yield { type: 'message_delta', usage: {} };
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// Mock createResponse to return the async generator
|
|
|
|
|
jest.spyOn(client, 'createResponse').mockImplementation(() => {
|
|
|
|
|
return mockAsyncGenerator();
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3.7-sonnet',
|
|
|
|
|
temperature: 0.7,
|
|
|
|
|
topK: 10,
|
|
|
|
|
topP: 0.9,
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
// Mock getClient to capture the request options
|
|
|
|
|
let capturedOptions = null;
|
|
|
|
|
jest.spyOn(client, 'getClient').mockImplementation((options) => {
|
|
|
|
|
capturedOptions = options;
|
|
|
|
|
return {};
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
const payload = [{ role: 'user', content: 'Test message' }];
|
|
|
|
|
await client.sendCompletion(payload, {});
|
|
|
|
|
|
|
|
|
|
// Check the options passed to getClient
|
|
|
|
|
expect(capturedOptions).not.toHaveProperty('top_k');
|
|
|
|
|
expect(capturedOptions).not.toHaveProperty('top_p');
|
|
|
|
|
});
|
|
|
|
|
});
|
2025-02-27 12:59:51 -05:00
|
|
|
|
|
|
|
|
it('should include top_k and top_p parameters for Claude-3.7 models when thinking is explicitly disabled', async () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key', {
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3-7-sonnet',
|
|
|
|
|
temperature: 0.7,
|
|
|
|
|
topK: 10,
|
|
|
|
|
topP: 0.9,
|
|
|
|
|
},
|
|
|
|
|
thinking: false,
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
async function* mockAsyncGenerator() {
|
|
|
|
|
yield { type: 'message_start', message: { usage: {} } };
|
|
|
|
|
yield { delta: { text: 'Test response' } };
|
|
|
|
|
yield { type: 'message_delta', usage: {} };
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
jest.spyOn(client, 'createResponse').mockImplementation(() => {
|
|
|
|
|
return mockAsyncGenerator();
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
let capturedOptions = null;
|
|
|
|
|
jest.spyOn(client, 'getClient').mockImplementation((options) => {
|
|
|
|
|
capturedOptions = options;
|
|
|
|
|
return {};
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
const payload = [{ role: 'user', content: 'Test message' }];
|
|
|
|
|
await client.sendCompletion(payload, {});
|
|
|
|
|
|
|
|
|
|
expect(capturedOptions).toHaveProperty('topK', 10);
|
|
|
|
|
expect(capturedOptions).toHaveProperty('topP', 0.9);
|
|
|
|
|
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3.7-sonnet',
|
|
|
|
|
temperature: 0.7,
|
|
|
|
|
topK: 10,
|
|
|
|
|
topP: 0.9,
|
|
|
|
|
},
|
|
|
|
|
thinking: false,
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
await client.sendCompletion(payload, {});
|
|
|
|
|
|
|
|
|
|
expect(capturedOptions).toHaveProperty('topK', 10);
|
|
|
|
|
expect(capturedOptions).toHaveProperty('topP', 0.9);
|
|
|
|
|
});
|
2025-05-22 15:00:44 -04:00
|
|
|
|
|
|
|
|
describe('isClaudeLatest', () => {
|
|
|
|
|
it('should set isClaudeLatest to true for claude-3 models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3-sonnet-20240229',
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
expect(client.isClaudeLatest).toBe(true);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should set isClaudeLatest to true for claude-3.5 models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3.5-sonnet-20240229',
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
expect(client.isClaudeLatest).toBe(true);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should set isClaudeLatest to true for claude-sonnet-4 models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-sonnet-4-20240229',
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
expect(client.isClaudeLatest).toBe(true);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should set isClaudeLatest to true for claude-opus-4 models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-opus-4-20240229',
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
expect(client.isClaudeLatest).toBe(true);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should set isClaudeLatest to true for claude-3.5-haiku models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-3.5-haiku-20240229',
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
expect(client.isClaudeLatest).toBe(true);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should set isClaudeLatest to false for claude-2 models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-2',
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
expect(client.isClaudeLatest).toBe(false);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should set isClaudeLatest to false for claude-instant models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-instant',
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
expect(client.isClaudeLatest).toBe(false);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should set isClaudeLatest to false for claude-sonnet-3 models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-sonnet-3-20240229',
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
expect(client.isClaudeLatest).toBe(false);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should set isClaudeLatest to false for claude-opus-3 models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-opus-3-20240229',
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
expect(client.isClaudeLatest).toBe(false);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
it('should set isClaudeLatest to false for claude-haiku-3 models', () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-haiku-3-20240229',
|
|
|
|
|
},
|
|
|
|
|
});
|
|
|
|
|
expect(client.isClaudeLatest).toBe(false);
|
|
|
|
|
});
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
describe('configureReasoning', () => {
|
|
|
|
|
it('should enable thinking for claude-opus-4 and claude-sonnet-4 models', async () => {
|
|
|
|
|
const client = new AnthropicClient('test-api-key');
|
|
|
|
|
// Create a mock async generator function
|
|
|
|
|
async function* mockAsyncGenerator() {
|
|
|
|
|
yield { type: 'message_start', message: { usage: {} } };
|
|
|
|
|
yield { delta: { text: 'Test response' } };
|
|
|
|
|
yield { type: 'message_delta', usage: {} };
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// Mock createResponse to return the async generator
|
|
|
|
|
jest.spyOn(client, 'createResponse').mockImplementation(() => {
|
|
|
|
|
return mockAsyncGenerator();
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
// Test claude-opus-4
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-opus-4-20250514',
|
|
|
|
|
},
|
|
|
|
|
thinking: true,
|
|
|
|
|
thinkingBudget: 2000,
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
let capturedOptions = null;
|
|
|
|
|
jest.spyOn(client, 'getClient').mockImplementation((options) => {
|
|
|
|
|
capturedOptions = options;
|
|
|
|
|
return {};
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
const payload = [{ role: 'user', content: 'Test message' }];
|
|
|
|
|
await client.sendCompletion(payload, {});
|
|
|
|
|
|
|
|
|
|
expect(capturedOptions).toHaveProperty('thinking');
|
|
|
|
|
expect(capturedOptions.thinking).toEqual({
|
|
|
|
|
type: 'enabled',
|
|
|
|
|
budget_tokens: 2000,
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
// Test claude-sonnet-4
|
|
|
|
|
client.setOptions({
|
|
|
|
|
modelOptions: {
|
|
|
|
|
model: 'claude-sonnet-4-20250514',
|
|
|
|
|
},
|
|
|
|
|
thinking: true,
|
|
|
|
|
thinkingBudget: 2000,
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
await client.sendCompletion(payload, {});
|
|
|
|
|
|
|
|
|
|
expect(capturedOptions).toHaveProperty('thinking');
|
|
|
|
|
expect(capturedOptions.thinking).toEqual({
|
|
|
|
|
type: 'enabled',
|
|
|
|
|
budget_tokens: 2000,
|
|
|
|
|
});
|
|
|
|
|
});
|
|
|
|
|
});
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
describe('Claude Model Tests', () => {
|
|
|
|
|
it('should handle Claude 3 and 4 series models correctly', () => {
|
|
|
|
|
const client = new AnthropicClient('test-key');
|
|
|
|
|
// Claude 3 series models
|
|
|
|
|
const claude3Models = [
|
|
|
|
|
'claude-3-opus-20240229',
|
|
|
|
|
'claude-3-sonnet-20240229',
|
|
|
|
|
'claude-3-haiku-20240307',
|
|
|
|
|
'claude-3-5-sonnet-20240620',
|
|
|
|
|
'claude-3-5-haiku-20240620',
|
|
|
|
|
'claude-3.5-sonnet-20240620',
|
|
|
|
|
'claude-3.5-haiku-20240620',
|
|
|
|
|
'claude-3.7-sonnet-20240620',
|
|
|
|
|
'claude-3.7-haiku-20240620',
|
|
|
|
|
'anthropic/claude-3-opus-20240229',
|
|
|
|
|
'claude-3-opus-20240229/anthropic',
|
|
|
|
|
];
|
|
|
|
|
|
|
|
|
|
// Claude 4 series models
|
|
|
|
|
const claude4Models = [
|
|
|
|
|
'claude-sonnet-4-20250514',
|
|
|
|
|
'claude-opus-4-20250514',
|
|
|
|
|
'claude-4-sonnet-20250514',
|
|
|
|
|
'claude-4-opus-20250514',
|
|
|
|
|
'anthropic/claude-sonnet-4-20250514',
|
|
|
|
|
'claude-sonnet-4-20250514/anthropic',
|
|
|
|
|
];
|
|
|
|
|
|
|
|
|
|
// Test Claude 3 series
|
|
|
|
|
claude3Models.forEach((model) => {
|
|
|
|
|
client.setOptions({ modelOptions: { model } });
|
|
|
|
|
expect(
|
|
|
|
|
/claude-[3-9]/.test(client.modelOptions.model) ||
|
|
|
|
|
/claude-(?:sonnet|opus|haiku)-[4-9]/.test(client.modelOptions.model),
|
|
|
|
|
).toBe(true);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
// Test Claude 4 series
|
|
|
|
|
claude4Models.forEach((model) => {
|
|
|
|
|
client.setOptions({ modelOptions: { model } });
|
|
|
|
|
expect(
|
|
|
|
|
/claude-[3-9]/.test(client.modelOptions.model) ||
|
|
|
|
|
/claude-(?:sonnet|opus|haiku)-[4-9]/.test(client.modelOptions.model),
|
|
|
|
|
).toBe(true);
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
// Test non-Claude 3/4 models
|
|
|
|
|
const nonClaudeModels = ['claude-2', 'claude-instant', 'gpt-4', 'gpt-3.5-turbo'];
|
|
|
|
|
|
|
|
|
|
nonClaudeModels.forEach((model) => {
|
|
|
|
|
client.setOptions({ modelOptions: { model } });
|
|
|
|
|
expect(
|
|
|
|
|
/claude-[3-9]/.test(client.modelOptions.model) ||
|
|
|
|
|
/claude-(?:sonnet|opus|haiku)-[4-9]/.test(client.modelOptions.model),
|
|
|
|
|
).toBe(false);
|
|
|
|
|
});
|
|
|
|
|
});
|
feat(GPT/Anthropic): Continue Regenerating & Generation Buttons (#808)
* feat(useMessageHandler.js/ts): Refactor and add features to handle user messages, support multiple endpoints/models, generate placeholder responses, regeneration, and stopGeneration function
fix(conversation.ts, buildTree.ts): Import TMessage type, handle null parentMessageId
feat(schemas.ts): Update and add schemas for various AI services, add default values, optional fields, and endpoint-to-schema mapping, create parseConvo function
chore(useMessageHandler.js, schemas.ts): Remove unused imports, variables, and chatGPT enum
* wip: add generation buttons
* refactor(cleanupPreset.ts): simplify cleanupPreset function
refactor(getDefaultConversation.js): remove unused code and simplify getDefaultConversation function
feat(utils): add getDefaultConversation function
This commit adds a new utility function called `getDefaultConversation` to the `client/src/utils/getDefaultConversation.ts` file. This function is responsible for generating a default conversation object based on the provided parameters.
The `getDefaultConversation` function takes in an object with the following properties:
- `conversation`: The conversation object to be used as a base.
- `endpointsConfig`: The configuration object containing information about the available endpoints.
- `preset`: An optional preset object that can be used to override the default behavior.
The function first tries to determine the target endpoint based on the preset object. If a valid endpoint is found, it is used as the target endpoint. If not, the function tries to retrieve the last conversation setup from the local storage and uses its endpoint if it is valid. If neither the preset nor the local storage contains a valid endpoint, the function falls back to a default endpoint.
Once the target endpoint is determined,
* fix(utils): remove console.error statement in buildDefaultConversation function
fix(schemas): add default values for catch blocks in openAISchema, googleSchema, bingAISchema, anthropicSchema, chatGPTBrowserSchema, and gptPluginsSchema
* fix: endpoint not changing on change of preset from other endpoint, wip: refactor
* refactor: preset items to TSX
* refactor: convert resetConvo to TS
* refactor(getDefaultConversation.ts): move defaultEndpoints array to the top of the file for better readability
refactor(getDefaultConversation.ts): extract getDefaultEndpoint function for better code organization and reusability
* feat(svg): add ContinueIcon component
feat(svg): add RegenerateIcon component
feat(svg): add ContinueIcon and RegenerateIcon components to index.ts
* feat(Button.tsx): add onClick and className props to Button component
feat(GenerationButtons.tsx): add logic to display Regenerate or StopGenerating button based on isSubmitting and messages
feat(Regenerate.tsx): create Regenerate component with RegenerateIcon and handleRegenerate function
feat(StopGenerating.tsx): create StopGenerating component with StopGeneratingIcon and handleStopGenerating function
* fix(TextChat.jsx): reorder imports and variables for better readability
fix(TextChat.jsx): fix typo in condition for isNotAppendable variable
fix(TextChat.jsx): remove unused handleStopGenerating function
fix(ContinueIcon.tsx): remove unnecessary closing tags for polygon elements
fix(useMessageHandler.ts): add missing type annotations for handleStopGenerating and handleRegenerate functions
fix(useMessageHandler.ts): remove unused variables in return statement
* fix(getDefaultConversation.ts): refactor code to use getLocalStorageItems function
feat(getLocalStorageItems.ts): add utility function to retrieve items from local storage
* fix(OpenAIClient.js): add support for streaming result in sendCompletion method
feat(OpenAIClient.js): add finish_reason metadata to opts in sendCompletion method
feat(Message.js): add finish_reason field to Message model
feat(messageSchema.js): add finish_reason field to messageSchema
feat(openAI.js): parse chatGptLabel and promptPrefix from req.body and pass rest of the modelOptions to endpointOption
feat(openAI.js): add addMetadata function to store metadata in ask function
feat(openAI.js): add metadata to response if available
feat(schemas.ts): add finish_reason field to tMessageSchema
* feat(types.ts): add TOnClick and TGenButtonProps types for button components
feat(Continue.tsx): create Continue component for generating button
feat(GenerationButtons.tsx): update GenerationButtons component to use Continue component
feat(Regenerate.tsx): create Regenerate component for regenerating button
feat(Stop.tsx): create Stop component for stop generating button
* feat(MessageHandler.jsx): add MessageHandler component to handle messages and conversations
fix(Root.jsx): fix import paths for Nav and MessageHandler components
* feat(useMessageHandler.ts): add support for generation parameter in ask function
feat(useMessageHandler.ts): add support for isEdited parameter in ask function
feat(useMessageHandler.ts): add support for continueGeneration function
fix(createPayload.ts): replace endpoint URL when isEdited parameter is true
* chore(client): set skipLibCheck to true in tsconfig.json
* fix(useMessageHandler.ts): remove unused clientId variable
fix(schemas.ts): make clientId field in tMessageSchema nullable and optional
* wip: edit route for continue generation
* refactor(api): move handlers to root of routes dir
* fix(useMessageHandler.ts): initialize currentMessages to an empty array if messages is null
fix(useMessageHandler.ts): update initialResponse text to use responseText variable
fix(useMessageHandler.ts): update setMessages logic for isRegenerate case
fix(MessageHandler.jsx): update setMessages logic for cancelHandler, createdHandler, and finalHandler
* fix(schemas.ts): make createdAt and updatedAt fields optional and set default values using new Date().toISOString()
fix(schemas.ts): change type annotation of TMessage from infer to input
* refactor(useMessageHandler.ts): rename AskProps type to TAskProps
refactor(useMessageHandler.ts): remove generation property from ask function arguments
refactor(useMessageHandler.ts): use nullish coalescing operator (??) instead of logical OR (||)
refactor(useMessageHandler.ts): pass the responseMessageId to message prop of submission
* fix(BaseClient.js): use nullish coalescing operator (??) instead of logical OR (||) for default values
* fix(BaseClient.js): fix responseMessageId assignment in handleStartMethods method
feat(BaseClient.js): add support for isEdited flag in sendMessage method
feat(BaseClient.js): add generation to responseMessage text in sendMessage method
* fix(openAI.js): remove unused imports and commented out code
feat(openAI.js): add support for generation parameter in request body
fix(openAI.js): remove console.log statement
fix(openAI.js): remove unused variables and parameters
fix(openAI.js): update response text in case of error
fix(openAI.js): handle error and abort message in case of error
fix(handlers.js): add generation parameter to createOnProgress function
fix(useMessageHandler.ts): update responseText variable to use generation parameter
* refactor(api/middleware): move inside server dir
* refactor: add endpoint specific, modular functions to build options and initialize clients, create server/utils, move middleware, separate utils into api general utils and server specific utils
* fix(abortMiddleware.js): import getConvo and getConvoTitle functions from models
feat(abortMiddleware.js): add abortAsk function to abortController to handle aborting of requests
fix(openAI.js): import buildOptions and initializeClient functions from endpoints/openAI
refactor(openAI.js): use getAbortData function to get data for abortAsk function
* refactor: move endpoint specific logic to an endpoints dir
* refactor(PluginService.js): fix import path for encrypt and decrypt functions in PluginService.js
* feat(openAI): add new endpoint for adding a title to a conversation
- Added a new file `addTitle.js` in the `api/server/routes/endpoints/openAI` directory.
- The `addTitle.js` file exports a function `addTitle` that takes in request parameters and performs the following actions:
- If the `parentMessageId` is `'00000000-0000-0000-0000-000000000000'` and `newConvo` is true, it proceeds with the following steps:
- Calls the `titleConvo` function from the `titleConvo` module, passing in the necessary parameters.
- Calls the `saveConvo` function from the `saveConvo` module, passing in the user ID and conversation details.
- Updated the `index.js` file in the `api/server/routes/endpoints/openAI` directory to export the `addTitle` function.
- This change adds
* fix(abortMiddleware.js): remove console.log statement
refactor(gptPlugins.js): update imports and function parameters
feat(gptPlugins.js): add support for abortController and getAbortData
refactor(openAI.js): update imports and function parameters
feat(openAI.js): add support for abortController and getAbortData
fix(openAI.js): refactor code to use modularized functions and middleware
fix(buildOptions.js): refactor code to use destructuring and update variable names
* refactor(askChatGPTBrowser.js, bingAI.js, google.js): remove duplicate code for setting response headers
feat(askChatGPTBrowser.js, bingAI.js, google.js): add setHeaders middleware to set response headers
* feat(middleware): validateEndpoint, refactor buildOption to only be concerned of endpointOption
* fix(abortMiddleware.js): add 'finish_reason' property with value 'incomplete' to responseMessage object
fix(abortMessage.js): remove console.log statement for aborted message
fix(handlers.js): modify tokens assignment to handle empty generation string and trailing space
* fix(BaseClient.js): import addSpaceIfNeeded function from server/utils
fix(BaseClient.js): add space before generation in text property
fix(index.js): remove getCitations and citeText exports
feat(buildEndpointOption.js): add buildEndpointOption middleware
fix(index.js): import buildEndpointOption middleware
fix(anthropic.js): remove buildOptions function and use endpointOption from req.body
fix(gptPlugins.js): remove buildOptions function and use endpointOption from req.body
fix(openAI.js): remove buildOptions function and use endpointOption from req.body
feat(utils): add citations.js and handleText.js modules
fix(utils): fix import statements in index.js module
* refactor(gptPlugins.js): use getResponseSender function from librechat-data-provider
* feat(gptPlugins): complete 'continue generating'
* wip: anthropic continue regen
* feat(middleware): add validateRegistration middleware
A new middleware function called `validateRegistration` has been added to the list of exported middleware functions in `index.js`. This middleware is responsible for validating registration data before allowing the registration process to proceed.
* feat(Anthropic): complete continue regen
* chore: add librechat-data-provider to api/package.json
* fix(ci): backend-review will mock meilisearch, also installs data-provider as now needed
* chore(ci): remove unneeded SEARCH env var
* style(GenerationButtons): make text shorter for sake of space economy, even though this diverges from chat.openai.com
* style(GenerationButtons/ScrollToBottom): adjust visibility/position based on screen size
* chore(client): 'Editting' typo
* feat(GenerationButtons.tsx): add support for endpoint prop in GenerationButtons component
feat(OptionsBar.tsx): pass endpoint prop to GenerationButtons component
feat(useGenerations.ts): create useGenerations hook to handle generation logic
fix(schemas.ts): add searchResult field to tMessageSchema
* refactor(HoverButtons): convert to TSX and utilize new useGenerations hook
* fix(abortMiddleware): handle error with res headers set, or abortController not found, to ensure proper API error is sent to the client, chore(BaseClient): remove console log for onStart message meant for debugging
* refactor(api): remove librechat-data-provider dep for now as it complicates deployed docker build stage, re-use code in CJS, located in server/endpoints/schemas
* chore: remove console.logs from test files
* ci: add backend tests for AnthropicClient, focusing on new buildMessages logic
* refactor(FakeClient): use actual BaseClient sendMessage method for testing
* test(BaseClient.test.js): add test for loading chat history
test(BaseClient.test.js): add test for sendMessage logic with isEdited flag
* fix(buildEndpointOption.js): add support for azureOpenAI in buildFunction object
wip(endpoints.js): fetch Azure models from Azure OpenAI API if opts.azure is true
* fix(Button.tsx): add data-testid attribute to button component
fix(SelectDropDown.tsx): add data-testid attribute to Listbox.Button component
fix(messages.spec.ts): add waitForServerStream function to consolidate logic for awaiting the server response
feat(messages.spec.ts): add test for stopping and continuing message and improve browser/page context order and closing
* refactor(onProgress): speed up time to save initial message for editable routes
* chore: disable AI message editing (for now), was accidentally allowed
* refactor: ensure continue is only supported for latest message style: improve styling in dark mode and across all hover buttons/icons, including making edit icon for AI invisible (for now)
* fix: add test id to generation buttons so they never resolve to 2+ items
* chore(package.json): add 'packages/' to the list of ignored directories
chore(data-provider/package.json): bump version to 0.1.5
2023-08-17 12:50:05 -04:00
|
|
|
});
|