LibreChat/packages/api/src/utils/azure.ts

import { isEnabled } from './common';
import type { AzureOptions, GenericClient } from '~/types';

/**
 * Sanitizes the model name to be used in the URL by removing or replacing disallowed characters.
 * @param modelName - The model name to be sanitized.
 * @returns The sanitized model name.
 */
export const sanitizeModelName = (modelName: string): string => {
  // Replace periods with empty strings and other disallowed characters as needed.
  return modelName.replace(/\./g, '');
};
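
// Illustrative example (model name is hypothetical): sanitizeModelName('gpt-3.5-turbo')
// returns 'gpt-35-turbo', since periods are stripped before the name is used in a
// deployment URL.
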
/**
 * Generates the Azure OpenAI API endpoint URL.
 * @param params - The parameters object.
 * @param params.azureOpenAIApiInstanceName - The Azure OpenAI API instance name.
 * @param params.azureOpenAIApiDeploymentName - The Azure OpenAI API deployment name.
 * @returns The complete endpoint URL for the Azure OpenAI API.
 */
export const genAzureEndpoint = ({
  azureOpenAIApiInstanceName,
  azureOpenAIApiDeploymentName,
}: {
  azureOpenAIApiInstanceName: string;
  azureOpenAIApiDeploymentName: string;
}): string => {
  return `https://${azureOpenAIApiInstanceName}.openai.azure.com/openai/deployments/${azureOpenAIApiDeploymentName}`;
};
};
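// Illustrative example (an assumption; the helper above is taken to accept the instance and
// deployment names): with azureOpenAIApiInstanceName 'my-instance' and
// azureOpenAIApiDeploymentName 'gpt-4o-mini', the returned base URL would be:
//   https://my-instance.openai.azure.com/openai/deployments/gpt-4o-mini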
/**
* Generates the Azure OpenAI API chat completion endpoint URL with the API version.
 * If both deploymentName and modelName are provided, modelName takes precedence when
 * `AZURE_USE_MODEL_AS_DEPLOYMENT_NAME` is enabled.
* @param azureConfig - The Azure configuration object.
* @param azureConfig.azureOpenAIApiInstanceName - The Azure OpenAI API instance name.
* @param azureConfig.azureOpenAIApiDeploymentName - The Azure OpenAI API deployment name (optional).
* @param azureConfig.azureOpenAIApiVersion - The Azure OpenAI API version.
* @param modelName - The model name to be included in the deployment name (optional).
* @param client - The API Client class for optionally setting properties (optional).
* @returns The complete chat completion endpoint URL for the Azure OpenAI API.
 * @throws Error if neither azureOpenAIApiDeploymentName nor modelName is provided and `AZURE_OPENAI_BASEURL` is not set.
*/
export const genAzureChatCompletion = (
{
azureOpenAIApiInstanceName,
azureOpenAIApiDeploymentName,
azureOpenAIApiVersion,
}: {
azureOpenAIApiInstanceName: string;
azureOpenAIApiDeploymentName?: string;
azureOpenAIApiVersion: string;
},
modelName?: string,
client?: GenericClient,
): string => {
// Determine the deployment segment of the URL based on provided modelName or azureOpenAIApiDeploymentName
let deploymentSegment: string;
if (isEnabled(process.env.AZURE_USE_MODEL_AS_DEPLOYMENT_NAME) && modelName) {
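// When AZURE_USE_MODEL_AS_DEPLOYMENT_NAME is enabled, the requested model name (sanitized)
// is used directly as the deployment segment of the URL.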
const sanitizedModelName = sanitizeModelName(modelName);
deploymentSegment = sanitizedModelName;
if (client && typeof client === 'object') {
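// Keep the client's Azure config in sync so downstream requests use the sanitized
// model name as the deployment name.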
client.azure.azureOpenAIApiDeploymentName = sanitizedModelName;
}
} else if (azureOpenAIApiDeploymentName) {
deploymentSegment = azureOpenAIApiDeploymentName;
} else if (!process.env.AZURE_OPENAI_BASEURL) {
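// No deployment name, no model-as-deployment value, and no AZURE_OPENAI_BASEURL override:
// a valid deployments URL cannot be constructed, so fail loudly.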
throw new Error(
'Either a model name with the `AZURE_USE_MODEL_AS_DEPLOYMENT_NAME` setting or a deployment name must be provided if `AZURE_OPENAI_BASEURL` is omitted.',
);
  } else {
    deploymentSegment = '';
  }

  return `https://${azureOpenAIApiInstanceName}.openai.azure.com/openai/deployments/${deploymentSegment}/chat/completions?api-version=${azureOpenAIApiVersion}`;
};
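// Illustrative sketch (assumed placeholder values, not actual configuration): with
// azureOpenAIApiInstanceName = 'my-instance', azureOpenAIApiVersion = '2024-02-15-preview',
// and a deployment segment of 'gpt-4o', the template above resolves to:
//   https://my-instance.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-02-15-preview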
/**
* Retrieves the Azure OpenAI API credentials from environment variables.
* @returns An object containing the Azure OpenAI API credentials.
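 * @example
 * // Minimal sketch, assuming these environment variables are set (placeholder values):
 * //   AZURE_OPENAI_API_INSTANCE_NAME=my-instance
 * //   AZURE_OPENAI_API_DEPLOYMENT_NAME=gpt-4o
 * //   AZURE_OPENAI_API_VERSION=2024-02-15-preview
 * const { azureOpenAIApiInstanceName, azureOpenAIApiVersion } = getAzureCredentials();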
*/
export const getAzureCredentials = (): AzureOptions => {
  return {
    /** Prefer AZURE_API_KEY; fall back to AZURE_OPENAI_API_KEY when it is not set */
    azureOpenAIApiKey: process.env.AZURE_API_KEY ?? process.env.AZURE_OPENAI_API_KEY,
azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION,
};
};
/**
 * Constructs a URL by replacing placeholders in the baseURL with values from the azureOptions object.
 * It specifically looks for '${INSTANCE_NAME}' and '${DEPLOYMENT_NAME}' within the baseURL and replaces
 * them with 'azureOpenAIApiInstanceName' and 'azureOpenAIApiDeploymentName' from the azureOptions object.
 * If the respective property is not provided, the placeholder is replaced with an empty string;
 * if azureOptions is omitted entirely, the placeholders are left in place.
*
* @param params - The parameters object.
* @param params.baseURL - The baseURL to inspect for replacement placeholders.
* @param params.azureOptions - The azure options object containing the instance and deployment names.
 * @returns The complete baseURL for the Azure OpenAI API, with the instance and deployment names substituted in.
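 *
 * @example
 * // Illustrative sketch: the baseURL template and option values below are hypothetical,
 * // not taken from this module; they only demonstrate the placeholder substitution.
 * constructAzureURL({
 *   baseURL: 'https://${INSTANCE_NAME}.openai.azure.com/openai/deployments/${DEPLOYMENT_NAME}',
 *   azureOptions: {
 *     azureOpenAIApiInstanceName: 'my-resource',
 *     azureOpenAIApiDeploymentName: 'gpt-4o-mini',
 *   },
 * });
 * // => 'https://my-resource.openai.azure.com/openai/deployments/gpt-4o-mini'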
*/
export function constructAzureURL({
baseURL,
azureOptions,
}: {
baseURL: string;
azureOptions?: AzureOptions;
}): string {
let finalURL = baseURL;
// Replace INSTANCE_NAME and DEPLOYMENT_NAME placeholders with actual values if available
if (azureOptions) {
finalURL = finalURL.replace('${INSTANCE_NAME}', azureOptions.azureOpenAIApiInstanceName ?? '');
finalURL = finalURL.replace(
'${DEPLOYMENT_NAME}',
azureOptions.azureOpenAIApiDeploymentName ?? '',
);
}
return finalURL;
}
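
/*
 * Usage sketch (illustrative only; the endpoint shape shown here is an assumption, not part
 * of this module): the env-derived Azure options assembled above use the same
 * `azureOpenAIApi*` property names that constructAzureURL reads, so they can be passed
 * straight through.
 *
 *   const url = constructAzureURL({
 *     baseURL: 'https://${INSTANCE_NAME}.openai.azure.com/openai/deployments/${DEPLOYMENT_NAME}',
 *     azureOptions: {
 *       azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
 *       azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
 *       azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION,
 *     },
 *   });
 */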